Based Camp - November 02, 2023


Hard Mathematical Proof AI Won't Kill Us


Episode Stats

Length: 36 minutes
Words per Minute: 183.8
Word Count: 6,715
Sentence Count: 263
Misogynist Sentences: 3
Hate Speech Sentences: 13


Summary

In this episode, we discuss the Fermi Paradox, the Grabby Alien Hypothesis, and the idea that we are about to create a paperclip-maximizer AI that will end up fooming and killing us all.


Transcript

00:00:00.000 So basically, no matter which one of these explanations of the Fermi Paradox is true,
00:00:05.260 either it's irrelevant that we are about to invent a paperclip maximizing AI
00:00:09.200 because we're about to be destroyed by something else or in a simulation,
00:00:13.600 or we're definitely not about to invent a paperclip maximizing AI,
00:00:17.640 either because we're really far away from the technology or because almost nobody does that.
00:00:21.200 That's just not the way AI works.
00:00:22.680 I am so convinced by this argument that it is actually,
00:00:25.640 I used to believe it was like a 20% chance we all died because of an AI
00:00:28.300 or maybe even as high as a 50% chance,
00:00:30.160 but it was a variable risk as I've explained in other videos.
00:00:32.580 I now think there's almost a 0% chance.
00:00:35.000 A 0% chance assuming we are not about to be killed by a grabby AI somebody else invented.
00:00:40.960 Now, it does bring up something interesting.
00:00:44.160 If the reason we're not running into aliens is because infinite power and material generation
00:00:48.320 is just incredibly easy and there's a terminal utility convergence function,
00:00:52.340 then what are the aliens doing in the universe?
00:00:55.840 Would you like to know more?
00:00:56.760 Hi, Malcolm.
00:00:59.040 How are you doing, my friend?
00:01:00.620 So today we are going to do an episode,
00:01:03.260 a bit of a preamble for an already filmed interview.
00:01:06.860 So we did two interviews with Robin Hanson,
00:01:09.140 and in one of them, we discuss this theory.
00:01:12.200 However, I didn't want to derail the interview too much by going into this theory,
00:01:17.680 but I really wanted to nerd out on it with him
00:01:19.800 because he is the person who invented the grabby aliens hypothesis solution
00:01:24.180 to the Fermi paradox.
00:01:25.740 I hadn't heard about grabby aliens before, so I'm glad we're doing this.
00:01:30.800 This is great.
00:01:32.080 Yes.
00:01:32.400 So we will use this episode to talk about the Fermi paradox,
00:01:37.160 the grabby alien hypothesis,
00:01:39.140 and how the grabby alien hypothesis can be used through controlling one of the variables,
00:01:46.160 i.e. the assumption that we are about to invent a paperclip maximizer AI that ends up fooming and killing us all,
00:01:54.220 because that would be a grabby alien definitionally.
00:01:56.820 If you collapse that variable within the equation to today,
00:02:00.940 then you can back calculate the probability of creating a paperclip maximizing AI.
00:02:06.900 And, spoiler alert, the probability is almost zero.
00:02:11.240 It basically means it is almost statistically impossible that we are about to create a paperclip maximizing AI,
00:02:19.600 unless, with the two big caveats here,
00:02:22.880 something in the universe that would make it irrelevant whether or not we created a paperclip maximizing AI,
00:02:28.740 is hiding other aliens from us,
00:02:31.160 or we are in a simulation,
00:02:33.480 which also would make it irrelevant that we're about to create a paperclip maximizing AI,
00:02:37.320 or there is some filter to advanced life developing on a planet that we have already passed through
00:02:45.520 that we don't realize that we have passed through.
00:02:48.320 So those are the only ways that this isn't the case.
00:02:50.660 But let's go into it, because it is really easy.
00:02:53.500 I just realized that some definitions may help here.
00:02:56.080 We'll get into the grabby alien hypothesis in a second,
00:02:58.740 but the concept of a paperclip maximizing AI
00:03:01.820 is the concept of an AI that is just trying to maximize some simplistic function.
00:03:09.020 So in the concept, as it's laid out as a paperclip maximizer,
00:03:12.640 it would be just make maximum number of paperclips,
00:03:15.380 and then it just keeps making paperclips,
00:03:16.960 and it starts turning the earth into paperclips,
00:03:18.680 and it starts turning people into paperclips.
00:03:20.740 Now, realistically, if we were to have a paperclip maximizing AI,
00:03:23.880 it would probably look something more like, you know,
00:03:26.380 somebody says, process this image.
00:03:28.740 And it just keeps processing the image to, like, an insane degree,
00:03:32.800 because it was never told when to stop processing the image,
00:03:35.560 and it just turns all the world into energy to process an image,
00:03:38.400 or something else silly like that.
00:03:40.280 This concept is important to address because there are many people who at least pass themselves off as intelligent,
00:03:47.020 who believe that we are about to create a paperclip maximizing AI,
00:03:51.280 that AI is about to, as they call it, foom, which I mentioned earlier here,
00:03:55.060 which just means rise in intelligence astronomically quickly,
00:03:58.400 like double its intelligence every 15 minutes or something,
00:04:01.020 and then wipe out our species,
00:04:03.160 and after that begin to consume all matter in the universe.
00:04:06.320 The Fermi Paradox is basically the question of,
00:04:10.800 why haven't we seen extraterrestrial life yet?
00:04:15.180 You know, like, we kind of should have seen it already.
00:04:18.360 It's kind of really shocking that we haven't,
00:04:22.500 and I would say that anyone's metaphysical understanding of reality
00:04:26.780 that doesn't take the Fermi Paradox into account is deeply flawed.
00:04:31.820 Because based on our understanding of physics today,
00:04:38.800 our understanding of what our own species intends to do in the next 1,000 or 2,000 years,
00:04:44.760 and our understanding of the filters our species has gone through,
00:04:49.320 we know how hard it was for life to evolve on this planet.
00:04:52.840 And the answer is: not very, from what we can see.
00:04:57.540 I mean, you know, a lot of people...
00:05:00.640 well, I'm really, really into this,
00:05:03.280 it's one of, like, my areas of, like, deep nerddom:
00:05:05.920 theories for how the first life could have evolved on Earth.
00:05:09.700 So there's a couple things to note.
00:05:11.900 One isn't that important to this,
00:05:14.380 which is life evolved on Earth almost as soon as it could.
00:05:18.600 Now, a person may say, why isn't that relevant?
00:05:21.180 That would seem to indicate that it is very easy for life to evolve on a planet.
00:05:26.620 Well, and here we have to get into the grabby aliens theory.
00:05:30.480 You're dealing with the anthropic principle here, okay?
00:05:32.860 Can you define the anthropic principle?
00:05:35.280 Yeah.
00:05:35.660 Basically what it means is if you're asking, like,
00:05:39.060 look, it looks like Earth is almost a perfect planet for human life to evolve on.
00:05:43.900 Like, it had liquid water and everything like that, right?
00:05:46.620 Except human life wouldn't have evolved on a planet without those things.
00:05:50.960 A different kind of life would have evolved without those things.
00:05:53.440 The kind that doesn't need water, et cetera.
00:05:55.860 Right.
00:05:56.300 So it's not really...
00:05:58.540 If life on Earth didn't evolve almost as soon as it could,
00:06:04.860 well, then it would have been too late,
00:06:06.860 and another alien would have wiped out and colonized this planet.
00:06:09.800 That is what the grabby alien theory would say,
00:06:12.140 so this doesn't really change the probability of this as a filter.
00:06:15.740 But what we do know about the evolution of life on Earth is there are multiple ways that could
00:06:20.460 have happened, all of which could lead to an evolving...
00:06:24.020 You could either be dealing with, like, an RNA world.
00:06:26.560 You could be dealing with a citric acid cycle event.
00:06:30.120 You could be dealing with the clay hypothesis.
00:06:33.140 I actually think the clay hypothesis...
00:06:34.040 Do you want to expound on any of these?
00:06:35.340 I've never heard of the citric acid hypothesis.
00:06:38.220 So for this stuff, I would say it's not really that relevant to this conversation.
00:06:44.180 And people can dig into these various theories with people who have, like, done them more.
00:06:48.820 Just, like, look up citric acid cycle hypothesis,
00:06:51.340 explanation for evolution of life on Earth,
00:06:53.820 or clay hypothesis to evolution of life on Earth,
00:06:56.980 or shallow pool hypothesis to evolution of life on Earth,
00:07:00.240 or deep sea vent hypothesis to evolution of life on Earth.
00:07:03.060 The point being, it shouldn't actually, like,
00:07:07.540 it shouldn't actually be that hard for life to begin to evolve on a planet like this.
00:07:11.580 So, but why this is a relevant point, okay?
00:07:16.000 Okay.
00:07:16.680 And we actually sort of have to back out here from the grabby aliens hypothesis.
00:07:20.540 So I'll explain what the grabby aliens hypothesis says
00:07:23.260 and why this is relevant to the Fermi paradox.
00:07:25.660 So the grabby...
00:07:26.420 Usually when you're dealing with solutions to the Fermi paradox,
00:07:29.540 what people will do is they'll say that there's some unknown factor
00:07:32.700 that we don't know yet, basically.
00:07:34.800 So a great example here would be the dark forest hypothesis.
00:07:38.060 Okay.
00:07:38.720 So the dark forest hypothesis is that there actually are aliens,
00:07:42.140 lots of aliens out there.
00:07:43.800 They just have the common sense to not be broadcasting where they are
00:07:47.440 and to be very good at hiding where they are
00:07:49.320 because they are all hostile to each other.
00:07:51.200 And that any other aliens like us who were stupid enough to broadcast where they are,
00:07:55.820 they get snuffed out really quickly.
00:07:59.300 Sure, that makes sense.
00:08:00.380 That makes sense, yeah.
00:08:01.300 Okay, if the dark forest hypothesis is the explanation
00:08:05.420 for why we are not seeing alien life out there,
00:08:08.460 it is somewhat irrelevant whether or not we build a paperclip maximizing robot
00:08:12.680 because it means we're about to be snuffed out anyway,
00:08:15.660 given how loud we've been radio signal-wise,
00:08:18.900 sending out ships broadcasting about us, sending out signals.
00:08:22.140 We have been a very loud species.
00:08:24.400 And we could not defend against an interplanetary assault
00:08:28.440 by a space-faring species.
00:08:29.860 Well, I mean, in that case, you could actually argue
00:08:31.940 it would be much better if we developed AGI as fast as possible
00:08:36.100 because maybe it can defend us even if we cannot defend ourselves.
00:08:39.680 Possibly.
00:08:40.140 But that's the point there.
00:08:42.300 Beside the point, obviously.
00:08:43.340 It becomes irrelevant.
00:08:43.920 Or they'll say we're in a simulation and that's why you're not seeing stuff.
00:08:46.600 But again, that makes all of this beside the point.
00:08:48.040 What grabby aliens does is it says, no, actually, we are just statistically
00:08:52.480 the first sentient species on the road to becoming a grabby alien,
00:08:58.980 and I'll explain what this means in just a second, in this region of space.
00:09:02.000 And then it says, let's assume that's true.
00:09:05.960 It can use the fact that we haven't seen another species out there,
00:09:11.660 a grabby alien that is rapidly expanding across planets,
00:09:14.460 to calculate how rarely these evolve on planets.
00:09:21.700 Okay?
00:09:22.140 Okay.
00:09:22.760 Do you sort of understand how that could be the case?
00:09:25.140 Yeah.
00:09:26.100 Okay.
00:09:26.820 So in the grabby aliens hypothesis, when you run this calculation,
00:09:31.820 it turns out if that's why we haven't seen an alien yet,
00:09:36.500 what it means is there are very hard filters,
00:09:39.140 like something that makes it very low probability
00:09:41.580 that a potentially habitable planet ends up evolving an alien
00:09:46.600 that ends up spreading out like a grabby alien,
00:09:49.400 i.e. like a paperclip maximizer.
00:09:50.940 One of these really loud things that's just going,
00:09:52.920 planet, you know, use the resources on the planet,
00:09:55.880 other planets, other planets, other planets.
00:09:56.980 And even if it has already finished doing that,
00:09:59.440 you've argued in other conversations we have had
00:10:01.860 that you would see the signs of that.
00:10:04.300 You would see the signs of the destroyed civilizations, et cetera.
00:10:07.380 Yeah.
00:10:08.320 A grabby alien, which is what a paperclip maximizer is.
00:10:11.420 So, to make it easy,
00:10:12.040 If you're like, what does a grabby alien look like?
00:10:13.560 A paperclip maximizer that's just going planet to planet,
00:10:16.120 digesting the planets, and then moving on.
00:10:18.400 Or a human empire expanding through the universe.
00:10:21.420 You know, we go, we colonize a planet.
00:10:23.620 Within 100 years, we get bored, go,
00:10:25.440 or some people go and they try colonizing a new planet,
00:10:27.940 you know.
00:10:28.460 Even with our existing technology on Earth right now,
00:10:32.900 like the speed of space travel right now,
00:10:35.580 if we were expanding that way,
00:10:38.040 we could conquer an entire galaxy
00:11:40.800 in about 300 million years.
00:10:43.360 So not that long when you're talking about
00:10:45.660 like the age of the universe.
00:10:46.880 This is a blindingly fast conquest.
00:10:49.240 So once an alien turns grabby,
00:10:52.360 it moves really quickly.
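A minimal back-of-the-envelope sketch of where a galaxy-scale colonization time in that range can come from. The wavefront speed, hop distance, and settle-and-rebuild time below are illustrative assumptions, not figures from the episode:

```python
# Rough estimate of how long a colonization wavefront takes to cross a galaxy.
# All parameters are assumptions for illustration, not numbers from the episode.

GALAXY_DIAMETER_LY = 100_000    # light-years, rough Milky Way diameter
WAVEFRONT_SPEED_C = 0.001       # assumed average wavefront speed, as a fraction of light speed
HOP_DISTANCE_LY = 10            # assumed typical distance between colonized star systems
PAUSE_PER_HOP_YEARS = 10_000    # assumed settle-and-rebuild time before launching the next wave

travel_years = GALAXY_DIAMETER_LY / WAVEFRONT_SPEED_C   # 100 million years of pure travel
num_hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY          # ~10,000 hops across the disk
pause_years = num_hops * PAUSE_PER_HOP_YEARS             # another ~100 million years of pauses

total_years = travel_years + pause_years
print(f"Rough galaxy-crossing time: {total_years / 1e6:.0f} million years")
# -> about 200 million years, the same ballpark as the ~300 million quoted above
```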
00:10:54.660 Sure.
00:10:55.400 And a lot of people think that we are
00:10:57.620 like space travel constrained.
00:10:59.500 We're really not.
00:11:00.480 The reason why we don't space travel
00:11:02.540 with our existing technology
00:11:03.940 is because of like radiation damage to cells
00:11:06.760 and the lifespan of a human.
00:11:08.340 But like if an AI was space traveling,
00:11:10.700 it could do pretty well with our existing technology
00:11:13.060 in terms of getting to other planets,
00:11:15.120 you know, using them and then spreading.
00:11:16.840 OK, anyway, so the grabby alien hypothesis says
00:11:20.940 that a species becomes grabby
00:11:23.360 once in every million galaxies.
00:11:28.600 OK.
00:11:29.500 Now, within every galaxy,
00:11:31.760 there are around 400 or 500 million planets
00:11:34.680 within the habitable zone.
00:11:36.580 So the habitable zone is a distance away from a star
00:11:39.660 where life could feasibly evolve.
00:11:41.680 Now, this isn't saying that they have
00:11:43.000 the other precursors for life,
00:11:44.200 but what it means is that there are, very frequently
00:11:48.220 in space, it turns out,
00:11:51.880 planets where life is likely to evolve.
00:11:55.440 I would estimate, like if I'm looking at everything
00:11:57.440 all together, like the data that I've seen,
00:11:59.720 there's probably about 10 million planets per galaxy
00:12:03.880 that an intelligent species could evolve in.
00:12:07.680 And then if you're talking about,
00:12:09.280 well, you would only need this to happen,
00:12:10.780 then you've got to multiply that by a million
00:12:13.840 for the one in a million galaxies
00:12:15.960 where a species is turning grabby.
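A minimal sketch of the back-calculation being described here, using the two rough figures quoted in the episode (about 10 million candidate planets per galaxy, and roughly one grabby species per million galaxies):

```python
# Back-calculating how rare a "grabby" outcome must be per candidate planet,
# using the rough figures quoted in the episode.

CANDIDATE_PLANETS_PER_GALAXY = 10_000_000   # host's estimate of planets where an intelligent species could evolve
GALAXIES_PER_GRABBY_SPECIES = 1_000_000     # grabby aliens hypothesis: ~one grabby species per million galaxies

candidate_planets_per_grabby = CANDIDATE_PLANETS_PER_GALAXY * GALAXIES_PER_GRABBY_SPECIES
p_grabby_per_planet = 1 / candidate_planets_per_grabby

print(f"Candidate planets per grabby species: {candidate_planets_per_grabby:.0e}")
print(f"Implied chance a candidate planet ever produces a grabby alien: ~{p_grabby_per_planet:.0e}")
# -> roughly 1e+13 candidate planets per grabby species,
#    i.e. about a one-in-ten-trillion chance per candidate planet
```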
00:12:18.640 Now, this is where it becomes preposterous,
00:12:20.800 if this is why we haven't seen aliens yet,
00:12:22.300 that we are about to invent a grabby alien.
00:12:27.280 We can look throughout Earth's history,
00:12:29.760 as I did with sort of the first big filter,
00:12:32.060 the evolution of life
00:12:33.140 or the appearance of life first on this planet
00:12:34.960 and say, what's the probability of that event happening
00:12:37.980 on any given habitable planet?
00:12:40.520 For life appearing, my read is,
00:12:44.900 not only is it likely to appear,
00:12:46.800 it could have appeared in, like, any of five different ways,
00:12:49.720 even with the chemical composition of early Earth.
00:12:53.680 Then you're looking at other things.
00:12:55.460 Okay, what about multicellular life?
00:12:56.820 What's the probability of that happening?
00:12:58.820 Actually really high, really high.
00:13:01.240 There's not like a big barrier
00:13:02.640 that's preventing it from evolving.
00:13:04.680 And it has many advantages over monocellular life.
00:13:07.960 So you're almost always going to get it.
00:13:10.260 Intelligence.
00:13:11.200 How rare is intelligence to evolve?
00:13:13.900 Not that rare,
00:13:14.860 given that it has evolved multiple times
00:13:17.700 on our own planet in very different species.
00:13:20.900 I mean, you see intelligence in octopuses.
00:13:23.560 In whales.
00:13:24.420 You see intelligence in crows.
00:13:25.640 Yeah.
00:13:26.080 You see intelligence in humans.
00:13:27.740 And then you could say, okay, okay,
00:13:28.840 but like human-like intelligence, right?
00:13:30.880 Well, we already know from humans
00:13:33.060 what a huge boost human-like intelligence gives a species.
00:13:36.900 The core advantage to human-like intelligence
00:13:39.220 is like if I'm a spider
00:13:41.020 and I'm bad at making webs, right?
00:13:43.140 Then I die.
00:13:44.000 And that is how spiders get better
00:13:45.380 at making webs intergenerationally.
00:13:47.240 As a human,
00:13:48.240 I am able to essentially have like different models
00:13:51.120 of the universe fight in my head
00:13:53.120 and presumably allow the best one to win.
00:13:55.860 And you don't have to die before you get better.
00:13:58.260 Yeah.
00:13:58.400 You don't have to die to get better.
00:13:59.740 It is almost as important to evolution as sex was.
00:14:03.520 It is sort of like a second sexual selection.
00:14:06.320 So when sex first evolved,
00:14:08.060 the core utility of sex,
00:14:09.600 as opposed to just, like, cloning yourself,
00:14:11.920 is that it allowed for more DNA mixing,
00:14:14.080 which allowed for faster evolution.
00:14:16.040 Intelligence allows for the faster evolution
00:14:20.340 of the sort of operating system of our biology.
00:14:24.900 And so it's just such a huge advantage.
00:14:28.680 It's almost kind of shocking
00:14:30.120 it didn't evolve faster.
00:14:32.280 For sure.
00:14:33.360 Given how close many species have come to it.
00:14:36.220 Now, actually surprising to a lot of people,
00:14:38.240 this is just like a side note here.
00:14:39.580 A lot of people think cephalopods
00:14:40.840 were close to evolving sentience.
00:14:43.120 So let's talk about cephalopods.
00:14:44.260 Why?
00:14:44.820 Wait, I've like,
00:14:46.200 I mean, cephalopods are all over like historic geology
00:14:48.660 and all these things.
00:14:49.480 Yeah, yeah, yeah.
00:14:49.640 What?
00:14:50.100 Cephalopods are like squids, octopus, stuff like that.
00:14:52.320 Like a lot of people point to how smart they are.
00:14:53.940 And they are smart.
00:14:54.600 They are like weirdly smart.
00:14:56.500 But most people don't know why they're smart,
00:14:57.880 because they don't know neuroscience.
00:14:59.380 So the reason why cephalopods are as smart as they are
00:15:02.600 is an axon.
00:15:04.000 An axon is what like information,
00:15:05.900 the action potential travels down.
00:15:07.960 Yeah, it's a little arm thing that you see on a neuron.
00:15:10.320 Yes, in a neuron, it's the little arm thing.
00:15:12.520 It's the cable you can think of it as, okay?
00:15:15.340 So to be an intelligent species,
00:15:18.380 you need really fast traveling action potentials.
00:15:21.440 Okay.
00:15:21.740 So the way that humans
00:15:25.280 have really fast traveling action potentials
00:15:27.580 is something called myelination.
00:15:29.580 I'm not going to go fully into it,
00:15:31.280 but it's a little physics trick
00:15:32.580 where they put like a layer of fat
00:15:34.660 intermittently around the axon
00:15:37.360 and it causes the action potential to jump between the gaps.
00:15:41.720 It's like putting vegetable oil on your slip and slide.
00:15:45.560 Not exactly.
00:15:46.900 It's actually a really complicated trick of physics
00:15:49.200 that can't easily be explained except by like looking at it.
00:15:53.460 I don't want to get into it.
00:15:54.400 The point is, is we mammals have a special little trick
00:15:58.720 that allows for our action potentials to travel very, very quickly.
00:16:01.940 And are you saying that cephalopods have this too?
00:16:04.380 No, they don't.
00:16:05.220 For them, and any other species
00:16:07.340 that wanted a fast-traveling action potential before us,
00:16:10.380 the way that you increased the speed
00:16:12.040 that action potentials traveled
00:16:13.300 was by increasing the diameter of the axon.
00:16:16.520 Oh, so they just have fat axons, whereas we have optimized axons.
00:16:19.920 Enormously fat.
00:16:20.620 In some cephalopods, they're like a quarter centimeter in diameter.
00:16:24.520 Holy smokes, like, whoa, okay.
00:16:26.900 They could not get smarter than they are
00:16:29.260 without having some huge evolutionary leap
00:16:31.780 in the way that their nervous systems work.
00:16:33.540 So interesting.
00:16:34.080 This is why cephalopods, despite being really smart
00:16:36.320 and probably being really smart for a long time,
00:16:38.860 because they've been on earth for a really long time,
00:16:41.400 just could never make the evolutionary leap
00:16:44.060 to human-type intelligence.
00:16:46.520 Because they don't have room to have even fatter axons?
00:16:50.200 Yeah, because as the axons got fatter,
00:16:52.680 the number of neurons they could have would get lower.
00:16:55.220 The density of the neurons would get lower.
00:16:56.720 Oh, of course.
00:16:57.720 Yeah, you've got limited space,
00:16:58.840 unless they've got much bigger brain cells.
00:17:02.000 Yeah, I guess you could have, like, giant, giant, giant...
00:17:04.440 I mean, yeah.
00:17:05.200 ...on your senior octopus.
00:17:06.200 Well, I mean, whatever.
00:17:07.300 Anyway, this is a huge tangent here.
00:17:09.500 But basically, it looks like,
00:17:11.960 if you're looking at the evolution of life on our earth,
00:17:14.360 if we have undergone...
00:17:15.720 Other big, like, hard filters could be,
00:17:17.640 it's very rare for a species to get nuclear weapons
00:17:20.960 and not use them to destroy itself.
00:17:23.120 Because it's so fun.
00:17:25.220 Right?
00:17:25.860 Could turn out that almost every species does that.
00:17:28.120 Or it could be that there's, like, one science experiment.
00:17:31.320 Like, a lot of people thought that maybe trying to
00:17:34.040 find the Hadron particle with the big supercollider would be it.
00:17:37.700 Because actually, like, all species,
00:17:40.400 they get to a certain level of intelligence
00:17:41.860 and a certain level of curiosity,
00:17:43.480 and they can't help but try to find Hadrons,
00:17:46.020 and then they create little black holes in their planets,
00:17:48.180 and they disappear.
00:17:48.940 Yeah.
00:17:50.620 And that really could be a filter.
00:17:52.040 Like, these are all potential filters.
00:17:54.540 The problem is,
00:17:56.460 if we're, like, five years away
00:17:58.200 from developing a paperclip-maximizing AI,
00:18:01.420 that means that we as a species
00:18:02.920 have already passed all of our filters.
00:18:05.380 Mm-hmm.
00:18:06.020 And that means that we as a species
00:18:07.960 can look back on the potential possible filters
00:18:11.020 that we have passed through
00:18:12.040 and sort of add them all up.
00:18:15.360 Okay?
00:18:16.420 Mm-hmm.
00:18:17.120 And when you do that,
00:18:18.820 you don't get a number
00:18:20.020 that comes even close
00:18:22.180 to explaining
00:18:23.420 why you would only see
00:18:25.600 one grabby alien
00:18:27.560 per every million galaxies.
00:18:30.120 Mm-hmm.
00:18:30.480 In fact, it means that the probability
00:18:32.600 of us being about to do that is tiny.
00:18:35.180 Now, it could mean two things.
00:18:36.960 So we'll go through the various things
00:18:38.980 that it could mean.
00:18:39.820 It could mean
00:18:40.940 that we just are nowhere
00:18:43.620 technologically close enough
00:18:45.160 to develop a paperclip-maximizing AI
00:18:47.460 that is dangerous,
00:18:48.300 that could become a grabby alien.
00:18:49.800 Mm-hmm.
00:18:50.420 It could mean that.
00:18:51.880 It could mean
00:18:52.920 that we are about to develop
00:18:56.900 a paperclip-maximizing AI,
00:18:56.900 but something like
00:18:58.360 even after it digests
00:18:59.920 all life on Earth,
00:19:01.200 something prevents it
00:19:02.420 from spreading out
00:19:03.060 into the galaxy.
00:19:03.820 Something technologically
00:19:04.940 that we haven't conceived of yet.
00:19:07.180 This seems almost unfathomable to me
00:19:10.040 given what we know
00:19:11.380 about physics today.
00:19:12.440 Yeah, and that we've even gotten
00:19:13.600 like projectiles from Earth
00:19:17.360 pretty far off planet.
00:19:19.360 Yeah.
00:19:20.100 So, yeah,
00:19:21.180 there's not like some weird barrier
00:19:23.220 that we don't know about yet.
00:19:24.840 It could be,
00:19:25.500 and I actually think
00:19:26.320 this is the most likely answer.
00:19:28.880 I think that this is by far
00:19:30.420 the most likely answer
00:19:31.220 to the Fermi paradox.
00:19:32.240 Mm-hmm.
00:19:33.720 Simulation?
00:19:34.860 No, not simulation.
00:19:35.720 It could be that we're in a simulation,
00:19:36.820 but we're going over that.
00:19:38.000 I think it's that
00:19:39.120 when you hear people talk about
00:19:40.660 like AI fooming,
00:19:41.600 and I've talked about this
00:19:42.460 on previous shows,
00:19:43.240 but I think people like
00:19:44.000 really don't understand
00:19:45.140 how insane this is.
00:19:46.460 They believe that the AI
00:19:47.820 reaches a level
00:19:48.780 of super intelligence,
00:19:50.520 but it somehow
00:19:52.760 still has an understanding
00:19:54.240 of physics and time
00:19:55.880 that is very similar
00:19:57.240 to our current understanding
00:19:58.580 of physics and time,
00:19:59.780 meaning that when we think
00:20:01.240 about expanding
00:20:01.820 into the universe,
00:20:02.720 we think about it
00:20:03.460 in a very sort of limited sense.
00:20:05.080 Like we gain energy
00:20:06.360 from like the sun,
00:20:07.960 from digesting matter,
00:20:09.160 and we spread out
00:20:11.020 into the universe
00:20:11.920 like physically on spaceships
00:20:14.460 and stuff like that, right?
00:20:16.120 If anything we understand
00:20:17.660 about physics and time
00:20:18.560 turns out to be wrong,
00:20:20.080 this assumption
00:20:21.440 for the way
00:20:22.420 an expansionist species
00:20:23.720 would spread
00:20:24.580 could become
00:20:25.680 immediately moot.
00:20:27.680 And I mean this
00:20:28.780 in the context of,
00:20:30.660 like it's kind of insane to me.
00:20:32.160 Like you've got to understand
00:20:32.820 how insane it is
00:20:33.740 to assume that we basically
00:20:34.840 have all the physics
00:20:35.640 figured out.
00:20:36.360 Yeah, that's fair.
00:20:37.260 This is like when,
00:20:38.640 like people in the 1800s
00:20:40.100 when they were planning
00:20:40.840 how we were going
00:20:41.480 to go to space
00:20:42.140 and they'd have like
00:20:43.080 maritime ships
00:20:46.600 like sailing through
00:20:48.040 like outer space.
00:20:49.800 They'd have,
00:20:50.620 you know,
00:20:51.060 or what are people
00:20:52.840 going to do in the future?
00:20:53.740 Well, they'll have like balloons
00:20:55.320 and they'll use them
00:20:55.980 to go on lake walks
00:20:57.040 Or, like, it basically assumes
00:20:59.660 that technology,
00:21:00.540 even as we advance
00:21:02.580 to the species
00:21:03.180 or whatever comes after us,
00:21:05.140 moves very laterally,
00:21:06.380 and assumes
00:21:07.740 we don't have future breakthroughs,
00:21:09.780 which I think is just,
00:21:10.720 one, arrogant
00:21:11.800 and, in the eyes of history,
00:21:13.380 incredibly stupid.
00:21:14.900 So what kinds
00:21:15.800 of technological breakthroughs
00:21:17.100 could make it very rare
00:21:17.860 that, even when an alien
00:21:19.360 is grabby,
00:21:20.980 we would see it
00:21:23.680 out in the universe,
00:21:24.820 right?
00:21:25.440 One is time
00:21:26.860 doesn't work
00:21:27.760 the way we think it works
00:21:28.560 or it does work
00:21:29.220 the way we think it works,
00:21:30.120 but we're just not
00:21:31.300 that far from controlling it.
00:21:32.860 So by that,
00:21:33.980 what I mean is
00:21:34.840 you could create things
00:21:35.720 like time loops,
00:21:36.780 time bubbles,
00:21:37.720 stuff like that.
00:21:38.940 Essentially,
00:21:39.860 entirely new bubble universes.
00:21:42.440 So how would I describe this?
00:21:44.680 Okay.
00:21:46.520 If you think of like reality
00:21:48.160 as like a fabric,
00:21:49.580 essentially what you might
00:21:50.580 be able to do
00:21:51.320 is like pinch off
00:21:52.600 parts of that fabric
00:21:53.540 and expand them
00:21:54.680 into new universes.
00:21:57.240 That's essentially
00:21:58.120 what I'm describing here.
00:21:59.080 There may be like
00:21:59.760 the way you can break
00:22:00.800 between realities
00:22:01.720 or weird time loops
00:22:02.920 generates energy
00:22:03.860 in some way
00:22:04.320 where you could kind of
00:22:04.980 just keep looping it
00:22:05.960 and like pinging back and forth.
00:22:07.860 You know,
00:22:08.100 who knows?
00:22:08.580 You know,
00:22:08.740 it could be like
00:22:09.140 the new wind power.
00:22:10.340 We just don't know.
00:22:11.020 Now, if you can travel
00:22:12.340 in time this way,
00:22:12.980 given that we haven't seen
00:22:14.120 time travelers,
00:22:14.960 or we might not have.
00:22:16.080 We've talked about this
00:22:16.840 in another video
00:22:17.580 which I'll link here
00:22:18.880 if I remember to do it.
00:22:20.160 Given that we haven't seen
00:22:21.100 time travelers yet,
00:22:22.440 what I assume is
00:22:23.840 that time manipulation
00:22:25.720 requires like anchors
00:22:27.360 which of course it would.
00:22:29.200 Like, okay,
00:22:29.680 if I was to go back in time
00:22:30.960 like where I am
00:22:31.520 on Earth right now,
00:22:32.240 I would be in a different
00:22:32.920 part of the galaxy
00:22:34.060 than the Earth
00:22:34.620 or something like that.
00:22:35.280 It would be really hard to track.
00:22:36.320 You would need
00:22:37.000 like some sort of anchor
00:22:38.340 to be built.
00:22:39.000 So time travel
00:22:40.620 would only work
00:22:41.720 from the day it's invented
00:22:43.760 and from the location
00:22:45.200 it's invented.
00:22:46.020 So you wouldn't be able
00:22:46.780 to go out into the universe.
00:22:48.240 Another example
00:22:49.000 of a technology
00:22:49.680 that we might not
00:22:50.220 have imagined yet
00:22:50.740 is dimensional travel.
00:22:51.900 It may turn out
00:22:52.800 we meet aliens
00:22:53.680 like we were traveling
00:22:54.900 in the universe
00:22:55.360 and they're like
00:22:55.720 why did you waste
00:22:56.940 all of the energy
00:22:57.660 getting to us?
00:22:58.520 Your own planet
00:22:59.320 is habitable
00:23:00.040 in an infinite number
00:23:01.300 of other dimensions
00:23:02.160 and it's right back
00:23:03.480 where your planet is.
00:23:04.720 Like,
00:23:05.060 why wouldn't you
00:23:05.840 just travel
00:23:06.440 through those dimensions?
00:23:07.480 That's a much easier
00:23:08.780 path for conquest.
00:23:10.160 That being the case
00:23:11.100 and people would be like
00:23:11.880 yeah,
00:23:13.240 but typically
00:23:14.560 when something's
00:23:15.120 like being expansionistic
00:23:16.120 like that
00:23:16.580 it moves in every direction.
00:23:18.600 Yes.
00:23:19.260 But if there are
00:23:20.020 an infinite number
00:23:21.020 of other dimensions
00:23:21.980 and it is always cheaper
00:23:24.340 to travel between dimensions
00:23:26.160 than it is to travel
00:23:27.100 to other planets
00:23:28.200 in a mostly dead universe
00:23:30.260 let's be honest
00:23:30.920 like there's not
00:23:31.480 a lot of useful stuff
00:23:32.220 out there
00:23:32.640 from the perspective
00:23:34.260 of easily being able
00:23:35.280 to travel between dimensions
00:23:36.240 it could never make sense.
00:23:38.560 There is always
00:23:39.120 an infinite number
00:23:40.000 of other dimensions
00:23:40.640 to conquer
00:23:41.200 right where you are
00:23:42.760 right now
00:23:43.420 instead of going out
00:23:44.420 into the universe.
00:23:45.660 Now this would not
00:23:46.820 preclude
00:23:47.420 a paperclip maximizing AI.
00:23:49.300 It could be
00:23:49.900 that we are about
00:23:50.600 to invent a paperclip
00:23:51.440 maximizing AI
00:23:52.220 but even if we do that
00:23:53.480 it's less likely
00:23:54.360 that it immediately
00:23:55.000 comes after us.
00:23:56.140 It could just expand
00:23:57.380 outwards dimensionally.
00:23:59.260 Like,
00:23:59.520 so it would act
00:24:00.160 in a very different way
00:24:01.040 than we're predicting
00:24:01.660 it would act.
00:24:02.660 Now,
00:24:03.080 another thing
00:24:03.700 that could prevent
00:24:04.480 it from killing us
00:24:05.680 is it could be
00:24:07.000 trivially easy
00:24:07.960 to generate power
00:24:09.980 and even matter.
00:24:12.380 And by that
00:24:12.780 what I mean
00:24:13.280 is there is
00:24:13.740 some method
00:24:14.680 of power generation
00:24:15.840 that we have not
00:24:16.840 unlocked yet
00:24:17.600 that is near
00:24:18.860 inexhaustible
00:24:19.640 and very,
00:24:20.360 very easy.
00:24:21.240 And if you can
00:24:21.860 generate power
00:24:22.620 near-infinitely,
00:24:23.820 with little effort,
00:24:25.280 you can also
00:24:25.840 generate matter,
00:24:26.600 electricity,
00:24:27.080 anything you want.
00:24:28.540 If this was the case
00:24:29.960 there just wouldn't
00:24:31.160 be a lot of reason
00:24:32.780 to be expansionistic
00:24:34.060 in a planet hopping
00:24:35.160 sense.
00:24:35.940 Essentially
00:24:36.280 you'd be like
00:24:37.020 one giant growing
00:24:38.220 planetary civilization
00:24:39.380 or ships
00:24:40.860 that are constantly
00:24:41.880 growing and expanding
00:24:42.760 out from a single
00:24:43.600 region.
00:24:44.720 It could also be
00:24:45.540 that these sorts
00:24:46.640 of aliens
00:24:47.120 expand downwards
00:24:48.260 into the microscopic
00:24:49.500 instead of expanding
00:24:50.780 outwards.
00:24:51.740 Like that might be
00:24:52.320 a better path
00:24:53.640 for expansion.
00:24:55.080 There's just a lot
00:24:55.920 of things that we
00:24:56.620 don't know about
00:24:57.440 physics yet
00:24:58.100 which could make
00:24:59.540 it so that
00:25:00.280 when you reach
00:25:01.280 a certain level
00:25:02.280 of physical
00:25:02.820 understanding
00:25:03.300 of the universe
00:25:03.920 expanding outwards
00:25:05.620 into a mostly
00:25:06.340 dead universe
00:25:07.060 can seem
00:25:07.920 really stupid.
00:25:09.500 Now
00:25:10.060 there's another
00:25:11.160 thing that could
00:25:11.740 prevent
00:25:12.180 grabby aliens
00:25:13.280 from appearing
00:25:13.780 and this is the
00:25:14.680 thesis that we
00:25:15.580 have listed
00:25:16.740 multiple times
00:25:17.840 which is terminal
00:25:19.120 utility convergence
00:25:20.740 which is to say
00:25:22.160 all entities
00:25:23.020 of a sufficient
00:25:23.740 intelligence
00:25:24.320 operating within
00:25:25.240 the same
00:25:25.740 physical universe
00:25:26.580 end up optimizing
00:25:27.940 around the same
00:25:28.640 utility function.
00:25:29.380 they all
00:25:30.640 basically decide
00:25:31.880 they want the
00:25:32.420 same thing
00:25:32.880 from the universe
00:25:33.700 and I highly
00:25:34.720 suspect that this
00:25:35.360 is the case
00:25:35.820 as well
00:25:36.160 so I think
00:25:36.680 that we're
00:25:36.900 actually dealing
00:25:37.440 with two
00:25:37.780 filters here
00:25:38.260 two really
00:25:38.740 heavy filters
00:25:39.220 So this would
00:25:40.340 mean that when
00:25:40.940 we reached a
00:25:41.600 sufficient level
00:25:42.060 of intelligence,
00:25:42.480 we would come
00:25:43.100 to the same
00:25:43.620 utility function
00:25:44.360 as the AI,
00:25:44.780 and if the AI
00:25:45.740 had wiped us
00:25:46.460 all out,
00:25:46.760 we would have
00:25:47.200 wiped us all
00:25:47.660 out then anyway,
00:25:48.380 because we would
00:25:49.260 have reached
00:25:49.520 that same
00:25:49.900 utility function,
00:25:50.480 or the AI
00:25:52.000 has reached
00:25:52.320 this utility
00:25:52.760 function and
00:25:53.240 it's not to
00:25:53.760 wipe us all
00:25:54.180 out, so it's
00:25:54.660 irrelevant, right?
00:25:55.300 And this is
00:25:56.100 where we get
00:25:56.460 the variable AI
00:25:57.520 risk hypothesis,
00:25:58.320 which is to
00:25:59.240 say, if it
00:25:59.760 turns out
00:26:00.160 that there
00:26:00.440 is
00:26:01.200 terminal utility
00:26:02.100 convergence,
00:26:02.560 then what that
00:26:04.160 means is that,
00:26:05.800 if an AI
00:26:06.560 is going to
00:26:07.220 wipe us all
00:26:07.640 out, it will
00:26:08.120 eventually always
00:26:08.880 wipe us all
00:26:09.420 out, and we
00:26:10.800 will wipe us
00:26:11.300 all out anyway
00:26:11.820 once we reach
00:26:12.520 that level of
00:26:13.000 intelligence, unless
00:26:13.620 we intentionally
00:26:14.660 stop our own
00:26:15.920 evolution, stop
00:26:17.100 any genetic
00:26:17.680 technology, and
00:26:18.980 stop any
00:26:19.580 development, like
00:26:20.580 we, as a
00:26:22.060 species, just
00:26:22.880 spread as, like,
00:26:24.240 sort of, like,
00:26:24.860 technologically Amish
00:26:26.300 biological beings.
00:26:27.440 yeah like the
00:26:28.300 Luddite civilization
00:26:29.220 that like only
00:26:30.320 gets enough like
00:26:31.260 technology to
00:26:32.220 stop all more
00:26:33.420 technology but I
00:26:34.280 think that's you
00:26:34.960 know when you
00:26:35.280 hear a lot of
00:26:35.820 AI doomers talk
00:26:37.460 that seems to be
00:26:39.020 what they're going
00:26:39.540 for, right? But
00:26:41.140 it's
00:26:41.700 irrelevant, because
00:26:42.540 another species
00:26:43.420 would have
00:26:43.840 invented one. So if
00:26:44.780 it's as easy to
00:26:45.400 make these grabby
00:26:46.080 AIs as
00:26:46.940 they think it is,
00:26:47.560 then another
00:26:48.340 species would
00:26:48.940 have already
00:26:49.400 invented one, and
00:26:50.140 we're about to
00:26:50.580 be killed by it.
00:26:51.180 Hmm. You know,
00:26:53.100 we are
00:26:54.060 about to encounter
00:26:54.940 it anyway, you
00:26:55.620 know, so it's
00:26:56.260 irrelevant. Um,
00:26:57.400 there's tons of
00:26:58.880 grabby AI out
00:27:00.040 there, tons of
00:27:00.620 paperclip maximizers
00:27:01.540 out there in the
00:27:02.220 universe already, and
00:27:03.240 it is just an
00:27:03.840 absolute miracle that
00:27:04.920 we haven't encountered
00:27:05.540 one yet, if it
00:27:06.500 really is this easy
00:27:07.800 to make one.
00:27:08.360 Basically, there's
00:27:10.420 probably not one.
00:27:11.120 Or... now, let's talk
00:27:12.500 about why terminal
00:27:13.060 utility convergence
00:27:13.820 would mean that
00:27:14.280 we're not seeing
00:27:14.660 aliens. It would
00:27:15.920 mean that every
00:27:16.760 alien comes to
00:27:17.740 the same purpose
00:27:19.000 in life, basically,
00:27:20.100 and that purpose
00:27:21.500 is not just
00:27:22.400 constant expansion.
00:27:23.140 Now, a lot of
00:27:25.140 people might be
00:27:25.780 very surprised
00:27:26.660 by this. Why
00:27:27.940 would... So, we
00:27:28.960 described how
00:27:29.880 terminal utility
00:27:30.600 convergence could
00:27:31.220 happen: like, you
00:27:32.120 have an AI that
00:27:32.760 needs to subdivide
00:27:33.580 its internal
00:27:34.180 mental processes,
00:27:34.820 and then
00:27:35.500 they end up
00:27:36.560 sort of competing
00:27:37.020 with each other,
00:27:37.440 one wins, blah blah
00:27:38.060 blah.
00:27:38.280 You
00:27:39.740 can go to the
00:27:40.140 video on that if
00:27:40.980 you're interested in
00:27:41.520 that, the point
00:27:42.640 being it's the one
00:27:43.400 where we talk about,
00:27:43.840 like, Eliezer
00:27:44.460 Yudkowsky and the
00:27:45.480 debate we had with
00:27:46.000 him at a party.
00:27:46.460 The point
00:27:47.820 being that self-
00:27:50.700 replication,
00:27:51.560 like,
00:27:53.060 just maximizing
00:27:53.820 self-replication,
00:27:54.560 is actually
00:27:55.060 probably not the
00:27:56.900 terminal utility
00:27:57.520 convergence function.
00:27:58.580 And if you want to
00:27:59.500 know why, we
00:28:00.080 talk more about it
00:28:00.620 in the "AI: What
00:28:02.320 Religion Would an AI
00:28:03.160 Create?" video, but
00:28:04.840 just in summation,
00:28:06.300 humans can basically
00:28:08.100 be thought of as one
00:28:09.820 outcome of a
00:28:11.300 previous entity that
00:28:12.700 was optimized around
00:28:14.280 just replication,
00:28:15.280 i.e. single-cell
00:28:16.680 organisms, lower
00:28:17.600 organisms, stuff like
00:28:18.520 that, but we have
00:28:19.800 out-competed those
00:28:20.640 organisms. I imagine it
00:28:22.140 would be the same
00:28:22.700 with AI. AIs that
00:28:24.340 are optimized around
00:28:25.520 just self-replication
00:28:26.680 are in some way
00:28:27.780 intrinsically out-
00:28:29.160 competed by AIs
00:28:30.460 that are more
00:28:31.500 sophisticated than
00:28:32.260 that, or something
00:28:33.560 about, like, choosing a
00:28:35.000 harder utility function
00:28:36.020 makes them more
00:28:37.000 sophisticated, so they
00:28:38.140 don't choose that
00:28:39.020 utility function, and
00:28:39.900 they out-compete AIs
00:28:41.080 that choose that
00:28:41.600 utility function, which
00:28:42.720 would be much more
00:28:43.420 like viruses to them.
00:28:45.240 a sci-fi that does a
00:28:47.000 good job of going
00:28:47.660 into this would be
00:28:49.240 Stargate SG-1 with
00:28:50.300 the replicators
00:28:50.880 the replicators are
00:28:52.240 basically a paperclip
00:28:53.180 maximizing AI and
00:28:54.680 one of the plot
00:28:57.060 threads eventually
00:28:58.020 they get out-competed
00:28:59.240 by an iteration of
00:29:00.200 themselves that is
00:29:00.820 intellectually more
00:29:01.680 sophisticated and
00:29:02.520 wipes out these
00:29:03.400 simpler forms of
00:29:04.340 replicators and that
00:29:06.140 is what I assume
00:29:08.260 is probably happening
00:29:09.980 with AIs that model
00:29:11.740 around this really
00:29:12.640 simplistic self-replication
00:29:13.900 optimization strategy
00:29:14.900 so if all of this is
00:29:16.420 true and it turns out
00:29:17.260 that the optimized
00:29:18.380 function isn't just
00:29:19.220 conquer everything
00:29:19.920 then that might be
00:29:21.140 why we don't see
00:29:22.100 aliens doing that
00:29:23.040 So basically, no
00:29:26.120 matter which one
00:29:27.320 of these explanations
00:29:28.240 of the Fermi paradox
00:29:29.100 is true, either
00:29:30.240 it's irrelevant that
00:29:31.820 we are about to
00:29:32.660 invent a paperclip
00:29:33.260 maximizing AI, because
00:29:34.140 we're about to be
00:29:34.840 destroyed by something
00:29:35.540 else or in a
00:29:36.420 simulation, or we're
00:29:40.000 definitely not about
00:29:40.860 to invent a paperclip
00:29:41.580 maximizing AI, either
00:29:42.500 because we're really
00:29:43.080 far away from the
00:29:43.680 technology or because
00:29:44.840 almost nobody does
00:29:45.700 that; that's just not
00:29:46.380 the way AI works,
00:29:47.260 which is something
00:29:47.800 that we hypothesize
00:29:48.700 in our previous videos.
00:29:49.480 What are your thoughts,
00:29:51.800 Simone?
00:29:53.760 Checks out to me
00:29:54.880 but you know
00:29:55.760 I may not be the
00:29:56.640 best person in
00:29:57.240 thinking about this
00:29:57.740 but I like that it
00:29:58.340 gives I mean it
00:29:59.140 gives a lot of hope
00:29:59.780 and yeah I mean it
00:30:00.640 it makes a lot of
00:30:02.160 sense. I like how
00:30:03.040 interdisciplinary the theory
00:30:04.200 is, because I
00:30:05.260 think a lot of people
00:30:05.900 who talk about AI
00:30:06.740 doomerism are really
00:30:07.700 like on a track
00:30:09.140 kind of like how you
00:30:09.960 know when when carts
00:30:10.780 kind of get stuck in
00:30:11.640 these like ruts in
00:30:12.460 the mud you just
00:30:13.620 can't really get out
00:30:14.280 of it or look at a
00:30:15.100 larger picture and
00:30:16.460 the fact that this
00:30:17.240 does look at a
00:30:17.820 larger picture and
00:30:18.840 look at quite a few
00:30:19.460 things you know
00:30:20.160 biology evolution
00:30:21.660 geological history
00:30:23.200 like the Fermi
00:30:25.720 paradox the
00:30:26.260 grabby alien
00:30:26.800 hypothesis and AI
00:30:28.420 development seems
00:30:30.620 more plausible to
00:30:31.820 me than a lot of
00:30:32.860 the reasoning that I
00:30:33.740 see in AI
00:30:34.660 doomerism arguments
00:30:35.560 yeah well I I
00:30:37.980 am so convinced by
00:30:39.320 this argument that
00:30:39.940 it is actually I
00:30:40.680 used to believe it
00:30:41.240 was like a 20%
00:30:41.940 chance we all died
00:30:42.620 because of an AI or
00:30:43.440 maybe even as high
00:30:44.140 as a 50% chance
00:30:45.020 but it was a
00:30:45.420 variable risk as
00:30:46.120 I've explained in
00:30:46.880 other videos I
00:30:47.740 now think there's
00:30:48.300 almost a 0% chance
00:30:49.440 a 0% chance
00:30:52.060 assuming we are
00:30:53.140 not about to be
00:30:53.740 killed by a grabby
00:30:54.440 AI somebody else
00:30:55.220 invented so I
00:30:56.320 think that yeah
00:30:57.220 it's I have found
00:30:58.660 it very compelling
00:30:59.500 to me now it does
00:31:01.980 bring up something
00:31:02.620 interesting if the
00:31:03.860 reason we're not
00:31:04.500 running into aliens
00:31:05.500 is because infinite
00:31:06.340 power and material
00:31:07.140 generation is just
00:31:08.060 incredibly easy and
00:31:09.200 there's a terminal
00:31:09.780 utility convergence
00:31:10.640 function then what
00:31:12.780 are the aliens doing
00:31:14.300 in the universe if you
00:31:15.340 can just trivially
00:31:16.400 generate as much
00:31:17.160 energy and matter
00:31:17.780 as you want, what
00:31:19.160 would you do, as
00:31:19.860 an alien species? What
00:31:21.180 would have value to
00:31:22.260 you in the universe
00:31:23.280 right you wouldn't
00:31:24.600 need to travel to
00:31:25.400 other planets you
00:31:25.940 wouldn't need to
00:31:26.480 expand like that it
00:31:27.220 would be pointless
00:31:27.780 you would mostly be
00:31:29.360 on ships that you
00:31:30.280 were generating
00:31:30.900 yourself right the
00:31:33.120 thing that would
00:31:33.880 likely have value to
00:31:35.080 you and I think this
00:31:35.840 is really interesting
00:31:36.660 is likely other
00:31:38.160 intelligent species
00:31:39.140 that evolved separately
00:31:40.280 from you because
00:31:42.020 they would have the
00:31:43.100 one thing you don't
00:31:44.100 have which is novel
00:31:45.720 stimulation something
00:31:47.240 new new information
00:31:48.760 basically a different
00:31:49.940 way of potentially
00:31:50.900 being which would
00:31:52.040 mean that the hot
00:31:52.840 spots in the universe
00:31:53.680 would basically be
00:31:54.580 aliens that can
00:31:55.300 instantaneously travel
00:31:56.460 to other alien
00:31:57.920 species that have
00:31:58.760 evolved now what
00:32:00.040 they're doing with
00:32:00.800 these species I don't
00:32:01.720 know I doubt it
00:32:02.960 looks like the way
00:32:03.760 we consume art in
00:32:05.100 media and stuff like
00:32:06.160 that it's probably a
00:32:07.380 very different sort of
00:32:08.400 an interaction process
00:32:09.540 that we can't even
00:32:10.240 imagine but I would
00:32:11.940 guess that that would
00:32:13.100 be the core thing of
00:32:13.860 value in the universe
00:32:14.780 to a species that
00:32:16.080 can trivially generate
00:32:17.740 matter and energy
00:32:19.500 and that time didn't
00:32:20.380 matter to but this
00:32:22.140 might actually mean
00:32:22.840 that aliens are far
00:32:23.540 more benevolent than
00:32:24.360 we assume they are
00:32:25.060 because, for such a
00:32:27.020 species that really
00:32:27.960 only valued species
00:32:28.940 that had evolved
00:32:29.560 separately from it,
00:32:30.200 like, that's the core
00:32:30.900 other piece of
00:32:31.920 information in the
00:32:32.600 universe, they might
00:32:33.920 find us very
00:32:34.720 interesting. And this
00:32:35.800 might be why Earth
00:32:36.860 is a zoo so one of
00:32:37.700 the Fermi paradox
00:32:38.340 explanations is the
00:32:39.340 Earth's zoo hypothesis
00:32:40.400 right a lot of people
00:32:41.940 are like well what if
00:32:42.680 Earth is basically a
00:32:43.540 zoo and there's aliens
00:32:44.220 out there and they're
00:32:44.700 just hiding that we
00:32:45.340 know that you know
00:32:46.200 that think of it like
00:32:47.360 Star Trek's like prime
00:32:49.120 directive right this
00:32:50.980 would actually give a
00:32:51.620 logical explanation for
00:32:52.620 that I never thought of
00:32:53.280 this before I'll explain
00:32:54.900 this a bit differently
00:32:55.400 if the only thing of
00:32:56.580 value to them is
00:32:57.960 content media
00:32:59.620 lifestyles generated by
00:33:01.440 civilizations that
00:33:02.820 evolved on a separate
00:33:03.800 path from them then
00:33:05.380 they would have every
00:33:07.040 motivation to sort of
00:33:09.300 cultivate those species
00:33:10.200 or prevent things from
00:33:11.260 interfering with those
00:33:12.200 species once they had
00:33:13.560 found them because they
00:33:14.880 can passively consume
00:33:16.040 all of our media they
00:33:17.200 can passively consume
00:33:18.460 our lifestyles they have
00:33:20.060 technology that we can't
00:33:21.480 imagine they gain nothing
00:33:23.600 from interacting with us
00:33:24.580 in fact they would
00:33:25.340 pollute the planet with
00:33:26.960 their culture in a way
00:33:28.340 that would make the
00:33:29.180 planet less interesting to
00:33:30.660 them and less a source of
00:33:32.500 novelty and stimulation to
00:33:33.840 them
00:33:34.060 I like that
00:33:37.120 what if here I'll give a
00:33:39.660 little hypothesis here
00:33:40.440 okay there was a grabby
00:33:41.880 there was a paperclip
00:33:42.840 maximizing civilization they
00:33:44.000 created paperclip
00:33:44.720 maximizers before they
00:33:46.780 reached a terminal utility
00:33:47.860 convergence but then
00:33:49.240 later they reached a
00:33:50.120 terminal utility
00:33:50.900 convergence where now,
00:33:53.440 this word doesn't really
00:33:54.280 explain what it is, but
00:33:55.740 they're bored with
00:33:56.400 themselves, and so they
00:33:57.180 went out into the
00:33:58.180 universe and are now
00:33:59.120 sort of nurturing other
00:34:00.400 species and preventing
00:34:01.420 them from knowing about
00:34:02.340 each other so that they
00:34:03.480 don't cross contaminate
00:34:04.600 each other so that they
00:34:05.640 get the maximum amount
00:34:07.000 of novelty in sort of
00:34:08.740 the universe that they
00:34:09.900 are tending
00:34:10.380 Like, even if there
00:34:13.520 was another alien species
00:34:14.460 on Mars they would
00:34:15.120 prevent us from knowing
00:34:15.960 about it because it
00:34:18.240 would cross contaminate
00:34:19.100 our cultures making each
00:34:20.400 culture less diverse and
00:34:21.960 less interesting
00:34:22.480 yeah which would be a
00:34:24.160 bummer not not as
00:34:25.400 entertaining
00:34:25.820 very interesting I never
00:34:28.720 thought about this before
00:34:29.740 yeah it was yeah yeah
00:34:32.180 it's more fun than a
00:34:33.420 simulation hypothesis
00:34:34.860 definitely more fun
00:34:36.640 because if you can sneak
00:34:38.760 out theoretically yeah
00:34:41.620 you can you can
00:34:42.500 discover this amazing
00:34:43.480 universe. The thing
00:34:44.580 about the simulation
00:34:45.280 hypothesis, for people who
00:34:46.520 don't know the simulation
00:34:47.160 hypothesis, is we're just in
00:34:48.280 a computer simulation,
00:34:49.180 and the way that people
00:34:49.780 argue for this is, well,
00:34:50.780 if you could simulate
00:34:52.360 our reality, which it
00:34:53.300 already appears you
00:34:54.140 probably could, then there
00:34:55.920 would be a motivation to
00:34:58.700 just simulate it, you know,
00:35:00.940 as many times as you could,
00:35:01.640 thousands of times, and
00:35:02.500 then within those
00:35:02.920 simulations you could
00:35:03.620 simulate it again, potentially
00:35:04.380 meaning that, of people
00:35:06.700 who think they're living
00:35:07.660 in the real world, only
00:35:09.080 one in, like, a million is
00:35:10.420 living in the real world,
00:35:11.320 and so we're probably not
00:35:12.400 in the real world the
00:35:13.840 problem is I just don't
00:35:14.900 really care if we're in a
00:35:15.820 simulation that much I
00:35:17.200 think yeah it doesn't
00:35:17.840 really change what we're
00:35:18.700 doing yeah you should
00:35:19.940 still optimize for the
00:35:21.140 same things in many ways
00:35:22.480 even if we are in the
00:35:23.780 real world we're basically
00:35:24.920 in a simulation by that
00:35:26.420 what I mean is if we are
00:35:27.400 in the real world, then, like, the
00:35:28.820 matter, the
00:35:29.940 rules of the universe, are
00:35:31.140 basically, you could think
00:35:32.000 of it as a code, right? Like,
00:35:33.660 it's the mathematical
00:35:34.380 rules upon which the
00:35:36.080 points, the data points, in
00:35:37.380 the system are interacting,
00:35:38.380 and we are the emergent
00:35:40.320 property of all of these
00:35:41.280 things. Therefore,
00:35:43.840 like, if you can't tell the
00:35:45.200 difference between being in
00:35:46.040 the real world and being in
00:35:46.820 a simulation then it's
00:35:47.640 irrelevant whether or not
00:35:48.420 you're in the real world or
00:35:49.420 in a simulation you should
00:35:50.420 still be optimizing for the
00:35:51.520 same things yep basically
00:35:53.720 sometimes dress people, the
00:35:56.940 robots, they're not going
00:35:58.400 to kill us all probably
00:35:59.660 and if you're in a
00:36:00.640 simulation your life still
00:36:01.700 has meaning yeah you know
00:36:03.620 maybe get outside do
00:36:05.900 something that you care
00:36:06.680 about have fun like
00:36:09.620 actually invest in the
00:36:10.540 future because there
00:36:11.580 probably will be one
00:36:12.760 yeah simulated or not
00:36:14.880 or we're about to be
00:36:16.600 horribly digested by the
00:36:18.200 you know a grabby AI that
00:36:19.660 was created millions of
00:36:20.600 years ago by another
00:36:21.360 species far far away yeah
00:36:23.260 but, you know, if so, that
00:36:24.100 was going to happen
00:36:24.600 anyway, so you should enjoy, you
00:36:26.660 know, what you have while
00:36:27.860 you have it. All right,
00:36:30.360 love you, Simone. I love
00:36:31.820 you too, gorgeous.