Hard Mathematical Proof AI Won't Kill Us
Episode Stats
Words per Minute
184
Summary
In this episode, we discuss the Fermi Paradox, the Grabby Alien Hypothesis, and the idea that we are about to create a paperclip-maximizer AI that will end up fooming and killing us all.
Transcript
00:00:00.000
So basically, no matter which one of these explanations of the Fermi Paradox is true,
00:00:05.260
either it's irrelevant that we are about to invent a paperclip maximizing AI
00:00:09.200
because we're about to be destroyed by something else or we're in a simulation,
00:00:13.600
or we're definitely not about to invent a paperclip maximizing AI,
00:00:17.640
either because we're really far away from the technology or because almost nobody does that.
00:00:22.680
I am so convinced by this argument that it has actually changed my estimate.
00:00:25.640
I used to believe it was like a 20% chance we all died because of an AI
00:00:30.160
but it was a variable risk as I've explained in other videos.
00:00:35.000
Now I'd put it at a 0% chance, assuming we are not about to be killed by a grabby AI somebody else invented.
00:00:44.160
If the reason we're not running into aliens is because infinite power and material generation
00:00:48.320
is just incredibly easy and there's a terminal utility convergence function,
00:00:52.340
then what are the aliens doing in the universe?
00:01:03.260
This is a bit of a preamble for an already-filmed interview.
00:01:12.200
However, I didn't want to derail the interview too much by going into this theory,
00:01:19.800
because he is the person who invented the grabby aliens hypothesis as a solution to the Fermi Paradox.
00:01:25.740
I hadn't heard about grabby aliens before, so I'm glad we're doing this.
00:01:32.400
So we will use this episode to talk about the Fermi paradox,
00:01:39.140
and how the grabby alien hypothesis can be used by controlling one of its variables,
00:01:46.160
i.e. the assumption that we are about to invent a paperclip maximizer AI that ends up fooming and killing us all,
00:01:54.220
because that would, by definition, be a grabby alien.
00:01:56.820
If you collapse that variable within the equation to today,
00:02:00.940
then you can back-calculate the probability of creating a paperclip maximizing AI.
00:02:06.900
And, spoiler alert, the probability is almost zero.
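To make that back-calculation concrete, here is a minimal toy sketch of the kind of reasoning being gestured at. It is not the actual grabby-aliens math; the planet count and the assumption that an earlier grabby expansion would already have reached us are placeholders, not figures from the episode.

```python
# Toy illustration only: if spawning a grabby expansion were easy, many of the
# habitable planets that got a head start on Earth should already have produced
# one whose expansion front reached us. Seeing none bounds the per-planet
# probability p of spawning a grabby alien (a paperclip maximizer counts).

# Placeholder assumption: habitable planets old enough that a grabby expansion
# started there would be visible (or here) by now.
N_PLANETS_WITH_HEAD_START = 1e18

def max_per_planet_probability(n_planets: float, confidence: float = 0.95) -> float:
    """Largest p consistent with observing zero grabby expansions,
    i.e. solve (1 - p) ** n_planets >= 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_planets)

p_max = max_per_planet_probability(N_PLANETS_WITH_HEAD_START)
print(f"per-planet probability <= {p_max:.1e}")  # ~3.0e-18 with these toy numbers
```

The "collapse the variable to today" move is then just applying that same tiny bound to Earth right now.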
00:02:11.240
It basically means it is almost statistically impossible that we are about to create a paperclip maximizing AI,
00:02:22.880
unless there is something in the universe that would make it irrelevant whether or not we created a paperclip maximizing AI,
00:02:33.480
which also would make it irrelevant that we're about to create a paperclip maximizing AI,
00:02:37.320
or there is some filter on advanced life developing on a planet that we have already passed through
00:02:45.520
that we don't realize that we have passed through.
00:02:48.320
So those are the only ways that this isn't the case.
00:02:50.660
But let's go into it, because it is really easy.
00:02:53.500
I just realized that some definitions may help here.
00:02:56.080
We'll get into the grabby alien hypothesis in a second,
00:03:01.820
but a paperclip maximizer is the concept of an AI that is just trying to maximize some simplistic function.
00:03:09.020
So in the concept, as it's laid out as a paperclip maximizer,
00:03:12.640
it would just be told to make the maximum number of paperclips,
00:03:16.960
and it starts turning the Earth into paperclips.
00:03:20.740
Now, realistically, if we were to have a paperclip maximizing AI,
00:03:23.880
it would probably look something more like, you know, an AI that's been given some image to process.
00:03:28.740
And it just keeps processing the image to, like, an insane degree,
00:03:32.800
because it was never told when to stop processing the image,
00:03:35.560
and it just turns the whole world into energy to process an image.
00:03:40.280
This concept is important to address because there are many people who at least pass themselves off as intelligent,
00:03:47.020
who believe that we are about to create a paperclip maximizing AI,
00:03:51.280
and that this AI is about to, as they call it, foom, which I mentioned earlier,
00:03:55.060
which just means rise in intelligence astronomically quickly,
00:03:58.400
like doubling its intelligence every 15 minutes or something,
00:04:03.160
and after that begin to consume all matter in the universe.
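For scale, doubling every 15 minutes (the speaker's illustrative number, not a measured one) works out to 4 doublings an hour, or 96 per day:

$$2^{4 \times 24} = 2^{96} \approx 7.9 \times 10^{28}$$

so on that toy schedule the system would be roughly $10^{29}$ times more capable after a single day, which is why foom is described as astronomically fast.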
00:04:06.320
The Fermi Paradox is basically the question of: if the universe is this big and this old, where are all the aliens?
00:04:15.180
You know, like, we kind of should have seen them already.
00:04:22.500
and I would say that anyone's metaphysical understanding of reality
00:04:26.780
that doesn't take the Fermi Paradox into account is deeply flawed.
00:04:31.820
Because based on our understanding of physics today,
00:04:38.800
our understanding of what our own species intends to do in the next 1,000, 2,000 years,
00:04:44.760
our understanding of the filters our species has gone through,
00:04:49.320
so we know how hard it was for life to evolve on this planet.
00:04:52.840
And the answer is: not very hard, from what we can see.
00:05:03.280
it's one of, like, my areas of, like, deep nerddom,
00:05:05.920
theories for how the first life could have evolved on Earth.
00:05:14.380
which is that life evolved on Earth almost as soon as it could.
00:05:18.600
Now, a person may ask: isn't this relevant?
00:05:21.180
That would seem to indicate that it is very easy for life to evolve on a planet.
00:05:26.620
Well, and here we have to get into the grabby aliens theory.
00:05:30.480
You're dealing with the anthropic principle here, okay?
00:05:35.660
Basically what it means is if you're asking, like,
00:05:39.060
look, it looks like Earth is almost a perfect planet for human life to evolve on.
00:05:43.900
Like, it had liquid water and everything like that, right?
00:05:46.620
Except human life wouldn't have evolved on a planet without those things.
00:05:50.960
A different kind of life would have evolved without those things.
00:05:58.540
If life on Earth didn't evolve almost as soon as it could,
00:06:06.860
and some other alien would have wiped out whatever life was here and colonized this planet.
00:06:09.800
That is what the grabby alien theory would say,
00:06:12.140
so this observation doesn't really change the probability of this step being a filter.
00:06:15.740
But what we do know about the evolution of life on Earth is there are multiple ways that could
00:06:20.460
have happened, all of which could lead to an evolving...
00:06:24.020
You could either be dealing with, like, an RNA world.
00:06:26.560
You could be dealing with a citric acid cycle event.
00:06:35.340
I've never heard of the citric acid hypothesis.
00:06:38.220
So for this stuff, I would say it's not really that relevant to this conversation.
00:06:44.180
And people can dig into these various theories with people who have, like, worked on them more.
00:06:48.820
Just, like, look up the citric acid cycle hypothesis,
00:06:53.820
or the clay hypothesis for the evolution of life on Earth,
00:06:56.980
or the shallow pool hypothesis for the evolution of life on Earth,
00:07:00.240
or the deep sea vent hypothesis for the evolution of life on Earth.
00:07:03.060
The point being, it shouldn't actually, like,
00:07:07.540
it shouldn't actually be that hard for life to begin to evolve on a planet like this.
00:07:16.680
And we actually sort of have to step back here to the grabby aliens hypothesis.
00:07:20.540
So I'll explain what the grabby aliens hypothesis says
00:07:26.420
Usually when you're dealing with solutions to the Fermi paradox,
00:07:29.540
what people will do is they'll say that there's some unknown factor
00:07:34.800
So a great example here would be the dark forest hypothesis.
00:07:38.720
So the dark forest hypothesis is that there actually are aliens,
00:07:43.800
They just have the common sense to not be broadcasting where they are
00:07:51.200
And that any other aliens like us who were stupid enough to broadcast where they are,
00:07:55.820
they get snuffed out really quickly.
00:08:01.300
Okay, if the dark forest hypothesis is the explanation
00:08:05.420
for why we are not seeing alien life out there,
00:08:08.460
it is somewhat irrelevant whether or not we build a paperclip maximizing robot
00:08:12.680
because it means we're about to be snuffed out anyway,
00:08:18.900
sending out ships broadcasting about us, sending out signals.
00:08:24.400
And we could not defend against an interplanetary assault
00:08:29.860
Well, I mean, in that case, you could actually argue
00:08:31.940
it would be much better if we developed AGI as fast as possible
00:08:36.100
because maybe it can defend us even if we cannot defend ourselves.
00:08:43.920
Or they'll say we're in a simulation and that's why you're not seeing stuff.
00:08:46.600
But again, that makes all of this beside the point.
00:08:48.040
What grabby aliens does is it says, no, actually, we are just statistically
00:08:52.480
the first sentient species on the road to becoming a grabby alien,
00:08:58.980
and I'll explain what this means in just a second, in this region of space.
00:09:05.960
It can use the fact that we haven't seen another species out there,
00:09:11.660
a grabby alien that is rapidly expanding across planets,
00:09:14.460
to calculate how rarely these evolve on planets.
00:09:22.760
Do you sort of understand how that could be the case?
00:09:26.820
So in the grabby aliens hypothesis, when you run this calculation,
00:09:31.820
it turns out that, if that's why we haven't seen an alien yet, there has to be some kind of hard filter,
00:09:39.140
like something that makes it very low probability
00:09:41.580
that a potentially habitable planet ends up evolving an alien
00:09:46.600
that ends up spreading out like a grabby alien,
00:09:50.940
One of these really loud things that's just going,
00:09:52.920
planet to planet, you know, using the resources on each planet,
00:09:56.980
And even if it has already finished doing that,
00:09:59.440
you've argued in other conversations we have had
00:10:04.300
that you would see the signs of the destroyed civilizations, et cetera.
00:10:08.320
A grabby alien, which is what a paperclip maximizer would be.
00:10:12.040
If you're like, what does a grabby alien look like?
00:10:13.560
A paperclip maximizer that's just going planet to planet,
00:10:18.400
Or a human empire expanding through the universe.
00:10:25.440
or some people go and they try colonizing a new planet,
00:10:28.460
Even with our existing technology on Earth right now,
00:11:10.700
it could do pretty well with our existing technology
00:11:16.840
OK, anyway, so the grabby alien hypothesis says
00:11:36.580
So the habitable zone is the range of distances from a star where liquid water can exist on a planet's surface,
00:11:44.200
but what it means is that there are very frequently
00:11:51.880
planets where life is likely to be able to evolve.
00:11:55.440
I would estimate, like if I'm looking at everything
00:11:59.720
there's probably about 10 million planets per galaxy
00:12:24.180
why it is that we are about to invent a grabby alien.
00:12:34.960
and say, what's the probability of that event happening
00:12:46.800
life could have appeared in one of five different ways,
00:12:49.720
even with the chemical composition of early Earth.
00:13:04.680
And it has many advantages over single-celled life.
00:13:33.060
what a huge boost human-like intelligence gives a species.
00:13:48.240
I am able to essentially have like different models
00:13:55.860
And you don't have to die before you get better.
00:14:03.520
It is sort of like the second sexual selection.
00:14:09.600
as opposed to just like cloning yourself, right?
00:14:20.340
of the sort of operating system of our biology.
00:14:46.200
I mean, cephalopods are all over, like, the geological record.
00:14:50.100
Cephalopods are, like, squids, octopuses, stuff like that.
00:14:52.320
Like a lot of people point to how smart they are.
00:14:59.380
So the reason why cephalopods are as smart as they are
00:15:07.960
Yeah, it's a little arm thing that you see on a neuron.
00:15:18.380
you need really fast traveling action potentials.
00:15:37.360
and it causes the action potential to jump between the gaps in the myelin.
00:15:41.720
It's like putting vegetable oil on your slip and slide.
00:15:46.900
It's actually a really complicated trick of physics
00:15:49.200
that can't easily be explained except by like looking at it.
00:15:54.400
The point is, we mammals have a special little trick
00:15:58.720
that allows for our action potentials to travel very, very quickly.
00:16:01.940
And are you saying that cephalopods have this too?
00:16:07.340
that wants a fast traveling action potential before us,
00:16:16.520
Oh, so they just have fat axons, whereas we have optimized axons.
00:16:20.620
In some cephalopods, they're like a quarter centimeter in diameter.
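As a hedged aside on why diameter matters so much here: conduction speed in unmyelinated axons grows only about with the square root of diameter, while myelinated axons gain speed roughly linearly with it. The constants and function names in this sketch are textbook ballparks and illustrative, not figures from the episode.

```python
# Rough scaling sketch: ~1 m/s per sqrt(micron) for unmyelinated axons,
# ~6 m/s per micron for myelinated ones. Treat outputs as order-of-magnitude.
import math

def v_unmyelinated_m_per_s(diameter_um: float) -> float:
    """Approximate conduction velocity of an unmyelinated axon."""
    return 1.0 * math.sqrt(diameter_um)

def v_myelinated_m_per_s(diameter_um: float) -> float:
    """Approximate conduction velocity of a myelinated axon."""
    return 6.0 * diameter_um

# A squid giant axon around half a millimetre across vs. a large
# myelinated mammalian axon around 15 microns across:
print(v_unmyelinated_m_per_s(500))  # ~22 m/s from a huge axon
print(v_myelinated_m_per_s(15))     # ~90 m/s from a tiny one
```

Without the myelin trick, matching mammalian signalling speeds forces each axon to get enormous, which is the cap on neuron count discussed just below.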
00:16:34.080
This is why cephalopods, despite being really smart
00:16:36.320
and probably being really smart for a long time,
00:16:38.860
because they've been on Earth for a really long time,
00:16:46.520
Because they don't have room to have even fatter axons?
00:16:52.680
the number of neurons they could have would get lower.
00:17:02.000
Yeah, I guess you could have, like, giant, giant, giant...
00:17:11.960
if you're looking at the evolution of life on our Earth,
00:17:17.640
it's very rare for a species to get nuclear weapons
00:17:25.860
It could turn out that almost every species does that.
00:17:28.120
Or it could be that there's, like, one science experiment.
00:17:31.320
Like, a lot of people thought that maybe trying to
00:17:34.040
find the Hadron particle with the big super collider.
00:17:43.480
and they can't help but try to find hadrons,
00:17:46.020
and then they create little black holes in their planets,
00:18:07.960
can look back on the possible filters