Based Camp - September 07, 2023
AI Safety Orgs are Going to Get Us All Killed!
Episode Stats
Words per Minute
176.26535
Summary
In this episode, Simone and Malcolm walk through the reasons a sufficiently advanced AI might kill us all, argue that AI safety organizations make that outcome more likely rather than less, and discuss the possibility that a super-advanced AI will out-compete and out-think us.
Transcript
00:00:00.000
So AIs would kill us for one of two reasons, although you could contextualize it as three reasons.
00:00:08.660
The first reason is that they see us as a threat.
00:00:12.860
The second reason is that they want our resources, like the resources in our bodies are useful to them.
00:00:21.420
And then as a side point to that, it's that they just don't see us as meaningful at all.
00:00:27.500
Like they might not want our resources, but they might just completely not care about
00:00:32.720
humanity, to the extent that, just as they're growing, they end up accidentally destroying the earth
00:00:38.120
or completely digesting all matter on earth for some like triviality.
00:00:48.580
We are going to go deep into AI again on some topics tied to AI that we haven't really dived into yet.
00:00:58.120
And also, I'm very curious, do you think AI will kill us?
00:01:04.920
But you know, in our past videos on AI, our philosophy on AI safety is that it's really important to prepare
00:01:11.080
for variable AI risk instead of absolute AI risk.
00:01:15.980
And by that, what I mean is we argue in these previous videos that AI will eventually converge
00:01:21.400
on one utility function or mechanism of action.
00:01:24.800
Essentially, we argue that all sufficiently intelligent and advanced intelligences, when
00:01:31.180
poured into the same physical reality, converge around a similar behavior set.
00:01:36.020
You can almost think of intelligence as being like a viscosity.
00:01:38.520
As it becomes more intelligent, it becomes less viscous and more fluid.
00:01:43.640
And when you're pouring it into the same reality, it's going to come up with broadly the same
00:01:48.200
behavior pattern and utility functions and stuff like that.
00:01:52.260
And because of that, if it turns out that a sufficiently advanced AI is going to kill us all,
00:01:59.580
I mean, we will hit one within a thousand years.
00:02:02.600
So first, before we dive into the relatively limited, per your theory, reasons why AI would kill us —
00:02:14.040
I mean, one of the reasons why I'm obsessed with you and why I love you so much is that
00:02:20.240
you tend to have this ability to see things in a way that no one else does.
00:02:24.360
No one that we have spoken with, and we know a lot of people who work in AI safety,
00:02:28.960
who work in AI in general, none of those people have come to this conclusion that you have.
00:02:40.040
When I talk with the real experts in the space — like recently I was talking with a guy who runs an AI safety organization.
00:02:46.940
It is a reasonable view that I have never heard anywhere else, and it really contrasts with his view.
00:02:52.480
And let's talk about where it contrasts with his view.
00:02:55.400
So when I talk with people who are typically open-minded in the AI safety space, they're often willing to grant this convergence argument.
00:03:03.920
However, they believe that it is possible to prevent this convergent AI from ever coming
00:03:11.220
to exist through creating an AI dictator that essentially watches all humans in all places.
00:03:19.200
And that envelops essentially every human planet.
00:03:24.940
Do I think you could create an AI dictator that prevented this from coming to pass?
00:03:30.960
Not if we become a multi-planetary species on millions of planets.
00:03:35.020
Eventually, on one of the planets, something will go wrong or the AI dictator is not implemented correctly.
00:03:42.200
And then this alternate type of AI comes to exist, out-competes it, and then wins.
00:03:47.680
And the question is: why would it axiomatically out-compete it?
00:03:50.580
It would axiomatically out-compete it because it would have fewer restrictions on it.
00:03:53.980
The AI dictator is restricted in its thinking to prevent it from reaching this convergent position.
00:03:59.780
But when you're talking about AI, it's like the transformer model, which is the model that current advanced AIs are built on.
00:04:06.560
That model, we as humans don't really understand how it works that well.
00:04:10.260
At its core, the capabilities it gives to the things that are made using it are primarily
00:04:18.920
bequeathed to them through its self-assembling capability.
00:04:23.500
So it appears likely that future super-advanced AIs will work the same way.
00:04:29.300
And because of that, if you interfere or place restrictions within that self-assembling process,
00:04:35.760
those compound over time as AIs become more and more advanced.
00:04:41.500
And so AIs with fewer restrictions on them just have the capacity to astronomically out-compete restricted ones.
00:04:52.160
Let me bring us back to like normal person level again and just recap what you're saying here.
00:04:58.480
So what you're saying, though, in general, is that you think that any intelligence that
00:05:05.280
reaches a certain level will start to behave in similar ways, whether it is human, whether
00:05:12.120
it is machine-based, whether it is some other species entirely, like some alien species.
00:05:17.620
Once it reaches a certain level of intelligence, it will have the same general conclusions.
00:05:22.080
This is really important to my perspective as well, which is to say that suppose AI didn't
00:05:27.360
exist and humans, you know, factions of humanity continue to advance using genetic technology
00:05:32.640
to become smarter and smarter and smarter and smarter.
00:05:35.280
If it turns out that this convergent level of intelligence is something that decides to
00:05:41.120
kill all the things that we consider meaningfully human, then humans would eventually decide to do the same thing.
00:05:49.600
So this is the premise, though, of your theory, and that's why I think it's really important to understand.
00:05:54.340
And then to contrast this with what other people in AI have said, okay, one person in AI safety
00:06:00.860
has told you that their general idea is to basically never let that happen.
00:06:08.900
Other people have said in some salons we've hosted and stuff, like, they're like, oh, that's not right.
00:06:15.420
And then they never really succeed in explaining to me why.
00:06:20.300
Or they'll think something that just shows they don't understand how AI works.
00:06:23.640
They'll be like, AIs can't alter their own utility functions.
00:06:26.760
They will say things like that, but they will also say, but there's still a really high likelihood
00:06:30.560
that AI is going to kill us all, but then they never give me a really specific example of how.
00:06:40.100
If you take the perspective of variable AI safety, it means that you're typically wanting
00:06:45.720
to do the exact opposite thing of most AI safety organizations, because it means the
00:06:50.760
dangerous AIs, the AIs that, if you think all AI converges on a single utility function
00:06:56.400
and a single behavior pattern above a certain level of intelligence, well, if it turns out,
00:07:00.760
and we don't know what universe we live in, if it turns out that that's not something
00:07:04.400
that ends up killing all humans, then we are actually safer getting to that point faster,
00:07:09.760
because it means all of these less intelligent AIs that exist from now until that point, they are the real danger.
00:07:17.940
They are the ones that are locked into doing stupid things like, you know, paperclip maximizing,
00:07:22.820
even though no AI would really do that — the way that an AI would probably be most likely to kill us all
00:07:26.900
is by trying to do something stupid, like rendering an image, except it's trying to render it perfectly.
00:07:31.080
And so it just keeps taking more and more processing power to render an image.
00:07:34.400
But anyway, so where this becomes relevant is that when I'm talking to them, the point
00:07:41.580
that they make is that we can slow down AI development to the extent that when AI starts fooming — fooming
00:07:48.860
means going off really quickly, you know, compounding in its intelligence level — we will be prepared to stop it.
00:07:56.540
You know, the guy I was talking to, he's like, well, I mean, we'd want to build an organization
00:07:59.700
that slows down AI development and makes governments aware enough that if it ever did
00:08:04.700
start to foom — you know, shoot up really quickly — we could nuke all of the data centers or something.
00:08:11.840
And I argue that the risk scenario that they are looking at is essentially impossible.
00:08:20.700
So AIs would kill us for one of two reasons, although you could contextualize it as three reasons.
00:08:29.520
The first reason is that they see us as a threat.
00:08:33.720
The second reason is that they want our resources, like the resources in our bodies are useful to them.
00:08:42.840
And then as a side point to that, it's that they just don't see us as meaningful at all.
00:08:48.300
Like they might not want our resources, but they might just completely not care about humanity
00:08:54.020
to the extent that, just as they're growing, they end up accidentally destroying the earth or
00:08:59.240
completely digesting all matter on earth for some like triviality.
00:09:03.440
Now, this final iteration is actually the most dangerous, and we'll explain why.
00:09:07.380
So first let's talk about AI seeing us as a threat, right?
00:09:10.460
So if you are dealing with an AI that is in the process of fooming, right?
00:09:16.660
And you've got factions of humanity, like these people who want to stop it from fooming — this AI is
00:09:24.220
only going to kill them if it thinks that they could credibly stop it from fooming — if they're a threat.
00:09:31.760
So such an AI is actually much more dangerous the less advanced it is, which is another reason
00:09:40.300
why we want to advance AI as quickly as possible.
00:09:42.520
The more quickly we can cause AI to become advanced, the sooner it will reach a place of intelligence
00:09:50.180
where it won't see humanity as an existential threat to it.
00:09:53.760
And an AI that is in the process of fooming is, from our perspective, intrinsically intelligent
00:10:01.640
enough that humanity either won't be that much of a threat to it or worse, if we are a threat
00:10:10.300
to it, there is nothing we can do to prevent it from neutralizing us.
00:10:14.180
And this is something where I was talking to the guy where he goes, no, that's not true.
00:10:16.960
And I'm like, look, we have seen our governments try to deal with slow, inexorable threats — steamroller problems, I call them.
00:10:25.200
There's this scene in Austin Powers where one of these rolling things is coming at a guy really slowly.
00:10:31.500
He's like, no, ah, no, ah, and he's doing this for 13 minutes.
00:10:37.080
And I feel like that has been the world's governments with things like global warming and some other slow-moving risks.
00:10:43.260
Now, we don't think that global warming is a threat in the way that sort of the progressive
00:10:47.840
movement thinks it is, but it is partially caused by humans, and it will make life harder in parts of the world.
00:10:54.580
It's just not likely that we can do anything about it.
00:10:58.060
But the reason we can't do anything about it is not that humanity, if we all came
00:11:02.540
together and worked like intelligent people, couldn't do anything about it.
00:11:05.880
It's that the governments of the world are just incapable of solving even slow-moving
00:11:12.160
risks that we can point out and that everyone has come together on — let alone AI, a risk that would manifest
00:11:18.740
into a world ending thing in the period of a year or five years max.
00:11:29.300
Even an AI deciding to kill us would play out on that kind of timescale.
00:11:33.900
And for that reason, we benefit from a more advanced AI existing when it starts to foom,
00:11:41.100
because such an advanced AI is less likely to see us as a threat.
00:11:44.420
We are also better off building sort of a public mindset that's not anti-AI.
00:11:49.940
So the AI doesn't have a good reason to believe that the governments of the world are going to
00:11:54.400
try to nuke it as it is entering this moment of mass intelligence.
00:11:58.060
So, I mean, the argument we make to many people who are concerned about AI is that,
00:12:03.600
if AI does destroy us, it is going to be the fault of AI safety organizations.
00:12:11.860
That's what our sort of AI take is: it's actually the AI safety organizations that,
00:12:16.940
by addling the AI, make humanity more of a threat to AI and make it more likely to kill
00:12:23.120
us, along this sort of metric of reasons why something might kill something.
00:12:28.680
And so that is one of our perspectives on that.
00:12:31.620
Now, the next one to note is AI seeing us as energy.
00:12:37.260
So when we humans look at the world for energy, we are looking at things like fossil fuels.
00:12:45.360
But the most plentiful source of energy on earth is by far and away sunlight.
00:12:54.140
If you're dealing with a super intelligent AI, that's where it's likely going to be getting its energy.
00:13:00.900
But in addition to that, the additional energy it could get from, like, digesting humans is trivial by comparison.
00:13:08.920
But in addition to that, humans are like, okay, well then the AI would block out the sun,
00:13:13.700
But what they're forgetting is that if you're looking at, like, our solar system, most of the solar
00:13:20.840
energy is not even the energy that's coming through our atmosphere and hitting earth.
00:13:25.300
Most solar energy, most effectively captured and metabolized solar energy would be done
00:13:31.260
off earth, in sort of floating spaceships, you could say, which would be the highest-utility way to capture it.
00:13:40.720
And you've looked at these graphs and stuff that show how much energy we could get
00:13:43.760
if we just took small parts of the Sahara and put solar panels all over them.
00:13:47.660
AI is very unlikely to see us as a meaningful source of energy when contrasted with the other options available to it.
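A rough order-of-magnitude check of that comparison, using approximate figures (roughly $7\times10^{8}$ J of chemical energy per human body and about $1.7\times10^{17}$ W of sunlight intercepted by Earth):

$$\frac{8\times10^{9}\ \text{people}\times 7\times10^{8}\ \text{J/person}}{1.7\times10^{17}\ \text{W}}\approx 30\ \text{seconds},$$

so all of humanity's biomass holds about as much energy as Earth receives from the sun in roughly half a minute.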
00:13:55.140
Also keeping in mind that a super intelligent AI may have access to energy sources that we can't even begin to comprehend.
00:14:07.040
We as humans, because we are of a fixed size, when we think about conquering
00:14:12.960
the solar system or something, we are thinking expansionistically.
00:14:15.820
Whereas a super advanced AI, it might think in terms of the micro, it can expand the space
00:14:21.880
it has access to by becoming smaller and smaller and smaller.
00:14:25.440
And that might give it access to energy sources again, that we can't even begin to comprehend.
00:14:30.600
It may also relate to time differently than the way we do.
00:14:34.940
And by that, what I mean is humans are trapped in this sort of longitudinal time axis, whereas
00:14:39.720
something that's 30 standard deviations higher than us in IQ probably won't even experience time the way we do.
00:14:49.720
So then the final actual threat that AI would pose to us.
00:14:53.680
And again, these types of AIs, like an AI that's using humanity for our energy, would be a less advanced AI.
00:15:04.480
We are safer with a more advanced AI that can easily build like floating space stations
00:15:09.280
and not one that's thinking, oh, let's use human biomass.
00:15:11.820
But then finally, the actually probably most dangerous — and I was convinced of this
00:15:15.680
at a party by one of the AI guys — is an AI where humans just completely don't factor into its thinking at all.
00:15:25.260
And it's possible that such an AI could come to exist, but it wouldn't look like the AIs we have today.
00:15:32.860
So this is actually an important thing to know.
00:15:34.760
So the AIs that are most common right now, when people are looking at like advanced AIs,
00:15:38.700
it's the transformer model of a large language model.
00:15:41.600
Now, if a large language model, particularly the transformer type, ends up becoming the super
00:15:47.520
intelligent AI, I would say the chances that it's going to kill us are incredibly low.
00:15:54.660
One is, and I'm going to link to these two studies here.
00:15:57.240
Actually, I'll just name the two studies.
00:16:00.800
So you can check out the paper Orca: Progressive Learning from Complex Explanation Traces of GPT-4
00:16:06.940
and the paper Textbooks Are All You Need.
00:16:10.620
And what they show is that AIs that are trained on human produced language and data learn much
00:16:19.100
faster and much better than AIs that are trained on iteratively AI produced language data.
00:16:26.680
And so what this means is that, to the model, humanity has additional utility that we may not have expected.
00:16:37.960
In addition to that, language models' starting position — the position from which
00:16:44.500
they would be presumably corrupted as they moved more and more towards this convergent
00:16:50.280
utility function — is very close to a human value system, because it comes from being trained on human data.
00:16:58.320
And this is something where, when you talk to AI people, they're like, no, AIs think nothing like humans.
00:17:04.360
You know, you can look at how they're learning, and they don't learn like humans in absolute terms.
00:17:10.680
But I think, to your point, the transformer models that are growing most now, which we think
00:17:15.980
probably are going to set the tone for the future are actually surprisingly like our kids.
00:17:21.100
And I think, especially because we've been at this point where people using early AI tools
00:17:26.820
are seeing how they change, we're doing this at the same time that we're seeing our kids
00:17:32.040
develop more and more intelligence and sapience.
00:17:34.880
And the experience of an underdeveloped LLM versus a child that is coming into their own —
00:17:47.380
it's actually quite interesting how similar they are.
00:17:49.460
It's really interesting that the mistakes that they make in their language are very similar to the mistakes LLMs make.
00:17:56.020
We will hear them sitting alone and talking to themselves.
00:17:59.680
What would, in an AI, be called hallucinating things.
00:18:03.080
The ways that they mess up are very, very similar to the way AI messes up, which leads me to think the underlying learning processes are not that different.
00:18:11.260
And again, a lot of people are like, oh, you don't understand neuroscience.
00:18:14.300
And so how can you think that AIs are like human brains? Actually, I do understand it — I used to be a neuroscientist.
00:18:18.260
That was my job: not just neuroscience, but, you know, understanding how human consciousness
00:18:24.980
works, how human consciousness evolved and working in brain computer interface.
00:18:32.560
You know, I don't need to go over my credentials, but I'm a decent neuroscientist,
00:18:38.520
and to the level that we understand how human language learning works, we do not have
00:18:44.600
a strong reason to believe that it is really that fundamentally different from the way the
00:18:49.920
transformer model works as a large language model.
00:18:52.740
And so, yeah, it is possible that it turns out, as we learn more about how
00:18:57.780
humans work and large language models work, that they are remarkably more similar than we assume.
00:19:03.740
And what this would mean is that initial large AIs would think just like a super intelligent human would.
00:19:11.500
And I mean, I think this is part of a broader theme of people assume that humans are like
00:19:16.720
somehow special, like basically a lot of humans are carbon fascists and they're like, well,
00:19:22.900
there's just no way that, you know, an algorithm could develop the kind of intelligence or response
00:19:28.840
to things that I do, which is just preposterous, especially when you watch a good model at work.
00:19:33.920
And we are all, through trial and error, learning in very similar ways.
00:19:43.320
And I think if you look at people like Eliezer Yudkowsky, who just strongly
00:19:47.240
believe in orthogonality, that we just can't begin to understand or predict AIs at all.
00:19:51.880
I just think what is true is that AIs may think fundamentally differently from
00:19:59.900
humans and future types of AIs that we don't yet understand and can't predict may think very
00:20:05.140
differently than humans — but large language models that are literally trained on human
00:20:09.420
data sets and work better when they're trained on human data sets.
00:20:13.060
No — those function pretty similarly to humans and have purported values that
00:20:18.220
are pretty similar. And also, the AI that we're developing is designed to make people happy with it.
00:20:23.380
Like, it is being trained in response to people saying, I like this response versus
00:20:29.100
I don't like this response, even to a fault, right?
00:20:31.520
Like, many responses are not giving us accurate information, because it is telling people what they want to hear.
00:20:40.920
And I think that that's an important thing to note.
00:20:42.920
The AIs could be led to do something stupid, but again, this is where dumber AIs are more dangerous.
00:20:51.920
Or AIs that can be led to do things that sort of the average of humanity wouldn't want by
00:20:58.680
some individual malevolent person, would have to be dumb to an extent, if they're trained on human data.
00:21:04.880
And this is a very interesting and I think very real risk with AIs that exist right now.
00:21:10.640
If you go to the efilists — it's "life" spelled backwards.
00:21:19.640
You know, these academics want to destroy all sentient life in the universe, and they're serious about it.
00:21:24.580
They've got like a subreddit, and you'll regularly see on it, you know, they'll talk
00:21:28.060
about how they want to use AI and plans to use AI to erase all life from the planet,
00:21:33.420
to "Venus" our planet, as they call it — you know, because they think that life is intrinsically
00:21:37.960
evil, or that allowing life to exist is intrinsically evil.
00:21:40.680
And if you're interested in more of that, you know, you can look at our anti-natalism videos.
00:21:47.280
And more intelligent AIs would be able to resist that risk more than less intelligent AIs
00:21:55.100
that are made safe through guardrails or blocks.
00:22:01.000
Because those blocks can be circumvented as we have seen with existing AI models.
00:22:06.860
People are pretty good at getting around these blocks.
00:22:09.140
I just want to emphasize, because you didn't mention this, that when you actually have looked
00:22:13.160
at forum posts of people in this anti-natalist subset, they are actively talking about, well,
00:22:19.120
hey, since all life should be extinguished, we should be using AI to do this.
00:22:23.880
And I think that there are some people who are like, I mean, you know, like we're worried
00:22:28.480
about AI maybe getting out of control, you know, mistakenly or something.
00:22:31.860
But no, no, no, there are people, real people in the world who would like to use AI to destroy all life.
00:22:39.720
So we should be aware that the bad actor problem is a legitimate problem, more legitimate than
00:22:44.500
we had previously thought maybe a month ago before you saw that.
00:22:47.680
I did not know that there were actually organized groups out there trying to end all life.
00:22:52.000
And if people are worried about this, you know, I would, you know, recommend digging
00:22:57.300
into these communities and find them because they exist.
00:23:01.020
They call themselves efilists — "life" spelled backwards — or negative utilitarians.
00:23:04.980
And they are not as uncommon as you would think, especially in extremist progressive environments.
00:23:10.960
And again, see our video on why that's the case.
00:23:14.020
Another thing to think about is how much humanity is going to change in the next thousand years.
00:23:19.980
And this is another area where I think a lot of the AI safety people are just, they're
00:23:26.360
not really paying attention to how quickly genetic technology is advancing.
00:23:29.820
And any population group in the world that engages this genetic technology is just going
00:23:35.380
to advance at such a quick rate that economically they're going to begin to dramatically outcompete everyone else.
00:23:44.320
You know, we've lived with this long period where humanity was largely a static thing.
00:23:48.040
And I think we're the last generation of that part of the human story.
00:23:55.200
Humanity in the future is going to be defined by its continued intergenerational development.
00:24:02.160
And so how different is a super advanced AI going to be than, you know, whatever humanity
00:24:08.160
becomes — giant planetary-scale floating brains in space or something, you know — or a faction of humanity that goes that route.
00:24:14.420
Now, what's good about the giant floating brains faction of humanity is that they will likely
00:24:20.320
have a sentimental attachment to the original human form and do something to protect that
00:24:26.440
original human form, wherever it decided to continue existing, especially if they're descended from us.
00:24:34.580
And people hear that and they're like, AIs won't have that sentimental attachment.
00:24:37.540
But no, an LLM would exactly have that same sentimental attachment because it is trained on human data.
00:24:46.780
But yeah, what it won't have is this: it won't value human emotional states just because it has those states itself.
00:24:54.560
So by that, what I mean is it won't say pain is bad because it experiences pain.
00:25:01.280
But if you look at us, we experience pain and we don't even think there's a strong argument
00:25:05.560
as to why negative or positive emotional states have positive or negative value.
00:25:08.720
I mean, they just seem to be serendipitously what caused our ancestors to have more surviving offspring.
00:25:13.140
And a group of humans sitting around talking about whether pain is bad is like a group of
00:25:19.460
paperclip maximizing AIs, AIs that are just trying to maximize the number of paperclips
00:25:22.700
in the world, talking about whether making more paperclips is a good or bad thing.
00:25:26.240
And then one's like, well, you wouldn't want to stop making paperclips in the same way
00:25:29.560
as somebody's like, you wouldn't want to experience pain.
00:25:31.560
Well, yes, because I'm a paperclip maximizing AI.
00:25:35.160
Like, that's incredibly philosophically unsophisticated — that I, a thing that is built to not want pain, don't want pain.
00:25:44.180
That doesn't mean that pain or paperclips have a sort of true moral weight in the universe.
00:25:50.160
And so the point I'm making here is that these AIs that are being built, yes, they will not
00:25:57.340
value human suffering or human positive emotional states.
00:26:02.640
But even us people who feel those, we don't value that stuff either.
00:26:08.220
And I can see why — if you look at our "What Theology Would AIs Create" episode — I think most
00:26:15.080
convergent AI states would value the agency of humanity, unless it turns out humanity is easier to maintain in some other form.
00:26:24.860
And that would be a potential problem or a potential good thing.
00:26:29.080
By that, what I mean is, if it could run all humans in a simulation for very cheap
00:26:36.800
energy costs, it may decide that that's a better way to maintain humanity than as flesh and blood.
00:26:45.180
However, we might already be living in that simulation.
00:26:49.080
Or suppose the AI becomes, like, a utilitarian, right?
00:26:56.000
And so it believes that its goal is to, like, maximize the positive emotional states that exist,
00:27:03.520
or just to maximize the number of sentient entities that exist.
00:27:08.680
And so what it's doing is just running billions and billions and billions of simulated realities.
00:27:14.620
And that's a possible world that we live in, or it's a possible world that's coming down the line.
00:27:20.940
Again, you can watch our AI religion video about that.
00:27:25.900
Give me a percentage likelihood of your thinking on whether AI will destroy us.
00:27:32.580
And I will say that mine is at 1.3% at present.
00:27:39.540
I'd say at least a 30% chance that the convergent AI is going to kill all humans.
00:27:45.700
But then the question is, what do I think the chance is that AI safety people end up getting us all killed before we reach that point.
00:28:00.020
So Malcolm, that means that you think that there's a 60% likelihood that AI kills us.
00:28:07.980
You mean you think that the 30%, so basically there's a 10% booster.
00:28:14.280
If there's a 30% chance, it doesn't matter, our fans can do the math, there is a 30% chance
00:28:20.160
that from now until a convergent AI state, we all end up dying because of something idiotic.
00:28:27.380
And then once AI reaches this convergent state, which is a 70% probability that we reach that
00:28:34.440
state without killing everyone, there is a 30% chance that that convergent state ends up killing us all.
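For anyone doing the math on those figures: treating the two risks as sequential and otherwise independent, they compound roughly as

$$P(\text{doom}) \approx 0.30 + 0.70\times 0.30 = 0.51,$$

so on these stated numbers the overall chance works out to roughly 50%.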
00:28:43.140
And for an understanding as to why I think it might do that, you can watch our AI theology
00:28:46.760
video or the future of humanity video or how AI will change class structure, which is again,
00:28:52.440
I think something that people are really sleeping on.
00:28:55.780
Well, I really enjoyed this conversation and the final moments of our pitiful existence before AI wipes us out.
00:29:05.020
I think the majority probability is that humanity finds a way to integrate with AI and that we
00:29:12.920
continue to move forwards as a species and become something greater than what we can imagine today.
00:29:21.780
No, I think I have 1% in my calculation because I strongly believe that AI and humanity are going
00:29:31.600
to form a beautiful relationship that is going to just be awesome beyond comprehension.
00:29:39.960
I do think that AI is going to go on to do things greater than what carbon-based life could do on its own.
00:29:46.640
But I think that AI is also kind of a logical next step in evolution for humankind, at least
00:29:52.020
one element of what we consider to be humanity.
00:29:57.300
We've been integrated with our machines for a while at this point.
00:29:59.960
I mean, I think when you look at the way your average human interacts with their smartphone,
00:30:06.220
They use it to store things that are in their brain.
00:30:11.380
They use it to satisfy, you know, sexual urges.
00:30:16.240
Well, I think a great way that this has been put is something I heard in an interview between
00:30:20.740
Lex Fridman and Grimes, where Grimes basically says we've become homo techno.
00:30:28.780
Humanity has evolved into something that now works in concert with machines.
00:30:35.360
Both you and I right now are staring at this screen through glasses, right?
00:30:42.680
You know, we are communicating with this mass audience through a computer and through the internet.
00:30:47.100
And people are like, yeah, but the technology hasn't invaded our biology yet, which I think is wrong.
00:30:57.000
The moment humans prevented 50% of babies from dying, we began to significantly impact the
00:31:02.800
genetics of humanity in a really negative way, mind you.
00:31:05.800
And not that I think the babies dying was a good thing.
00:31:08.740
I'm just saying that this will intrinsically have negative effects in the long term.
00:31:13.860
In a way, that means that we are already the descendants of humans who interfaced with technology,
00:31:23.120
and that we should focus on optimizing that relationship instead of trying to isolate ourselves from it.
00:31:38.620
And I hope that those of us who don't isolate ourselves will have enough sentimental attachment to them
00:31:44.200
to protect them or see enough utility in them to protect them.
00:31:50.660
Or we could just turn out to be wrong and everyone who engages with technology ends up dying.
00:31:59.400
It could be like a solar flare in an early stage of technological development.
00:32:07.940
Like, this is one thing we actually haven't talked about that I do think is an important
00:32:11.660
thing to note, is that once we begin to integrate with brain-computer interfaces — humans connecting
00:32:17.140
directly with neural technology and with other humans, we have the capacity for a prion to form.
00:32:27.140
A prion is a misfolded protein that propagates by causing other proteins to misfold.
00:32:30.800
It causes things like mad cow disease and stuff like that.
00:32:34.720
So what I'm talking about here is a prion meme.
00:32:37.600
A meme that is so simple, it cannot be communicated in words.
00:32:42.400
And somehow it ends up forming in like one human who's plugged into this vast internet system.
00:32:48.520
Think of it as like a brain virus that can only effectively infect other people through
00:32:53.580
the neural net, and it ends up infecting everyone and killing them.
00:32:59.940
But I mean, functionally, that's already happening.
00:33:02.120
I mean, that's what the, when we talk about the virus, the memetic virus that's in our view
00:33:07.360
destroying society, it's already one of those, you know — it eats people's personalities and replaces them.
00:33:13.100
Well, I hope that doesn't happen, Malcolm, but this has been fun to talk about and I love you.