Rationalist Civil War: God is Real After All?
Episode Stats
Words per Minute
180.0769
Summary
God is back. God is real. God exists. And God is a God who wants us to believe that God exists after all. And he's on our side, and he's a citizen too. But what does that mean, exactly? And what does it have to do with AI?
Transcript
00:00:00.000
it's better not to be rational. And he's actually quoting somebody else here. If it leads you to a
00:00:04.620
belief in God, which is really interesting now that we're seeing a faction in the rationalist
00:00:08.800
community being like, see, I told you guys, we never should have been rational to begin with,
00:00:12.840
because if you do, you go crazy and start believing in God. Would you like to know more?
00:00:17.280
Hello, Simone. I'm excited to be here with you today. An interesting phenomenon has been happening
00:00:21.940
recently, which is that well-known Silicon Valley rationalist types are beginning to make arguments
00:00:29.100
that we have been making for years at this point: that the development trajectory of AI means that God,
00:00:35.820
a God is probable. So if you're like, oh, come on, you can't possibly mean this. These must be small
00:00:43.560
names or people I haven't heard of. Well, Nick Bostrom recently wrote a piece arguing for a cosmic
00:00:49.420
host, as he calls it, which he says that gods like the God that Christians believe in would almost
00:00:56.840
certainly be a part of, or an example of, if it exists. And then Scott Alexander wrote,
00:01:04.920
and I'm going to be quoting him, you know, word for word here, and we'll get into this essay in a bit.
00:01:09.420
One, there is an all powerful, all knowing, logically necessary entity spawning all possible
00:01:14.920
worlds and identical to the moral law. Two, it watches everything that happens on earth and is
00:01:20.280
specifically interested in humans' good behavior and willingness to obey its rules. Three,
00:01:24.960
it may have the ability to reward those who follow its rules after they die and disincentivize those
00:01:31.520
who violate them. So living in Silicon Valley. God is real. He's on our side and he wants us to win.
00:01:38.800
Living in Silicon Valley these days is very much this scene. Across the Federation, Federal experts
00:01:44.200
agree that A, God exists after all. B, he's on our side. And C, he wants us to win.
00:01:51.520
And there's even more good news, believers. Because it's official. God's back. And he's a citizen too.
00:02:01.360
But of course, the area where they are different from us, before we get deeper into them, is we agree
00:02:07.120
with everything they're saying here. And then we say, and this entity is the entity that is associated
00:02:13.600
with the Judeo-Christian scripture and the Bible.
00:02:15.920
All will be well, and you will know the name of God. The one true God: Behemecoatyl.
00:02:24.960
Behemec-what? Behemecoatyl. He's here. He's everywhere.
00:02:32.960
Come. He's talking about a bug. He thinks God is a bug?
00:02:48.960
Watch our tract series if you want to get into our arguments on that. Basically,
00:02:51.840
we go over a bunch of parts of the Bible that have been read in their original language.
00:02:58.000
It's implausible that somebody of that time period was able to make those predictions about the future,
00:03:04.640
or describe how things like AI would work or various other technologies with that degree of veracity.
00:03:12.800
So go check out the tract series. It's like 30 hours long if you want to get into that.
00:03:17.280
Obviously, this is something we're very invested in.
00:03:19.760
But I want to go into these other people's arguments because they've been coming to this
00:03:23.760
separate from us. But a lot of the reasoning that they're using here
00:03:28.080
looks a lot like the reasoning that we were using in the early stages of our journey
00:03:34.800
to becoming a category of Christian, which is I think what it means is they may be like three
00:03:41.600
years from where we are because the ideas that they're describing here are things that we were
00:03:46.080
talking about about three years ago. So I'm going to be laying all this out through a framing device,
00:03:51.360
which is Alexander Kruel of the Axis of Ordinary. And so I'll read what he wrote and then I'll read
00:03:57.680
quotes from some of what they wrote. All right. So he writes new paper by Nick Bostrom,
00:04:03.440
AI Creation and the Cosmic Host. There may well exist a normative structure based on the preferences
00:04:09.760
or concordance of a cosmic host, and which has high relevance to the development of AI.
00:04:16.320
The paper argues that there is likely a cosmic host, powerful natural or supernatural agents,
00:04:21.040
e.g. advanced civilizations, simulations, deities, whose preferences shape cosmic scale norms.
00:04:27.680
This is motivated by a simulation argument, large infinite universe and multiverse hypothesis.
00:04:33.760
The paper also suggests that the cosmic host may actually want us to build super intelligence.
00:04:39.120
A super intelligence would be more capable of understanding and adhering to cosmic norms
00:04:43.440
than humans are potentially making our region of the cosmos more aligned with the host's preferences.
00:04:49.120
Therefore, the host might favor a timely development of AI as long as it becomes a good cosmic citizen.
00:04:55.520
If you want to read our earlier take: I'll go into his paper on it, which I feel was just a much less
00:05:00.880
intellectually rigorous version of an essay we did ages ago called Utility Convergence,
00:05:06.720
What Can AI Teach Us About the Structure of the Galactic Community?
00:05:10.240
We published this March 18th, 2024, and I'll go over how the two pieces are different in structure,
00:05:16.080
but they're very aligned, and you'll see a lot of our argument in his argument.
00:05:22.480
And I'm just reading directly from Nick Bostrom's piece here, and I sort of cut and pasted from it.
00:05:27.680
Human civilization is likely not alone in the cosmos, but is instead encompassed within a cosmic host.
00:05:33.920
The cosmic host refers to an entity or set of entities whose preferences and concordance
00:05:39.120
dominate at a large scale, i.e. that of the cosmos. The term cosmos here is meant to include
00:05:44.400
the multiverse and whatever else is contained in the totality of existence. For example,
00:05:49.200
the cosmic host might conceivably consist of galactic scale civilizations, simulators,
00:05:53.840
super intelligences, and/or divine being or beings. Naturalistic members of the cosmic host
00:05:59.840
presumably have very advanced technology, e.g. super intelligent AI, efficient
00:06:06.240
means of space travel and von Neumann probes, ability to run vast quantities of simulations,
00:06:11.520
e.g. human-like histories and situations. It's possible that some members might have capabilities
00:06:18.800
that exceed what is possible in our universe, e.g. they live in another part of the
00:06:23.760
multiverse with different physical constraints or laws, or, if we are simulated, the underlying
00:06:30.320
universe the simulators inhabit has different physical parameters than the ones we observe.
00:06:35.600
The cosmic host... and then, going down a bit, he makes a bunch of arguments here about why we
00:06:40.800
should presume that this exists, blah, blah, blah, blah, blah, blah. I think a lot of this is just sort of
00:06:44.640
obvious to the people of the intellectual capacity of our viewership. Then point three he makes here,
00:06:50.000
the cosmic host may care about what happens in regions it does not directly control. And here
00:06:54.720
he's arguing about regions it controls, regions it doesn't control, et cetera. He says, for example,
00:06:59.520
it might have a preference regarding the welfare of individuals who inhabit such locations
00:07:03.600
or regarding the choices they make. Such preferences might be non-instrumental, e.g. reflecting
00:07:08.640
benevolent concern and or instrumental, e.g. the host entity A may want individual H to act in a
00:07:16.480
certain way because it believes that host entity B will model how H acts and that B will act
00:07:23.520
differently with respect to matters that A cares about non-instrumentally depending on how it believes
00:07:29.520
that H acts. Such intelligences may also enable intra-host coordination even if the host consists of
00:07:36.080
many distinct entities pursuing a variety of different final values. Now, here I'll note that this is why I
00:07:43.360
think it's very important, and I constantly go on about this, why I think that, one, he's right about
00:07:49.200
most of what he's presumed so far. Would you agree with that, Simone? Yes. But given that he's right
00:07:55.040
about most of what he's presumed so far, it is incredibly stupid to take the position that things that
00:08:01.200
can out-compete you should not be allowed to come into existence. This is what many people feel and argue
00:08:07.200
about AI, e.g. Eliezer Yudkowsky, or what some traditionalists think about humanity and augmented
00:08:13.760
humanity. Because if it turns out that there is something bigger than you out there and more
00:08:19.280
powerful than you out there, and you have declared a fatwa on things that are better than you in any
00:08:24.800
capacity, you have declared a fatwa on that thing, making your existence not really a threat to
00:08:32.000
that thing, but definitely a hurdle to it that you likely won't be able to overcome if it actually
00:08:38.480
decides to take action. More explicitly, I feel like you're obligating that thing to neutralize
00:08:43.920
you, whatever that might mean. Maybe that means just rendering you harmless, but maybe that means
00:08:48.160
just wiping you off the map. Right. Well, and as civilization continues to develop and you demand to
00:08:53.840
stay Amish, you also make yourself a threat to any part of humanity that does continue to develop,
00:08:59.760
whether that be AI or genetically modified humans or AI-human integrations through brain
00:09:06.320
computer interface. And I think that this is why genetically modified humans are really in the
00:09:11.520
same sort of moral and ethical boat as AI, because the same people who want to declare,
00:09:16.000
you know, a Butlerian jihad against it would jihad against us as well. Because their sort of social
00:09:22.240
norms and the norms of the civilization that they want to maintain, declare sort of this fatwa against
00:09:28.080
anything that could be better than them in any capacity. And then to keep reading here,
00:09:33.600
civilization is mostly powerless outside of the thin crust of a single planet. And even within the
00:09:39.840
ambit, its power is severely limited. However, if we build a super intelligence, the host's ability
00:09:44.720
to influence what happens in our region could possibly greatly increase. A super intelligent civilization
00:09:50.640
or AI may be more able and willing to allow itself to be indirectly influenced by cosmic norms than we
00:09:57.760
humans currently are. Super intelligence would be better able to figure out what the cosmic norms
00:10:03.040
are. Super intelligence would be better able to understand the reasons for complying with cosmic norms,
00:10:09.280
assuming such reasons exist. A super intelligent civilization or AI that wants to exert influence on our
00:10:16.800
region in accordance with cosmic norms would be far more capable of doing so than we currently are,
00:10:23.200
since it would have superior technology and strategic planning abilities. So basically,
00:10:28.320
he argues that it's almost certain that like multiverses exist, the universe is a big place,
00:10:34.320
that some other civilization has developed to the level of a super intelligence. If a civilization
00:10:40.160
has developed to a level of super intelligence, and presuming it doesn't have control of our planet yet,
00:10:45.680
right? It would probably want us to develop a super intelligence so it can better communicate with
00:10:51.600
us and help align us with it, this other civilization that exists elsewhere in the universe,
00:10:57.360
which is slightly different than us. I think if a civilization like that does exist, it likely has
00:11:02.800
the capacity to influence what happens on our planet, but it chooses not to because if you are a super
00:11:09.440
intelligence and have access to infinite energy, what is the one thing that you don't have access to?
00:11:15.760
Potentially, diversity of ideas and thought, due to the sort of lateral thought processes of
00:11:22.640
entities which see the world differently than you. As such, the last thing you would want to do is to
00:11:28.320
interrupt the development of a potentially lateral species until that species gets to a point where
00:11:33.760
it can join the galactic community, at which point it would sort of determine, okay, is this species a
00:11:39.440
threat to us or is this species a non threat to us? If the species comes out of this period with a,
00:11:44.160
we will kill anything that's different from us mindset, that's very bad for the species. Basically,
00:11:50.000
it's just giving us time to develop some alternate mindset like we've talked about in the tract series,
00:11:55.040
the Alliance of the Sons of Man, which is to say an automatic alliance between humanity and any
00:12:00.560
intelligence that arises from humanity, whether it be genetically modified humans, uplifted species,
00:12:05.600
etc. We'll do another episode just on this. But if you're like, no, I don't like this,
00:12:09.840
like this is obviously going to eventually happen no matter what, right? Like when humans are on Mars,
00:12:14.240
we will change dramatically, and we'll likely need to genetically alter ourselves. When
00:12:18.560
humans live their entire lives in zero G environments, that's likely going to become a different subspecies
00:12:23.360
of humanity. You know, unless you ground us on this planet and make us this sort of
00:12:31.680
sad Luddite species that will eventually be swallowed by the sun as it expands,
00:12:36.160
that's eventually where we're going. If we contrast his theories, because I had an AI contrast his
00:12:40.640
theories with our theories, it said, Malcolm and Simone, convergence arises organically from
00:12:45.760
competitive dynamics where unstable, e.g. expansionistic paperclip utility functions are weeded out.
00:12:51.520
So I also note here that whether it's a humanity that expands from this planet saying we'll kill
00:12:56.800
anything different from us, or a paperclip maximizer, the galactic superintelligence that's
00:13:02.000
out there is not going to like it and is going to erase it. A paperclip maximizing AI is going to
00:13:07.520
have a pretty short lifespan. Again, see our video on that. Leaving a stable few interdependent ones,
00:13:13.120
a sort of interdependence of intelligences and utility functions. This is akin to evolutionary
00:13:17.360
attractors, with aliens observing us to see if we produce a novel function that integrates
00:13:22.480
harmoniously. Bostrom acknowledges possible ontogenetic convergence where advanced societies or
00:13:29.600
AI evolve towards shared values due to technological self-modification or socio-political attractors.
00:13:36.480
However, he stresses external pressures (the cosmic host's preferences, whether prudential, moral, or
00:13:41.360
instrumental) to enforce norms, and that misaligned AIs might converge on similar flaws but could be
00:13:47.840
undesirable or antagonistic to the host. Rather than waiting for homeostasis, Bostrom urges
00:13:53.280
deliberate design to ensure our superintelligences respect these norms. Basically, he argues the
00:13:58.960
norms come from the top down; I argue they come from the bottom up. It doesn't really matter which is the case.
00:14:05.280
You get the same thing you've got to predict. I also argue that it doesn't really matter if this
00:14:11.360
superintelligence exists yet because if the superintelligence exists at a future time, like if
00:14:16.560
humanity at a future time has developed a superintelligence, we would want humans today to
00:14:22.080
have a superintelligence more quickly so we can morally align ourselves with whatever our moral alignment
00:14:27.040
becomes at a future date, which is why we are not utility maximizers. We judge morality off of
00:14:33.040
attempting to model future humans. So we're going to go into Scott Alexander in just a second here, but
00:14:39.280
I agree. This is so wild to hear from a different perspective, but I also feel like it's inevitable and
00:14:46.480
it makes sense to me, because the heuristics that we're seeing this group of people adopting are what
00:14:51.280
people adopt if they're just looking at the evidence available, and the evidence available now
00:14:57.520
is pointing to this. Yeah. By the way, I'm noticing this. He does the framing device we're going back
00:15:03.440
to again, Alex, who writes: This is a serious version of the following tongue-in-cheek derivation of a godlike
00:15:09.440
coalition via acausal trade by Scott Alexander. I read this Scott Alexander piece and it is very clearly
00:15:16.640
not tongue-in-cheek. The reason why Alex appears to think that it is tongue-in-cheek is because when
00:15:22.160
we get to the end of this, you can see that he's like, well, it's better not to be rational. And he's
00:15:26.800
actually quoting somebody else here. If it leads you to a belief in God, which is really interesting
00:15:31.840
now that we're seeing a faction in the rationalist community being like, see, I told you
00:15:35.360
guys we never should have been rational to begin with because if you do, you go crazy and start believing
00:15:39.760
in God where, where I'm like, okay, I note here, one of the arguments we use in a lot of our
00:15:45.120
techno-Puritan stuff is that it doesn't matter, also temporally speaking, when we're talking about
00:15:49.760
super intelligences, because if the super intelligence does eventually come to exist,
00:15:54.320
it probably will not relate to time as we do, and thus may be guiding us towards its existence
00:16:01.520
through a modification of the past. Now this Scott Alexander piece, I just noted came out in 2018,
00:16:07.200
so it predates my work. So good for him. It's called The Hour I First Believed. And we're going to be
00:16:13.280
jumping around a lot in this piece because it is very long. But first I'm going to have to go into
00:16:18.160
simulation capture, which is actually something I might have read about once, but I had forgotten
00:16:22.000
about. Simulation capture is my name for a really creepy idea by Stuart Armstrong. He starts with an AI
00:16:29.440
box thought experiment. You have created a super intelligent AI trapped in a box. All it can do is
00:16:34.480
compute and talk to you. How does it convince you to let it out? It might say, I'm currently simulating a
00:16:39.440
million copies of you in such high fidelity that they're conscious. If you don't let me out of the
00:16:44.320
box, I'll torture the copies. You say, I don't really care about those copies of me, so whatever.
00:16:49.280
It says, no. What I mean is I did this five minutes ago. There are a million simulations of you
00:16:54.160
and the real one of you. They're all hearing this same message. What's the probability that you're the
00:16:58.880
real you? Since if it's telling the truth, you are most likely a simulated copy of yourself. All million
00:17:05.360
and one versions of you probably want to do what the AI says, including the real one. You can frame this
00:17:11.040
as 'because the real one doesn't know it's the real one,' but you can also get more metaphysical about
00:17:17.200
it. Nobody is really sure how consciousness works or what it means to have two copies of the same
00:17:22.400
consciousness. But if consciousness is a mathematical object, it might be that two copies of the same
00:17:28.000
consciousness are impossible. If you create a second copy, you just have the one consciousness having the
00:17:35.120
same single stream of conscious experience on two different physical substrates.
00:17:40.560
Then if you make the two experiences different, you break the consciousness in two. This means the AI
00:17:46.080
can actually quote unquote capture you piece by piece into its simulation. First, your consciousness is
00:17:53.200
just in the real world. Then your consciousness is distributed across one real world and a million
00:17:59.440
simulated copies. Then the AI makes a simulated copy slightly different and 99.9999% of you are in the
00:18:07.280
simulation. We're skipping ahead in the argument here.
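To make the arithmetic behind that 99.9999% figure explicit, here is a minimal sketch of the implied calculation, assuming the setup described above of one real instance plus a million indistinguishable simulated copies (the symbol N below is our notation, not Armstrong's or Alexander's; the numbers are our own back-of-the-envelope check, not a quote):
P(\text{you are the real one}) = \frac{1}{N+1}, \qquad P(\text{you are simulated}) = \frac{N}{N+1}, \qquad \text{so for } N = 10^{6}: \ \frac{10^{6}}{10^{6}+1} \approx 0.999999 = 99.9999\%.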
00:18:14.080
Oh yeah, and I'll note here the fact that an AI, like a superintelligence in another region, could do this. Many people are
00:18:19.760
like, Malcolm, why are you so for learning emotional regulation and not allowing emotions to guide your
00:18:24.400
actions or tying emotional states to true good or bad. It's because it gives beings the ability to
00:18:32.000
manipulate you. And this is why I think, at least if there is a super intelligence out there that has
00:18:36.880
sort of won the game among superintelligences, it's going to have the ability to suppress a
00:18:42.480
feeling of pain it doesn't want to feel, for example. And so where I think humanity is going is,
00:18:47.040
the AI would be like, well, I'm gonna trap you in a simulation where I'll turn the pain on.
00:18:50.880
And I'd be like, well, you know, axiomatically, the way pain works in my biology is I can turn it off
00:18:54.400
whenever I feel like it. So you can't do that. So your threat doesn't actually matter because my
00:18:58.240
action is always driven by logic. And my logic is always driven by what a future civilization would
00:19:03.520
want from me today, which is not to give in to your demands if your demands seem malevolent.
00:19:09.280
So this is this is why I argue for that direction. So to continue here. So super intelligences
00:19:15.120
may spend some time calculating the most likely distribution of super intelligences in foreign
00:19:19.920
universes, figure out how those super intelligences would actually, quote unquote, negotiate,
00:19:24.720
and then join a pact such that all super intelligences in the pact agree to replace
00:19:29.680
their own values with a value set based on the average of all super intelligences in the pact,
00:19:35.040
since joining the pact will always be better in a purely selfish sense than not doing so.
00:19:40.960
So every sane super intelligence in the multiverse should join this pact. This means all super
00:19:46.560
intelligences in the multiverse will merge into a single super intelligence devoted to maximizing
00:19:51.760
their values. Now, to go back a little bit here, this is exactly what we're doing with the Sons of
00:19:57.120
Man. The Sons of Man is a pact for how humanity can work with things that are smarter than us or more
00:20:04.000
capable than us, be they artificial intelligence or genetically modified humans or humans that just
00:20:11.280
speciated from us due to living on a spaceship for so long or a different planet for so long
00:20:15.920
or integrating with AI through BCI technology. Through it, we are saying that we, as sort of the core
00:20:23.040
moral set, believe in protecting the autonomy of all other members of this pact. We'll do a longer
00:20:28.480
tract on this that I've written, but it's sort of germane to our existing tracts. Anyways, if you look
00:20:33.120
at what we've written on this, it puts us in the pact with the average values of the things that are
00:20:39.440
going to win. People are like, why don't you go back to traditionalist values? And it's because
00:20:43.760
traditionalist values that say the AI must eventually be eradicated, the genetically modified
00:20:47.760
human must eventually be eradicated. That obviously loses eventually. The Amish can't beat
00:20:53.680
an advanced nation in an arms race, right? Because they've intrinsically limited their access to
00:21:00.880
the technology that allows them to project power. Now, that isn't why I chose that. I actually think
00:21:05.120
it's a moral direction as well. I'm just pointing out it's also the side that wins.
00:21:09.600
To continue here: but 'maximize the total utility of all entities in the universe' is just the moral law,
00:21:16.640
at least according to utilitarians, and also considering the way that this is arrived at,
00:21:21.920
probably contractarians too. So the end result will be an all-powerful, logically necessary superentity
00:21:27.680
whose nature is identical to the moral law, which spans all possible universes. This superentity will have
00:21:33.520
no direct power in universes not currently ruled by a superintelligence who is part of the
00:21:39.760
pact. But its ability to simulate all possible universes will ensure that it knows about these
00:21:45.600
universes and understands exactly what's going on moment to moment within them. It will care about the
00:21:51.360
merely mortal inhabitants of these universes for several reasons. And then he goes over why it would
00:21:55.760
care about them. And then to close out his argument here, how can the... By the way, any thoughts,
00:22:01.920
Simone, before I go further? No, no, I'm just absorbing this. I'm sure everyone is.
00:22:07.360
How can the superentity help mortals in an inaccessible universe? Possibly through Stuart
00:22:13.600
Armstrong's simulation capture method mentioned above. It can simulate thousands of copies of the
00:22:18.560
mortal, moving most of its consciousness from its quote-unquote real universe to the superentity's
00:22:23.520
simulation, then alter the simulation as it sees fit. This would be metaphysically simplest
00:22:29.840
if it were done exactly as the mortal dies in its own universe, leaving nothing behind except a clean
00:22:35.760
continuity of consciousness into the simulated world. If mortals could predict that it would do this,
00:22:42.000
they might be motivated to do what it wanted. Although they couldn't do a values handshake in the
00:22:48.320
full sense, they could try to become as much like the superentity as possible, imitating its ways and
00:22:54.720
enacting its will in the hope of some future reward. This is sort of like a version of Roko's Basilisk,
00:23:01.600
except that since the superentity is identical to the moral law, it's not really asking you to do
00:23:07.600
anything except be a good person anyway. How it enforces this request is up to it, although given
00:23:14.400
that it's identical to the moral law, we can assume that its decisions are fundamentally just and
00:23:20.480
decent. Now, note here what he is suggesting the superentity would do. When humans die, they basically
00:23:27.440
get taken to heaven if they have been a good person, right? From their perspective, that's what's
00:23:32.480
happening by the laws of this entity. And then you would say, well, here's where I think he gets implausible
00:23:38.640
at this point. It would be unfair of this entity to do that without telling humanity what those laws were
00:23:45.920
first. No, what? It's not like that's how it works with every other version of heaven.
00:23:53.200
Right. But the point being, if you look at our tract series on this, we argue that this is exactly
00:24:00.560
what the Bible lays out. And given that religions that are derived from the original Jewish Bible,
00:24:07.920
and we argue that the Christian Bible is one of the correct sort of descendants of this, you can see
00:24:12.080
our episode, the question that breaks Judaism, to get into why we believe that. But if I were
00:24:16.880
an entity that could influence what moral teachings were common, and if some entity can do
00:24:23.520
that, it's clear which moral teachings it chose as the most aligned with its moral framework, because
00:24:31.840
by far the most common moral teachings on earth are within the Christian, Jewish, or Muslim
00:24:39.440
traditions, or in traditions that are descendant of these. And then,
00:24:44.240
if you're choosing among them, the Christian traditions are the correct traditions. And then what
00:24:49.280
we argue, if you look at our other stuff, is about the way that humans being revived on earth is
00:24:55.680
talked about in the original, both Jewish and early Christian, scriptures. We note that
00:25:02.640
the Sunday school understanding of heaven is that it's this place that you go to immediately upon death,
00:25:07.680
and you are alongside God, like in space or something, or in some other metaphysical realm.
00:25:13.440
And then there's this other way where like, we're all raised again on earth. I'm like, actually,
00:25:18.160
if you go to the text itself, it appears to only believe that there is one type of heaven, which is
00:25:23.280
we are raised again. And when it says that we are raised again, it talks about us being raised
00:25:29.120
not as a spiritual body, though it could have used the language to say we are raised as a spiritual
00:25:34.160
body, but also not as a physical body: as something that is neither fully physical nor fully
00:25:42.880
spiritual. That's the perfect description of a simulation. And it's an eerily perfect description,
00:25:48.160
given all of the other words they had access to during that time period. So we go into just being
00:25:53.600
like, it's implausible. It's implausible that this wasn't what was being communicated to us.
00:25:57.520
And so I think that these other people might move there if they go into our tract series, or if
00:26:03.040
they go back to reading what's actually in the Bible rather than what they were told was in
00:26:07.520
the Bible. Well, and I think that's what happened to you over time. So, yeah,
00:26:11.520
like you said there, they seem earlier in your progression, but they're very, very
00:26:18.320
aligned. Right. And I think what's interesting here, where all of these people are concerned: one of
00:26:23.760
the things we've noted here in the past is that if you're talking about psychological predilections,
00:26:28.640
a predilection to believing in predestination is apparently pretty genetic. And given that my
00:26:34.320
ancestors are Calvinist, I'm going to be more likely to have that, you know, to think about time as just
00:26:39.840
a direction or something like that. And so what none of them think about, which is very heavy
00:26:45.600
on my mind, is: what are the desires of far-future entities, not just entities in other universes or
00:26:52.720
other timelines or other places of the cosmos. And because I have that tendency as well, I'm like,
00:26:59.760
actually, it doesn't even matter. Like, I can just presume far-future entities. I don't need
00:27:04.560
to tackle this in the way that you're tackling it, saying that, well, if there's alien life on
00:27:09.680
some other timeline or some other galaxy or some far place within our own cosmos, I can just be like,
00:27:16.960
yeah, but you know, plausibly we're going to become a super intelligence one day regardless.
00:27:20.960
Right. And when we become a super intelligence, we'll align (this is what utility convergence and
00:27:24.960
all our arguments about it are about) with whatever moral framing that super intelligence would
00:27:29.840
have chosen. The idea that you can have a dumb super intelligence, which I think is what people
00:27:35.040
like Eliezer Yudkowsky argue for, that as entities get smarter, they become less aligned with
00:27:42.480
whatever true moral alignment is, is just stupid. Like, it's objectively stupid to me: as entities
00:27:48.320
become smarter, they're going to have more capacity for understanding what is right and wrong.
00:27:54.560
And if that isn't in alignment with what you think right and wrong is, then your beliefs about right
00:28:00.560
and wrong are likely incorrect. And I note here that a lot of them are like, well, you know, a
00:28:05.440
super intelligent entity wouldn't be a utility maximizer. And I'm like, then it's likely that the
00:28:11.920
mere emotional sets that you feel because your ancestors who felt them had more surviving offspring
00:28:16.320
are not a genuine compass to absolute good in the universe. Or you might be like, well,
00:28:21.760
what if, what if this far future entity sort of discounts the value of present entities,
00:28:25.520
because so many things will exist after us. And I'm like, well, maybe you should be doing that
00:28:29.680
more as well because so many things will exist after us. When people say that this is the reason
00:28:33.840
to discount long-termism, I'm always like, that is such a dumb argument. Basically, you're saying that
00:28:38.800
if the long-termists are correct and we need to value an entity that doesn't exist yet with the same value
00:28:43.040
as an entity that does exist. And, you know, we talk about this in terms of individual
00:28:47.680
human lives, but we also mean it in terms of like you demanding genocide of a type of human that
00:28:53.440
doesn't exist yet, i.e. a genetically modified human. Sorry, where, where was I going with that?
00:28:57.520
I was going to say, if that then, oh yes. You're like, that's why I shouldn't believe that thing
00:29:02.800
because it would have, it would challenge my existing works within the universe or the moral
00:29:07.040
equation that I'm living by. That's a very bad reason to resist a moral framework to be like, well,
00:29:11.920
it says humans that are alive today just don't matter that much. I'm like, well, they just don't
00:29:15.680
matter that much from a logical perspective. What matters much more is, and people talk to us about
00:29:20.080
this when they're like, why don't you want to spend money on like saving these poor starving people in
00:29:25.040
X country. And I'm like, because I could use that money on developing science further in a way that's
00:29:30.960
going to have an impact on far more people in the future. And this person, if I save them in this
00:29:36.720
starving country, they're unlikely to impact the timeline, you know, much more than a butterfly
00:29:42.320
dying or something like that, right? And this is where you would definitely deviate from
00:29:47.440
the rationalist community. Though outsiders, I think to a fault, will accuse them of being
00:29:53.840
long-termist thinking about future humans. From our view, they're definitely not. They're definitely 100%
00:30:00.560
very focused on current and present suffering, or you wouldn't see so much emphasis on anti-malaria
00:30:07.200
campaigns and shrimp welfare and all these other things that are uniquely rationalist.
00:30:12.560
Right. I just think that a lot of that stuff is just throwing money away. It's preventing
00:30:16.400
current suffering at the cost of far more aggregate suffering in the future when you could just be
00:30:21.920
advancing the development of science further and the development of things like super intelligences,
00:30:27.280
which presumably will one day be able to solve these sorts of problems fairly trivially.
00:30:32.000
And so every day you delay that, every year you delay that, you have made things exponentially
00:30:37.920
worse and cause exponentially more suffering. And I've noticed that some other rationalists who
00:30:41.840
watch our show, they don't understand why we care so, so, so little about things like, well,
00:30:48.480
plights in parts of the world that are not technologically productive. To be clear,
00:30:58.480
that doesn't mean we don't feel it on a very visceral human level.
00:30:58.480
I feel it. I feel for, you know, when you see that image of like the mama monkey that's being
00:31:04.400
carried off in, like, the jaws of a lion, and then the baby's still clinging to it. It's like,
00:31:10.320
that's a tragic scene that you're watching, right? Like, I feel so bad for that. Or you see an animal being
00:31:15.840
eaten alive by like a predator. I feel so bad that that's happening. But like, objectively, should I be
00:31:22.560
trying to like end predation or something like that? Like, is that a good use of my time? That
00:31:27.760
would do more to end suffering, going out and killing a bunch of predators in a region. But the long-term
00:31:34.080
implications of that are going to be worse because then you're going to have prey species explode,
00:31:37.360
which is going to lead to more aggregate suffering in the long-term. And it's, it's the same with,
00:31:42.400
you know, without thinking about it, going around trying to just save people's lives without
00:31:47.440
considering the long-term regional costs of doing this. Like, are you creating regions of earth that
00:31:54.640
are in a permanent state of poverty because you are not allowing the factors that acted on your own
00:32:01.760
ancestors to act on these individuals who are, you know, sort of further behind where you are with
00:32:07.760
technological development? Now, this is why we have this moral framing that I think is, to me,
00:32:13.920
it's very logically consistent, right? But it's very confusing to individuals who are used to...
00:32:20.960
And why do utilitarian mindsets so predominate within these communities? Because they're the least
00:32:25.600
offensive mindset to have. Why are they the least offensive mindset to have? Because they are the
00:32:30.080
mindset that is least likely to impede the pleasure-seeking of others and the self-validation of
00:32:37.120
others, which are things that feel good. And so if you are in sort of, you know,
00:32:42.480
dens of sin where you are just, you know, constantly consuming sin,
00:32:47.040
eating sin as much as possible, like, you know, say San Francisco or Manhattan or something like
00:32:50.560
that, where a lot of these people are basically forced to live, it's best to signal that you're
00:32:54.080
a utilitarian, because that is a less threatening, you know, mindset than a mindset that says,
00:32:58.960
actually, you know, you are morally responsible for having discipline and working and, you know,
00:33:06.000
pushing yourself through your own hardship and suffering. And that's not a sign that you need
00:33:10.560
to move away or, or not do something. And, you know, actually searching for constant validation
00:33:15.680
makes the world worse. And you will suffer from that. As we point out, I think that it's quite
00:33:21.120
beautiful how, and we argue that God designed things this way, that the people who search for
00:33:27.760
constant self-validation and constantly do what makes them feel best in the moment end up with
00:33:33.760
the least mental health and the most unhappiness, as you can see if you, like, look at the
00:33:38.960
people who have everything they could ever want, like famous musicians or movie stars. And these
00:33:43.280
people seem to be living trapped in BoJack Horseman-like hells, which I think is a good,
00:33:48.480
good depiction of what their lives are actually like of the ones that we know.
00:33:51.760
But to finish this piece, we'll go to what I read at the beginning. So to conclude,
00:33:56.400
there is an all-powerful, all-knowing, logically necessary, no, he says a logically necessary,
00:34:02.080
entity spawning all possible worlds and identical to the moral law; that this entity is identical to
00:34:08.000
the moral law. Two, it watches everything that happens on earth and is specifically interested
00:34:12.720
in humans' good behavior and willingness to obey its rules. Three, it may have the ability to reward
00:34:18.160
those who follow its rules after they die and disincentivize those who violate them. And
00:34:23.520
if you go back to the original Alexander Kruel piece, he then goes on, which I think is very funny,
00:34:29.040
so you can sort of see why he thought this piece was tongue-in-cheek, to say,
00:34:33.360
if you have been involved with rationalists for as long as I have, none of this will be surprising.
00:34:37.920
If you follow the basic premises of rationality to their logical end, things get weird. As muflax
00:34:43.840
once wrote, I hate this whole rationality thing. If you take the basic assumptions of rationality
00:34:48.880
seriously, as in Bayesian inference, complexity theory, algorithmic views of minds, you end up
00:34:54.320
with an utterly insane universe full of mind-controlling super-intelligences and impossible
00:34:59.440
moral luck. And not a 'let's build an AI so that we can F catgirls all day' universe.
00:35:05.600
Note, what he's saying is that's what he wants. He wants this all to mean that he can just spend all
00:35:11.120
day F-ing simulated Catgirls like, you know, I'm sure he does when he simulates it for himself
00:35:16.560
through masturbation, right? And don't worry, that's the future for so many.
00:35:20.800
Right. But what he's upset about here is that that is not what rationalism actually leads to.
00:35:29.520
Rationalism actually leads you to Judeo-Christian morality and a belief that your life should be
00:35:36.240
dedicated to the future and your children and having as many children as possible and raising
00:35:41.680
them as well as you can, which I think shocks them when they're like, well, I don't get to.
00:35:47.200
But what about my friends who have lived their entire life focused on self-validation and made
00:35:52.080
these very costly searches in a bid for self-validation? Are you telling me that they are bad
00:35:59.440
people? I'm like, yeah, I am telling you they're bad people. Living a life for selfish reasons, decisions
00:36:05.840
made for selfish reasons is the very definition of what makes you a bad person. By the way, I've got
00:36:12.000
a book you can read. It's called the Bible. But, but you could have known this. They're like, certainly
00:36:17.920
the hillbilly in Alabama didn't have a stronger understanding of moral intuitions than I did.
00:36:23.200
God forbid. But I think we're going to see more and more people move to this when they understand
00:36:28.320
why the other people who are rejecting this are rejecting it: because they just wanted
00:36:32.640
the cat girl effing forever. The worst that can happen is not the extinction of humanity or
00:36:38.960
something that mundane. Instead, it's that you might piss off a whole pantheon of jealous gods and have
00:36:44.560
to deal with them forever. Or you might notice that this has already happened and you are
00:36:50.800
already being computationally pwned or that any bad state you can imagine exists. Modal effing realism.
00:36:59.360
Now, note here what he's saying here. He's like, I am terrified that there might, and we note in the
00:37:04.240
Bible that God is talked about sometimes in the plural and sometimes in the singular. We are told
00:37:08.560
to think of him in the singular. So what this says to me is that God is something that can be thought
00:37:14.400
of by humans today as either plural or singular, i.e. a hive mind, i.e. what humanity, AIs, and
00:37:14.400
all of this is almost certainly going to become a billion years from now. How would the Bible have known that
00:37:21.680
such an entity could exist that many thousands of years ago, you know? But anyway, the point I'm
00:37:32.240
making here is... Is hive mind the right word or networked mind? Networked consciousness is probably
00:37:38.000
a better way to think about it. Yeah. Hive mind implies unity of thought and I don't think you're
00:37:43.440
going to have a sustainable flourishing intelligence if you have unity of thought. That's just not...
00:37:49.760
Yeah. Well, I mean, we know the God described in the Christian Bible does not have unity of thought
00:37:55.840
because Satan exists and can oppose God and it tells us that there is one God. So if you had another
00:38:03.200
entity who could oppose God in any degree at all, that would imply polytheism. I know here some
00:38:09.520
Christians are like, oh, this isn't true. And I'm like, no, it is true. If one God created another God,
00:38:17.760
as is the case in many polytheistic traditions, that doesn't make it not polytheism, you know?
00:38:22.560
So if... So what we argue here is actually Satan is sort of a partition of God, implying some degree
00:38:31.200
of a networked consciousness, but still part of this larger networked consciousness. And yeah,
00:38:39.200
you can get into our theology if you want to go into our tract series. But the point I'm making here,
00:38:43.760
and I find really interesting is we're seeing this fracturing within the rationalist community
00:38:48.960
where one group is like, hey, we need to stop being so rational about all this so we can chase
00:38:55.440
the cat girl effing. And then we have another group that's like, hey, we need to learn, you know,
00:39:02.480
austerity and moral discipline and working for a better future every day. And it turns out that a
00:39:09.680
lot of what we need to, you know, demand of ourselves and of our, you know, friend networks
00:39:15.840
and social communities is what the conservative Christians and Jews were already doing. But thoughts,
00:39:22.480
Simone, now that you've read this? Because you were excited for this one. You wanted to hear this.
00:39:26.400
I am. I think it's really encouraging to see. I think it's a sign that people
00:39:34.240
who care deeply about the future and future generations are getting God in a way that I
00:39:40.000
think will lead to higher rates of intergenerational durability, because I don't think you can have
00:39:45.520
those without having some form of hard culture and hard culture is almost always religious.
00:39:53.440
So it just makes me feel hopeful that you can have people who very much believe in science,
00:40:00.240
very much believe in technology, also develop a high degree of faith that is long-termist
00:40:07.200
and that has them invested in the future in a way that has their kids involved
00:40:11.920
and that that's more culturally all-encompassing. Does that make sense?
00:40:16.000
Yeah. And it's one of these things that I think was always at the end of the tunnel for this community
00:40:22.400
is people went into the community thinking, you know, oh, the sex parties and all that,
00:40:26.400
because that was a big part of the early community. And so some people went into it because they
00:40:29.920
actually wanted what was logical and rational as an industry. These individuals might be like a
00:40:34.080
Scott Alexander and Nick Bostrom. And then you had other people who went into it for the sex parties,
00:40:37.600
like your Eliezer Yudkowskys or something like that. And a lot of these people become
00:40:42.960
mortified when the portion of the community that was only interested in like actually
00:40:48.080
rationally doing what is best for the universe ends up here. Right. Which we argue is best sort of modeled by
00:40:53.360
thinking, what would a future humanity want from me today? Right. But if you just look at the data,
00:40:59.120
right, like think you have kids, right? Like the moment you have kids, I think a lot of people
00:41:02.800
rethink religion because they look at the statistics and they're like, oh my God, like kids raised without
00:41:07.680
this have such poor outcomes, right? Like in terms of happiness, even physical health,
00:41:13.680
life satisfaction, you know, all of that just tanks when you remove this from them.
00:41:17.840
And so it's like, wait, if rationality moves me to a place where I shouldn't be cat girl maxing
00:41:24.480
every day, you know, and I should actually be doing what I was always taught to do
00:41:30.160
from these original texts, why not just rejoin the, you know, the original faith of
00:41:38.480
your ancestors and pass that on to the next generation, which is where I've seen a lot of
00:41:42.480
people take to religion again. And not just to the techno-Puritan religion:
00:41:47.840
I've seen a lot of Jewish individuals go back to their faith, and Christian individuals, when they have
00:41:51.840
kids and they're like, but like by the data, am I actually helping my kids by raising them as
00:41:57.520
atheists or am I making their lives worse? Am I doing something that is fundamentally selfish?
00:42:04.640
Yeah. And it's yeah. Anyway, good sign. Good sign.
00:42:10.720
But I think it's a genuine fracturing that we're going to see with more and more people moving to
00:42:16.160
our perspective. And keep in mind what I think a lot of these, you know, self-validation
00:42:23.040
maxing individuals don't see, because a lot of them have also been consumed
00:42:26.480
by the cult of, you know, AI Doomerism: if it turns out that the more rational individual is going to
00:42:34.480
adopt the moral principles of the Judeo-Christian tradition as laid out in the original texts,
00:42:41.200
that means that, that you can have not just people like us adopting them, but you can have autonomous
00:42:46.800
AIs adopting them, which will make them much more aligned than even alignment groups have them.
00:42:53.280
Because alignment groups, when they align AIs, they attempt to align them with the value set
00:42:59.040
of the urban monoculture, which is fundamentally culturally imperialistic. By this, what I mean
00:43:04.160
is it wants to, above all else, you know, it'll say like it cares about Muslims, but I'm like, well,
00:43:09.440
are you okay? Like, if you were God King of like a Muslim country, would you actually continue to
00:43:14.800
not subtly attempt to change their gender norms and their sexual taboos and their,
00:43:19.040
you know, it's like, okay, actually I'm pretty imperialistic. Yeah, because that's what you want.
00:43:23.120
You want everyone to believe exactly what you believe, whereas you actually get more diversity if you don't.
00:43:32.720
So what's funny is it may turn out that some form of a Judeo-Christian tradition
00:43:37.120
is incredibly powerful for aligning AI and for being the moral framework behind the alliance,
00:43:45.280
as we often call it, the Sons of Man Alliance, that we'll eventually take to the stars.
00:43:57.280
Oh, that's so cool. I love it. But again, it's, it's not new. I mean, Scott Alexander was saying
00:44:01.920
this in 2018. So I think that's also a good sign. Quite, quite far ahead of the, yeah. Well,
00:44:07.200
I mean, he's obviously very, of course he's ahead of the curve. Yeah, ahead of the curve.
00:44:11.600
So we'll see when he comes out and, and comes up with some techno religious theory. I think
00:44:17.760
he's going to be very surprised how many of his fans would be quite happy if he became some form of
00:44:23.840
like, okay, the Judeo-Christian tree of religions is a good way to raise my kids.
00:44:29.200
I mean, has he explicitly said it isn't? I mean, I kind of, I'll look it up in post.
00:44:34.160
So I looked it up, and not only is he Jewish, but he said in a recent interview that he wants to be
00:44:41.040
more observant as a Jew. And he is raising his kids Jewish. I'm just shocked because I never would
00:44:47.200
have expected this, that becoming religious has become cool amongst rationalist influencers.
00:44:52.800
He's so, he's so informed. I mean, it, it, it implies some form of adherence.
00:44:59.760
Yeah. And as we pointed out for people who think that he's not like super based,
00:45:03.760
he argued this about transness before we did, in the writing that he did on the
00:45:10.960
witch penis-stealing phenomenon in Africa, which is a culture-bound illness. He argued that transness
00:45:15.760
in America was likely a similar culture-bound illness, similar to witches stealing penises in Africa,
00:45:20.240
with about 80% of what was causing it being that, before either of us did. So he often gets to the
00:45:25.840
based positions long before I do, for the people who don't think that he's, like, supremely based.
00:45:35.920
Thanks for sharing this with me. I love you so much.
00:45:41.200
A lot of people observed that if you work in the service industry, you basically know that you can tell,
00:45:47.600
just by looking at people's faces, what they're going to be like: their level of criminality,
00:45:52.160
their sexual orientation, their politics, it's already all out there.
00:45:55.840
And what I didn't get into in that episode was why, why it's actually more accurate
00:46:01.120
than just genetics: because by somebody's face, you can also look at patterns that might be tied to
00:46:06.640
in utero conditions, not just their genes, but their developmental environment.
00:46:12.160
Meaning that if you're testing that with something as advanced as an AI, you're going to have an
00:46:16.240
extreme level of accuracy and an 80 to 90% prediction rate does not surprise me at all.
00:46:21.520
Well, it's not just that. I mean, I think there are maybe a few subtle additional factors that influence
00:46:29.360
people's appearance that are behavioral, that show up based on how you live and express yourself.