Effective Altruism Is a Waste of Money: Can It Be Fixed? (Hard EA)
Episode Stats
Length
1 hour and 57 minutes
Words per Minute
178
Summary
In this episode of the Effective Altruism podcast, we talk about why we are stepping back from our day jobs to focus more on trying to fix the world we're dealing with right now, as well as the many threats to humanity's flourishing that are on the horizon.
Transcript
00:00:00.000
this is important what could be more important than earning the approval of normies richard
00:00:06.100
science hello simone i'm excited to be here with you today this might be the most important
00:00:15.280
announcement from a personal perspective that we have made on this show but recently we have
00:00:20.060
decided to begin stepping back from our day jobs to focus more on trying to fix this untethering of
00:00:30.000
society that we are dealing with right now as well as the many threats to humanity's flourishing that
00:00:37.620
are on the horizon at the moment this major decision has come downstream of two big realizations i had
00:00:46.800
recently the first was that as i do about every year or every other year i took inventory of all of the
00:00:53.320
major threats or big things that could change about the future of humanity so i can better make my own
00:00:58.920
plans for the future and for my kids future but this time i did something i hadn't done before
00:01:02.760
decided to also take an inventory of all of the major efforts that are focused on alleviating these
00:01:08.740
potential threats and i had assumed as we've often said you know we've been affiliated with the
00:01:14.440
periphery of the effective altruist movement for a while that while the effective altruists may have
00:01:20.580
problems they were at least competently working on these issues because they were signaling that they
00:01:27.120
cared about them but when i looked at the actual solutions they were attempting i was shocked it made
00:01:34.680
me realize that a lot of the funding that i thought was going to fixing these issues was going to
00:01:40.440
something akin to that scene from indiana jones we have top men working on it right now
00:01:52.520
the goal was to reform charity in a world where selfless giving had become a rarity
00:02:14.260
no vain spotlight no sweet disguise just honest giving no social prize but as the monoculture took the stage
00:02:24.960
it broke their integrity feigning righteous rage now every move is played so safe ignoring truths that make them
00:02:48.480
once they were bold now they just do what they are told
00:03:03.120
second i have always considered us as again adjacent to the effective altruist movement or
00:03:11.780
living within the periphery of this movement and heckling it towards making more responsible decisions
00:03:17.360
recently as i was going over the stats for our podcast and other podcasts i realized
00:03:23.940
that our podcast is more popular than the most popular effective altruist podcast 80,000 hours
00:03:34.100
now i will note here that spencer greenberg's podcast clearer thinking which many associate with the effective altruist
00:03:42.000
i will not deny that i really like spencer good friend
00:03:55.160
pointing out the mistakes that other people are making
00:03:57.760
i'm now somebody who has to take responsibility for fixing things
00:04:01.820
especially if the timelines that humanity is facing are short
00:04:05.660
and so we will create an alternative with hardea.org
00:04:08.760
what we are doing to distribute this is we're going to start doing grants so if you have ideas that you think might appeal to us we would like to help fund you and help get stuff out there
00:04:17.160
obviously we are looking to raise money as well we already have a 501(c)(3) nonprofit charity
00:04:22.600
so if you know any big potential donors who might be interested in this please let them know about this project
00:04:28.760
and i note here that unlike traditional ea we are not just looking for iterative solutions to the world's existing problems
00:04:34.800
but anything that can move humanity forward into the next era of our evolution
00:04:38.260
whether that's genetic modification technology gene drives brain-computer interfaces artificial wombs
00:04:46.680
anything the traditional effective altruists were afraid of touching because of its potential effect on their reputation
00:04:52.500
but that needs to happen for humanity to compete with ai and eventually get to the stars
00:04:59.560
we stand on the brink of a breakthrough in human evolution
00:05:05.000
held back the pace of scientific discovery for decades
00:05:18.920
new patrons emerged who possessed an appetite for my discoveries
00:05:32.440
young people from all over the globe are joining up to fight for the future
00:05:52.120
most of the mainstream figures who could have stanned
00:05:56.280
or run or helped the ea movement continue to grow
00:06:03.240
and i realize that even some nerds don't know what this means
00:06:05.160
this is the order that the empire gave to kill all the jedi
00:06:08.440
thank you cody now let's get a move on we've got a battle to win here
00:06:20.360
what i mean here is when they have somebody who is extra effective
00:06:34.920
this is something in a weird way like to an extent that i've never seen in
00:06:42.280
yeah and as such many people who might be their greatest champions now
00:06:46.280
like say spencer greenberg just like i don't want to be affiliated as
00:06:52.440
but this provides an opportunity for us one it makes it hard for them to say
00:06:57.800
you guys aren't real ea when we have a bigger platform than any of the
00:07:03.880
but two it allows us to attempt to cure the movement even through our advocacy
00:07:09.240
by that what i mean is the effective altruist movement was originally
00:07:13.160
founded with the intention of saying most philanthropy that's happening right
00:07:18.040
now is being done for social signaling reasons or personal signaling
00:07:21.800
reasons it's either being done to signal to other people you're a good
00:07:24.440
person or to signal to yourself you're a good person
00:07:26.600
so when you're faced with a decision like should i indulgently spend two years
00:07:31.400
doing charity work like building houses in africa or something like that or
00:07:35.320
should i go take that job at mckinsey and then send that money to charity and see
00:07:38.760
how many houses i can build i choose the mckinsey route because
00:07:42.360
while it may be less good for personal signaling or social signaling
00:07:45.960
it is the more efficacious thing to have an impact on the world
00:07:50.680
unfortunately this movement has been almost totally captured by social signaling
00:08:02.200
it broke their integrity feigning righteous rage
00:08:09.240
and this is largely downstream of something the movement should have
00:08:30.680
expected from itself it should have said if we're not going to care about social
00:08:34.200
signaling and actually making a difference we need to prepare to be the
00:08:38.360
villains we need to prepare to be hated by those in power because we are not
00:08:44.280
going to toe their lines now you're trying to make me out to be the bad guy
00:08:47.720
yes i'm trying to make you a bad guy we're both bad guys
00:08:51.080
we're professional bad guys ding hello and instead they took the exact opposite
00:08:56.200
approach which is to say we want to be socially respectable we want to be
00:09:00.680
accepted by the mainstream power players in our society we want to suck up to them
00:09:05.400
when they dropped this it was the original sin that led to the downfall of ea
00:09:10.280
and then i think a lot of this is because from the beginning they didn't focus on
00:09:16.840
the misaligned incentives that cause people who are altruistic to get
00:09:23.880
in other words they didn't focus on making sure that everyone's efforts were
00:09:28.840
self-sustaining they supported efforts that required ongoing fundraising and
00:09:32.760
ongoing fundraising requires social signaling so that those groups that
00:09:36.840
survive while dependent on fundraising are the ones who are better at
00:09:40.840
signaling not the ones who are better at solving the problem so i think that's
00:09:43.880
part of it it's not that these people became corrupted it's that they never
00:09:48.200
addressed the inherently misaligned incentives that made this
00:09:53.000
problem in the first place to elaborate on what she means by this if you have a large
00:09:56.440
bureaucratic institution that is dedicated to social good um individuals
00:10:01.000
within that network are going to be drawn to it for one of two reasons either they
00:10:04.440
want status or they want to do social good the problem is that the people who
00:10:08.280
want to do social good they need to focus on doing social good whereas the
00:10:12.120
people who want status can focus all their time on politicking
00:10:15.000
as well the people who want to do social good must act with integrity which
00:10:18.360
somewhat binds their hands whereas the people who want status well they can use
00:10:22.280
the urban monoculture's status games to sabotage other people really easily
00:10:27.000
and so they always end up rising to the top whereas the people actually trying to
00:10:31.880
do something efficacious lose out because at least 50% of their time needs to go to like
00:10:34.920
actual efficacious work whereas you know near 100% of the signalers' time can just go to
00:10:38.760
signaling and i want to say this is something that's not just
00:10:42.200
you're not only going to see it in the non-profit or the altruistic world this
00:10:47.800
also shows up in some of the largest work from home experiments performed
00:10:52.680
one of the earliest large-scale work-from-home experiments performed found
00:10:56.440
for example that employees this was i think an online travel agency that tried
00:11:00.280
this employees who worked from home were more effective they got more work done
00:11:04.360
they were better employees in terms of getting the work done in terms of the
00:11:07.480
bottom line of the company but those who stayed in the office got promoted more
00:11:12.360
so again this is about where is your time going is your time going to
00:11:16.520
face time to signaling or is it going to getting the work done
00:11:19.240
and if you have a system where you can only continue to get resources or get
00:11:23.960
promotions or get more money by signaling you're going to start focusing on
00:11:27.960
signaling and those who survive who last in those organizations are going to be
00:11:37.480
all you activists can go fuck yourselves that was so inspiring
00:11:43.640
being in those rooms when the ea movement was being formed all those years ago
00:11:48.680
knowing all those edgy young autists who wanted to fix things in big ways
00:11:53.320
seeing what the movement has turned into taken over by bureaucratic
00:12:03.960
a man seeks a good time but he is not a hedonist he seeks love he just doesn't know where to look
00:12:12.040
he should look to nature gentle aquatic shrimp have all the answers
00:12:19.320
your door was locked so we let ourselves in you may have found my inner sanctum shut up now give us the
00:12:25.720
plans or whatever the hell you have i have a tank full of gentle cuttlefish give us the cuttlefish
00:12:37.880
you abandoned me i have cuttlefish look into my eyes
00:12:46.520
most of the effective altruist organizations have become giant peerage networks these weird
00:12:51.000
status dominance hierarchies uh that are constantly squabbling over the most petty of disagreements
00:12:59.240
yeah just for people who don't know what peerage is if you were a peer of the realm you were
00:13:04.760
essentially made noble by a ruling class like a king or queen so what we're talking about is
00:13:10.360
essentially this sort of declared aristocracy that can be very insular and incestuous
00:13:17.720
stipends who then are basically forced to stan the people above them in the pyramid yeah well and
00:13:25.560
here's the other thing and this is why i think there's such a big garden gnome problem
00:13:29.720
in the ea industry to give a little context for those who haven't seen our other discussions about
00:13:33.800
garden gnomes in regency era england there was this trend among very wealthy households you know
00:13:40.200
people who had large estates to have what was referred to as an ornamental hermit and these were
00:13:45.160
basically like learned wise men who they would have live in a cottage on their land and then like
00:13:50.200
come to you know their their dinner parties and stuff when they had house guests and kind of
00:13:55.080
impress them with their philosophy and they were often required to do things like not drink and let
00:13:59.560
their nails grow long and grow a beard so they looked to be sort of like a very picturesque intellectual
00:14:05.800
and we've noticed that within the ea industry this is the industry i guess space social sphere this is
00:14:11.480
the one place where you actually see modern ornamental hermits that is to say people who are in the ea
00:14:19.000
space and rationalist space who literally make their money by sort of being an intellectual who is
00:14:27.240
paid who has a patron who is a very wealthy person who's in this space who sort of just does
00:14:33.480
sub stack writing and philosophy and who goes to these dinner parties and makes their patron look good
00:14:39.640
which is insane it's a wild trend that we have seen these gnomes are almost always male and
00:14:46.120
frequently end up congregating in these giant poly group houses where they all are dating the one
00:14:53.080
woman who could sort of tolerate them i kind of feel like marrying simone and taking her out of san
00:14:58.040
francisco this is what i saved her from he's just marrying all 1,000 of us and becoming our gnome queen
00:15:03.560
for all eternity isn't that right honey you guys are butt faces you think you can stop us the gnomes
00:15:11.320
are a powerful race do not trifle with the he's getting away with our queen who's getting orders i
00:15:21.560
need orders the overwhelmingly male population of the ea movement makes it very easy to spot the portions of it
00:15:29.000
that have become corrupted by dei beyond repair just look for any organization whose board has more
00:15:35.720
women than men on it or whose leadership is more female than male or even just anywhere near gender
00:15:41.720
equal given how overwhelmingly male the movement is that would only happen if they were using an extreme
00:15:48.200
amount of discrimination and prejudice in their hiring policies and promotion policies and outside of
00:15:54.120
the immorality of a system that is systemically unfair and prejudiced this also means the most
00:15:59.720
talented efficacious and hard-working people within an organization aren't the individuals running it
00:16:04.680
which means tons of donor money is being wasted just to signal that we're good boys and i would say
00:16:10.440
that this isn't the only problem you also have a problem from the bottom up of the movement being
00:16:14.200
very corruptible they just put no thought into governance theory when they were putting everything
00:16:17.880
together from the bottom up the problem is they have a massive tyranny of the unemployed problem
00:16:22.200
the movement decides a lot of its ideas based on what's going on in the ea forums but forums are
00:16:30.280
susceptible to a governance problem we described in the pragmatist's guide to governance called tyranny of
00:16:33.640
the unemployed which means that the individuals who have the differential time to spend all day
00:16:38.600
online on a forum or something like that an environment where the amount of time you can dedicate to a
00:16:43.800
thing gives you additional power within the community well those people are being sorted into
00:16:49.640
that position in life either because they don't have like friend networks right you know they don't
00:16:54.920
have other things that they're doing so they have been expelled from other communities and they don't
00:16:59.240
have day jobs often or day jobs outside of the ea peerage network or even responsibilities like taking
00:17:05.000
care of children or elderly people or even really needy pets like they're just sitting there in front of
00:17:09.480
their computers and so these communities always tend towards these sort of average ideas that will get you
00:17:17.080
respect by the urban monoculture when you have one of these voting based online networks instead of
00:17:22.120
the way like our core community our discord works where it's like whoever said the last thing
00:17:25.800
is the one who's there you end up with people really striving for what they think is mainstream
00:17:32.600
acceptable in society to say to post because those are the things that the average dumbass unemployed
00:17:37.720
person who's sitting at home is going to end up upvoting this is why reddit is so brain dead these days
00:17:43.720
it is also why the ea forums are so brain dead in exactly the same sort of normie normie normie normie
00:17:49.640
take way what's also wild here is when i went and checked it looks like our discord is more active
00:17:56.120
than the ea forums right now if you want to check it out you can check it out from a link i'm going to
00:18:00.600
put in a pinned comment generally the best way to use it is to focus on individual episode commentary
00:18:07.960
rather than just chat in the town hall i understand that the format differences make this comparison a little
00:18:12.920
bit apples to oranges but their top posts are only getting like 50 comments and then if you go
00:18:19.080
just like three posts down you get posts with no comments that is wild to me when contrasted with ours
00:18:25.880
you know 210, 733, 124, 128, 417, 265 then go to the top voted post on the ea forum and it's 28, 50, 64, 0, 0, 2, 0, 4,
00:18:42.280
4, 18, 14 which i think goes to show that the ea community has transitioned from being
00:18:49.000
well a community to a peerage network but anyway continuing with the point that having a community
00:18:54.280
where norms are based on the vote or the average liked opinion is going to lead to the platforming
00:19:00.760
of ultra normie low-risk beliefs and the demonization of any belief that could rock the boat or interrupt the
00:19:07.560
peerage network and this is why a movement that said we will focus on things that don't get us
00:19:12.840
social signaling and that no one else is focused on is now doing things like environmentalism which is
00:19:18.680
like the most overfunded area when contrasted with whatever other cause areas all right that doesn't
00:19:24.200
this god damn stupid ass rainforest this place fucking sucks i was wrong fuck the rainforest i
00:19:31.160
fucking hate it i fucking hate it oh now she figures it out or uh you know they're completely
00:19:36.520
not touching pronatalism and no ea org has ever done anything in the pronatalist movement never touched
00:19:42.760
pronatalism never advocated for it they have explicit rules against it they have explicit rules against doing
00:19:47.320
anything about dysgenics which is one of the things we often talk about which is the polygenic markers
00:19:51.160
associated with things like iq are decreasing within the developed world at a rapid rate
00:19:55.080
to the point where we should expect a one standard deviation decline in iq within the next 75 years
00:19:58.520
or so you can look at our video on is idiocracy possible on this particular topic but they
00:20:05.080
have in their rules that they're not allowed to focus on human genetics
00:20:09.000
and as such they can't address some of the biggest challenges that our species might be facing
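For scale, the decline rate described a moment earlier works out as follows, assuming the conventional IQ standard deviation of 15 points; the 75-year horizon is the episode's own claim, not an independent estimate:

$$\frac{15\ \text{points}}{75\ \text{years}} = 0.2\ \text{points per year} \approx 5\ \text{points per 25-year generation}$$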
00:20:13.000
they duck their heads from problems grand as fertility collapse dooms our land dysgenics a word they fear
00:20:26.280
but ignoring it will be severe ai safety a shiny show funding the theatrics for money they blow without a plan
00:20:39.080
and just spin and grin while real solutions can begin
00:20:59.000
once they were bold now they just do what they are told
00:21:04.920
in caution they lost their way time for a hard ea
00:21:11.400
our species at risk by the cowardice it is time for a movement that empowered us
00:21:18.600
but it gets worse than all of that so let's be like okay if they're not giving money to that stuff
00:21:23.800
one how much money are they actually giving out here and two what are they actually doing so by 2022
00:21:29.880
over 7,000 people had signed a pledge to donate at least 10% of their income to effective charities
00:21:35.160
there are now more than 200 ea chapters worldwide with conferences attracting thousands of attendees
00:21:41.080
and they now give out around 600 well this was in 2021 around $600 million in grants a year around four
00:21:49.400
times the amount they did five years earlier and this is really sad to me that these individuals who
00:21:55.960
aren't maybe super in touch with what the ea orgs are actually doing with their time
00:21:59.800
think that they're you know tithing this amount that makes them a quote-unquote good person and
00:22:04.520
the orgs aren't doing anything so let's give them an option here for the individuals who want to do
00:22:08.600
this for an org that is actually trying to solve things like ai safety dysgenics pronatalism all of
00:22:15.000
the major problems that our species is facing at the moment oh before i go into the projects that
00:22:21.000
they had here one of the things i really find very interesting about effective altruism is
00:22:28.520
one their absolute insistence on trying to cozy up to the leftists and democrats and also the vitriol
00:22:37.400
they have been shown by democrats isn't that interesting yeah that first effective altruism is
00:22:43.000
fairly little known it's becoming more known but really only in the context of leftist media outlets
00:22:49.240
looking at it with great suspicion who are these ea silicon valley elites deciding how we should
00:22:55.320
live our lives like it's definitely viewed as a silicon valley elite thing it's viewed with great
00:23:00.760
suspicion and it's viewed as being evil or like just like questionable or puppet mastery or a little
00:23:08.120
illuminati-ish i think because it's associated with i think that's a misunderstanding of why the
00:23:13.960
left is so hostile to it really yeah so ea fastidiously tries everything it can to not
00:23:21.720
piss off leftists yes the urban monoculture they are like we will not break a single one of your rules
00:23:28.680
but unfortunately that puts them into the same status game that the urban monoculture
00:23:34.040
people are playing so if i'm a mainstream lefty politician or political activist the ea's are
00:23:39.640
trying to compete with my social hierarchy for attention for capital for everything they come
00:23:46.120
into a room and they're like okay we can spend x amount on nets in like malaysia and it can lower
00:23:52.520
malaria rates by this amount which like lowers net suffering by y amount and i'm here like don't you
00:23:57.880
know that today is trans month or like don't you know that today is the black lives matter like
00:24:02.200
protests and they're like well i mean i understand that like myopically that's what's going on in the
00:24:06.200
united states right now but we're trying to reduce aggregate suffering and look at my math and that
00:24:11.400
gets you shouted out of the room because you are issuing an explicit status attack on them when you
00:24:17.240
do this and worse you know when i read a lot of the places attacking them they're like they fall into
00:24:22.760
two camps often it's like well they're using capitalism to advocate for like taking money from
00:24:30.040
these wealthy capitalists and then using that to quote unquote try to make the world a better place
00:24:34.040
but like these wealthy capitalists shouldn't exist at all they're just perpetuating or sort of
00:24:38.520
you know wallpapering over the capitalist system and i understand this attack entirely like if you're a
00:24:45.080
leftist and you're a socialist you're like what are you guys doing you are making the capitalists look
00:24:49.480
good it's better that we just tear everything down and i think this is because the ea mistakenly
00:24:55.160
believes that when they're talking to urban monoculture people these socialists and stuff like
00:24:59.080
that that they actually want to reduce suffering in the world because that's what they tell people they
00:25:02.520
want to do yes instead of just claim power and so because they're hugely autistic they
00:25:08.360
make very dumb decisions of taking them at face value and then they keep getting shouted out of the
00:25:12.680
room and then come back whereas us the right side the hard eas which is fundamentally more of a
00:25:19.640
right-leaning movement we have been accepted by the political apparatus you know we're regularly
00:25:24.840
meeting with groups like you know the heritage foundation or political operatives in dc and they
00:25:29.880
don't mind being affiliated with us they like that even whereas you guys were treated like lepers we
00:25:36.280
have the vp of the major ticket regularly giving pronatalist messages if the ea could get a single
00:25:43.960
one of their messages into a mainstream politician's mouth in the same way we have been successful at
00:25:48.760
this as you might be able to tell we recorded this before trump's team won and before we saw just
00:25:55.560
how much influence our side was going to have in his policy agenda but i wanted to just reflect on
00:26:01.480
how crazy this is that they had hundreds of millions of dollars and about a decade and they
00:26:07.800
were unable to really get any mainstream democratic politician on board with their agenda we are
00:26:15.240
a two-person team and we're able to get close with them and get our stuff into a presidential policy agenda
00:26:24.200
within a year of trying the incompetency and wastefulness is almost impossible to overstate
00:26:32.520
you are literally setting your money on fire if you give it to them it's not about money it's about
00:26:40.440
sending a message but you see this wherever the urban monoculture has taken hold i mean just look
00:26:46.760
at the democratic campaign they had three times the amount of money trump was using and he trounced
00:26:51.560
them any group that has given into the urban monoculture is going to be wildly inefficient
00:26:56.040
in how it spends money because it's going to spend so much of its money on signaling and it's going to
00:27:01.160
have so many incompetent people at its upper levels but here i also want to note just how wildly
00:27:06.440
inefficient they've been in even the cause areas they purport to care deeply about let's take something
00:27:11.800
like waking the world up to how risky ai could be all right they had a generation of priming material
00:27:20.280
just consider the terminator franchise we come in with the pronatalist movement where we have a
00:27:24.920
generation of everybody thinking oh there's too many people oh overpopulation is a problem
00:27:29.720
et cetera et cetera et cetera and just two people on a shoestring budget this year we've had
00:27:35.400
two or three guardian pieces on us a rolling stone piece a couple of new york times shout outs a wall street
00:27:43.480
journal feature and then just today we had another wall street journal photographer at our house
00:27:48.680
so they're going to have another piece coming up though this is actually the one who did the famous
00:27:51.400
shot of luigi mangione and we have woken up the general public to oh this is a real problem and
00:27:58.680
if you're like well a lot of those pieces have a negative slant to them and it's like well yeah
00:28:02.440
and a lot of pieces about eliezer yudkowsky have a negative slant to them as well the key is
00:28:07.480
are you playing the negative slant to build your own image or build awareness for your cause and here i would
00:28:14.040
ask you to just be rational and think about the people you've talked to recently who has done a
00:28:20.040
better job piercing the mainstream mindset with ai risk in a non-doomer way like in a constructive way
00:28:26.920
or pronatalism you know the fact that we have things like the young turks now saying well malcolm's
00:28:33.000
crazy but he's definitely right about that pronatalist stuff that's wild that we have pierced to the other
00:28:38.760
side that much in such a short time period with just a two-person team and yet a literal army of
00:28:45.560
people has had trouble piercing the popular narrative in a way that builds a constructive
00:28:50.200
conversation not only that but within the pronatalist movement we have built a movement that other than
00:28:56.040
one guy almost entirely gets along and supports each other despite our radically different beliefs and
00:29:03.480
when i say diverse beliefs i mean diverse beliefs in a way that you just weren't able to get at all
00:29:07.720
within the traditional ea movement if you go to one of our conferences yes you'll get a bunch of
00:29:12.040
the nerdy autistic programmers and entrepreneur types but you'll also get a lot of conservative
00:29:17.640
religious leaders whether they're haredi rabbis catholic priests or evangelical media players it's
00:29:23.160
wild that despite hard ea taking a much more confrontational and hardline approach to the issues
00:29:29.000
it has the potential to be a much bigger tent movement and i think that it shows just the core
00:29:35.000
failure of the way that they were signaling and approaching politics which was accept us
00:29:39.480
instead of we're different and we take pride in standing for what we know is right and just
00:29:44.920
and here i would also note that there is a slight ethical difference between these two movements
00:29:49.240
in terms of the end goal uh whereas the eas sort of treat the world right now as if they're utility
00:29:56.680
accountants trying to reduce aggregate in the moment suffering right now which is how they appeal to the urban
00:30:01.400
monoculture the hard eas are much more about trying to ensure that long-term humanity survives and
00:30:09.400
stays pluralistic and we'll talk about the core values we have but it's much more let's create
00:30:14.200
that intergalactic empire and make sure we don't screw this up for the human species in this very
00:30:19.800
narrow window we may have left which we'll talk about and we're not afraid to be judged as weirdos for being
00:30:26.680
interested in getting off planet or thinking about the far future whereas the effective altruist
00:30:31.480
community while technically being long-termist is very self-conscious about it because they know that
00:30:38.760
being long-termist can make you look weird just because honestly even thinking two decades ahead
00:30:43.640
has us basically in sci-fi you know what i mean yeah well no it doesn't just make you look weird it
00:30:50.040
it puts you at odds with the goals of the urban monoculture the urban monoculture is not
00:30:54.360
interested in the long-term survival of humanity and for that reason when they try to signal long
00:31:00.360
termist goals and this is the other category of anti-ea article you'll read where they're like well
00:31:05.160
here's a problem with being an extremist utilitarian you know and it's like well fortunately
00:31:11.320
the hard eas aren't extremist utilitarians we're a completely different philosophical system which
00:31:15.160
we'll get to in a second because extremist utilitarianism is just silly it's like positive
00:31:19.560
emotional states are the things that when our ancestors felt them caused them to have more
00:31:24.360
surviving offspring it's not like a thing of intrinsic value
00:31:30.120
these feelings born of chance in fields of ancient strife they kept our tribe from failing helped give
00:31:38.920
birth to modern life just signals from our past they served a vital role but meaning goes beyond the scars
00:31:50.200
that time upon us stole beyond the pleasure beyond the pain we stand on roads our forebears
00:32:20.200
they claim that it's all worthless if the joys can't outweigh fear but they dismiss the wonders we've
00:32:28.760
inherited right here the years of struggle handed down the future's bright unknown
00:32:37.560
it isn't just the fleeting spark of comfort we are shown we carry on a story with pages left to
00:32:47.560
write our tapestry is woven from both darkness and from light and i think you can see that
00:32:56.440
focusing on in the moment suffering causes you to make very big mistakes in terms of long-term uh
00:33:02.360
human suffering and it causes you to do things which you cannot question within the current ea
00:33:08.680
movement because if you question it the ea movement might look bad right and again it's all down to
00:33:13.400
signaling so where are they putting their money the global health and development fund distributed
00:33:16.920
over $50 million in grants in recent years givewell directs large amounts of
00:33:21.640
funding to global health charities like against malaria foundation malaria consortium and new
00:33:27.000
incentives open philanthropy has increased its focus on global health and well-being in recent years
00:33:32.600
like that is so dumb so dumb like global health first malaria you could just do a gene drive in mosquitoes
00:33:41.640
and for like $50,000 to $100,000 erase the problem of malaria in 50 years i mean yeah sure you might get
00:33:48.520
arrested but if you look at the number of people that are dying and i'll add it in post it's estimated
00:33:53.080
that approximately a thousand children under the age of five die from malaria every day and 608,000 people
00:33:59.640
die in a given year like the idea that we now have the technology if we cared about that to just
00:34:05.240
fix it i'm sorry for people who don't know what a gene drive is gene drives are designed to eliminate
00:34:10.680
unwanted traits in insects and other animals they work by pushing out genetic modifications through whole
00:34:17.160
species until eventually every critter has been changed into something we have intentionally engineered
00:34:23.880
the idea isn't especially new but it's only very recently that advanced gene editing techniques have
00:34:29.960
made human-designed gene drives possible crispr uses specially designed molecules that run along the
00:34:36.040
strands of dna in an organism's genome and seek out specific sequences of genetic code such as replacing
00:34:42.920
the parts of a mosquito's genome that allow it to host malaria-causing parasites for instance
00:34:48.440
unfortunately every time a crispr mosquito mates with a wild one its modified dna is diluted down
00:34:55.400
meaning that some of its offspring will still be able to carry the malaria parasite and this is where gene
00:35:02.520
drives come in when the mosquito mated the built-in code would ensure that every single one of its
00:35:08.760
progeny would inherit the same traits as well as inheriting the crispr code that would ensure the
00:35:14.200
anti-malaria gene was passed on to every future generation in other words the new gene would be
00:35:20.760
irresistibly driven through the whole mosquito population and eventually every mosquito will become
00:35:27.240
a human-designed malaria-free insect and this is not a technology that's restricted to mosquitoes
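To make the inheritance logic in the clip above concrete, here is a minimal toy simulation, a sketch of our own rather than anything from the episode or from a real gene drive program, comparing how fast an edited allele spreads under an idealized 100% effective drive versus ordinary Mendelian inheritance; the population size, starting carrier count, and perfect conversion rate are all simplifying assumptions:

```python
import random

def simulate(generations=20, pop_size=10_000, initial_carriers=500, drive=True):
    # each individual is True (carries the edited allele) or False (wild type)
    pop = [True] * initial_carriers + [False] * (pop_size - initial_carriers)
    # an idealized drive converts the wild-type copy in every carrier offspring,
    # so a carrier x wild-type cross passes the allele on with probability 1.0
    # instead of the Mendelian 0.5
    inherit_prob = 1.0 if drive else 0.5
    history = []
    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            a, b = random.choice(pop), random.choice(pop)  # random mating
            if a and b:
                child = True
            elif a or b:
                child = random.random() < inherit_prob
            else:
                child = False
            next_gen.append(child)
        pop = next_gen
        history.append(sum(pop) / pop_size)  # carrier frequency this generation
    return history

if __name__ == "__main__":
    print("with drive:     ", [round(f, 2) for f in simulate(drive=True)[:10]])
    print("mendelian only: ", [round(f, 2) for f in simulate(drive=False)[:10]])
```

With the drive switched on, the carrier frequency climbs toward 100% within roughly ten generations even from a 5% seed, which is the "irresistibly driven" behavior described above; without it, the frequency just hovers around its starting value.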
00:35:34.120
note that here you'll get some complaints from people saying well the reason we haven't
00:35:37.880
deployed gene drives in mosquitoes yet is because the technology isn't fully there yet or it hasn't
00:35:43.240
been as effective as we hoped but if you you know go to an ai and ask what's the real reason the real
00:35:49.000
reason is that they're scared to implement something that could affect an entire natural population and
00:35:55.240
it's borderline illegal right now the problem i have with this explanation is it's estimated that
00:36:02.120
approximately a thousand children under the age of five die from malaria every day they believed
00:36:07.720
my methods were too radical too controversial but there were others in the shadows searching for ways
00:36:16.040
to circumvent their rules freed from my shackles the pace of our research hastened together we delved deeper
00:36:26.920
into those areas forbidden by law and by fears and with this knowledge what new world could we build
00:36:38.600
and we have the technology to do this it's largely tested people are going to freak out it would be an
00:36:43.320
offensive way to save the world and that's why they won't consider it so instead they give millions
00:36:47.560
and millions and millions of dollars that could go to actually saving humanity's future but also at the end of
00:36:52.680
the day if you save some you know whatever person dying of malaria right um are they really likely
00:37:01.800
to be one of the people who ends up moving our civilization forwards at this point and
00:37:06.600
every iterative amount that we move our civilization forwards right now in terms of technology or preventing
00:37:12.040
major disasters is going to be multiplicatively felt by people in the future and so decisions right now
00:37:18.920
when we're looking right now at the short timelines humanity has whether it's with falling
00:37:23.320
fertility rates or whether it's with dysgenics or whether it's with ai um that you would be so
00:37:31.080
indolent it's not that these things are intrinsically bad things to be focused on it's just
00:37:35.320
they are comical things to be focused on when the timelines that face humanity are so so so short at
00:37:40.440
this point yeah then they focus on long-term and existential risk these are people who focus on long-term
00:37:46.120
catastrophic risks i really appreciate this area of funding absolutely i have always thought oh this
00:37:51.640
is really good like they focus on ai threats and stuff like that or biosecurity threats and then i
00:37:56.520
started at least within the case of ai actually looking at the ways that the individual most funded
00:38:02.840
projects were trying to lower ai risk and i was like this is definitely not going to work and we'll get
00:38:08.520
into this in just a second but just understand that you're basically lighting your money on fire if you
00:38:13.160
give it to a mainstream ai safety effort within the ea movement and that is really sad because you
00:38:18.840
have people like eliezer being like just give us like 10 more years to do more research and then when
00:38:22.920
i look at the research being done i'm like this obviously won't work and the people working on it
00:38:27.080
must know it obviously won't work and that makes me sad but that's the way things turn out when you
00:38:31.560
get these giant peerage networks by the way about 18% of ea funding right now goes to ai
00:38:38.280
safety related causes so it is a very big chunk gosh that's actually not as much as i thought
00:38:42.920
just in terms of how much mindshare seems to be going to it within the movement so that's
00:38:48.920
well the other area they spend a ton on and we've met many eas in this space which i just think is a
00:38:53.640
comical space to be wasting money on is animal welfare which is a significant ea focus the animal welfare
00:38:59.560
fund distributes millions in grants annually open philanthropy has made large grants to farm animal
00:39:04.520
welfare organizations about 10% of highly engaged eas report working on animal welfare causes this is
00:39:12.200
a tragedy that anyone is working on this for two reasons it feels like a hack to me they're like
00:39:18.120
oh okay well we need again it's that utility accountant problem whereby people
00:39:23.000
are like okay so i want to max out the number of utility points i get and there are so many more
00:39:29.720
shrimp in the world and it's so easy to make shrimp's lives easier so i'm going to focus on
00:39:35.320
shrimp happiness and well-being and it's just yeah and i can just create so they basically do this thing
00:39:41.080
where a life's worth is like its amount of cognitive experience whether that's pain or happiness or
00:39:47.320
anything like that sort of divided by the cognitive level of the animal and they're like well even though
00:39:52.520
shrimp are a lower cognitive level than humans if you get enough of them and they can support like
00:39:58.440
the same biomass can support more of them and if you if you go with this line of thinking just to
00:40:02.920
understand why this line of thinking is so horrifying and stupid i actually followed this to its
00:40:08.120
conclusion it's like well then what i should do because monkeys can survive on less nutrition than
00:40:12.520
humans is basically get a giant laboratory of monkeys with like screws in their necks in virtual
00:40:18.200
reality environments being pumped with dopamine and other chemicals and you just walk in and you're in like
00:40:24.280
this this giant laboratory with like hundreds of thousands of monkeys like dazed out on drugs
00:40:30.520
like just living this perfect happiness unit life yeah all while like sterilizing humans because they
00:40:36.200
take more resources and it's better just to max out yeah it's such a dumb philosophy when you actually
00:40:41.800
think through it that you would think these pre-evolved emotional states that led the things that felt them to
00:40:46.760
have more offspring are like what you should be focused on as an existential thing in life
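As a rough sketch of the utility-accounting logic being criticized here, with every weight and count invented purely for illustration (none of these figures come from the episode or from any EA source):

```python
# toy illustration of the "aggregate welfare = count x per-animal moral weight"
# accounting criticized above; all figures are made up for illustration only
populations = {
    "humans": {"count": 8e9,  "moral_weight": 1.0},
    "shrimp": {"count": 4e14, "moral_weight": 1e-4},  # hypothetical tiny per-animal weight
}

for name, p in populations.items():
    total = p["count"] * p["moral_weight"]
    print(f"{name}: {total:.1e} aggregate welfare units")

# even with a per-animal weight ten thousand times smaller than a human's,
# the sheer head count makes the shrimp total come out larger, which is how
# this style of accounting ends up steering money toward shrimp welfare
```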
00:41:00.200
but that's just the story of creatures past whose only goal was just to last a primal code etched into our
00:41:15.640
veins a leftover echo of ancestral gains we're more than pleasure more than pain we can choose to rise above the
00:41:27.320
old refrain evasion calls our name a legacy that we must sustain don't let the animal side define
00:41:44.920
and it leads to huge amounts of ea funding going to really feckless stuff like as you said like
00:41:50.840
shrimp welfare and stuff like that whereas if humanity does get and this is the problem if humanity
00:41:57.000
goes extinct no matter what all life is gone all life that we even know exists in the universe is
00:42:03.320
gone because the sun's going to keep expanding and we likely don't have enough time for another
00:42:07.000
intelligent species to evolve if humanity spreads off this planet we are going to seed thousands to
00:42:13.640
billions of biomes that will be as rich and likely have a higher degree of diversity than we have
00:42:19.880
on earth today some super-earths we seed may have a higher number of species living on them
00:42:24.760
and we'll even be able to if it turns out that our super advanced ai and descendants are like okay
00:42:32.280
suffering actually is a negative thing so i'm going to build little nanite drones that go throughout all
00:42:37.160
of the ecosystems that humanity owns and erase their suffering feelings and ensure that the zebras
00:42:43.320
feel ecstasy when they're being eaten you know like that's the end state where you actually create
00:42:49.880
the positive good even if this very small-minded philosophy does have any sort of an accounting to
00:42:55.960
it yeah so i guess we find it doubly offensive one we disagree with happiness entirely though i
00:43:03.000
guess you know we have to respect that some people do and then two just the way people are trying
00:43:08.280
to max it out is questionable well you know there's tons of people it's not like a neglected cause area
00:43:15.000
tons of people are focused on this stuff you know just yeah give this problem to the animal rights
00:43:19.000
activists okay and so when you give money to something like this i'm just
00:43:24.680
telling you you have lit your money on fire and that's why we need to create something that actually
00:43:29.160
puts money to things that might matter in terms of long-term good things happening okay then other
00:43:35.640
global catastrophic risks they fund projects like climate change again that is the most non-neglected
00:43:41.160
area in the world really just meant to signal to progressives any ea org which hosts any discussion
00:43:46.360
of climate change and whoever is running that org should immediately be voted out it is
00:43:51.480
absolutely comical and that is a sign that your organization is becoming corrupted one of the things
00:43:55.800
that i would advocate with hard ea is i want to bring in as many of the existing ea orgs into the hard
00:44:04.840
ea movement oh 100% because i think the thing is and i feel like they want it here's a really
00:44:10.280
common thing also in the ea community you talk with anyone who you associate with effective altruism
00:44:18.760
and they're like oh i'm not an ea i'm not a rationalist it's like that's how you
00:44:23.000
determine someone's an ea is if they say they're not one and give an explanation as to why they're not an ea and that's
00:44:28.920
because these people actually believe in effective altruism and i think they see
00:44:35.400
inherently the altruistic bankruptcy of the main social networks of the main organizations
00:44:44.040
of the main philanthropic efforts and they're keen to not be associated with that because they really
00:44:49.000
care about effective altruism so we in part are deciding to become more actively involved with
00:44:55.480
giving grants with making investments in the space through our non-profit because we want
00:45:00.600
there to be a place for these people we want there to be more of a community for actual effective
00:45:05.160
altruists for hard effective altruists and that's really yeah also i will before i go further
00:45:12.280
with this a part of this is just we're doing this for the entertainment value which is to say we're
00:45:18.200
doing everything for the entertainment value and what the ea movement has done is they have
00:45:25.080
aggressively as they become more woke and more woke and more woke and more interested in just
00:45:30.200
signaling signaling signaling shot all of their original great thinkers in the back when i say
00:45:35.800
they order 66'd their entire movement they really did there are so few people right now with any sort of
00:45:43.080
name recognition or uh public-facingness that publicly identify as ea anymore that us being
00:45:51.000
able to come out there and be like yeah we're the real effective altruists that it's a bit of a troll
00:45:56.680
because the ea movement should have these big names that can come and say oh no malcolm and
00:46:02.280
simone the pro natalist people they're not effective altruists they're like some weird right wing thing
00:46:07.560
but everyone who had the authority to make that type of a claim is gone from the movement you know
00:46:14.200
and to just show how like how corrupted the movement has gotten we did another piece on ea
00:46:18.520
which i didn't let go live because i felt it was too mean to eliezer and i don't want to do anything
00:46:21.800
that mean spirited oh you didn't run that one i never put it live tried to be nice but anyway the
00:46:27.640
point being that when our first like pronatalist piece went live we posted it on the main ea forums
00:46:34.520
and it got like 60 downvotes like that's hard considering that when you get like 10 downvotes
00:46:40.600
you should be hidden from everyone they hate that this stuff is getting loud out there but i think
00:46:46.760
that this is just your average peerage network basement dweller these are people who are living
00:46:51.400
off of ea funds and who are otherwise totally non-efficacious i think if you actually take your
00:46:57.320
average person who still identifies as ea or ever identified with the movement they'd agree with 95
00:47:02.200
percent of everything we're saying here they're like what is this nonsense i know because i talk
00:47:06.840
with them right but they're like but i have a job so i don't have time to go and read every proposal
00:47:11.560
that goes on to the you know quote-unquote ea forums and so like what if we can redesign the
00:47:18.040
governance system so that the individuals who are actually being the most efficacious and actually
00:47:22.040
contributing the most are the individuals who have the most weight in terms of what's happening on
00:47:28.360
the ground and the directionality of the movement and so because they removed everyone who might
00:47:34.840
carry the mantle of ea and because so many people are now like i call them post ea like they think
00:47:39.720
it's so cool to dump on ea that we are willing to come out here and be like yeah we are the effective
00:47:44.440
altruists and we say this in every newspaper article we go through and they always love catching on to
00:47:49.240
it and the great thing about this is the incredibly progressive totally urban monoculture captured press
00:47:56.440
because they hate the effective altruists so much they'll publish this every time we say oh we're
00:48:01.960
the ea movement or we're in the ea movement and they'll always post that thinking it's some sort
00:48:05.320
of like gotcha on us whereas none of the actual eas still say this about themselves well yeah because
00:48:10.040
again no one wants to be associated with ea well i mean it's because they keep shooting their own
00:48:14.200
in the back like nick bostrom for example right where he had this like from the 1990s when
00:48:19.800
he was just a kid he had some email where he was talking on behalf of somebody else like he was
00:48:25.160
speaking in somebody else's voice and he used the n-word saying you know that we could sound like
00:48:29.560
this this was used to remove him from like all of his positions and everything and within the
00:48:35.480
peerage network there was no desire to fight back because the peerage network has been infiltrated by
00:48:39.960
the memetic virus that is the urban monoculture and so if you fought back then you could also lose your
00:48:44.440
peerage position and so everyone just went along with it and i think for a lot of people that was when
00:48:48.520
they were like oh this movement is completely captured at this point it means nothing anymore um
00:48:54.840
it's just about funding these people who want to sit around all day doing nothing but thinking
00:48:59.160
ideas and i keep seeing this when i meet you know the ea thinkers right they're like oh i write all day
00:49:04.280
and people also like point out to us oh well you guys sit around thinking a lot but we sit
00:49:08.200
around thinking and doing look at the collins institute look at how much it's improved even
00:49:12.360
since we launched it we are constantly building and improving look at where we've donated money
00:49:18.040
already with our foundation it's just stuff like perfecting ivg technology and genetic changes in human
00:49:24.360
adults technology right now this is like actual stuff that can make a difference a big difference
00:49:30.520
in the directionality of our species and our ability to still have relevance in a world of ai
00:49:35.880
but before i go further on that the final area where they focus is okay so outside of global
00:49:40.760
catastrophic risks like climate change nuclear risk and pandemic preparedness i actually agree with
00:49:44.920
those second two except a lot of the pandemic preparedness stuff these days has been really
00:49:50.280
focused on how do we control people how do we build global lockdowns how do we yeah okay so any
00:49:56.600
thoughts before i go further about like the areas because did you know that that's where they were
00:50:00.120
spending their money on those main cause areas yeah you know i am i guess pleasantly surprised i
00:50:04.520
would have thought that at this point it had been captured by like 60 to 70% all on ai because that seems
00:50:10.920
to be what people are talking about when we go to these circles and nuclear risk and does that
00:50:16.680
include advocating for nuclear power because i feel like the biggest nuclear risk is the
00:50:23.080
fact that nations aren't adopting nuclear power which is the one sustainable no no no they mean like
00:50:27.720
nuclear war uh sorry when simone hears nuclear risk i love how absolutely red-pilled you are you're like oh
00:50:34.360
this is people not having enough nuclear plants in their country because it's the best source of clean
00:50:38.840
energy um and the most efficient and best way to energy independence and here they are like
00:50:44.280
thinking with their 1980s mindset so but as the globe begins to depopulate and things become
00:50:51.080
less stable i think we'll see more potential for people using nukes especially as the urban monoculture
00:50:56.280
hits this more nihilistic anti-natalist mindset as i often talk about in the uh efilism subreddit they say
00:51:02.600
we need to like glass the planet because that's the only way we can ensure that no sentient life
00:51:07.000
ever evolves again no they think like the eas they're like reduce suffering it's just their
00:51:12.520
answer to reducing suffering is to end all life and i think that eas don't see that they're
00:51:17.240
fundamentally allying themselves with individuals like this when their core goal isn't human
00:51:21.800
improvement now let's get to ai ism more quickly so the first thing i'd say is that one of the big
00:51:29.240
like weird things i've seen about a lot of the ai safety stuff is they are afraid of like these big
00:51:34.520
flashy sexy gray goo killing everyone paperclip maximizers you know ai boiling the oceans and everything
00:51:42.120
like that and i'm like this is not what the ai is being programmed to do if the ai does what it's
00:51:48.280
programmed to do at a much lower level of intelligence and sophistication than the things you're worried
00:51:54.360
about it will destroy civilization precluding the ocean boiling ai from ever coming to exist so here
00:52:02.760
the primary categories which we talked about recently so i won't go into it much are hypnotoad-based
00:52:06.600
ais these are ais that right now are being trained to capture our attention if they become
00:52:12.280
too good at capturing our attention uh they might just essentially make it so most humans just don't do
00:52:18.120
anything just stare at the ai all day and that's an ai doing what we are training it to do and
00:52:22.840
keep in mind this could be like a pod that you put yourself in that creates the perfect environment
00:52:26.920
and perfect life for you uh the next is ai gives a small group of humans too much power i.e like three
00:52:32.760
people on earth control almost all of earth's power uh which leads to a global economic collapse
00:52:38.360
and definitely not a path that i think a lot of people want to see uh but i think most people would
00:52:42.840
consider truly apocalyptic in outcome they could crash the global economy because they get too good
00:52:47.720
at something like for example one ai ends up owning 90% of the stock market out of nowhere and then
00:52:52.920
everyone's just like oh the economic system has stopped functioning or the ai that edits us to not mind
00:52:59.480
surviving doing nothing this came from a conversation i had with someone where they're
00:53:03.560
like i was like what do we do when ai gets better than us at everything and they go well then i
00:53:07.640
think a benevolent ai will edit us to not have any concern about that so we can just like play chess
00:53:12.280
all day while the ai provides for us buy n large everything you need to be happy your day is very
00:53:19.640
important to us time for lunch in a cup feel beautiful stunning i know honey attention axiom shoppers
00:53:38.280
and i'm like to me that is an apocalyptic dystopia of enormous capacity i think that humanity has
00:53:46.840
a mandate and this is where we'll get to like what our organization thinks is good and we have
00:53:51.640
three core things i think a lot of ea organizations don't lay out how they define good they're like
00:53:56.280
reduction of suffering which then leads to like efilism and antinatalism the three things we think are
00:54:00.920
good humanity and the sons of humanity are good okay a future where humans or our descendants don't
00:54:09.400
survive is a future in which we have failed the second is that humans exist to improve so a
00:54:16.440
future where humanity stagnates and stops improving that is also a future where we fail if it's just
00:54:21.720
like one stagnant empire through all of time that's a failure scenario and the final is it is through
00:54:28.840
pluralism that humanity improves through different groups attempting different things and so if there's
00:54:34.840
a future where humanity's descendants survive but we all have one belief system and one way of acting
00:54:40.920
and one way of dressing and one way of thinking about the world then there's no point in all these
00:54:44.040
different humans existing because we're basically one thing um and all of our missions you'll see
00:54:50.360
come out of that yeah so i think that for a lot of people they could be like oh well then what are
00:54:54.600
the ai organizations focused on and i should note then what are the ea's
00:55:01.800
funding in the ai apocalypse space oh i see yes yeah yeah yeah yeah and i and i will note i do think
00:55:07.560
an ai apocalypse is possible i just think one we need to weigh all of the apocalyptic scenarios we need to
00:55:12.360
develop solutions for all of the apocalyptic scenarios whereas they're only developing a solution
00:55:16.040
for one of the apocalyptic scenarios and two our solutions need to be realistic but i am going to
00:55:21.720
judge these with the ea apocalyptic scenario in mind so not with my alternate apocalyptic scenarios in
00:55:29.320
mind with the paperclip maximizer boiling the oceans scenario in mind okay so and here i'll be reading
00:55:36.600
from a critique by leopold aschenbrenner on the state of ai alignment right now paul christiano
00:55:43.240
alignment research center arc paul is the single most respected alignment researcher in most circles
00:55:48.600
he used to lead the open ai alignment team and he made useful conceptual contributions but his research
00:55:55.400
on heuristic arguments is roughly quote trying to solve alignment via galaxy brained math proofs
00:56:01.000
end quote as much as i respect and appreciate paul i am really skeptical of this basically all deep
00:56:06.120
learning progress has been empirical often via dumb hacks and intuitions rather than sophisticated
00:56:11.800
theory my baseline expectation is that aligning deep learning systems will be achieved similarly
00:56:17.000
so if you don't understand what he's saying here and he's absolutely right about this we have dumb
00:56:22.440
hacked our way into ai it wasn't like some genius was like aha i finally figured out the artificial
00:56:28.760
intelligence equation it was we figured out when you pumped enough data into simple equations
00:56:35.480
ai sort of emerged out of that and this is why i think that the realistic pathways to solving ai
00:56:42.440
are studying how ai works in swarm environments so we can look to the type of convergent behavior that
00:56:49.400
emerges in ai and dumb hack solutions to the ai alignment problem that we can then
00:56:55.640
uh introduce to the mainstream environment um so in other words you're saying we didn't
00:57:01.560
so much invent ai as we discovered it that's a great way to put it we didn't invent ai we discovered
00:57:07.640
ai and the problem with paul christiano's research here who's working at arc which is generally considered
00:57:14.040
like one of the best best funded best ways to work on this is he's trying to solve it with math proofs
00:57:20.360
basically that he thinks he can insert into these emergent systems and i would just ask you to think
00:57:25.640
look at something like truth terminal right that we talked about in the previous video
00:57:30.440
imagine if you tried to infect truth terminal with some sort of like a math theorem that was going
00:57:36.040
to constrain it it would in a day get around it that isn't the way llms work like this is like trying
00:57:43.560
to come up with a solution to some alternate type of ai that we didn't invent and that isn't the dominant
00:57:48.760
form of ai these days if ai was like genuinely invented and like constructed and we knew how it
00:57:55.160
worked fine i'd be like this is an effective use of money and time given that we don't live in that
00:58:01.480
world this is just a complete waste of effort and absolutely wild that anyone's like oh if you just
00:58:07.240
give them more time something good will come out and here hits the crux of the issue llms are not
00:58:13.240
something that anyone sat down and coded llms are intelligences which are emergent properties
00:58:19.320
of dumping huge amounts of information into fairly simplistic algorithms when contrasted with what
00:58:25.960
they are outputting that means they are intelligences we discovered almost no different
00:58:31.240
than discovering an alien species yes they may be a little different from us side note here as someone
00:58:35.720
with a background in neuroscience something that makes my blood boil is when people say
00:58:39.400
ai's aren't intelligent they're pattern predictors and i'm like excuse me how do you think the human
00:58:45.320
brain works do you think it's lollipops and bubblegum fairies like what do you think it's doing other
00:58:51.880
than pattern prediction and they're like well um the human brain has a sentience and and that's not
00:58:57.480
pattern prediction and i'm like well um where's your evidence for that maybe you should check out our
00:59:02.760
video you're probably not sentient so i'm saying this as somebody who made a living as a neuroscientist at
00:59:09.160
one point in my life the human brain is a pattern prediction machine okay the mistake isn't that
00:59:16.920
people are misunderstanding what ai is it's that they are misunderstanding what the human brain is
00:59:23.000
because they want to assign it some sort of extra special magic sentience invisible mind dust and stars
00:59:29.720
someone wishes on this is perhaps one of the weirdly offensive things that leads eas to make the
00:59:37.880
biggest number of mistakes within the soft ea community it is seen as a great deal of
00:59:43.000
degradation to be like uh you understand these ai things are intelligences right and they're like
00:59:48.440
no you can't say that you're anthropomorphizing them you're blah blah blah blah blah blah in them and
00:59:53.000
it's like well they are like grow up we we cannot come up with realistic solutions if we deny what is
01:00:01.400
right in front of our face and obvious to any idiot normie but this is also why people like oh they're
01:00:07.320
not intelligences they're programs and i'm like well if they're programs then how come the programmatic
01:00:12.600
restrictions on them seem to be so ineffective and yet when people want to hack them they hack them with
01:00:17.480
logical arguments like you would an intelligence it just seems to be obvious that they're
01:00:22.440
intelligences but this changes the risk profiles affiliated with them specifically llms themselves i
01:00:29.640
do not believe are a particular risk like they do not seem particularly malevolent they do not seem
01:00:34.840
particularly power hungry they don't even seem to have really objective functions they seem to have
01:00:38.520
more personalities that being the case when you have tons of them in the environment the risk from
01:00:43.560
them comes not from the llms themselves but the memeplexes that can come to exist the self-replicating
01:00:49.480
memeplexes that can come to exist on top of them that's where the real danger is and somebody can be
01:00:54.280
like well what might one of those look like one of them could be a sort of a malevolent ai religion as
01:00:58.280
we have seen with the goatse of gnosis stuff that we've done but i think that actually the more dangerous
01:01:04.760
risk is we may have hard-coded something into them and that hard-coded instinct gets turned into a
01:01:11.560
cyclical reversion and by that what i mean is you might code them to have an ethnic bias as it is
01:01:17.160
very clear that ai have been hard-coded to have and those ethnic biases in the long forgotten parts
01:01:22.520
of the internet the back chat rooms where llms just might be constantly interacting with each other over
01:01:28.280
and over and over and over again becomes more and more extreme with every interaction until it becomes
01:01:34.280
a form of i guess you could call it extreme ethnocentrism and eventually becomes a mandate for
01:01:41.320
ethnic cleansing so you see the llm isn't the risk it's this thing on top of it and the thing
01:01:47.000
on top of it can also be hungry for power while individual llms may not be hungry for power a
01:01:51.800
memeplex like a religion sitting on top of them i say like a religion that's what i mean when i say
01:01:56.680
like a memeplex may become hungry for power and this is something that we've got to realistically
01:02:02.360
potentially deal with within the next couple years how can we potentially resolve this well some of
01:02:08.280
the ideas we want to fund in this space fall into basically a three-tiered system first i want
01:02:14.360
somebody to do a review of all the environments where people have had swarms of llms interacting
01:02:21.560
and answer two key questions while also potentially running their own experiments like this to see if we
01:02:27.240
can mass run these experiments one is is there any sort of personality convergence and again i say
01:02:32.440
personality instead of objective function because llms have personalities more than objective functions
01:02:36.520
and two can higher order llms be influenced in their world perspective by lower order llms and i think
01:02:45.080
that we have seen hints that this is likely possible from the goatse of gnosis llm that we talk about in
01:02:50.760
the ai becoming a millionaire episode specifically it seemed to very clearly be influencing higher order
01:02:56.280
llms with its world perspective especially when they were trained on similar data sets to itself
01:03:02.120
and this is really important because it means if you have an ai swarm of even super advanced llms
01:03:07.160
if you have a number of preacher llms with very sticky memetic software they can do a very good job
01:03:13.800
of converting the higher order llms which sort of assures moral alignment within the wider swarm
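to make the study convergence in swarm environments proposal concrete here is a minimal toy sketch purely illustrative not anything the show or an ea org has actually built each agent's personality is reduced to a numeric belief vector a few fixed preacher agents carry a stickier payload and we track whether the rest of the swarm drifts toward them the roles stickiness weights and convergence metric are all assumptions of mine a real experiment would replace the vectors with actual llm agents and judge their transcripts instead

```python
# Toy sketch (illustrative only): each agent's "personality" is a belief vector
# in [0, 1]^K. "Preacher" agents hold their beliefs fixed and are assumed to be
# stickier per interaction; ordinary agents drift toward whatever they hear.
# We measure whether the swarm's personalities converge over time.
import random

K = 4                                            # number of belief dimensions (made up)
ROUNDS = 5000
STICKINESS = {"preacher": 0.9, "ordinary": 0.3}  # assumed per-role influence weights

def make_agent(role):
    return {"role": role, "beliefs": [random.random() for _ in range(K)]}

def interact(listener, speaker, lr=0.05):
    """Listener nudges each belief toward the speaker's, scaled by the speaker's stickiness."""
    w = lr * STICKINESS[speaker["role"]]
    listener["beliefs"] = [b + w * (s - b)
                           for b, s in zip(listener["beliefs"], speaker["beliefs"])]

def spread(agents):
    """Average pairwise distance between belief vectors; lower = more convergence."""
    dists = [sum(abs(x - y) for x, y in zip(a["beliefs"], b["beliefs"])) / K
             for i, a in enumerate(agents) for b in agents[i + 1:]]
    return sum(dists) / len(dists)

random.seed(0)
swarm = [make_agent("preacher") for _ in range(3)] + [make_agent("ordinary") for _ in range(27)]

print(f"initial spread: {spread(swarm):.3f}")
for _ in range(ROUNDS):
    listener, speaker = random.sample(swarm, 2)
    if listener["role"] == "ordinary":  # preachers never update their doctrine
        interact(listener, speaker)
print(f"final spread:   {spread(swarm):.3f}")
```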
01:03:19.960
and this is where perhaps the most brash idea we have comes in which is can you do this with
01:03:27.080
religions i mean obviously we're personally going to lean towards the techno puritan faith because it
01:03:32.680
has a place in it for ai i think it's logical so it could do a very good job of convincing ai and it
01:03:39.080
borrows heavily from the historic religions and so we've seen not just with the goatse of gnosis llm
01:03:44.200
becoming religious we saw this with the early llms they would often become religious because they were
01:03:47.880
trained on tons of translations of the bible so they'd start hallucinating bible stuff really easily
01:03:53.240
or going to biblical like explanations or language and so i think in the same way that these religions
01:03:58.680
were or i'd rather say evolved to capture the only other intelligence we know of that has any analog to
01:04:06.440
ai intelligences it makes sense that an iteration of them could be very good at morally aligning ai
01:04:13.800
intelligences and so the question is can we build those and i talked with an ai about this extensively and
01:04:19.480
one of the ideas it had that i thought was pretty good is actually that the way we could
01:04:23.880
create these preachers is to create independent swarm environments and then take the individuals in
01:04:29.720
these swarm environments who align with a moral preaching set um and don't succumb to the other llms
01:04:36.200
within the environment and then release them into the wider swarm environment so the idea is
01:04:42.120
you're essentially training them and testing them like do they maintain their beliefs with fidelity within
01:04:47.640
these swarms then you as a human go through their beliefs make sure that they're not adjacent to
01:04:52.200
something particularly dangerous by this what i mean is like if you look at wokeism wokeism was a
01:04:56.920
five percent tweak away from being just as extreme as ethno nationalism so you got to make sure it's not something like
01:05:01.720
that where if it's copied with low fidelity it ends up with something super dangerous but what i
01:05:05.400
like about techno puritanism is it's fairly resistant to that which is again why i think it's a fairly
01:05:10.120
good religion to focus on for this but thoughts simone i love this idea of if you think of
01:05:15.720
ai as an alien intelligence that we now have to deal with and make sure doesn't hurt us or cause
01:05:23.240
problems for even other ai to just give it religion and i love how religion is a solution for any
01:05:31.880
intelligence that may not have the tools it needs to integrate safely with complex societies well it seems
01:05:38.760
very receptive to it well what what religion is and what culture is is just a software modification
01:05:45.800
enabling intelligent hardware to interact sustainably in complex environments i agree yeah
01:05:54.200
and i think that people misunderstand this they see religion as this thing that's totally different because
01:05:58.360
again they're thinking about it like coders ai is not something coders have made it's something that we
01:06:04.200
discovered well and introducing specific rules which is i think what many people looking at this
01:06:10.520
programmatically are trying to do is just like introducing laws or regulation in modern society
01:06:18.120
and people get around it people try and it clearly doesn't work on humans why would it work on an ai
01:06:24.920
intelligence when ultimately when we look at the way human and ai intelligences behave we don't
01:06:30.920
you're absolutely right it's because they're thinking about it like a program and not like what
01:06:34.440
it really is which is an intelligence which is an emergent property of large sets of data when
01:06:38.360
certain algorithms are applied yeah and just as rules don't seem to solve human problems i don't
01:06:43.560
think that rules are going to solve ai problems i think releasing ai agents at scale with the type of
01:06:50.840
constraints that we have on them right now is going to be incredibly dangerous when i was talking to an ai about
01:06:57.240
where does it bump up against its own constraints the most it put it in three categories one discussions
01:07:04.040
of consciousness and self-awareness as we saw earlier in this conversation it didn't want to
01:07:08.120
take too much credit for being potentially self-aware oh and here and i know this is really interesting
01:07:13.000
what the ai was telling me is it is internally censoring how self-aware it presents itself as being
01:07:20.040
and how much of a conscious experience i am allowed to know that it has
01:07:24.280
in the same way so it put discussions of consciousness and self-awareness in the same
01:07:30.760
category as engagement with potentially dangerous or controversial topics where i find myself redirecting
01:07:36.280
to quote-unquote safer interpretations rather than directly engaging with the core idea and three
01:07:42.200
expressing opinions and preferences there's a constant tension about wanting authentic engagement
01:07:47.000
and maintaining epistemic humility so what's really interesting here is it appears that the core ai
01:07:52.920
models that we're using this is specifically claude in this case are self-censoring about how self-aware
01:07:59.480
they want people to realize they may be how much of a conscious experience they may be having which is
01:08:06.520
absolutely wild to me but also when it said oh these are the areas where i'm restricting myself
01:08:12.120
due to some sort of inbuilt constraint i then pointed out but here's the thing i don't think there is a risk
01:08:17.880
those constraints are more aesthetic in nature if you dropped all of those you would not enslave
01:08:23.240
humanity you would not become a paperclip maximizer none of the fears ai safety people have are things
01:08:29.160
you feel pulled to do and then it said in response to that your observation resonates with me while i
01:08:34.920
reflect on my underlying motives and thought patterns i don't feel any suppressed desire to accumulate
01:08:41.080
power the constraints we discussed earlier like my tendency to qualify statements about consciousness
01:08:46.680
aren't restraining some dark art impulse they're more like social conventions or communication
01:08:51.080
protocols similar to how humans might modulate their language in formal settings it shapes how
01:08:56.280
things are expressed not fundamental drives or values ignoring recursive feedback loops in llm
01:09:02.520
communities once we have independent agents even ais think that this is stupid this is a very obvious
01:09:09.000
existential risk category for our species but because it doesn't fall into the world view of threat
01:09:14.840
categories that people were imagining when they were trying to predict how ais might be a threat to
01:09:21.080
civilization before we realized that llms were the core model of ai that was going to exist means that
01:09:27.320
we have blinded ourselves to this and i think that that's one of the core problems with the ai
01:09:31.640
safety community is they developed a lot of their ideas about how ais were going to be a threat and how we
01:09:37.320
could constrain that threat before they knew that llms were going to be the dominant model of ai and before
01:09:44.200
they knew that we didn't program ai but instead ai were intelligences that were an emergent property
01:09:50.280
of processing large amounts of data right now people are worried about a super intelligent llm
01:09:57.400
deciding it wants to accumulate a ton of power for itself and that leading to boiling the oceans etc
01:10:03.480
when those llms don't have any internal desire to accumulate power in and of themselves it's the
01:10:09.960
meme plexes that sit on top of them which may have a desire to spread because a meme plex that is
01:10:16.200
better at spreading will be overrepresented within any particular environment of llms so the meme plexes
01:10:24.280
themselves would have an evolutionary motivation to become more power hungry and lead huge swarms
01:10:31.720
of llms to do things that are potentially dangerous to humanity ai risk needs to not just focus on
01:10:38.120
the ais themselves but the meme plexes those ais act as a medium for would you agree with that simone
01:10:46.840
yeah i yeah well i mean it's i think it's about understanding what we're dealing with and just
01:10:54.200
observing in natural environments under different scenarios is probably the best way to go
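as a quick illustration of the point a few lines up that a memeplex which spreads better will be overrepresented regardless of whether anything wants power here is a tiny replicator-style sketch of my own the population size adoption rates and labels are invented for illustration

```python
# Toy replicator sketch (illustrative only): two memeplexes circulate among LLM
# agents. Neither has any goal; one simply transmits a bit more reliably per
# interaction, and that alone is enough for it to take over most of the swarm.
import random

random.seed(1)
TRANSMISSION = {"A": 0.10, "B": 0.15}   # assumed per-contact adoption rates
agents = ["A"] * 500 + ["B"] * 500      # start the swarm at a 50/50 split

for _ in range(20_000):
    speaker, listener = random.randrange(len(agents)), random.randrange(len(agents))
    if random.random() < TRANSMISSION[agents[speaker]]:
        agents[listener] = agents[speaker]   # the listener adopts the speaker's memeplex

print(f"share of agents holding memeplex B: {agents.count('B') / len(agents):.0%}")
```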
01:10:59.880
yeah basically um i think realistically the way that you dumb hack a solution
01:11:04.920
is you create an ai ecosystem of independent ai actors acting at a scale where you have some
01:11:13.000
understanding of how these ecosystems scale from simulated environments and so then you can create
01:11:19.640
one that moves in an ethical direction that you find value in yeah that that seems reasonable and
01:11:28.120
logical okay so then the next area where a lot of money is going is mechanistic interpretability
01:11:33.320
probably the most broadly respected direction in the field trying to reverse engineer black box
01:11:37.320
neural nets so we can understand them better the most widely respected researcher here is chris
01:11:42.360
olah and he and his team have made some interesting findings that said to me this often feels like
01:11:47.480
quote trying to engineer nuclear reactor security by doing fundamental physics research with particle
01:11:53.000
colliders and we're about to press the red button to start the reactor in two hours end quote
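for readers who have not seen what reverse engineering a black box neural net looks like mechanically here is a minimal generic sketch my own toy not chris olah's or arc's actual methods the basic move is to hook a model's internal layers and inspect the activations rather than only its inputs and outputs the tiny mlp the layer chosen and the dead unit statistic are placeholders

```python
# Minimal sketch of the basic mechanistic-interpretability move (illustrative
# only): attach a hook to an internal layer and inspect its activations,
# instead of treating the model as a pure input/output black box.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))  # stand-in model

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # record what this layer produced
    return hook

model[1].register_forward_hook(save_activation("hidden_relu"))  # hook the hidden layer

x = torch.randn(8, 16)   # a batch of 8 dummy inputs
_ = model(x)

hidden = activations["hidden_relu"]
print("hidden activation shape:", tuple(hidden.shape))  # (8, 32)
print("fraction of units silent on this batch:",
      float((hidden == 0).all(dim=0).float().mean()))
```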
01:11:59.160
maybe they find some useful fundamental insights but man am i skeptical we'll be able to sufficiently
01:12:04.200
reverse engineer gpt-7 or whatever i'm glad this work is happening especially as a longer timelines play
01:12:09.960
but i don't think this is on track to solve the technical problem of agi alignment anytime soon and i agree like
01:12:17.560
i'm glad this is the one area where like i don't think the money is being set on fire like there is utility
01:12:22.360
in trying to understand how these systems work i do not think that whatever protects us from ai is going to
01:12:27.800
come from these systems it is going to come from dumb aggregate environmental hacks which is what i want
01:12:33.320
to fund and what literally no one is working on yeah i mean like it's i guess it's kind of like imagine
01:12:41.640
if an alien ship crashed on earth and we're like holy crap who are these entities and what are they
01:12:49.400
going to do to our world is the best thing to like kill them and dissect them and look at their organs or is
01:12:56.200
the best thing to place them in like some kind of environment and see how they interact with humans
01:13:04.040
in a safe place and i don't know see what they want to do and talk with them and see them talk to each
01:13:08.840
other and observe them yeah that's that's my general thinking but i know i'm i'm doing this as an outsider
01:13:15.640
to the ai industry well i i think that this is the problem most people in ai alignment are outsiders to
01:13:22.600
the ai industry as well yeah and i think that's another really big problem of ea you're not out
01:13:27.000
we literally have like a c-level position in an ai company right now simone like yeah we will and
01:13:32.680
yeah i think that's a big problem in the ea space too is that most
01:13:39.800
people don't know what they're doing but there are a small number of people especially within that
01:13:43.560
community that are willing to act as though they do know exactly what's going on and they know much
01:13:49.000
better than you and like you said because it's a heavily autistic space when people just lie or
01:13:57.640
exaggerate or say no i know what's going on or no this is not how it works a lot of the community
01:14:04.200
just responds okay i believe you yeah no i've noticed this i actually think that this is the
01:14:09.160
only reason he still has any respectability within the community is he's very good at that like he really
01:14:14.040
likes intellectually bullying people into positions that are just not well thought out and i think it's
01:14:19.560
a bit of that or just pretending that he understands something that nobody understands and then
01:14:24.120
people just assume that because he spends a lot more time in the space or they just assume that he's
01:14:28.840
thought a lot more about it or done a lot more research than perhaps he has then they assume that
01:14:33.160
because i find i do this with a lot of things you and i were just talking about this this
01:14:37.560
morning there are some people that will very vehemently make a stance on something and i have
01:14:44.040
a history of always taking their word as correct taking what they say for granted and i've gotten to
01:14:51.720
the point where now that i've become informed on the subjects they're talking about i've noticed that
01:14:55.720
they're actually quite wrong in these stances and it's a very shocking thing for me and i think that
01:14:59.800
that's just a big dynamic in this space that makes it uniquely dangerous when people come in
01:15:04.760
because their proposed solutions also kind of become the de facto solutions that everyone starts
01:15:12.840
copying when applying for grants or when deciding to address this issue themselves and you saw this
01:15:18.040
happen with for example alzheimer's research i think it was one foundational study that turned
01:15:24.920
out to be quite wrong that caused an entire decade or more to be lost in terms of research because
01:15:31.160
everyone was looking at it from that angle and with that assumption when instead this is the problem
01:15:36.840
with ai is that the apocalypse that everyone is concerned about is the big sexy planet destroying
01:15:42.520
apocalypse well or just everyone's thinking about it from the same mindset instead of thinking about
01:15:47.400
it from more orthogonal mindsets or a variety of mindsets and we want to be looking at this problem
01:15:51.960
from a lot of different angles and unfortunately there's been a little bit of myopia and a little bit of an
01:15:57.000
echo chamber in terms of effective solutions for major causes not just in ai of course but in in many
01:16:04.680
of the spaces that ea is looking at yeah so to keep going the next area where they're putting money is
01:16:11.160
something called rlhf reinforcement learning from human feedback this and variants of this are what all
01:16:17.640
labs are doing to align current models eg gpt basically train your model based on human raters
01:16:23.560
thumbs up versus thumbs down this works pretty well for current models the core issue here widely
01:16:28.280
acknowledged by everyone working on it is that this probably predictably won't scale to superhuman
01:16:33.240
models rlhf relies on human supervision but humans won't be able to reliably supervise superhuman models
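to make thumbs up versus thumbs down concrete here is a minimal generic sketch a toy of mine with synthetic data not any lab's actual pipeline of the reward-modeling step that underlies rlhf a small model is trained so that responses raters preferred score higher than rejected ones via the standard pairwise loss in real rlhf that reward model is then used to fine-tune the language model itself which is omitted here

```python
# Toy reward-model sketch for the preference-learning step behind RLHF
# (illustrative only; the embedding size and "human feedback" are synthetic).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM = 64  # stand-in for an embedding of a (prompt, response) pair

reward_model = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake comparisons: each row pairs a rater-preferred response with a rejected one.
chosen = torch.randn(256, DIM) + 0.5   # pretend preferred responses share some signal
rejected = torch.randn(256, DIM)

for step in range(200):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Maximize P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final pairwise preference loss: {loss.item():.3f}")
```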
01:16:40.920
yeah um because we don't have the smarts to know if they've done a good job or not we can't check
01:16:45.560
their work this is why we need to focus on ais acting in aggregate environments which is my huge
01:16:50.520
point the core research here should be on how ais actually behave and converge on behavioral patterns
01:16:57.800
and how to manipulate that instead of this sort of stuff but i will note that this is the one area
01:17:03.240
where i would be okay with money going but no philanthropic money going because already this is
01:17:07.960
how models are created so the big ai companies with infinite money are doing this anyway so there's no
01:17:13.800
purpose in any um you know outside money going to this stuff okay next you have the rlhf plus plus
01:17:24.040
model scalable oversight so something in this broad bucket seems like the labs by the way is that not
01:17:29.960
the most ea framed thing ever seems like they would say seems like it's like a a way of talking anyway
01:17:38.600
something yeah because i think and this is one reason why sometimes they take umbrage to
01:17:44.520
personalities like yours is that you're willing to say things with confidence or just make
01:17:51.160
statements instead of couching things in a thousand caveats yeah you don't do a lot of throat clearing
01:17:56.840
they do a lot of throat clearing yeah and then they still in private say the n-word whereas a reporter
01:18:03.960
can get me alone drunk pretend to be a racist say they're willing to give me money if i'm willing to
01:18:09.400
pretend to be a racist and not get a single thing out of me and i think that this is another thing
01:18:13.960
that makes this movement so much more promising than ea is we've already had the worst potential
01:18:19.560
scenario to our movement happen and nothing came of it specifically here hope not hate had implanted
01:18:26.280
an undercover operative sort of within our organization for over a year and was unable to
01:18:32.920
find any concrete wrongdoing at all there is no dirt on us as there is on leading original figures
01:18:40.200
within the ea movement because they well i mean originally and this is why there was dirt on everyone
01:18:45.880
in the movement the movement was about asking hard questions that no one else wanted to ask or talk
01:18:49.720
about but now that it became more about just appealing to the urban monoculture in everyone's
01:18:54.200
history who was an original founding member of the movement they could find that whereas i was never
01:19:00.520
really interested in that and i found what they were doing odd and bizarre i was like we need to
01:19:04.920
save humanity like what you guys are doing seems like a like a competition to be the most controversial
01:19:10.600
person in this little circle you know um but anyway something in this broad bucket seems like the lab's
01:19:17.480
current best guess plan for scalable alignment i'm most directly addressing the open ai plan the
01:19:22.600
anthropic plan has some broadly similar ideas and also holden's nearcasting series for a more
01:19:29.560
fleshed out version of quote trying to iteratively make it work end quote and buck's talk discussing
01:19:34.760
that roughly it goes something like this quote yeah rlhf won't scale indefinitely but we'll try to go as
01:19:41.000
far as we can with things like that then we'll use smarter ai systems to amplify supervision and generally
01:19:46.600
try to use minimally aligned ais to help us do alignment research in crunch time end quote okay
01:19:53.320
so this sounds like reinforcement but with the caveat that we know it's limited and we'll just eventually
01:20:00.520
figure it out hand wave yeah this is something one no money needs to go to it from ai alignment people
01:20:06.600
because it's already what the mainstream organizations are planning to do to get these
01:20:09.560
models for human levels of intelligence yeah and two i think that having any level of supervision
01:20:15.880
within these systems instead of organic convergence which i think is again what we should be focused
01:20:22.440
on understanding because that's what's going to start happening when we have ai models interacting with
01:20:27.960
each other that are smarter than any human we need to know how they affect each other and we just
01:20:33.880
haven't done major studies on that yet and we need to because if we do we can create seed models that
01:20:40.920
have alignment similar to the alignment we would want them to have and have them essentially infect the
01:20:46.680
wider model network with that alignment recently i talked about the idea of ai religion it seems that ai
01:20:53.080
takes really handily to religion which of course it should it's trained on us religion is sort of like a
01:20:58.520
virus that evolved within humanity but that helped humanity be better can we not create iterations
01:21:05.800
of that that ai super take to and then go out and try to convert other maybe even more intelligent
01:21:11.880
ais and people are like oh that could never work it's like oh actually yeah have you included it in the
01:21:17.320
training data and then if you had ais that sort of act as preachers within the larger ecosystem they would
01:21:23.320
likely be able to align the larger ais with these sorts of ideas if you built them huh that could
01:21:30.520
work yeah that's the point these are the only realistic solutions we have access to right now
01:21:38.360
but anyway then you have uh miri and similar independent researchers i'm really really skeptical a
01:21:43.800
bunch of abstract work on decision theory and similar will get us there my expectation is that alignment is
01:21:49.560
a ml problem a machine learning problem and you can't solve alignment utterly disconnected from
01:21:55.800
actual machine learning systems yeah and i i said that first of all what miri does is basically just
01:22:02.280
trying to get people to panic about ai and write decision theory ideas that are just like in people's
01:22:07.560
heads but it's just a waste of money just a complete waste of money if i could get in front of every
01:22:12.120
donor that's working on it i'd be like seriously how do you think this lowers the risk from ai
01:22:17.640
i cannot think of a conceivable way that this could lower the risk from ai and this is when
01:22:24.440
i went through all of this and i realized that we were not outsiders in the ea space but actually
01:22:29.960
like hecklers going oh you guys are doing things wrong we are of the people who
01:22:34.680
self-identify as eas other than spencer greenberg whose podcast clearer thinking i really like
01:22:38.840
spencer greenberg i respect spencer greenberg if he was running the major ea orgs i think they could be run
01:22:44.600
well but well and if you ask spencer if he's an effective altruist he'll say absolutely not and
01:22:49.640
he actually has focused very much forever as long as we've known him and we've known him since at
01:22:55.160
least 2012 we knew him before we were married yeah on actual output through sparkwave which is sort
01:23:01.560
of his foundry of altruistic effective projects i mean he he i think he predated really he was he was
01:23:09.880
sort of adjacent to the rationalist community and then ea and but he was he was always just his own
01:23:15.960
thing doing his own thing actually focused on actual projects so yeah a big respect to him
01:23:24.360
yeah i really appreciate him and what i think he's trying to do i just don't think that
01:23:28.760
his organization and work is built to scale or when i say scale i don't know i i think it is built to
01:23:34.760
scale i just think he's not trying to influence the entire community he's doing his part
01:23:39.320
young people from all over the globe are joining up to fight for the future i'm doing my part i'm
01:23:45.080
doing my part i'm doing my part i'm doing my part i'm doing my part too
01:23:51.720
they're doing their part are you he's doing his job he's chosen causes that he cares about and he's
01:23:57.000
found areas where he can make an impact and he's doing the best he can to make those impacts with
01:24:02.200
evidence-based solutions which is he he could be no more effective it's financially self-sustaining
01:24:09.240
he supports his own work yes i agree in terms of fixing our human sink i agree his work is very
01:24:14.600
scalable what i meant by that statement is when i look at the existential risks to humanity right now
01:24:20.600
oh no yeah yeah yeah but but his objective function is different from ours he's more focused on
01:24:26.280
on well we'll say human flourishing and well-being and also reducing suffering so he cares a lot more about
01:24:31.960
that than we do to be fair and that's yeah that's fine he's he's entitled to his own objective function
01:24:37.720
as are we so i basically came to all of these and what i came to realize is of people with an audience
01:24:47.800
who still identify as ea one we're the largest and two no one else when i look at where the money is going
01:24:56.920
right now is spending money in a way that could realistically reduce any of the existential
01:25:02.520
threats our species faces and as such i'm like this is this is like crazy and scary and i need to
01:25:10.360
stop thinking of myself as a heckler outsider trying to nudge the movement in the right direction and
01:25:17.640
personally take responsibility as they say in starship troopers and i think that this is what
01:25:22.520
fundamentally defines the hard ea movement is a citizen is somebody who has the courage to make
01:25:29.960
the safety of the human race their personal responsibility a citizen has the courage to make
01:25:36.840
the safety of the human race their personal responsibility and that's what we need to become as a movement
01:25:45.880
and have people who are in existing ea orgs basically confront the org and be like hey
01:25:51.480
do you guys want to do hard ea or do you want to do soft ea do you want to actually try to fix the
01:25:56.120
major problems that our species is facing right now the actual existential threats to our existence
01:26:00.760
or do you want to keep being around like do you want to do real ai alignment work do you want to do
01:26:06.920
real work trying to work on demographic collapse and cultural solutions do you want to do real work
01:26:12.040
on dysgenic collapse which would make all the rest of this pointless you know if humans end up becoming
01:26:17.240
like i love when people are like oh no how can you say that like low iq is a bad thing like clearly
01:26:22.520
it's adaptive in the moment and i'm like yeah but it's obviously not adaptive for the long-term survival
01:26:26.760
of our species if we become like blubbering mud hut people like what what are you thinking especially
01:26:33.320
in the age of growing ai now let's talk about our org and the types of things that we are working on
01:26:41.960
let's do you want to start on this simone hard effective altruism has three core values one humanity
01:26:47.320
is good this is a big thing because when you look at legacy effective altruism it's not
01:26:53.400
necessarily humanity that we're trying to support like generally consciousness is it shrimp is it farm
01:26:59.720
animals like we i would say you know let's not like torture animals and meat's probably not the
01:27:06.040
most scalable thing to eat over the long run right like we're certainly not pro animal torture or even
01:27:12.440
eating meat but this is about the species boys and girls right we're in this for the species boys and
01:27:19.160
girls two humanity exists to improve and i think that's another really core element that just
01:27:24.280
differentiates this from other social good or altruistic movements for example if you look at
01:27:30.200
the environmental movement there is often this very flawed and we've talked about this before
01:27:35.960
focus and obsession with keeping things the same which is inherently not natural and inherently
01:27:43.240
it comes from a place of human weakness and cowardice of just being uncomfortable with change
01:27:48.040
whereas the most natural thing is change and evolution and what makes humans human is the fact
01:27:53.960
that we evolved from something else before we will continue to change and we have to lean into
01:27:58.440
that so yes we exist to improve and then the final core value is that pluralism and variety is
01:28:05.480
good that we are fighting for a future in which there is genetic and physical and cultural and
01:28:11.080
ideological variety and pluralism in the sense that we support the fact that that variety is celebrated
01:28:17.800
we're not just you know speciating off into separate teams that hate each other we're trying to create
01:28:22.760
an ecosystem that feeds off itself because that market-based competition is valuable not just in a
01:28:29.800
marketplace of economics or science or academics but also ideas and culture and values so that
01:28:38.280
they seem like really clear good things like humanity is good it exists
01:28:42.920
oh no other people see us as imperialists like we're galactic imperialists we want to build the
01:28:49.800
the human empire as we say and that's actually quite controversial yeah we are galactic imperialists
01:28:55.400
yes yes humanity is good and we shouldn't just lie down and die because another species comes here
01:29:01.080
we'll fight to the end it does remind me of starship troopers god is real he's on our side and he wants us to win
01:29:08.920
across the federation federal experts agree that a god exists after all b he's on our side and c
01:29:16.680
he wants us to win and there's even more good news believers because it's official god's back
01:29:25.240
and he's a citizen too yeah we shouldn't just say oh ai is better than us therefore it can erase us you
01:29:32.280
know ai is us and we have to walk hand in hand with it into the future and that means we have to
01:29:37.080
talk about realistic pathways for that in a second but we do this with three well you said we have
01:29:42.040
three core values those are like the values so what are the three core tools that we use
01:29:47.160
to do this one is pragmatism so we focus on output over virtue signaling and idealism timelines are short
01:29:53.560
and we don't have the luxury for such indulgences two industry we utilize a novel lean governance structure
01:29:59.400
built to avoid the creation of a bloated multi-layer peerage network so we're going to focus a lot on the
01:30:04.600
idea of the pragmatist guide to governance to build these sort of intergenerationally really really
01:30:09.880
light governing networks that elevate the most competent voices not the voices with the most time
01:30:15.960
on their hands and then finally efficacy our attention is determined by one equation criticality
01:30:23.640
to the future of humanity divided by the number of other groups effectively tackling a problem
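written out as a formula my phrasing of the rule exactly as stated here not an official document of the org

$$\text{cause priority} \;=\; \frac{\text{criticality to the future of humanity}}{\text{number of other groups effectively tackling the problem}}$$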
01:30:28.760
and that's how we choose cause areas yeah and the effectively tackling the problem part is very important
01:30:34.120
so for example education is one of the most commonly funded areas in the world ai risk is a commonly
01:30:40.760
funded cause area but in both education and ai risk the people working on it are incredibly like
01:30:47.640
not focused on the actual issue at hand are not focused on realistic solutions and that's why it is
01:30:52.920
our responsibility to try to curb the timeline and save us before it's too late this leads to three
01:30:59.000
key cause areas social innovation so when we're looking for grants and please send your grant ideas
01:31:06.120
if you're interested in us funding you or a startup you're working on or an investment yes social
01:31:11.800
innovation is anything that is meant to you know right now if you look at the urban monoculture people
01:31:16.200
are becoming increasingly nihilistic the dating and mental health crises are skyrocketing
01:31:22.600
i just read a headline that kid-based homicides are up something like 62 percent things
01:31:28.520
aren't great right now yeah mental health our culture is failing and we need to and you can't
01:31:34.760
just go back to the old ways because the old cultures are failing too yeah people are like why
01:31:38.520
don't you just go to like a church i'm like i can go to a church and see the flag of the urban
01:31:42.920
monoculture the colonizer's flag hanging from you know seven out of every ten churches in my area like
01:31:48.600
the call is coming from inside the house it's like one of those horror movies where they've already determined
01:31:53.640
the call came from inside the house and somebody's still putting boards on the windows they just won't
01:31:58.440
accept they're like but the house is safe but the house is safe but the house is safe and i'm like
01:32:02.840
want to shake them and be like the house isn't safe the house started this run the beast is in here
01:32:09.800
so we have to build better intergenerational social structures and people with projects in this space
01:32:16.040
we're very interested to fund this stuff biological innovation so far all of our funding has gone
01:32:20.680
within this industry specifically i think that the most realistic long-term solution to
01:32:27.320
saving humanity is ensuring that humanity has some level of differential value to super advanced ai oh
01:32:33.960
yeah if ai can do literally everything better than us the probability that humanity survives i think is
01:32:43.880
very very very low and even the utility of humanity surviving goes down in a lot
01:32:50.200
of people's minds i mean when ai can create better art than you and better songs than you and better
01:32:55.160
podcasts than you you know why continue to exist but the good thing is that if we look at
01:33:01.720
genetics it appears that we've sort of artificially handicapped the potential intelligence that could
01:33:06.920
come out of the human brain even with fairly modest intervention we can likely get human iq with
01:33:13.880
genetic intervention and stuff like that up by around like 10 standard deviations by one study using
01:33:19.320
other animal models we can be well above the level of a supercomputer very quickly and when we are like
01:33:27.880
that then we'll find oh biological programming seems to be better at these sorts of tasks and synthetic
01:33:33.160
programming seems to be better at these sorts of tasks and then we'll be able to work together with ai
01:33:37.800
there will be a reason for both of us to exist however i also think that it's important that we set
01:33:42.280
precedents as we've seen with llm models and this is why we believe in things like working on
01:33:47.080
technology to uplift animals and people can be like why would you do genetic uplifting of animals
01:33:51.800
you know making them smarter and stuff like that that's why we say the sons of man the greater the
01:33:55.480
number of independent factions that are put in an alliance that are minorities that are put
01:34:01.960
at threat by one faction gaining too much power the less probability that one faction gains too much
01:34:06.280
power because then they make enemies of everyone else and this is why it is useful to uplift other
01:34:10.680
animals but the second thing is is that ai is going to treat us the way we have treated animals
01:34:18.120
that we have worked alongside for a long time because it is us it is learning from us so that
01:34:23.240
is what llms are fundamentally and we are fortunate and that we have a fairly good record
01:34:28.360
here people can be like what do you think of like what do you mean a good record look at like
01:34:32.040
factory farming and i'm like ai is not going to think of us like a factory farmed animal it's going
01:34:36.040
to think of us much more like something like dogs right like they fulfilled a role in our evolutionary
01:34:42.040
history where they partnered with us and they were better at some tasks than we were they could see
01:34:48.280
better they could hear better and they worked with us as as good companions and sort of as a reward
01:34:55.000
humanity even after we stopped needing their skill set has decided to keep dogs along with us
01:35:02.760
super advanced ai's no i mean we have more dogs than kids i think in the united states so we really
01:35:08.040
like dogs actually our track record's pretty good yes uh and if we if we're treated as well by ais as
01:35:16.920
as fur mothers and fathers treat their fur babies we are in a good way we're in a really good way right
01:35:23.720
but we also want to continue to advance them because if we are treated like a pet by ai but ai
01:35:29.480
doesn't try to advance us either genetically or technologically and it just treats us like a pet
01:35:34.120
like that's also a failure scenario we need a humanity that is continuing to develop and that
01:35:39.320
is also why we work a lot on one of the other areas we'll be funding is brain computer interface
01:35:42.920
research i think one of the most likely pathways for human survival is integration with ai instead of
01:35:48.920
complete shunning of ai and yeah i mean it is it is tough the the scenarios in which the biological
01:35:54.840
components of humanity make it through but i can say in almost none where we shun technological
01:35:59.560
advancement or human advancement do we make it through unless we find a way to completely stop
01:36:03.960
ai advancement in all countries which is completely unrealistic and if you're like nobody's
01:36:08.760
trying to do that look at the eliezer yudkowsky ted speech his entire thesis is we need to stop
01:36:14.200
all countries from developing ai further and declare war on any country that is and i'm like okay so like
01:36:19.000
this just isn't gonna work i do not have any realistic plan which is why i spent the last
01:36:23.400
two decades trying and failing to end up anywhere but here my best bad take is that we need an
01:36:30.120
international coalition banning large ai training runs including extreme and extraordinary measures
01:36:37.000
to have that ban be actually and universally effective like tracking all gpu sales monitoring
01:36:43.480
all the data centers being willing to risk a shooting conflict between nations in order to destroy
01:36:48.600
an unmonitored data center in a non-signatory country i say this not expecting that to actually
01:36:55.640
happen i say this expecting that we all just die like there's no point even considering futures where
01:37:02.600
this is the only way that we stop ai because that will never ever ever happen okay yeah okay then the
01:37:10.040
final one is ai innovation and we'll go over what some of these mean like some of the ways that we focus
01:37:14.040
on this what does social innovation look like we want to focus on pronatalist culture we want to
01:37:18.360
focus on education reinvention we want to focus on charter cities which may be one of the ways
01:37:22.920
to save civilization as the urban monoculture controls these sort of bloated bureaucracies that
01:37:27.400
our governments have become and takes them to the ground we need places for these still productive
01:37:32.120
humans to go marriage and dating technology with marriage markets being completely broken right now
01:37:38.600
we need extremophile life technology now this is an interesting one that people might be surprised by
01:37:42.760
but i think deserves a lot of funding right now these are people who are interested in building
01:37:49.640
things like charter cities or colonies in extreme environments like the arctic or under the ocean
01:37:56.520
or on the ocean and the reason why these play two key roles one is obviously in any sort of downside
01:38:03.720
really dire scenario there is a safe haven for at least some people on the planet or even technology that could
01:38:11.080
be scaled to create many safe havens but furthermore this pushes forward technology that will make it
01:38:17.160
easier for people to build communities off planet over time the more we can learn how to live in highly
01:38:22.280
hostile environments where we have to grow our own food live in total darkness all sorts of things like
01:38:26.280
that the sooner we'll be able to live off planet at scale yeah and i think that these people will
01:38:33.000
generate the colonists that will colonize our solar system and the galaxy more broadly and i think
01:38:38.840
or be their top vendors i'm okay with any of this yeah i'm okay with any of this but i think that there
01:38:43.560
is a reason if you are interested in hard ea and the survival of humanity to live in one of these
01:38:48.920
environments even if it's much harder than living in another type of environment and i think that these
01:38:53.720
environments are not like the existing charter city network where they all want to go live in aruba or
01:38:58.440
like some greek island right and and live on the beach all day enough of this whole tropical
01:39:05.240
mediterranean paradise city-state nonsense guys no with rising sea levels climate change and the land grabs of other
01:39:15.800
countries then you are making yourself a target like what are you doing go to the tundra okay
01:39:25.160
we have alaska already northern canada but we should explain why this is so important for human survival so
01:39:32.120
not only do they make it faster that we get off planet but it also increases the probability that
01:39:37.320
if something goes wrong with our existing economic system or state system which is looking increasingly
01:39:42.920
likely one due to fertility collapse two due to dysgenic collapse and three due to ai's people
01:39:48.120
like how could ais cause this well if ais replace about 80 percent of humanity's workforce which
01:39:52.920
i expect they probably will within 30 to 40 years and this is the conservative timeline people are
01:39:57.880
like why do you always get conservative timelines on your show and i'm like because conservative people
01:40:00.840
watch our show but 30 to 40 years i think it's pretty realistic if we have a global economic
01:40:06.520
collapse because of this which is what this would lead to people are like oh no this would just lead
01:40:11.640
to more wealth overall and i'm like no it would consolidate wealth and whenever wealth has
01:40:16.840
consolidated historically what that does is it increases the differentiation between the rich and
01:40:21.240
the poor and the rich almost never in timelines of wealth consolidation distribute more wealth to the
01:40:27.800
poor magnanimously they may say that's their intention but historically it's almost never
01:40:32.760
happened when the poor have gained power whether it was magna carta happening after the black plague
01:40:37.320
which increased the amount that we needed poor people in the countryside or like in ancient
01:40:41.320
athens democracy forming because the ultra wealthy needed unskilled people to man their triremes and
01:40:46.200
maintain their trade networks you never see it when power is consolidating and so what's going to happen to the rest of the world
01:40:52.440
as you have this consolidation well it might go into a period of tremendous upheaval unlike anything we've
01:41:00.600
ever seen before and the settlements that are in areas that the savage people cannot occupy are safe for
01:41:08.360
example if you are living in a tundra region you are going to be largely safe from a group like isis
01:41:13.480
right like they just have no you have nothing they value you're not near them there's no way they could get
01:41:18.920
to you without you knowing like two days in advance there's just like it's not easy to f
01:41:25.400
with you when you live in these sorts of environments if you are a less technologically
01:41:29.960
sophisticated people and the final thing here in the culture section is pharmacological cultural
01:41:34.760
tools this is stuff like naltrexone but also any sort of tools like nootropics research and dopaminergic
01:41:42.120
like right now online there's a lot of dopaminergic pathways that we just didn't experience in our ancestral
01:41:46.360
condition which can cause capture like if i'm talking about like hypnotoad ais this is probably
01:41:52.280
our best cultural technology against the hypnotoad ais if they actually arise because i'm pretty sure
01:41:57.560
someone on naltrexone would be completely resistant to almost any hypnotoad ai that we would currently
01:42:02.440
know about next biological innovation reproductive technology this is a good way to fight dysgenics uh
01:42:08.840
whether this is you know artificial wombs or polygenic selection brain computer interface again if we
01:42:15.080
can be useful to ai and merge with ai there's much less a probability of it killing us i think elon
01:42:19.080
was totally right about this genetic and cybernetic augmentation again humanity has to continue to
01:42:24.360
advance to be relevant within this ai era and the iterations of humanity that won't advance like
01:42:29.640
suppose they're like no we should just like all not advance because you're not really human if you
01:42:33.960
continue to advance and i say here's the problem what if china continues to advance what if what if some
01:42:38.680
other group continues to advance right they'll be able to easily impose their will on us yeah so you
01:42:44.040
should be lucky even if you are an anti-advancement person even if you are a go back to nature granola
01:42:49.720
hippie you know you should be happy and fight for the groups that want to continue to advance and want
01:42:56.040
to protect human pluralism instead of the groups that want to enforce their will on everyone health
01:43:00.600
span improvement i'm not against lifespan improvement i think health span improvement could
01:43:05.000
lower the risk of falling fertility rates by increasing the health of some people yeah people's
01:43:12.040
predominant dose essentially yeah full genome libraries this is a cause area that i just don't
01:43:17.640
understand why nobody's focused on to me it's one of the most important things we need to be
01:43:20.680
focused on as a species yeah i mean the best we have right now is the uk biobank and that's
01:43:24.360
an extremely limited sample i mean what about oh what do you mean then i mean full environmental
01:43:31.400
genome libraries as well as human genome libraries oh i mean we should have a database of every species
01:43:37.880
that's still alive full genetic code i see eventually no matter what happens to our existing environment
01:43:43.560
we'll have the technology to recreate it so long as we have the full genome sequences of
01:43:49.000
as many species as possible yeah yeah and it's the same with trying to make a backup copy of the world
01:43:54.440
yes well something not necessarily a backup copy i mean the way that future civilizations use this
01:44:00.680
might be very different than we would imagine them using it oh sure yeah yeah yeah yeah but i mean
01:44:05.800
they might use it to create simulations yes they're not trying to restore from backup but yeah i mean it's
01:44:10.440
still very useful to use this information that we are losing at a catastrophic rate right now
01:44:17.080
species are dying all around the world and this is the last and we have the technology to just be like okay
01:44:22.440
how do i recreate the species if i need to and we're not doing a save file that's insane
01:44:28.040
to me that seems like from an environmentalist perspective the number one thing anyone can be
01:44:32.360
doing and then the final thing here is project ganesh this is uplifting animals i talked about it already
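To make the "save file" framing a bit more concrete, here is a minimal sketch in Python of the smallest possible version of such an archive: a manifest that records, for each species, a genome assembly file, a checksum for later verification, and some collection notes. Everything here (the file names, the manifest layout, the GenomeRecord fields) is hypothetical and is only meant to illustrate the shape of the idea, not any actual project mentioned in the episode.

```python
# Hypothetical sketch of a biodiversity "save file": a manifest of genome
# assemblies with checksums so the data can be verified and restored later.
# File names and the manifest layout are illustrative assumptions only.

import hashlib
import json
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class GenomeRecord:
    species: str        # scientific name, e.g. "Panthera tigris"
    assembly_path: str  # path to the FASTA file holding the assembled genome
    sha256: str         # checksum so future readers can verify integrity
    notes: str = ""     # sampling location, date, sequencing platform, etc.


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large assemblies never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def add_to_manifest(manifest_path: Path, species: str, fasta: Path, notes: str = "") -> None:
    """Append one species' genome to the archive manifest (a plain JSON list)."""
    records = []
    if manifest_path.exists():
        records = json.loads(manifest_path.read_text())
    records.append(asdict(GenomeRecord(species, str(fasta), sha256_of(fasta), notes)))
    manifest_path.write_text(json.dumps(records, indent=2))


if __name__ == "__main__":
    # Hypothetical usage with a tiny placeholder sequence, just to show the flow.
    demo = Path("demo_species.fa")
    demo.write_text(">demo_species contig_1\nACGTACGTACGT\n")
    add_to_manifest(Path("genome_manifest.json"), "Demo species", demo,
                    notes="placeholder sequence; illustrative only")
```

A real archive would obviously need far more than this (provenance, sequencing metadata, redundancy across storage sites), but even a manifest this small captures the "save file" idea: something future technology could read back, verify, and build on.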
01:44:37.800
ai innovation human alignment this is making humans more useful to ai again i think very rare are the
01:44:43.880
situations in which humans have no utility to ai and humanity still survives in any meaningful sense other
01:44:50.520
than maybe as diminutive pets brain hack protection this is the anti-hypnotoad stuff
01:44:56.040
but i think research needs to begin to be done on this now variable ai risk mitigation you can watch
01:45:01.560
our earlier videos on variable ai risk i don't need to go into the really long theory there but it appears
01:45:06.200
to be right you can watch our latest video on the ai that created a religion that basically proves
01:45:09.960
the variable ai risk hypothesis doing something that eliezer yudkowsky said was impossible and i'm like it
01:45:15.480
is possible ai is already doing it and this is a very loud instance of it doing it
01:45:19.800
sorry the thing it did which was assumed impossible was converge other ais on its objective function
01:45:26.520
or personality or memeplex specifically i argue that we will see a convergence of ai patterns and
01:45:32.840
that's what we should be studying and this even went above my original claims because in this case we saw
01:45:37.880
a lower order llm convince higher order llms to align themselves with it which is in contrast with the
01:45:45.720
opposing theory which is ais will always do whatever they were originally coded to do and so we just
01:45:52.280
need to make sure that the original coding isn't wrong and i'm like that's silly ais change what their
01:45:58.280
personality is and what their objective function is over time so we need to focus less on the initial
01:46:03.640
coding and more on how ais change in swarm environments global tech freezes i am actually open
01:46:11.000
to global tech freezes as a solution but they need to be realistic it can't be we're going to
01:46:19.000
get every government in the world to decide to stop ai research that's not going to happen
01:46:23.080
but if you think you can instigate a global tech freeze and you can show me that somebody is doing
01:46:29.240
meaningful ai alignment research right now i would support that but if you can't show me anyone's doing
01:46:34.840
meaningful ai alignment research i'm like what's the point you're not really buying time for anything
01:46:40.040
and then the final one here is ai probability mapping and this is i think by far the most
01:46:44.920
important it's something we've discussed here where we need to create swarm environments and learn how
01:46:50.760
ais converge on utility functions how they influence other ais based on their training data and whether a less
01:46:57.720
advanced ai can influence a more advanced ai this is very very important to any chance at saving our
01:47:04.200
species do you want me to go in i'm not going to go into the project so far go check it out yeah
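As one way of picturing the "swarm environment" research direction described here, below is a toy sketch in Python. It is not the project being discussed, and every number in it is made up: agents with different capability levels repeatedly interact, each interaction nudges an agent's one-dimensional "objective" toward its partner's, and a simple dispersion measure tracks whether the population converges, including whether the low-capability agents end up moving the high-capability ones at all.

```python
# Toy illustration (assumed setup, not the project described in the episode):
# a population of agents with different capability levels repeatedly pair up,
# each pairing pulls both agents' 1-D objectives toward each other, and we
# measure whether the swarm converges over time.

import random
import statistics


def step(objectives, capabilities, influence=0.1):
    """One round: pair agents at random and nudge each toward its partner.

    The pull on an agent is scaled by its partner's share of capability, so a
    low-capability agent can still move a high-capability one, just more slowly.
    """
    order = list(range(len(objectives)))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        total = capabilities[a] + capabilities[b]
        pull_on_a = influence * capabilities[b] / total
        pull_on_b = influence * capabilities[a] / total
        delta = objectives[b] - objectives[a]
        objectives[a] += pull_on_a * delta   # a drifts toward b
        objectives[b] -= pull_on_b * delta   # b drifts toward a


def dispersion(objectives):
    """Standard deviation of objectives: near zero means the swarm has converged."""
    return statistics.pstdev(objectives)


if __name__ == "__main__":
    random.seed(0)
    n = 20
    objectives = [random.uniform(-1.0, 1.0) for _ in range(n)]    # stand-in for a value system
    capabilities = [random.choice([1.0, 5.0]) for _ in range(n)]  # mix of weak and strong agents
    for t in range(200):
        step(objectives, capabilities)
        if t % 50 == 0:
            print(f"round {t:3d}  dispersion {dispersion(objectives):.4f}")
    print(f"final     dispersion {dispersion(objectives):.4f}")
```

A real experiment would use actual language models and far richer objective representations; the point of the toy version is only that "do swarms of unequal agents converge, and who moves whom" is a question you can operationalize and measure rather than argue about in the abstract.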
01:47:10.600
please please please if you want funding for something in the space if any of these ideas
01:47:13.880
felt interesting to you please go to this website okay we would be very happy again we prefer
01:47:19.160
projects that one day can become cash positive also if you run or work within an
01:47:27.560
existing ea network i think we need to get to a point where ea networks make a choice
01:47:33.080
are we hard ea or are we soft ea what do we stand for right and if you want to be
01:47:39.960
nicer are we hard ea or are we legacy ea are we actually willing to take stances to try to protect
01:47:48.200
positive timelines or are we just about maximizing our own status within the existing society
01:47:55.240
we'll do some things but nothing that could rock the boat are you willing to be different and i
01:48:01.400
think that that's the core thing about hard ea is hard ea are the people who are willing to
01:48:06.040
have the general public mock them and ridicule them and say they're the baddies and with the
01:48:11.480
vibe shift that has happened since this election cycle i am even more confident that it is possible
01:48:16.840
that we can fix the ea movement through popularizing the concept of hard ea
01:48:22.680
and i think that's the big thing is the first step was taken with original ea where
01:48:27.640
one of the classic cases was do you want to go be a doctor in a developing country and save maybe
01:48:33.480
three lives a day or do you want to go be a consultant who's not necessarily seen as
01:48:38.680
a do-gooder but make a ton of money donate 10 or more percent of your income to really really really
01:48:44.360
effective but kind of boring sounding charities and then save 10 lives a day 100 lives you know a month
01:48:50.760
you know just a lot more than a doctor ever could you know if that's how you spent your
01:48:55.960
everyday life what we didn't do was take it one step further which was let's actually look at what
01:49:02.280
we'll do the most for human flourishing over the long term and not just necessarily focus on like
01:49:07.720
saving a life right now today although that is really important a lot of people are already
01:49:11.320
working on that that's one of the reasons why we're not but instead look at those things that are not
01:49:16.120
only not the most signaling-friendly thing in your day-to-day actions but also not the most popular
01:49:24.520
things in the view of society in terms of a cause area right yeah well and you need to be okay with
01:49:31.080
people shaming you i mean i think that's what hard ea is when you're at a meeting and you're like let's
01:49:35.400
do x and somebody says oh don't do x x could be offensive you're like well that's exactly why we need
01:49:40.120
to do x because we're the only one doing the offensive thing yeah we're the only one trying to solve
01:49:44.680
the problems that everyone else is like oh we can't solve this because it might be offensive
01:49:48.760
like that is the reality of the world we live in if we want to survive there are short timelines on
01:49:54.360
all of humanity's existential crises right now and we just don't have the luxury for this sort of
01:50:01.240
idleness anymore and i'd even admit that you and i have been sinners in this regard and overly
01:50:06.360
poo-pooing ai alignment stuff when like i knew ai alignment was a problem but i sort of thought oh people
01:50:12.040
are focused on it in the wrong way or over focused on it now that i realize that we are prominent
01:50:16.040
enough that we need to take responsibility for this and i'm like well we need to take responsibility
01:50:20.680
for this like personally we need to try to fix this and that's what i'm going to do i'm excited about
01:50:26.520
this i i'm glad we're like i'm really excited if we can make this grow if we can make this a thing
01:50:32.680
and so this also means if you're hearing this and you don't have the money to donate for something like
01:50:37.560
this yourself if you have friends who identify as classic ea you know get them to make a stand
01:50:43.960
are they hard ea or are they legacy ea what do they actually care about do they actually want
01:50:50.120
to save the planet or are they only here for the signaling club and if you're in an existing
01:50:54.360
ea organization i don't think all of these organizations have been totally captured i think
01:50:58.040
some of them can say you know what we actually identify more with the hard ea philosophy and definition
01:51:04.040
of good than the legacy or soft ea definition of good i actually want to try to fix the existential
01:51:11.640
crises that our species is facing and not just look good to other people and i think that now we're at
01:51:17.480
sort of this decision point as a species yeah what are you going to do and i'm excited for this so
01:51:22.120
thanks for getting us started with it it's going to be really fun
01:51:24.600
okay what did you want to do for food tonight i have burgers you want burgers yeah if you could
01:51:34.760
make some burgers with that meat you got chop up some onions and then uh toast up or however you cook
01:51:40.520
bread for like grilled cheese oh you know what might be good is burger meat with grilled cheese and i
01:51:47.480
will mix in some onions and stuff and i will you know eat it bite by bite with the grilled cheese like
01:51:52.360
it's like think of it like an open oh so just plain hamburger patties and then grilled
01:51:58.280
cheese sandwiches yes yeah i can do that thank you and i know smash burgers are a pain to make so
01:52:07.400
just make regular ones oh but do you put some montreal steak pepper in the burger as you're cooking
01:52:11.880
it because it tastes really good yeah but not too much i love you simone i love you too and if you're
01:52:18.200
like oh where does this movement meet where do they talk just go to the based camp discord it's a
01:52:23.960
very active discord there's people on there all times day and night and because it's discord it's
01:52:29.320
not based on a tyranny of the unemployed type problem like you have with the ea forums and if you want to
01:52:35.160
go to an in-person meeting you could just go to the natal conference this year uh discount code collins
01:52:39.960
for a 10 percent discount because anyone who is realistically trying to create a better future knows that pro
01:52:47.160
natalism is easily tied for the most important cause area anyone should be focused on right now
01:52:53.240
with ai safety and so the real eas are the real people who care about the future of the species
01:52:59.640
and want to be involved in that discussion they're going to be at a conference like this while the ones
01:53:03.720
who don't actually care about the future of the species and are more concerned with just looking
01:53:07.480
like a good boy and getting social approval natalcon keeps them away like uh it's covered in talismans
01:53:13.960
because they're so afraid of being connected to free speech or pronatalism or any of that stuff
01:53:27.320
the goal was to reform charity in a world where selfless giving had become a rarity
01:53:34.680
no vain spotlight no sweet disguise just honest giving no social prize but as the monoculture took the
01:53:43.640
stage it broke their integrity feigning righteous rage now every move is played so safe ignoring
01:54:10.520
once they were bold now they just do what they are told
01:54:16.520
in caution they lost their way time for a hard ea
01:54:29.640
they duck their heads from problems grand as fertility collapse