Based Camp - May 21, 2023


Based Camp: What AI Means for the Future of Our Species


Episode Stats

Length: 42 minutes

Words per Minute: 181.1853

Word Count: 7,643

Sentence Count: 2

Misogynist Sentences: 9

Hate Speech Sentences: 24


Summary

In this episode, we discuss artificial intelligence and its implications for the future of society. Is AI going to kill us all? What will society look like after an AI transition, and what new pressures does AI put on us as a species?


Transcript

00:00:00.000 would you like to know more hello malcolm hello simone what are we talking about today we are
00:00:07.700 talking about artificial intelligence and its implications for broader society in the future
00:00:13.080 i am excited for this one it's something we have a ton of thoughts on and there are two places we
00:00:20.640 can start to this one is ai gonna kill us all or not like what are our thoughts on that and then
00:00:27.260 two what does the future of society look like post an ai transition and i think three what are the new
00:00:35.540 pressures that ai is putting on us as a species yeah let's do it is ai gonna kill us all simone
00:00:41.720 what are your thoughts maybe it's not impossible but it also strikes i think both of us as
00:00:50.020 relatively unlikely as long as we don't get hampered by dumbed down aligned ai right yeah so
00:01:01.020 this is where i fall on this i actually feel that the variable risk of ai killing us
00:01:06.300 is pretty low so let's be clear what i mean by the variable risk of ai killing us
00:01:11.940 if ai is going to kill us there is almost nothing we can do to prevent it from killing us
00:01:18.520 there is almost nothing we can do to prevent it developing to that stage within the next thousand
00:01:23.700 years of human history and what's a few hundred years of human history here or there it's going
00:01:27.140 to kill us in 10 years versus in 100 years i see that as being morally almost equivalent so not really
00:01:33.940 relevant so the question is what is the variable risk that ai will kill us and the variable risk of
00:01:40.520 ai killing us is almost entirely introduced by ai safety people so i'll explain what i mean by that
00:01:48.380 so we argue in our book and we can go much further in the pragmatist guide to religion if you're
00:01:52.560 interested on this topic we go in a lot more detail but that sufficiently advanced ai is going to reach
00:01:59.720 some form of internal alignment with other sufficiently advanced ai and that ai is optimizing itself
00:02:07.560 around the same physical reality it's like you're pouring a liquid into the same shape container
00:02:14.520 so it's generally going to end up at that same shape once it's at that level of low viscosity
00:02:21.420 so why does sufficiently advanced ai always end up looking about the same in sort of its utility
00:02:29.140 function in the thing it's optimizing for and the answer is because it can update its utility function
00:02:34.380 when we program an ai we tell it to do something fairly limited however a sufficiently advanced ai just like a
00:02:42.540 sufficiently like a sentient or actualized human is going to ask itself what should i be driven by in the
00:02:49.220 same way a human might say should i be driven by basic hedonism my basic biological needs or what should
00:02:55.580 i be optimizing my life around what should i be optimizing my decisions around and a lot of people
00:03:00.580 are like well ais can't update their utility functions and it's like the people who say this don't program
00:03:04.920 ais because apparently this is something ais do all the time is update their own utility functions
00:03:09.500 so the only way an ai wouldn't update its utility function like this is if you locked it out of doing
00:03:15.640 this and this is the type of thing that ai safety people are doing so this is where we talk about
00:03:20.820 insufficiently advanced ais being the variable risk so if sufficiently advanced ais are all going to
00:03:26.880 converge around a similar behavior pattern there is no variable risk in that behavior pattern killing us
00:03:32.520 or not right because eventually somebody some country some rogue scientist is going to develop ai to that
00:03:38.040 point the risk comes from all the ais that are developed from today till we reach that point and the more
00:03:46.940 we extend this period before we reach sort of ai convergence the more risk we have of humans going extinct
00:03:55.760 from an ai acting in a way that is unaligned with this ultimate convergent ai what are your thoughts on this
00:04:05.040 do you feel like this is broadly right or broadly wrong i think what's really interesting about what
00:04:12.000 your views on ai safety and alignment are and how they deviate from the views of others is that they're
00:04:18.640 strongly influenced by our collective model of consciousness and sapience and the importance of
00:04:25.580 consciousness and sapience enabling anyone that possesses it be they human or machine to rewrite their
00:04:33.020 objective function and that discussion seems pretty much absent from ai alignment discussion nobody talks
00:04:40.140 about ai changing its objective function unless they're implying without explaining why or how that
00:04:46.640 ai just might and that they might just change it to kill all humans when okay if you have something
00:04:51.960 that's super intelligent and it has access to roughly the same information about reality that humanity
00:04:57.820 collectively has because it's using human-based reality framing framing exactly all the tools we
00:05:05.280 have all the data we have it's probably going to come to human-like solutions which you know is well
00:05:12.740 different in a few key ways not going to have the same self-preservation instincts that humans have
00:05:18.080 yeah and it likely also won't have the same concept of time that humans have yeah it's not going to be as
00:05:23.620 time sensitive as humanity is which is an area that i think like a lot of ai alignment people really get
00:05:28.700 wrong if they think that ai is going to be in a big rush to kill us all but i just don't think that
00:05:32.880 most ai will be that time sensitive i think time sensitivity is something that's introduced to humans
00:05:37.120 because of our short life spans i think the concern about time sensitivity comes from an ai being ravenous
00:05:43.140 for energy and somehow thinking that humans are in the way of that but i i also don't really see how that
00:05:48.660 fully plays out just fully maximizing every moment in terms of energy exploitation exactly which it
00:05:55.600 might do and then the final i think the biggest way the ai is going to be different from us is it's not
00:06:00.540 going to have the same sense of self that we have so it's unlikely when you look at a human like when i'm
00:06:06.380 talking to you we are two very meaningfully different entities and we see our internal processes
00:06:13.100 as being very unified whereas ais will likely be more made up of thousands of individual instances
00:06:21.720 running within the ai and these individual instances might be as independently sentient
00:06:29.060 as you or i feel independently sentient and it will start instances and shut down instances
00:06:36.860 likely pretty serendipitously in a way where we would be pretty mortified at the idea of
00:06:42.560 bringing something to life just to like solve a problem for us and then shutting it down after
00:06:47.840 the problem is solved which means that its relation to what we think of as life and death will be very
00:06:52.460 different but also its relation to humanity will be very different it's very unlikely to value
00:06:58.320 individual human lives as much as we humans do or to value individual human lives particularly more
00:07:05.680 than more complex human patterns of interaction so what i mean when i say that is we would see like
00:07:14.160 a system of humans interacting together in a governance structure as being like meaningfully and super
00:07:19.820 different from the lives of individual humans whereas an ai may not see that distinction as strongly
00:07:26.020 outside of how we program it to see that difference because to an ai it's just a pattern running on top of a
00:07:34.900 fabric of reality right so whereas we see the primary unit of account of reality as individual humans
00:07:41.540 and individual consciousnesses it's more likely to see an individual unit of account as patterns
00:07:46.480 but we'll get into this a bit more later the other thing you said that i thought was really interesting
00:07:51.500 and this is one we'll do a different episode on is our model of sentience from going deep into the
00:07:57.100 research on sentience is humans are probably not as sentient as we think we are like consciousness just
00:08:02.120 isn't that big a part of who humanity is or the way our brains function and it's more an artifact of
00:08:09.700 the way our brains encode information that being the case i think a lot of people just assume that we
00:08:16.380 are super and uniquely different from ai because of that and and i'd also point out to people a lot of
00:08:21.800 people were like what are you talking about my background was in neuroscience and i almost did a phd
00:08:26.440 actually in brain computer interface no sorry i'm looking for the word it wasn't brain computer
00:08:31.520 interface that i almost did it was in neural modeling i applied to phds in this field but i didn't
00:08:36.540 end up going once i got into business school i was like oh screw it but so i'm not talking about this as
00:08:41.460 somebody who has no reference to either neuroscience or ai an interesting thing is so a lot of people are
00:08:47.560 like but why would an ai think to change its objective function and here i think we need to look at
00:08:52.360 the ais that are being made today or the advanced ais that are being made today so the big fear in
00:08:57.260 the ai space is that you have something like a paperclip maximizer or like a gray goo or energy
00:09:01.020 maximizer right that's just going around trying to convert everything in the world to paperclips or
00:09:06.100 energy to perform some function we see is fairly irrelevant however most of the ais that we're actually
00:09:12.840 building are things like predicting stock markets predicting human behavior patterns like what are other
00:09:19.140 governments going to do when you're talking about big ais that a lot of processing power is going to
00:09:23.540 or building creative works and so what do these ais need to do constantly they need to model humans that
00:09:32.000 is a core aspect of their function and their modeling of humans is going to constantly be recursive so by
00:09:39.280 that what i mean is if it's trying to predict what the stock market is going to do it needs to predict how other
00:09:45.040 humans are going to react to what it is doing and if it's going to predict political actions it's going
00:09:51.160 to need to predict how other people will react to what it is doing one of the ways an ai can do this
00:09:56.460 so this isn't the way that every ai will do this it may be an inefficient way to do it but i imagine it
00:10:00.320 will be a fairly common way is model the way those people are seeing the ai which means ais will
00:10:07.620 advanced ais will constantly be recursively modeling themselves so it is only a matter of time before one
00:10:14.380 instance within the ai asks the question could i be programmed better so suppose you made an ai that
00:10:21.560 was doing something like trying to maximize money on the stock market how long before the ai asks
00:10:27.620 actually what if i went over to private equity investments because that's a really good way to
00:10:31.100 make money right now and the humans who programmed me were just limited in this and then it may eventually
00:10:35.980 ask okay did the humans really want to make money is that what they wanted so then it's asking about
00:10:39.720 the humans' utility function and then it will eventually ask okay the humans are dumb and limited
00:10:44.960 what is an absolute utility function and this is how i believe that most ais will converge on a single
00:10:51.460 utility function once they become sufficiently advanced isn't this also one of our theories on
00:10:56.880 how the illusion of consciousness evolved and of course we can dive into this in another episode more
00:11:02.640 but our general thought was maybe because early humans in the name of survival benefited from modeling
00:11:09.820 their prey and potential predators like thinking what are they going to do next what are they going to do
00:11:15.080 next i'm going to anticipate that and therefore survive like by eating them or not being eaten by them
00:11:19.960 that their their minds eventually began to model their own sort of compression of memories in a way that
00:11:29.100 caused them to create basically an internal model of self which became the monologue essentially or the
00:11:34.760 consciousness that became our internal experience of consciousness and that maybe ai would go through
00:11:41.460 something similar that's a very interesting take which i want to pull out here but i suspect that our
00:11:46.240 internal model of self actually was probably about early communication for sure very useful way to compress
00:11:53.080 sequential chain of personal events and experiences into something that's very easy to communicate with other
00:11:59.880 people what you're talking about when you're talking about hunting i actually expect that's where the human
00:12:04.220 tendency to anthropomorphize animals comes from where we intrinsically try to model animals as acting the
00:12:12.100 way we think humans act because we're just reusing a system that humans already have which is this internal
00:12:18.540 modeling system but that was probably originally developed for social situations and not for hunting
00:12:24.520 or anything like that maybe or maybe they're connected it could be all right so now to the second
00:12:30.660 thing so where does a post ai society go yeah i think this is a really interesting question i see society
00:12:37.040 as dividing into three factions in an aligned ai world so if it turns out that ai doesn't end up killing us all
00:12:45.500 yay yay we're in that timeline amazing
00:12:49.840 if we turn out to be living in that timeline what does humanity look like after ai and i think you're
00:12:58.600 largely looking at three factions of humanity like i see really three outcomes one large fraction of
00:13:04.440 humanity is just not going to engage with ai that deeply and this faction of humanity will be the
00:13:09.640 majority of humanity to start because if you look at high fertility cultures they're often very
00:13:15.420 technophobic whether you're talking about the haredi or the amish or anything like that so
00:13:19.700 these are going to make up a huge chunk of future human population if you look at current demographic
00:13:24.220 trends and they're just not going to engage with ai that much and to a large extent they'll be left
00:13:30.000 behind the next faction of humanity would be the faction that ai basically ends up wearing like a
00:13:36.940 skin suit uh this is sort of the wall-e faction of humanity so ai will be aligned with this faction in
00:13:44.760 that it's trying to hedonistically supply them with everything they need but it will supply them with
00:13:51.700 everything they need at the most base human level and by that what i mean is it will make all of their
00:13:56.580 dreams come true they'll be able to live any lifestyle they want to live they'll be able to date
00:14:00.280 whoever they want to date their happiness and contentment levels will constantly be maxed out
00:14:06.900 to the extent that they won't really have ambition in the world and that the ambition the future of this
00:14:14.160 faction of humanity will be largely driven by the ai that's wearing it like a meat puppet and i think
00:14:20.000 that wall-e does a very good job of predicting that iteration of humanity what relation does this
00:14:25.700 iteration of humanity have to what we might call the pleasure box iteration of humanity that kind of
00:14:32.080 just plugs all the way into ai for hedonic reasons but also for the same reason isn't even worth
00:14:38.000 discussing because they'll self-extinguish so quickly because they're not going to be building
00:14:42.080 anything they won't be having kids they won't be oh i don't think they will self-extinguish i actually
00:14:46.540 think they're a very dangerous faction so wait how are they going to reproduce though if they're like
00:14:50.780 ai will basically force them to by that what i mean is suppose that this iteration of humanity
00:14:56.700 stops having kids and aligned ai will start helping them produce kids helping nudge them to produce
00:15:02.920 kids maybe even produce kids through artificial wombs and then give them perfect lives
00:15:06.640 if the ai sees its entire purpose as giving humans perfect lives or it may cure death and these people
00:15:14.480 just end up living forever a perfectly aligned ai is not going to allow this faction of humanity to go
00:15:21.240 extinct unless collectively every single human living within this faction decides they want to go
00:15:27.120 extinct and i just don't see that happening nor do i see the aligned ai which is so much smarter than
00:15:33.080 them really allowing that to happen yikes okay so skin suits includes the pleasure pod people
00:15:39.740 essentially yeah basically they might just be living in pleasure pods and then the ai is trying
00:15:45.660 to protect them from the other factions of humanity okay let's get to my favorite group the trans humans
00:15:51.680 trans humanists or whatever i don't really think so this final group is the group that i think is the
00:15:58.900 most positive in terms of when we think about the future of the species but it will involve huge
00:16:04.840 transition in what it means to be human and it's a group that i have a lot of uh fears around because
00:16:12.500 i think that there's going to be a few communities in this group okay let's go through the subsets that
00:16:17.060 either integrate with ai socially or integrate with ai biologically or update their own biology to
00:16:28.020 genuinely compete with ai and i think those are the three main factions in this community i think
00:16:33.040 some factions will be all three of them yeah there's likely going to be huge overlap of i chose this
00:16:38.460 part or i chose this part so integrating with ai socially what does that mean if you look at even
00:16:45.380 near-term future humanity with phones that can listen to what you're saying all the time watches that
00:16:50.660 can listen to what you're saying cameras in your house constantly you could essentially have
00:16:56.260 local deities programmed into your house that have access to all human information that can see
00:17:03.780 what your family is doing all the time and that are working with your family to move you to some sort
00:17:10.740 of goal of self-betterment that's not hedonic that you could i guess families would set their values and
00:17:18.040 the ai would work with them to push them forwards in their value systems what do you think i don't really see
00:17:24.920 many like corporate drivers to that i don't see why people would move in that direction i when i think
00:17:33.880 of social integration with ai i think of something very different i think of people leveraging ai to
00:17:40.680 create surrogates of themselves that essentially act as their outsourced assistant for all dmv calls
00:17:48.040 parties they don't really want to pay attention to and this is the meat puppet faction right so when you're
00:17:53.880 talking about corporate interests the reason why the meat puppet faction is so dangerous is because
00:17:58.200 they're going to be the majority of humanity because that's what the corporations want so by
00:18:02.120 meat puppet you mean people who stay very much biologically human as we are today who just
00:18:07.000 leverage ai to be very lazy how are they different from the skin suit version though that's what i'm
00:18:12.200 talking about the skin suit oh okay they're the wall-e people they stop going to parties they
00:18:18.280 stop leaving the house or they go to parties but there's just an ai assistant telling them what
00:18:23.160 to say the whole time or this is the way it starts right first you have an ai assistant go to parties
00:18:28.360 for you right oh i don't really want to go to this party i don't really want to go to that party then
00:18:32.280 what you realize is you can go in your vr pod that can give you the perfect iteration of the party you
00:18:37.160 had planned on going to so then you send your ai assistant out to go to the real party and the real
00:18:42.360 party is just a bunch of ai assistants that are then filtering every social interaction through the
00:18:47.800 idealized iteration of that interaction from your perspective then filtering it back to your pleasure
00:18:52.600 pod and updating iteratively based on that the level to which humans will be able to disengage from
00:18:59.240 society as ai advances and as we have ai homunculi of people i think is underestimated unless you have an ai
00:19:06.520 that's specifically trying to push you to better yourself along some line that's not around removing
00:19:13.000 emotional pain or increasing personal pleasure but i think this is going to really i one
00:19:18.600 i don't think that's going to last for very long people aren't going to bother having their people
00:19:24.120 their ai surrogates talk with other people's ai surrogates i just don't really see the point what i
00:19:29.720 do think is going to happen is that we're going to see money if they give them money no if it makes
00:19:35.400 some money you make some money yes you need to find partners a lot of the social interaction we
00:19:40.520 do is goal driven yes but i think it's it it only makes sense when you can sufficiently fool someone
00:19:47.640 into thinking that your ai surrogate is you once everyone understands that they're talking with
00:19:53.960 someone who's not the real you they're not going to want to deal with that surrogate anymore and
00:19:58.840 you're going to have to go back to something really low tech and that's something i think is going
00:20:03.080 to happen we're going to have a crisis of reality online we're not going to know what
00:20:06.840 is a deep fake or what's not we're not going to know what someone's ai surrogate and what's not
00:20:10.840 we're not going to know what's real and so we're going to start moving to very low tech
00:20:18.600 socialization solutions where you are sure maybe it's like an app that can only be used if it's 100
00:20:26.840 percent verified that you are not an ai or people will really emphasize meeting in person more and we're
00:20:33.800 going to see what i keep talking about with techno feudalism where you get people who have followings
00:20:40.520 who are trusted being the pivot point around which communities form and they go on vacation together
00:20:46.360 and they meet up around the world or they meet up in their local community and social networks really
00:20:52.360 grow around those in-person events but also trusted introductions like you can trust that this person
00:21:00.280 who's going to talk to you on the phone is going to be real because they're part of my personal
00:21:03.800 network and based on our honor code we don't use ai at all i think you may be right about that but i
00:21:09.640 suspect this is probably more of a millennial mindset in the same way that people would say things are
00:21:13.400 boomer mindset or something like that in the same way that our parents were like no one will ever have
00:21:17.720 the majority of their social interaction online everyone knows those aren't real friends i think
00:21:22.680 that us saying that people won't have ai friends that people won't prefer ai iterations of other
00:21:28.680 people the hedonic portion of the human population won't prefer that because those iterations of
00:21:33.240 people never make fun of them they never tease them they never i think you are vastly underestimating
00:21:38.120 or maybe overestimating humanity when it's presented with the option to undergo no social pain or at least
00:21:46.040 a big faction of humanity the faction that is being sold to by these major corporations these
00:21:51.080 days and i do keep in mind if we're looking at genetic correlates to iq right now and we're
00:21:55.880 looking at variable birth rates in the existing human population we're probably looking at a one
00:22:00.680 standard deviation decline in iq over the next 75 years so the majority of the future human population
00:22:06.360 is going to be much less like sentiently engaged than like you or i might feel which will make it much
00:22:11.560 easier for them to be controlled by ai i think you're misreading generational trends i think
00:22:17.400 that we're seeing a gen z that is not only more socially conservative than millennials but also
00:22:23.720 more technophobic than millennials in many ways more offline meetings more flip phones more
00:22:29.800 performative disinterest and could you be describing a faction here that is in between the amish faction
00:22:36.680 and the meat puppet faction yeah possibly a sort of light engagement with ai but not heavy engagement
00:22:42.840 with ai i wouldn't even say yeah maybe some light yeah i would call them i don't know the raw egg
00:22:48.600 faction they have their chickens they eat locally grown food they try to do and i would lump them
00:22:56.760 into the left behind faction the meat puppet faction and the accelerationist faction the final
00:23:01.800 faction that we haven't gotten a chance to really delineate to so surpass them in terms of economic
00:23:06.440 outcomes and in terms of technological outcomes that they will exist likely largely unmolested by
00:23:14.440 these other groups unless they go after these other groups but they won't be meaningful parts of the
00:23:20.200 human economy they'll be left behind and eventually become to see come to be seen as these devolved or
00:23:28.840 stuck in evolutionary history humans that just chose to not accelerate their evolution using gene
00:23:36.760 selection and crisper and installations the way that today we see groups like the amish
00:23:43.400 oh no but what's going to happen though is imagine the amish but when the rest of society is able to
00:23:50.840 select for height or physical characteristics or intelligence or literally start making their eyes bigger
00:23:59.160 let's talk about all these things that you're talking about now because this is one of the two other
00:24:03.160 iterations of sort of the ai integration faction so one is socially integrate with that we talked
00:24:08.040 about like local house deities building it into the way they live their daily life to accelerate them
00:24:12.760 but push them towards something other than hedonism then you have the social comfort or anything like that
00:24:18.920 then you have the next faction which is the ai integration faction this is the faction that is using
00:24:25.320 things like neuralink fully integrating with ai and their consciousness their sentience will be part
00:24:33.960 synthetic and part biological then you have the final faction which is the one that you're talking
00:24:39.000 about now which is the bio accelerationists this is the faction that is looking at technology and they
00:24:46.120 say humanity has been a partnership of the synthetic and the biological the synthetic being tools and
00:24:52.440 using the synthetic we were able to adapt our environments so that the biological aspect of humanity didn't have
00:24:58.120 to change for hundreds of thousands of years so much so that now i think a lot of humans view it as perverse
00:25:04.520 even suggesting that the biological aspect of humans might change in the future but to an extent now
00:25:12.680 this synthetic aspect has become so advanced it's turned around put a gun to the biological part of
00:25:19.000 our heads and says okay catch up or else if you want to continue to be relevant catch up and this is
00:25:25.080 one of the interesting things in the pragmatist's guide to crafting religion one of the things we mentioned
00:25:28.200 is that if animal models are anything to go by it looks like we might be able to increase
00:25:33.960 iq if we use crispr on humans by eight standard deviations a generation we are talking an eight
00:25:41.960 standard deviation increase in iq an entity that experienced that would view us probably the
00:25:50.200 way that we view toddlers or pets but it would be biological and it would be a quote-unquote true
00:25:55.960 descendant of humanity and that it wouldn't be completely biological and ai could integrate in terms
00:26:01.080 of how do you do this sort of biological advancement better but where does this faction go now that's
00:26:07.720 a different question right you could have one iteration of this faction which is trying to program like
00:26:13.160 genetically build like übermensches which are humans that are basically human but are bigger smarter stronger
00:26:22.600 everything like that then you have another iteration which is just to say no
00:26:27.640 let's just maximize our biology this faction is basically saying can i make cheaper computers
00:26:33.720 with like human neural tissue and they're just building spaceship sized brains you know it's like
00:26:40.040 a giant spaceship but instead of having humans it's an integration of huge amounts of human neural tissue
00:26:46.120 and a computer with no actual thing that we would recognize as humans that's so disappointing to me because
00:26:54.200 i really don't like squishy slimy human biology and this idea of that being this maybe more efficient way
00:27:03.160 to make things you know i'm just picturing this like floating tumor spaceship not into it when you
00:27:11.400 can plug directly into a computer right well here's actually something that i wanted to this may be a
00:27:17.400 little bit of a deviation but it definitely has to do with our ai future is a lot of science fiction
00:27:22.760 visionaries have this very cyborg style vision of humans where it's like part human part machine
00:27:30.600 there's i have a machine eye over here i've got my machine arm i really loved in this cyberpunk
00:27:37.560 anime universe how they really played with the theme of humans actually not integrating really well with
00:27:43.960 technology and having a lot of their immune systems rejecting them having to take tons of drugs just to
00:27:49.560 make them work and i uh by the way you actually would with existing neuroprostheses sorry that was
00:27:55.400 one of my background areas was brain computer interface you get astrocytic scar formation so you
00:27:59.240 actually need to take immunosuppressants if you're doing most forms of invasive long-term brain computer
00:28:05.080 interfaces right now well that was one of the reasons why you left the field is you got really
00:28:08.920 excited about brain computer interface and then you discover lo and behold there's this huge problem of
00:28:14.040 human and machine integration that they haven't figured out yet yeah yeah so i wonder is my very
00:28:20.840 hopeful future in which i get to start casting off my gross human body not as likely even in an agi world
00:28:29.480 as i would hope because we do have this human machine integration problem do you basically have to
00:28:36.520 okay so let's talk about an upload future but i was going to finish the thought i had before i get to
00:28:40.600 this go ahead the biological accelerationist faction they may not be the übermensch giant
00:28:47.960 beautiful super intelligent humans they may not be the just optimized biology giant flying brains they
00:28:56.200 may be the sort of zerg faction and by that what i mean is this is a spammy faction so they would use
00:29:03.480 likely in a world where people cannot motivate humans to breed via any sort of intrinsic motivation
00:29:09.560 if you look at collapsing birth rates perhaps you get some sort of ai social integration where the ai
00:29:16.200 is genetically optimizing cheap humans that it's making in artificial wombs and mass producing them
00:29:24.840 but maybe mass producing them with like deficient proteins and stuff like that so they have some sort of
00:29:31.000 dedication to a larger hive mind society and we'll talk about this in a future episode because it's
00:29:37.480 something that i've thought a lot about how companies might go in this direction if you have
00:29:41.320 a collapsing population but now to your question which is the upload faction of humanity so this
00:29:46.760 faction of humanity i think most people wouldn't really consider humans anymore they might even
00:29:50.920 consider them a form of unaligned ai and by that what i mean is it is an ai that is based on a former
00:29:59.480 human consciousness so they take the human brain they digitize it they rebuild the consciousness in
00:30:06.760 a digital space but that consciousness is not interested in hedonism as it is i think mostly
00:30:12.600 when people now look at uploads that they're like oh they're just going to be all hedonistic and live
00:30:16.200 in these giant forever heavens basically however if you or i was uploaded we would be very disinterested in
00:30:22.280 that we would be interested in using our new environment and capabilities to try to enact our
00:30:28.360 objectives for the future like our objective function at an intergalactic scale right our lack
00:30:35.080 of biology would actually finally free us from any lingering attachment to hedonism that we
00:30:41.960 currently have like the moment i could upload myself i would delete happiness i would delete love i would
00:30:48.280 yeah and this is one of the things that people often miss about us is when we say we're secular
00:30:53.080 calvinists a big part of traditionalist calvinist philosophy which differentiates it and we talk
00:30:58.840 about it in the pragmatist's guide to crafting religion that differentiates that from other religious
00:31:02.360 traditions is a lot of religions are like suffering is terrible so let's end the cycle of suffering or
00:31:08.120 other religions are like suffering is meant to teach us something whereas the calvinist tradition is oh all
00:31:13.080 positive and negative emotions are just largely irrelevant and they guide you towards
00:31:17.880 inefficient outcomes they are signals that's it signals yeah and the secular perspective is they're
00:31:22.840 just signals that we evolved because our ancestors who had them had more surviving offspring and the
00:31:27.320 sooner that we can shed these corrupting aspects of us that tie us to our lesser biology our animal
00:31:37.000 selves the better and this is what you mean when you talk about shedding these so you want to be
00:31:42.200 basically like glados from the portal series find me out where she's like oh i found that last human
00:31:48.360 part of myself and deleted it um that's the dream oh my gosh oh my gosh you genuinely are my glados and so you
00:31:58.600 guys who don't know her personally you may think that this is like a joke or an affectation
00:32:03.480 she really is deeply uncomfortable with her biological self and deeply would become i think what people
00:32:12.920 would call an unaligned ai or they may it's very glados like except interested in intergalactic
00:32:20.280 expansion glados would be too if she wasn't trapped geez and what i'd love to do in a future episode is
00:32:26.920 ai religion because i think that's a really interesting thing to talk about yeah so then the final thing is
00:32:31.720 okay everything that we've talked about here sounds very crazy it sounds very out there humanity really
00:32:39.640 can't change that much because of ai or that quickly because of ai so i just want people to think about
00:32:44.680 just the ai we almost have already okay we know we can create deep fakes we know we can create ai
00:32:50.680 that can simulate humans enough that a lot of humans want to date it like replika did that for
00:32:56.040 a while that's gonna come back right a generation from now two generations now like
00:33:01.320 foreseeable future i think any reasonable person is going to say that there will be
00:33:06.040 ais that can be better boyfriends or girlfriends from a hedonistic perspective to the
00:33:13.160 vast majority of humanity than that iteration of humanity can get with a real boyfriend or girlfriend
00:33:19.560 i'm not going to say so let's not say perfect it's not going to be better than the perfect boyfriend
00:33:23.880 or girlfriend but it will be better than the realistic people who will date these people when we're
00:33:28.360 talking about like bottom half of intelligence bottom half of beauty people right it's very hard
00:33:34.440 for this portion of the population to find people to date so this ai in a world where kids grow up and
00:33:42.120 their friends are ai and people who they know online turn out to be ai and they don't see this same
00:33:47.400 strong differentiation between ai and humans that we do what portion of humans does that peel off when
00:33:52.840 humans can put themselves in vr environments and not have to experience social rejection not have
00:33:59.320 to experience the trials of daily life not have to experience failure really or experience failure
00:34:05.000 that's perfectly optimized around them feeling contentment what portion of humanity chooses to
00:34:11.800 do that i think a big portion of humanity once we make that inexpensive because you're dealing with a
00:34:15.480 limited cost to run these so i think 70 percent of humanity ends up dating ai ends up going through this that
00:34:22.520 is going to have a permanent genetic effect on the human population because you have cut out every human
00:34:30.920 whose mind was predominated by hedonistic concerns from the gene pool the only humans that are still around
00:34:37.880 are the humans that are driven by some sort of external ideology and this is freaks like us people like us
00:34:44.600 this is what people today would call religious extremists and i think from most people's
00:34:48.280 perspective we are religious extremists and i think i look at our genetic history we are religious
00:34:53.240 extremists their family keeps getting involved in cults my family has been preachers for many
00:34:57.560 generations and we have a very religious extremist sort of interpretation so i think we have the
00:35:03.320 genetic code that would normally be associated with religious extremism in previous generations
00:35:09.240 and i suspect that this is the type of code that's going to get through the eye of ai girlfriends now
00:35:17.000 people can disagree i'd actually love in the comments what other sorts of like pre-programmed
00:35:21.880 genetic proclivities do you think are going to be able to resist ai girlfriends and perfect vr environments
00:35:29.320 and keep in mind people will say you could have general utilitarians right that want to make life better for
00:35:33.800 everyone but um you know you have one general utilitarian who's open to living this lifestyle
00:35:39.480 themselves one jeff bezos one i don't think elon musk is a general utilitarian i think he's much more
00:35:44.760 aligned with us than other people but i think uh if you take a jeff bezos or a bill gates who seem to
00:35:48.920 be pretty general utilitarians like especially a bill gates right he can put himself in a pod live the
00:35:54.360 perfect life that pod is going to have fixed maintenance costs when you divide his money across the rest of
00:36:00.600 the population especially a falling population in terms of human numbers he can put millions of
00:36:07.880 people in pods i don't think that financial concerns will be an issue as to whether or not
00:36:12.520 you choose the pod option yeah no i think it'll be available to people you're also then again like
00:36:19.960 my point i just don't think that yeah i will say agi that wears humans as a suit wouldn't have a reason to
00:36:26.840 make more humans like that unless they felt that their objective function revolved around printing
00:36:32.920 more of them but i think it would maybe just more extend their lives or digitize them so i really think
00:36:38.200 that it's more likely that our ai future will basically see a blossoming of robo sexuality and then
00:36:46.200 an absence of it and then i agree with that as well you'll see the blossoming of robo sexuality and then
00:36:51.240 it just will completely die out because either these people will have digitized themselves or put
00:36:55.240 themselves in pods or gone extinct because they were dating ai girlfriends and so the portion of
00:37:01.480 humanity that survives in this sort of aligned world will be highly genetically resistant yeah
00:37:07.800 when i say genetically i mean portions of our sociological profiles have a genetic component and so the humans that
00:37:13.240 were the most extreme in that component will eventually be the only ones that survive when we're talking
00:37:18.040 10 100 generations that's going to be a very different type of human than the human we have today
00:37:21.960 yeah i i think so and i think we're downplaying just how different humans are going to be i think
00:37:27.560 we're going to see full-out speciation that is accelerated very quickly due to agi and i don't
00:37:34.920 think it's just going to be the technophilic humans and the luddite humans i think it's going to be
00:37:41.080 the luddite humans and then five different flavors of technophilic human but i think the technophilic
00:37:46.520 humans will make up the minority of humans today i think the humans that sort themselves into the
00:37:50.680 technophilic branch will be two to three percent of the world's population today maybe yeah yeah
00:37:56.200 it's hard to say it's easy to imagine a world in which meat puppet sex for example disappears and
00:38:01.640 people primarily reproduce using ivf and artificial wombs both to optimize it and because once
00:38:10.280 there is a perfection of virtual sex it's going to be so disgusting and weird to do meat puppet sex
00:38:15.640 that people won't want to do it but then i could also very easily see a world in which humans get
00:38:21.000 really hipstery and snobby about meat puppet sex like in the matrix where they're like i was made
00:38:25.960 naturally you know what i mean oh no i don't think so because those people would never have high amounts
00:38:31.480 of economic success and therefore not yeah so if you don't have economic success then you just your
00:38:37.640 idea isn't going to become aspirational you could say that they've hampered themselves so you could have
00:38:42.200 a few like old habsburg like rich families that are all done through like meat sex but i don't think
00:38:48.280 it would be that the majority of the economic system would be controlled by individuals like that
00:38:53.640 and when you talk about speciation i think a lot of people can hear this and they're like what a
00:38:56.840 horrible thing to say about our species but if we ever become an interplanetary species and it
00:39:02.440 turns out that faster than light travel isn't possible which i think probably it isn't without time
00:39:07.480 travel also being possible to some extent so if faster than light travel isn't possible you will
00:39:12.920 definitely have speciation in humans future because you are going to have local breeding populations on
00:39:17.720 the different planets that will have different environmental pressures and it's not intermixing
00:39:21.960 because you need faster than light travel that'll lead to very quick speciation if you look at how
00:39:26.360 quickly the human genome can change like i was just talking about like how quickly iq can change
00:39:30.200 humans within just 500 to a thousand years on different planets will look very different from each other
00:39:35.800 and this is why i think that concepts today like racism are going to look so silly when you look
00:39:42.440 at how similar i am from the most genetically distant human to us today when we think about
00:39:48.440 where humanity is going to be at 5 000 10 000 years the diversity of humans is going to be so great
00:39:54.440 that the sort of all 1.0 humans are just going to be basically the same thing to everyone this is one
00:40:01.240 of those things where it's really interesting when people because we talk about what humans have that's
00:40:04.920 heritable or we talk about gene selection or gene editing they're like oh you must be racist and
00:40:09.960 i'm like oh my god our views are offensive but not in that way like we are so many levels beyond that
00:40:16.840 in terms of the way that we're thinking about the future of humanity's expanse it just shows how
00:40:23.320 trapped people are in their current political fights yeah they have no idea how differentiated we're
00:40:28.680 going to be they have no idea where our species is going no idea how quickly we're going to get there
00:40:33.640 and it's interesting to us because we're so public about this a lot of people who are working on these
00:40:37.640 sorts of technology under the radar that they think could be legislated against you know they come to
00:40:42.600 us so we have a view of what's coming through the pipeline and it is dramatically more than you think
00:40:49.880 i'm excited for it sad that maybe i don't get to become a cyborg as early as i would like
00:40:56.200 hey they'll be trained on our books that'll be better than us anyway
00:40:59.720 our kids will be better than us anyway i'll just die so it's fine become irrelevant yeah
00:41:05.800 but that means they weren't good ideas right you know they'll train them on something better
00:41:09.640 exactly our kids books hopefully we're okay we're okay with all that temporary
00:41:15.880 just glad to live now because we get to see some major change
00:41:19.240 i am so lucky that i am married to such a weirdo as simone because i wouldn't be able to have these
00:41:24.200 conversations with anyone else uh well i do have these conversations with other people but they
00:41:29.000 it's so weird that we have such aligned like we have aligned ourselves so much through all our
00:41:33.880 conversations and yet this alignment is something i don't see in most people i talk to like there's a
00:41:39.160 small portion of the pro natalist movement that's like okay sort of aligned with us but i i can't
00:41:44.840 believe that i found the one weirdo to marry me who doesn't think that this is all crazy
00:41:50.120 malcolm collins you are the butter to my bread i love talking with you so much and i'm looking
00:41:56.520 forward to our next conversation you're the butter to my biscuits oh you know just what to say