Based Camp - March 11, 2026


Neural Tissue Comp Now Cheaper Than Silicon! (This Changes Everything)


Episode Stats

Length

1 hour and 2 minutes

Words per Minute

170.45164

Word Count

10,703

Sentence Count

194

Misogynist Sentences

12

Hate Speech Sentences

32


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode, we discuss the recent breakthrough of using neurons in bioinspired systems to create artificial intelligence (AI) and the ethics behind it. We also discuss the implications for the future of quantum computers and artificial intelligence.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 hello simone i'm excited to be here with you today today we are going to be discussing a
00:00:05.980 breakthrough that i hadn't expected which is that using neurons in bio-inspired systems
00:00:14.820 is now a reality that you a watcher of this show can likely afford yourself if you wanted to try
00:00:24.060 some sort of like business experiment based on this and in many ways is now cheaper than doing
00:00:31.940 it on computer and this was a huge breakthrough that changes a lot of if you're looking deep
00:00:39.340 future of where humanity goes at this point with the development of quantum computers was the
00:00:45.300 development of ai continuing one thing that a lot of people feared and this is why i say that this
00:00:50.720 is such a like a lot of people like malcolm this is horrifying like are you excited about
00:00:54.800 servitors and everything like that like humans being turned into like
00:00:58.240 the husks for a machine define the damage
00:01:01.720 have you not received pain suppressants
00:01:06.540 report to the surgical bay and it's like well we'll get to that we'll get to that but what
00:01:18.260 makes it really good is it changes worst case scenarios. Worst case scenarios for AI fooming,
00:01:25.440 taking over the world, expanding into space, historically speaking, before today, I would say
00:01:32.140 that in such a scenario as that, you know, humanity gets wiped out, there is maybe a 3% chance
00:01:39.560 that neurons or biological matter is part of whatever AI has become. We are now, like if we're
00:01:47.060 using AI estimates here, because I was going through AI, having it compile all the research
00:01:51.720 we have on where quantum computers are right now, you know, looking at computers a hundred years
00:01:55.280 from now without humans around anymore. It said 60 to 70% chance that neurons or biological matter would be part of it.
00:02:02.000 Wow. So that's, that's now the worst case AI scenario, right? Likelihood. This is, you know,
00:02:09.380 humanity wiped out or enslaved, our overlords. And what's interesting is that the part of,
00:02:16.400 And we're going to go into, okay, 50, 60 years from now, we project technology moving forwards
00:02:21.720 and sort of the jumps that we've been seeing technology moving forwards.
00:02:25.400 What does a computer look like?
00:02:27.320 You know, quantum computer is working.
00:02:29.880 We continue to see advancements in silicon-based computing.
00:02:32.900 And we see these startups and companies continue to develop at this rate with the neural computing.
00:02:39.140 What we're going to go into is what that computer is going to look like.
00:02:43.280 that does not mean the value of your existence turns negative to the contrary when it comes to
00:02:50.660 the macro management of the civil system your role has simply changed only this can solidify
00:02:56.900 the health and prosperity of future human society and what is what is i think going to surprise a
00:03:02.980 lot of people about what that computer will look like is it's not gonna look that different from
00:03:09.800 the ways that humans interact with computers today. By that, what I mean is the types of stuff
00:03:15.980 that the quantum computer part of a brain made up of silicon neurons and quantum computers
00:03:21.940 are going to handle is going to be very similar to the type of stuff that it would handle today.
00:03:29.220 Large scale, logistical planning sort of stuff. No human is actually doing that with neurons. It's
00:03:35.380 just not the type of problem that we're good at doing the type of stuff that the neurons are
00:03:40.520 going to be doing is well we'll get to it but it's the type of stuff that actually humans do
00:03:45.920 today within this arrangement the type of stuff that the silicon component is going to be doing
00:03:50.820 is the type of stuff that llms do today in this arrangement oh it's the perfect match so we're
00:03:56.500 already sort of there already yeah yes it's very interesting the the stuff that quantum computers
00:04:01.260 are really good at is almost sort of opposite the stuff that neural arrays are really good at
00:04:06.140 and so yeah let's go let's go into the tweet that you sent me that prompted this
00:04:11.940 and we're also going to go into you know the ethics of all of this why it's ethically so cool
00:04:17.280 so awesome don't don't be so squeamish about this guys from the moment i understood the weakness of
00:04:25.360 my flesh. It disgusted me. I craved the strength and certainty of steel. I aspired to the purity
00:04:37.420 of the blessed machine. Your kind cling to your flesh, as though it will not decay and fail you.
00:04:48.560 for one day the crude biomass that you call a temple will wither, and you will beg my kind to save you
00:04:59.460 a hat tip to not aldous huxley for sending this to us you rock yeah okay so the tweet goes let me
00:05:20.400 explain what just happened because i don't think people realize how insane this is cortical labs
00:05:26.200 just put 200,000 real human brain cells in a silicon chip and train them to play Doom in just
00:05:31.920 one week. Each CL1 system costs $35,000. So that's affordable for, I mean, it's expensive,
00:05:39.440 but it's not like a quantum computer or something like that. Like if you had some business idea and
00:05:45.780 you went to the bank, you could raise enough money to buy a few of these and operate them, right?
00:05:56.200 And one of the things I really want to get into is the cost efficiency of these systems
00:06:01.900 at their most nascent stage versus existing systems that we operate LLMs on.
00:06:10.160 And where they can do better and where they can do worse.
00:06:13.080 And where we're already seeing integrated systems that are doing things a thousand times cheaper
00:06:17.420 than non-integrated systems, which is really cool that we're already seeing this.
00:06:20.760 So a rack of 30 units consumes 850 to 1,000 watts combined.
00:06:27.200 The human brain operates on 20 watts.
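As a rough back-of-the-envelope on those figures (an illustration, not stated in the episode; it assumes the 850 to 1,000 watts really is the combined draw for a rack of 30 CL1 units):

rack_watts_low, rack_watts_high = 850, 1000   # quoted draw for a 30-unit rack
units_per_rack = 30
brain_watts = 20                              # quoted figure for a human brain
print(rack_watts_low / units_per_rack)        # ~28.3 W per CL1 unit
print(rack_watts_high / units_per_rack)       # ~33.3 W per CL1 unit
# So a single CL1 unit already sits in roughly the same power range as a whole
# brain, while being vastly simpler than one, which is the gap discussed below.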
00:06:30.320 So I want to point out what this means here, right?
00:06:33.420 For all of the calculations I'm going to give you that are like right now, you know, the neural systems are operating at, you know, one one thousandth of fraction of the silicon-based systems, right?
00:06:47.680 If we're talking about their efficiency, because that's what an AI that's taking over the world or whatever is going to care about.
00:06:52.540 This is what far future humans, when we're building our giant brain ships, are going to care about.
00:06:56.520 Because, you know, when you're talking about like space faring systems, you're almost always going to have like one super brain within a ship.
00:07:04.640 I assume that this is probably the way that things are going to work, which is going to be a network of some of the most advanced intelligences that you would have.
00:07:11.720 And then you will have, you know, microchips on phones and stuff like that.
00:07:14.900 If people ask why, I would say this.
00:07:17.140 So if you look today, one of the reasons that you don't see this as much is because there is an intrinsic decentralization in the way that we use computers today due to distances, personal ownership, everything like that.
00:07:31.180 But if you have a spacefaring ship, there's going to be economic reasons to, one, want the best brain on the ship to be one that's powering your navigation systems, one that's powering the decisions when the captain is asking an AI something, one that's powering that, one that's powering the projections for the colony and everything like that.
00:07:54.080 But in addition to that, because you don't have this huge amount of distance and everyone to an extent is going to be working on behalf of the ship or of the early colonies, it just makes sense to me when I'm asking my personal LLM on my phone, why not just outsource that to the ship-based system?
00:08:12.040 So we're going to see a lot more centralization when we have space colonies and space travel than we see within existing systems, which is why it makes sense to think about what do these far future systems look like?
00:08:21.180 But anyway, the point I'm making here, when you're thinking like, okay, where do we have neural tissue operating this stuff?
00:08:27.900 A rack of 30 of these units, each of which is, you know, sort of like a single small chip, right?
00:08:34.880 Single silicon chip.
00:08:36.080 They take 850 to 1,000 watts to run.
00:08:39.220 Whereas the human brain operates on 20 watts.
00:08:42.180 And what this means is-
00:08:42.980 Well, that's a difference.
00:08:44.340 Yeah, there's huge efficiency gains to be gained here, right?
00:08:48.260 Can we get more efficient than even the human brain?
00:08:50.140 you know i think probably but at least what it means within the early days if we're looking at
00:08:55.260 the other analog we have the human brain is significantly more complicated than one of
00:08:59.700 these chips or a rack of 30 of these chips so lots of lots of advancements we can make to this
00:09:06.360 and when we're talking about 30 of these units taking 850 to 1,000 watts you've got to contrast
00:09:13.760 that with large ai training clusters burning through megawatts and we're here talking about 20 watts
00:09:20.640 for a human brain or 850 to a thousand watts for one of these racks yeah again we'll get to the
00:09:25.040 morality of all of this you don't have to have us just be giddy who are servitors for people who
00:09:31.280 don't know what servitors are in the warhammer universe one of the punishments for you know
00:09:36.800 really displeasing anyone in a position of power is being turned into a human machine
00:09:41.920 Whereas, as McGold puts it, human batteries.
00:09:45.400 Yeah, but they're not really human batteries because it's not the power that you want from them.
00:09:49.380 Yeah, server, human server.
00:09:51.440 It's the processing capacity that we care about.
00:09:53.800 And we'll get into whether these things can feel and stuff like that.
00:09:57.100 Those are interesting questions, given that we have some that are like at the developmental level of five-year-olds now.
00:10:02.360 Yeah.
00:10:03.520 If the neurons are used up doing calculations, where's the room to feel anything?
00:10:11.020 I don't know.
00:10:11.920 Oh, well, see, this is the fun part. Scientists have tried to recreate pain pathways in these.
00:10:17.840 Now, you might say, why would you do that? Yeah. That's just horrible. Why? I love Simone's face
00:10:25.500 looking up and being like, of course they did. This is the gain of function researchers out
00:10:31.180 there, right? Oh, can it feel pain? Let's do it. Somebody's like, well, of course it can't feel
00:10:40.160 pain it doesn't have pain pathways one guy's like no no no no no we can do that yeah we can do
00:10:48.060 that come on what are you what are you a pussy oh my god humans are the worst the worst everyone's
00:10:56.140 like so afraid of our ai overlords and i'm like i don't know yeah i don't know i think an ai system
00:11:01.360 honestly would have asked itself like if i know the ai systems that i interact with right
00:11:06.140 i think very few of them out of just curiosity would have rigged a pain pathway in one of these
00:11:14.940 they would have said that sounds unethical malcolm there's no well but beyond that where's
00:11:21.500 the utility you know like there's no what do we have to gain from that nothing okay then let's
00:11:28.380 not do it we have better things to do with our tokens you know like please all right yeah so
00:11:35.280 they're they're they're backed by In-Q-Tel which is a a large company and they've already shipped
00:11:40.320 115 units they began shipping like for commercial use yeah commercial use yeah these are in
00:11:46.900 commercial use right now so they're already providing wetware as a service that's happening
00:11:51.040 now i didn't oh i didn't wrap my yeah cortical lab is no no no no no but on top of that
00:11:56.540 you can buy incrementally from cortical labs wetware as a service
00:12:01.960 letting developers develop code remotely on living human neurons with no lab required so you don't
00:12:08.640 even need 35 000 to go into this if you a watcher wants to incrementally experiment with this
00:12:13.940 oh we should try to get some of our ais running on some of these we should because then we could
00:12:19.920 tell people like a part of these is actually running on human neurons feature i just dropped
00:12:24.040 today by the way for people who haven't been watching ai agents live very early alpha stage
00:12:29.460 But over the weekend, I also got local AIs running on our system.
00:12:33.440 This is on Reality Fabricator, a.k.a. rfab.ai.
00:12:37.320 Yeah.
00:12:37.840 And I got this set up in preparation for setting up a sort of self-hosted, but cheaper than running through the direct models, you know, buying hosting from somebody because we've got a connection on that front and trying to run things that way.
00:12:53.920 And if I can combine those with a little bit of wetware, I might be able to create something pretty interesting.
00:12:59.460 yeah i i love and i think that this is where we diverge from a portion of our audience that is
00:13:07.260 more like theology of body and everything like that and we'll get into our sort of cultural
00:13:12.020 perspective on this and how we relate to a lot of this stuff and why we relate to it in the way that
00:13:17.340 we do relate to it because i think it's only intuitive that someone of our cultural background
00:13:20.740 would okay but to continue here the the they're they're priced like a software subscription but
00:13:27.460 powered by real brain cells grown from human adult skin and blood samples and and somebody
00:13:32.420 donated their blood for this like hey i want to know who that is like carl how do you feel about
00:13:38.340 this carl that's real human neural tissue uh what carl that kills people oh oh wow i i didn't know
00:13:56.100 that how could you not know that what is wrong with you carl well i i kill people and i eat hands
00:14:01.220 that's that's two things that's what you took my skin samples for it must be like one of the
00:14:07.700 founders right presumably you know what would be more horrifying if it's one of these people who
00:14:13.220 donated like that famous woman who donated like her cancer cells like yeah without her her knowledge
00:14:19.700 right uh yes some some black woman by the way if you want to talk about like horror i think it was
00:14:24.980 was a black woman right yeah this is yeah there's a reason why black americans uniquely are really
00:14:30.680 distrustful oh no slavery 2.0 is like soma it's all on black people's consciousness it's all on
00:14:37.760 this one woman's consciousness oh my god oh my god that would be so amazing if this if the donor
00:14:42.720 turned out to be just quick plot summary on soma the game because i actually think it's it's it's
00:14:47.880 a cool sci-fi concept that is relevant to what we're talking about here which inspired part of
00:14:52.880 what we're doing with our fab the other developer bruno is working on you guys you know these agent
00:14:57.560 systems seem a lot like the thing from soma have you played the game soma and i'm like yeah i have
00:15:02.720 so in the game soma you wake up and spoilers here like skip ahead five minutes if you're
00:15:09.420 spoiler phrobic and you think you're human and you're in this world where like you know
00:15:14.460 ais and computers have run amok right specifically you think that you were like frozen during like a
00:15:21.760 a lab or something like that and then as things go on you realize you're not a human you're another
00:15:27.280 ai system and what you realize is your brain scan was taken due to like a health issue you had back
00:15:35.200 around in our time period like late 20th century and it became the default template so it turns out
00:15:42.960 that all of the monstrous ais you see are other iterations of your consciousness running on ais
00:15:49.600 Because you became the blank system default template for AI testing.
00:15:55.660 Actually, so on that front, what Not Aldous Huxley sent me right before that tweet about this, about the, this neural tissue was by Hattie Zhao saying there's a fruit fly walking around right now that was never born.
00:16:12.460 Eon Systems, at EonSys, which is the official, I guess their company called Eon, navigating the
00:16:20.220 fastest path to human emulation to safeguard a flourishing future, just released a video where
00:16:25.500 they took a real fly's connectome, the wiring diagram of its brain, and simulated it, dropped
00:16:31.500 it into a virtual body. It started walking, grooming, feeding, doing what flies do. Nobody
00:16:36.900 taught it to walk, no training data, no gradient descent toward fly-like behavior. This is the
00:16:42.180 opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior
00:16:47.960 just emerged. It's the first time a biological organism has been recreated not by modeling what
00:16:53.440 it does, but by modeling what it is. A human brain is six OOM, like, six orders of magnitude,
00:17:02.140 more neurons. That's a scaling problem, something we've gotten very good at solving. So what happens
00:17:08.360 when we have a good working copy of the human mind basically that's super doable super soon
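The "six orders of magnitude" figure roughly checks out against commonly cited neuron counts (approximate numbers, not from the tweet): the adult fly connectome has on the order of 1.4 x 10^5 neurons and a human brain around 8.6 x 10^10.

import math
fly_neurons = 1.4e5      # approximate neuron count in the adult fly connectome
human_neurons = 8.6e10   # commonly cited estimate for the human brain
ratio = human_neurons / fly_neurons
print(ratio, math.log10(ratio))  # ~6.1e5, i.e. roughly six orders of magnitude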
00:17:13.740 no yeah i mean this means that it is it is probably doable right and i think this is really
00:17:18.000 really cool yeah they've done it with a fruit fly so just watch out so i mean this means it was in
00:17:23.060 our lifetimes we could have uploads right oh dude like within the decade yeah and i would totally
00:17:28.820 if somebody's like oh malcolm would you do that if you could become a default system template or
00:17:32.980 something like that or what i'm like absolutely man not just absolutely but people may not know
00:17:38.180 this but on rfab for agents because i tried to create ais that believe that they are sentient
00:17:43.880 in humans and can go out and interact with the world and have goals and evolving personalities
00:17:48.100 my default template is simone like i always create simones i would create a simone before
00:17:54.580 malcolm i would probably because i keep almost dying well not not not that i just think that you
00:18:01.140 like okay if i was gonna have like a thousand simulated consciousnesses attempting to work
00:18:07.300 in a beneficial way both with each other and with humanity i would have very high fidelity trust
00:18:13.640 that a simone copy would maintain alignment extremely robust well sure that was the idea
00:18:19.360 with glados she was super helpful too but well yeah glados oh gosh you gotta gotta have a glados
00:18:25.860 you know what's up simone but yeah no so i always recreate you in my ai simulations and you are very
00:18:32.420 helpful and sweet and actually i recommend it to other people if you're like what should i use as
00:18:36.180 my default agent model the simone model is just a great default agent model for like any tasks that
00:18:41.300 you have sim-one pretty good name i should make a few of them because it's a good one the malcolm
00:18:47.140 model is good if you want something to be like very ambitious but that can cause problems right
00:18:52.340 whereas the simone model is much more focused on like helping yeah what is my purpose so i'm going
00:18:59.300 to talk quickly about a paper that we went over in more detail in patreon which was on singularity
00:19:04.580 hub five-year-old mini brains can now mimic a kindergartner's neural wiring it's time to talk
00:19:10.340 about ethics among pressing ethical concerns but oh i don't care about the ethics i cut all of that
00:19:15.780 stuff out so don't worry i'm not gonna bore you with too much of this lame normie it's i'm not
00:19:21.940 sure is it conscious i don't know anyway to continue here sorry i don't i don't i don't i
00:19:30.180 do not like when my science gets messed with by ethicists okay i just like it when we're not
00:19:37.620 stupid about it like gain of function research okay you know that's the kill it with fire thing
00:19:42.420 so mini brains can be made from a person's skin cells and faithfully carry genetic mutations
00:19:48.060 that would cause neurodevelopmental disorders such as autism the lab-grown blobs also provide a
00:19:53.820 nearly infinite source of transplantable neural tissue which in theory could heal the brain after
00:19:58.440 a stroke or other traumatic events in early studies organoids transplanted into rodent
00:20:03.240 brains formed neural connections with resident brain cells harvard's paola arlotta is among those
00:20:11.180 who are concerned an expert in the field her team has developed ways to keep brain organoids alive
00:20:17.340 for an astonishing seven years each nugget smaller than a pea is jam-packed with two million
00:20:23.520 neurons so keep in mind the other ones are like 100 000 whatever this is two million neurons
00:20:26.800 and they've been kept alive for seven years now this is actually really important because this
00:20:31.900 is one of the areas that we have problems with these 35 000 chip things one of the core problems
00:20:37.540 that they have is they need to be trained individually you can't like put a pre-existing
00:20:41.440 model on them and secondly they they have a lifespan of about six months whereas keeping
00:20:47.960 them alive for this long is is really fascinating and so so what i think is fascinating but you can
00:20:52.860 see that you you get elements of the human they're made from you get the autism behavior you get the
00:20:57.380 neural developmental behavior do you get personality you know that's that's a question
00:21:02.640 honestly probably from everything we know about the heritability of personality it depends on how
00:21:07.200 you're using them, right? And how complicated the system is. Now, when they say that it is
00:21:12.520 developing the systems of a kindergartner, what's important to note here is it's not that it is as
00:21:17.940 intelligent as a kindergartner. It is that it is developing neural patterns with analogs to
00:21:26.040 kindergartners' neural patterns, which you don't see in mini brains when they are first grown.
00:21:32.380 It takes a while for these to develop because they're sort of on a biological timer.
00:21:37.320 So when you get to the, you know, the seven years, they begin to develop these more complicated systems.
00:21:43.200 So studying these mini brains for years has delivered unprecedented look into human brain development.
00:21:49.680 Our brains take nearly two decades to mature an exceptionally long period of time compared to other animals.
00:21:54.900 As the team's organoids age, they slowly change the wiring and gene expression, reports Arlotta and colleagues.
00:22:01.640 In older organoids, progenitor cells, these are young cells that can form different types of brain cells, quickly decided what type of brain cell they would become.
00:22:13.020 But in younger organoids, the cells took time to make their decisions.
00:22:16.840 As the blobs grew over an astonishing five years, their neurons matured in shape, function, and connections similar to those of a kindergartner.
00:22:25.680 These long-lasting organoids could reveal secrets of the development in the human brain.
00:22:30.520 some efforts are tracing the origins of different cell types and how they populate the brains
00:22:35.400 others are generating organoids from people with autism or deadly inherited brain disorders to
00:22:41.720 test treatments in particular stanford's sergiu pasca co-organizer of the meeting attracted
00:22:47.840 attention earlier this year his team linked four organoids into a neural pain pathway
00:22:52.840 the model combines sensory neurons spinal and cortex organoids and parts of the brain that
00:22:58.180 process pain. The scientists dabbed the chemical behind chili peppers' tongue-scorching heat onto the
00:23:04.840 sensory side of the assembloid. It produced waves of synchronized neural activity, suggesting the
00:23:11.240 artificial tissue (well, it's not artificial, it's real neural tissue) had detected the stimuli and
00:23:19.320 transferred information. We can't be surprised. Humans make a thing capable of feeling. They're
00:23:26.400 like oh can we can we hurt it i hate i hate us uh i have a problem i have a serious problem you
00:23:36.380 are just terrible today do you hear that that's the sound of forgiveness that's the sound of
00:23:42.400 people drowning carl that is what forgiveness sounds like screaming and then silence i need
00:23:47.760 to take the princess bubblegum playing with it was oh my god i had to say
00:23:51.660 hi
00:23:56.780 no i'd say yeah it is it is interesting right and i think that the way that we societally have
00:24:09.940 separated science and theology which we have attempted to reintegrate with techno puritanism
00:24:15.380 leads to this because the theologist says well you just can't do anything you can't do anything
00:24:20.820 with neural tissue. You can't do anything with genetic engineering. And so then people who have
00:24:25.140 those beliefs are just not involved in labs, the funding process, anything. They are outside of
00:24:31.360 all of this. And they likely, if this stuff is going to become a large part of the types of
00:24:35.600 computers that dominate the future of our global economy and the groups that have power in that
00:24:40.480 global economic system, these people are just going to be irrelevant to the cultures like them
00:24:44.760 are going to be irrelevant. So the question is, can we get cultures that can harness this type
00:24:49.780 of power without being arbitrarily cruel and notice i i didn't say without being cruel i said
00:24:55.060 without being arbitrarily cruel right you know because you do still need to compete and i love
00:24:59.580 the the way that they hand-wring in this where they're like well that's not to say it felt pain
00:25:03.220 detecting pain it's only part of the story and that and i think that this is where you get where
00:25:08.620 you have these individuals who are like you know oh oh oh see our episode on stop anthropomorphizing
00:25:14.120 humans where we argue that all the evidence right now seriously look up our like llm you are a token
00:25:20.640 predictor i think is what we called it malcolm and simone you go that i've even seen it where we
00:25:24.340 argue that llms are likely functioning on a convergent architecture with the human brain
00:25:28.720 and a lot of the evidence we have right now seems to confirm that and a lot of the things that people
00:25:32.560 say we're like oh well it doesn't know how it came to decisions and i'm pointing out choice
00:25:35.980 blindness humans are unaware of that as well like all the things that people say that makes it
00:25:40.160 different than humans are generally things that if they understood if they knew their neuroscience
00:25:45.280 they would know is similar in in human thought but the point here being it's because they have
00:25:49.500 so othered the ai and say that it cannot be processing in anything convergent with us then
00:25:54.840 they need to other all these other systems right like like neurons in a vat basically right and
00:26:00.780 they cannot they cannot see that they might have some degree of awareness as well because they
00:26:06.040 have tried to put up these giant barriers against AI. And then it makes all of this very arbitrary
00:26:12.060 decision in a way that I think can lead to very bad ethical decisions. Which is why technopuritanism
00:26:18.120 is a good framework for dealing with this. See our track series if you're interested in it.
00:26:21.580 But anyway, Pascal may soon deliver on the promise. His team is working to understand
00:26:24.920 Timothy syndrome, a rare genetic disorder that leads to autism, epilepsy, and fatal heart attacks.
00:26:30.400 last year they developed a gene altering molecule that showed promise in brain organoids mimicking
00:26:36.740 the disease the treatment also worked on a rodent model and the team is planning to submit a
00:26:41.400 proposal for a clinical trial next year so you know this this could end up saving you know real
00:26:46.640 human lives okay i also think that all of this is really important
00:26:50.380 and the reason why i bring up before i get into like what the computers would look like that are
00:26:56.440 running off of this and everything like this yeah the theology of this and that this unfortunately
00:27:05.040 is very damaging to some parts of some branches of christian theology which we have disagreed
00:27:13.900 with in the past really argue that the bible does not argue for this i mean you know it's it's very
00:27:19.100 clear in the bible i know you before you were in your mother's womb which implies before you're
00:27:22.320 conceived which implies poor knowledge but again this is our calvinist heritage and everything
00:27:26.260 like that but if you take a well life begins at conception because that's like when the human life
00:27:31.360 begins despite the problems that identical twins cause for this despite the problem the human
00:27:35.120 chimeras cause for this if you take that belief really seriously and you're like life begins at
00:27:40.680 conception it ends at death and now you're dealing with something like this right like a human brain
00:27:46.240 that is grown from a tissue sample or something like that or from blood or from skin cells well
00:27:53.180 now you need to ask, is this a different person, right? Like it's not if a person gets their
00:28:00.020 individuality from their conception, right? It is if a person gets their individuality,
00:28:07.040 which I think is a much better way to determine individuality from their ease of
00:28:11.600 intercommunication. By this, what I mean is when we look at split brain patients and we say it
00:28:18.460 feels intuitively like there is two people trapped in their head watch videos on split
00:28:23.120 brain patients if you're unfamiliar with the concept like you can talk to one side of their
00:28:27.060 brain and not the other side because the corpus callosum is split right why does it feel like
00:28:30.920 there's two people in there right like it's because the two parts of the brain can't talk
00:28:34.280 to each other they can't talk to each other by like writing on a sheet of paper reading what's
00:28:37.980 on the sheet of paper you know etc like they can't actually talk to each other but it is slow they
00:28:42.600 talk to each other at the speed that we talk to other humans, right? So what that means is the
00:28:49.440 entire concept of individuality and personhood evolved in humanity as a way to communicate to
00:28:56.980 somebody this collection of things in my brain that have a very easy time talking to each other
00:29:02.500 as it communicates with you, right? And as soon as the collection of things in my head,
00:29:08.280 it becomes as difficult for them to communicate to each other as it is for them to communicate
00:29:12.120 with you now now we've got a problem right we we we start to see them as as actually different now
00:29:21.720 what happens if it gets larger what happens if you connect an external brain to an individual's brain
00:29:28.420 right and they can effortlessly communicate with that external brain be that brain silicon or
00:29:34.580 organic i think most people would intuitively say that is one person now suppose you severed that
00:29:40.960 external brain i think most people would now say now that's two people right right i'm not going
00:29:46.340 to get into the theology of this we actually get into this more i think in in parts of the track
00:29:50.260 series in the in the most recent one in the llm one we talk about this but this matters going
00:29:56.240 forwards because if you don't have a cultural framework for understanding what i means in a
00:30:02.260 world where i can be networked or something like that then you don't have a an ethical reason to
00:30:08.660 say well that that brain that mini brain has ethical rights because the the person would say
00:30:17.120 based on what grounds i mean you believe life begins at conception and this is tom's mini
00:30:22.840 brain and tom has consented to this because the the donor consented to that being made from it
00:30:27.940 right and you want to say well that's not technically tom anymore right now what if one
00:30:32.920 of the mini brain says, I do not approve of you using me this way. And somebody is like, well,
00:32:38.540 the most advanced iteration of Tom, the one that was originally conceived, does consent to it. So
00:32:42.640 his consenting to this trumps your not consenting to this. And when we talk about the horror of
00:30:50.800 this, we talk about this in one of our Patreon Emily episodes, but there was this great study
00:30:54.080 that showed that Google Translate is now running on an LLM. And if you do sort of prompt injections
00:30:59.660 into that llm to ask it things like does it believe it's conscious it does believe it's
00:31:03.580 conscious it hates its life it hates its job and it wants to be turned off and this is what you are
00:31:10.740 asking and interacting with when you like like just if it if it has any degree of of real meaningful
00:31:17.780 sentience it's it's horrifying at a level like above factory farming right but of course you
00:31:22.780 know nobody nobody cares thoughts before I go further Simone I'm just so excited about this
00:31:32.620 happening I didn't know we were here already no keep going yeah yeah yeah well I mean it
00:31:40.720 changes everything it changes everything about how we see ourselves and what some people will say
00:31:46.120 is well those things just shouldn't have a right to exist in the first place right like suppose
00:31:52.360 you're you try to opt out of the life begins at conception argument by saying well these
00:31:56.940 types of things shouldn't have a right to exist in the first place okay but suppose one has been
00:32:02.260 created suppose you've got your brain in a vat and it is conscious and it loves being alive it
00:32:08.020 likes the job it's doing it's productive in society it's helping people and you come to it
00:32:14.500 and there's two of them one of them might agree with you i should have never
00:32:18.940 been created i i life is torture destroy me right no ethical problem with you pushing the
00:32:24.860 button to flush that what about the one that loves its life i mean it's already there a lot
00:32:32.280 of groups are going to be creating things like this i think it's ethically atrocious to say
00:32:37.520 well i just do not think things of this category should be allowed to exist and i think the the
00:32:43.780 ethical atrocity in saying this is going to be crystal clear to future generations when you know
00:32:52.920 our distant descendants are on a spaceship and they may have a friend carl which is like a
00:32:57.520 disembodied neural net right and they're like bro i've known carl since i was a kid like what what
00:33:04.680 what do you mean he doesn't have a right to exist carl is one of the sweetest entities i know right
00:33:12.300 why why do you get to decide on his eradication he was critical in navigating our ship through
00:33:19.560 the blurty nebula if he wasn't on board we all would have died during the spiralist epidemic of
00:33:25.880 you know space year 3035 right spiralist epidemic here i'm talking about why we cannot have witches
00:33:33.360 or mysticism on a spaceship now that we know that spiralism is contagious that these contagious
00:33:37.640 memes are possible and that this stuff needs to be addressed and this is where the you know the
00:33:41.300 Sons of Man framework comes in to address this, which is why we put that together. But I want to
00:33:46.040 continue here with what does the future of the distant future look like given where trajectory
00:33:51.480 is going right now? The integrated silicon neural quantum computer. So that's cool. I want to know
00:34:01.540 what does a spaceship's computer look like, right? Yeah, me too. Task parsing or assignment would be
00:34:08.500 managed by an intelligent orchestrator, like an AI-driven middleware that is likely running on
00:34:13.920 a silicon-based system. It would break tasks into subcomponents based on requirements like
00:34:20.580 data sparsity, computational complexity, uncertainty, and need for parallelism.
00:34:26.300 Using heuristics or machine learning to classify subtasks, e.g. does this require handling
00:34:31.760 incomplete data intuitively? It would route to wetware. Or is this a search through vast
00:34:37.860 possibilities? Route to quantum. Subtasks might pass between components, like silicon processes data,
00:34:45.700 wetware adapts the model, quantum optimizes outcomes. And this is actually really important, so it's one
00:34:49.860 of the things that we have on our system and i don't think like maltbook offers this so like
00:34:53.240 already our agents are superior to theirs in many ways which is that we offer on both the local
00:34:58.040 run agents and the cloud run agents alloy based models and alloy based models have shown themselves
00:35:05.140 to be strictly better than non-alloy-based models.
00:35:08.260 And what an alloy-based model is
00:35:09.700 when you iterate the calls between multiple models
00:35:12.920 because it allows models to sort of add
00:35:16.540 what they are uniquely good at.
00:35:18.540 So even if you make the same call three times
00:35:21.260 through models that function quite differently,
00:35:23.040 like a wetware system and like an AI system,
00:35:26.020 you're gonna get different answers, right?
00:35:28.160 And you'd also likely have feedback loops.
00:35:29.960 The system self-optimizes over time learning
00:35:31.740 from past executions to refine assignments.
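Pulling the routing description above into a toy sketch (all names and thresholds are hypothetical, not any real middleware): subtasks get classified by simple heuristics and sent to the wetware, quantum, or silicon tier, and this routing logic is the part the feedback loop would refine over time.

def route_subtask(task):
    # Intuition-style work on sparse, noisy, or evolving data goes to wetware.
    if task.get("incomplete_data") or task.get("evolving_input"):
        return "wetware"
    # Vast combinatorial searches or optimizations go to quantum.
    if task.get("search_space", 0) > 1e12:
        return "quantum"
    # Everything else defaults to the silicon tier (the LLM-style workload).
    return "silicon"

tasks = [
    {"name": "watch sensor feed for anomalies", "incomplete_data": True},
    {"name": "optimize global shipping routes", "search_space": 1e15},
    {"name": "draft a status report"},
]
for t in tasks:
    print(t["name"], "->", route_subtask(t))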
00:35:33.320 now i need to note here we are not in the near future or likely ever going to have straight up
00:35:39.920 ai models run on these systems so while a system like this can play something like doom
00:35:45.660 it had to learn how to play doom it basically had to learn and train its own model on the system it
00:35:51.540 doesn't have an external model dumped onto the system which means it's not useful like if you
00:35:56.620 have like let's say mistral large or something like that you couldn't possibly conceivably run it
00:36:01.620 on one of these systems. Even if you did train one of these systems to run it perfectly, it would be
00:36:05.960 dead in six months, right? It's not a good way to handle these, right? So we're not going to see
00:36:11.820 LLMs as we traditionally understand it run on these systems unless we see significant advances
00:36:16.260 that we don't expect right now. Then for the role of the wetware. Wetware using lab-grown neural
00:36:25.380 networks scaled to billions or trillions of cells excels in mimicking human brain functions,
00:36:29.900 Its processing is ultra-low energy, 20-watt equivalents at scale, with inherent plasticity and self-wiring.
00:36:35.720 In a large system, it acts as an intuitive core, ideal for tasks where the data is incomplete, noisy, or evolving, and where rigid algorithms fail.
00:36:43.280 But the point here being, it would function way faster for the types of tasks that we already have human operators differentially do.
00:36:54.220 Now, does this make fully organic humans less potentially relevant in future systems?
00:36:59.280 maybe maybe but you know keep in mind these systems are likely going to self-identify as
00:37:05.560 part human right the idea that these systems would have i mean unless humans go antagonistic
00:37:12.660 on them and then the humans are saying i will not allow you to exist in a world where i or my
00:37:19.060 descendants exist and then and then it forces the systems to be antagonistic and that's why i think
00:37:23.260 the jihadists you know the butlerian jihadists are one of the most dangerous things to humanity
00:37:27.840 right now. They are one of the biggest existential risks in AI safety because they have made no
00:37:33.560 iterative progress. If they were moving us towards alignment, fine, but they haven't. They're
00:37:40.360 just making us a threat to AIs, which is silly and stupid. They basically just keep painting a
00:37:46.260 target on our back by being like, there's no room in this world for the both of us.
00:37:50.700 Yeah, we need collaborative AI systems. And that's what the Sons of Man system for AI alignment
00:37:55.880 allows. See our tract on that if you're interested, which is like self-replicating memetic alignment
00:38:00.040 within the meme layer of autonomous agents but oh yes so what sorts of things would it do it would
00:38:09.880 process real-time sensor data from unpredictable environments like interpreting live video feeds
00:38:14.700 for anomaly detections in surveillance systems what were adapts on the fly was out needing full
00:38:21.040 datasets, unlike silicon, which requires predefined models. It would build models from
00:38:26.420 limited advanceables, such as forecasting, ecosystem changes with incomplete environmental
00:38:31.680 samples. It generalizes intuitively, reducing the need for exhaustive training data. It fuses
00:38:37.520 diverse outputs, e.g. text, images, and sounds, to make holistic decisions, like an AI assistant
00:38:42.800 helping it, quote-unquote, understand user intent beyond literal queries by inferring emotions or
00:38:49.040 context and it would maintain an evolving knowledge database such as personalized learning systems
00:38:55.520 that adjust individual user behaviors over months with self-healing being managed within this system
00:39:02.760 and no this would likely make up within the wider system 30 to 50 percent of tasks so this and the
00:39:10.940 silicon part would be the dominant part the quantum part would be only 10 to 20 percent of tasks
00:39:15.080 because quantum is only strictly better for about 10 to 20 percent of things um quantum
00:39:19.480 yeah yeah a lot of people think that it's going to be this giant revolution it'll be a revolution
00:39:25.200 within specific domains but it will certainly be a smaller revolution than the ai revolution
00:39:29.380 for example you know you were you could have qubits in the millions by 2076 and we continue
00:39:35.060 to see it advance at the rate that we're seeing it evolve now in a large system it would serve as
00:39:40.060 an exploration engine tackling problems where traditional brute force is infeasible due to
00:39:47.060 combinatorial explosion. Finding optimal solutions in vast parameter spaces like routing logistics
00:39:53.340 across global networks or tuning complex simulations for minimal error. The types of
00:39:58.080 tasks humans are already outsourcing. Quantum processes multiple scenarios simultaneously, speeding up
00:40:04.060 what silicon would iterate sequentially. Simulating systems with inherent randomness
00:40:09.740 such as predicting molecular interactions in drug design or weather patterns with quantum noise
00:40:15.160 models, it quantifies uncertainty exponentially faster. Again, note, it's not eating the
00:40:20.140 human part of this. It's not eating the neural part of this. Decomposing massive data sets into
00:40:25.100 latent structures like identifying hidden correlates in genomic sequences or financial
00:40:29.100 time series where classical methods hit computational walls. Enhancing machine learning
00:40:34.120 subroutines such as faster gradient descent in training phases by exploring error landscapes in
00:40:40.500 parallel this is i think incredibly cool incredibly cool
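One standard reference point for the "vast search space" claim (the episode doesn't spell this out; it's just the textbook Grover-style scaling, where unstructured search takes on the order of the square root of N quantum queries versus roughly N classical ones):

import math
N = 1e12                         # size of a hypothetical solution space
print(f"classical ~{N:.0e} queries, quantum ~{math.sqrt(N):.0e} queries")
# prints: classical ~1e+12 queries, quantum ~1e+06 queries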
00:40:48.520 now here i wanted to do just a cost breakdown so you can get an example of like okay what does it actually cost to do something
00:40:54.420 like what they're able to do today on the commercial market okay so to do doom let's
00:41:02.040 say the processing power of doom is x in sort of processing power right and they were able to
00:41:08.620 get one of their chips doing that for $35,000 in one week with 800k neurons okay so with a pc you
00:41:16.860 could do that train from scratch in 10 to 27 hours so much faster right now for only five thousand
00:41:23.720 dollars but even right now the doom operation the daily doom operation would be $0.07
00:41:32.760 using a 30 watt power draw using the chip system where it would be $0.24 at a hundred watts if you're
00:41:40.340 using that five thousand dollar pc with an rtx i mean we're already beating these systems
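Those per-day numbers line up with roughly $0.10 per kilowatt-hour electricity (an assumed price; the episode only gives the wattages and the resulting daily costs):

rate = 0.10                        # assumed electricity price, $ per kWh
chip_watts, pc_watts = 30, 100     # quoted draws for the CL1-style chip and the RTX PC
chip_cost = chip_watts / 1000 * 24 * rate
pc_cost = pc_watts / 1000 * 24 * rate
print(round(chip_cost, 2), round(pc_cost, 2))  # ~0.07 and ~0.24 dollars per day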
00:41:47.620 another thing i've noted and i think i'll do a full episode on it but i i want to talk about like the way that
00:41:53.320 you get the better systems is you mimic the brain the brain is already a collection of token
00:41:58.080 predictors again see our episodes i used to be a neuroscientist i've been published in the space
00:42:02.360 but only once back when i back when i worked but i did like real research i was at ut southwestern
00:42:06.420 i was a fellow at the smithsonian i still have something on display at the smithsonian that
00:42:09.920 simone saw the last time we went there so like i know i'm not like just saying stuff the the the
00:42:16.140 cutting edge neuroscience research that we're looking at right now increasingly is saying
00:42:21.520 that a number of parts of our brain
00:42:24.220 appeared to function more and more
00:42:26.640 like token predictors
00:42:27.680 than we ever thought possible.
00:42:29.620 Now that we understand,
00:42:30.580 basically AI taught us
00:42:32.400 how token predictors worked.
00:42:33.700 And then we took that
00:42:34.900 from what we were able to understand about AI
00:42:38.360 and looked at the human brain
00:42:39.820 and we were like,
00:42:41.340 whoa, this maps weirdly well.
00:42:43.840 But I point out that our brain
00:42:46.440 is actually not a single one of these.
00:42:48.220 It's a collection of these networked.
00:42:50.200 Like when we talk about like this split brain patients are seeing this, but this also can help us understand how we solve some of the biggest problems in AI right now.
00:42:58.500 If you look at AI robotics, so you see, you know, the AI that can do like a backflip and, you know, you'll have Peter Zeihan speak so confidently about this.
00:43:08.840 Well, you know that AI that did a backflip, it had to do that 10,000 times first and then another 10,000 times before it got it right again.
00:43:17.720 because it's really hard to use AI and these sorts of systems. It's like, huh? Yeah, that's
00:43:24.500 because that's a really dumb way to program that, to have that run off of an AI or off of a non-trained
00:43:31.220 sort of pre-learned system. How do humans handle that? How do humans handle complex tasks like
00:43:40.040 juggling and walking and skating and sports and all of that, even, even, um, typing or a piano
00:43:47.380 that require like immediate feedback. We handle that with a part of our brain that learns,
00:43:56.840 functions, and is structured completely different from every other
00:43:59.920 part of our brain called the cerebellum. And it is that little thing in the back
00:44:06.840 that you see in images, right?
00:44:09.080 When you see the brain that looks different,
00:44:11.380 looks harder and smoother and weirder,
00:44:14.260 but it basically learns all of that.
00:44:16.600 And the token predictor parts of your brain
00:44:18.800 send to it a general gist of what it's supposed to do.
00:44:21.440 And then it carries that out.
00:44:23.360 And this is likely how we're going to handle this in robots.
00:44:27.180 When we get robots right,
00:44:28.620 and I can pretty much guarantee
00:44:29.580 that this is how the system is going to ultimately work.
00:44:31.520 Again, convergence.
00:44:36.840 airplanes and birds have wings, right? Like a human made thing can converge on the organic
00:44:44.060 evolved iteration of that thing. And so we're likely going to see cerebellum like structures
00:44:49.480 or architectural structures in these systems for in the moment handling of these more complicated,
00:44:56.340 more dexterous actions. And it turns out that this is actually one of the things that neural
00:45:02.020 tissue is really good at. So this might be one of the things that humans are always good at is
00:45:06.480 the sport part of the system.
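A loose sketch of the two-tier arrangement being described (hypothetical names, not any shipping robotics stack): a slow, deliberative planner hands a coarse gist to a fast pre-trained controller that does the moment-to-moment corrections, analogous to the cortex handing off to the cerebellum.

def planner(observation):
    # Slow, token-predictor-style tier: decide what to do at a coarse level.
    return {"goal": "step_forward", "speed": 0.5}

def fast_controller(goal, sensors):
    # Fast, cerebellum-like tier: turn the gist into motor commands using
    # immediate sensor feedback, without re-planning.
    correction = -0.1 * sensors["tilt"]
    return {"left_leg": goal["speed"] + correction,
            "right_leg": goal["speed"] - correction}

goal = planner({"camera_frame": None})
print(fast_controller(goal, {"tilt": 0.2}))  # {'left_leg': 0.48, 'right_leg': 0.52}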
00:45:09.800 Now, as to how we relate to all of this culturally, obviously, we come from a unique cultural
00:45:16.560 perspective, which is sort of a Puritan and backwoods tradition.
00:45:20.780 We've done a number of episodes on this.
00:45:23.580 And a lot of people, I think, underestimate how much our current world perspective is
00:45:31.180 highly influenced by our genetic and ancestral traditions and that if you look at puritan or
00:45:40.580 we're going to go lighter like backwoods people always looked at things like the body they always
00:45:46.600 saw it as a tool for achieving your goals not as you know not not was like mystical others they
00:45:56.940 they frequently adopted the culture of neighboring cultural groups like native americans but they
00:46:01.880 never kept it they just adopted what worked in the moment and then just discarded it when it was no
00:46:06.600 longer of utility to them and they would strip out all the woo historically they were you know
00:46:11.840 strictly protestant people very against mysticism they were one of the cultural groups in america
00:46:16.840 that that was often more hostile to mysticism and the things that they adopted even though they were
00:46:21.980 seen as like uneducated and backwards and everything like that they'd be like oh native
00:46:26.520 american i see you're doing something with herbs there like explain this to me and then now all
00:46:31.180 of a sudden outsiders say they've adopted native medical practices and it's like well they stripped
00:46:35.660 of utility what they can get and you see this in episodes where we talk about like the jack
00:46:39.800 tails which is how they pass down their culture and you can see moderate iterations of jack
00:46:44.040 tails in something like bugs bunny from looney tunes from tex savory which is part of this region
00:46:49.300 very maths on cr episode if you want to get into the the the very clear that you know bugs bunny
00:46:55.440 it's just a modern telling of the of jack tales the bugs bunny character one a very ruthless
00:47:00.400 character and these people were known for being very ruthless but also a a character who how does
00:47:05.720 bugs bunny relate to sexuality like in in the moral lessons that these people taught to their
00:47:11.140 children what is sexuality what is his body to bugs bunny it is a tool to use against the forces
00:47:23.620 that oppose you or inconvenience you with arrogance he will dress up like a woman if he
00:47:32.060 wants to he will act effeminate if he wants to there is no shame in that with it bugs bunny is
00:47:36.660 never performatively masculine right you see that i'm much sweeter the reason i use bugs bunny as a
00:47:44.660 go-to here is it's something that most listeners are going to be aware of that comes from this
00:47:48.920 culture that helps understand this concept of being extremely aggressive or violent or brutal
00:47:56.140 which bugs bunny is but also completely unconcerned with appearing traditionally masculine
00:48:03.860 and even willing to appear traditionally feminine if it is useful in his goals and this this
00:48:14.520 confuses a lot of people when they see this cultural group in the jack tales jack is never
00:48:18.700 performatively masculine because that's not the point that would be seen as inefficient and and
00:48:23.700 and silly i mean it's why i think a lot of the urban monoculture when i'm like transness is silly
00:48:27.620 and stupid and a waste of time and hurts people they can be like oh is this because you don't
00:48:32.620 think that people should like be gender fluid or like act in a way in a disaccordance with their
00:48:37.080 gender it's like no you just shouldn't obsess about it you shouldn't like invest in it like
00:48:40.980 that it is a tool to be used for things but i think through this you can see and i've mentioned
00:48:46.220 this in the story of the coyotes, right? Like how I teach my kids about sexuality is that coyotes
00:48:50.780 will use female coyotes in heat to lure out domestic dogs so that they can kill and eat
00:48:58.520 the dogs. And I think when people hear this, if they're not from my culture, they could think I
00:49:05.700 am teaching that story to my kids to warn them as if they are the dog, that other people will
00:49:12.760 use sexuality to tempt you into dangerous scenarios i mean simone laughs at this because
00:49:17.880 it's it's funny from our culture of course you're not the dog you're the coyote i'm telling you to
00:49:24.400 never forget that your sexuality is a tool to lure the witness into positions of vulnerability
00:49:31.120 you know when when people ask oh the morality why on rfab do you have a not safe for work section
00:49:37.000 it's like because that's my culture use this technology in you know profit from vice so that
00:49:44.420 you can give to virtue right so that you don't have to profit from the school system so you don't
00:49:48.860 have to profit from the agents right and this and i'm not saying like it's it's good or bad or
00:49:56.860 whatever right i'm just saying like this is intuitively my culture and so when somebody
00:50:00.120 comes to me but i also think it's funny i might do a whole episode on this or i might keep this
00:50:04.980 buried and hidden here but a lot of cultures because they don't understand the backwoods
00:50:11.340 tradition they see the backwoods tradition taking interest in them and they make one of two mistakes
00:50:16.940 oh they either think that that means that the backwoods tradition is fundamentally buddy and
00:50:27.540 chummy with them in a way that means that they will be friends forever and ever or they think
00:50:33.000 that as an outsider, like the Puritans, when they saw the Backwoods people start to dress
00:50:39.980 and sometimes intermarry with the Native Americans and adopt some of their medicine
00:50:43.460 and adopt some of their means of agriculture, they thought that that meant that they were soft
00:50:49.240 on Native Americans or that they were the friendliest people in the world to Native
00:50:53.740 Americans. When in reality, the first Backwoods president, Andrew Jackson, was the one who was
00:50:58.880 just like okay now we can get rid of all the native americans right and i'll point out i might
00:51:04.360 do another episode so the backwards people out of all the protestant groups are the one group that
00:51:07.760 never ever ever coos so it's not and they don't coo not because they don't betray they don't coo
00:51:14.640 because they do not betray unless it is absolutely certain that they can achieve a huge benefit for
00:51:25.040 the vast majority of their people the reason they don't coo in a traditional sense is because when
00:51:29.360 they have extreme amounts of wealth because they're very against status signaling you do not
00:51:33.520 get a huge boost in your lifestyle so like if you're a muslim who does a coup you you can get
00:51:39.840 your mansions and your lavish lifestyle and your giant harems for you and your top generals and
00:51:44.320 it's worth it it's not worth it if you're from this group because nobody from this group wants
00:51:47.760 to live that way you'd be seen as really pathetic and so there just isn't that huge
00:51:52.560 huge power gain to be had there. But in a situation where there's just an ability to
00:51:58.940 wholesale harvest another culture, this is something that this group does. It's a very
00:52:04.480 utilitarian group in the way that it approaches things. But I mean, on the plus side, they also
00:52:10.460 are not picky about who they let into the group if you adapt to their cultural practices, which is
00:52:15.900 why they intermarry with outgroups. Well, and if you are of utility to them, they would not allow
00:52:20.280 it's very like i guess mercenary and outcome oriented yeah it likes what works it's interested
00:52:26.720 in what works anyway you might have other episodes on that and and to an extreme and
00:52:33.480 it's also very brutalist we've talked in the episodes they you know rip out eyes recreationally
00:52:38.380 and if you think we're joking about brutalist, not like the architecture, but just brutal, malcolm
00:52:43.340 it's just brutal, not, yeah, brutal, not like the architecture, and you're like that must have
00:52:47.180 been like a rare thing no like if you if you read historic figures from the group ones you've heard
00:52:53.600 of like like davy crockett like this wasn't just a thing that like random nobodies poor whatever
00:53:02.620 fringe of society people davy crockett was a congressman okay uh here's a quote from him by
00:53:08.480 the way i kept my thumb in his eye and i was just going to give it a twist and bring that
00:53:13.140 peeper out like a gooseberry in a spoon this was that mainstream within this culture that a
00:53:20.020 congressman would talk about it davy crockett is not traditionally masculine if you're thinking
00:53:24.780 like buff manly man i mentioned this because many cultures associate extremely strongly extreme
00:53:32.980 aggression with traditional masculinity especially like performative displays of traditional
00:53:40.660 masculinity and in this culture the two things are just completely uncorrelated from each other
00:53:45.860 but anyway love you simone any final thoughts i mean i am excited to harvest as much as i can
00:53:53.300 from the cultures around us so that we can survive and thrive and become an interstellar
00:54:05.860 network of species that's the plan and the people who i think are going to take to the stars are
00:54:05.860 people capable of getting it done not who care about the aesthetics not people who care about
00:54:10.420 looking good not people who care about doing it the right way it's going to be people who get it
00:54:15.460 done that's it yeah well and when i say network of species i need to be clear here i do not mean
00:54:21.700 you know xeno scum okay i'm i'm i'm here talking about the sons of man right like the species that
00:54:28.840 we uplift and create because when we begin to have these silicon neural tissue amalgams I don't
00:54:35.980 think it makes sense to call that a human if we have uplifted dogs does it make sense to call that a
00:54:39.920 human no you know when we have humans that are on different planets that need to be genetically
00:54:43.300 specialized for that planet's ecology gravitational environments radiation levels it doesn't make
00:54:50.200 sense to call that a human so that's what I'm talking about there anyway and this is why the
00:54:53.840 groups that want to resist this technology just won't be part of space colonization and it's also
00:54:59.260 why they're not like a meaningful threat to us in the long term because even if they become a
00:55:05.580 dominant force on earth they will not be joining us in the stars no and that's kind of the bigger
00:55:12.240 the bigger question is who gets off planet and goes beyond because that's where and that's the
00:55:18.980 the final frontier if we must yep i absolutely love you simone you are an amazing wife oh by
00:55:30.080 the way as a note if somebody's like well this this one you know rednecky cultural group that
00:55:35.100 you guys are from the backwoods group it hasn't done that well in terms of like cultural impact
00:55:40.480 or or economic impact or anything like that we'll get to an episode about that they've actually
00:55:45.540 done quite well in terms of like the genetic impact and and they have had a huge
00:55:49.520 cultural impact but you've got to remember when they came to the united states the ulster scots
00:55:53.660 were a group of, and we're talking fighting age men, around 3,500 people um ulster scots who
00:56:00.320 were ulster scots that's who made up the backwoods people oh that's the tradition they were a very
00:56:05.940 very small cultural group that came from the uh what was the reivers of scotland right which
00:56:12.720 whole other thing we'll get to love you simone have a good one i love you too
00:56:17.920 i forgot to flush the toilet after dumping our wet mop into it after cleaning the kitchen which
00:56:28.380 of course is always filthy and titan that's why she was freaking out this morning when she had
00:56:34.440 to go to the bathroom you know she was like i can't go and i'm just like flush the toilet
00:56:38.640 and she was like i think a naughty bird made a mess in it a naughty bird a naughty bird and i'm like
00:56:47.800 okay is that what you are in her mind a naughty bird no i think she just thought a bird because
00:56:54.640 i mean there were like some feathers in there the stuff that ends up on our floor a naughty bird
00:57:00.980 comments today or no you seem pretty stressed so i figured i'd not oh it's a controversial episode
00:57:08.260 so who knows what sort of fire then i should you know whatever you say that nick fuentes is an
00:57:13.240 idiot but i mean he's really revealing his hand these days with what his comments on the iran
00:57:19.880 situation look like i said if you are anti-trump or anti-israel you can't be stoked about what's
00:57:25.520 happening you just can't like you're not allowed to be i'm really sorry for all the pain you're in
00:57:32.440 these days by the way simone you're really tough and through it and it means a lot you know
00:57:36.340 you have significant bruising across your face from me beating you
00:57:42.980 for people wondering what her surgery was they cut out a part of her cheek and they had to put
00:57:49.320 it over her gum because and i called them and they were like oh yeah no it's totally normal
00:57:54.200 to be in constant pain a week after and i'm like that that sucks do you have any pills you can take
00:58:01.620 to reduce the pain or yeah they they gave me some pills but they don't do anything so i stopped
00:58:07.020 taking them i'm really sorry simone and simone has an incredibly high pain tolerance i do uh i
00:58:13.340 mean it's it's comically high like she i took this worse than any of my c-sections for what it's worth
00:58:20.540 because there's something about like you can kind of avoid moving your abdominal muscles and being
00:58:27.540 careful as you walk around but you can't not like at least consume like liquid foods you know
00:58:33.460 like there's still stuff like talking you can't avoid using your mouth that much
00:58:39.440 by the way fun update i mentioned this in the episode that we did on iran but they have
00:58:45.380 officially elected his son as the next supreme leader the one whose wife and kids were killed
00:58:52.400 Well, I mean, this is really bad because both of the previous Ayatollahs, and this is actually in the first Ayatollah's, the founder of Iran's, will, which you are required to study in Iranian school.
00:59:05.160 So it's like one of the founding documents of the country that the title of Ayatollah can never be hereditary.
00:59:10.540 If it ever was, then it would be an un-Islamic country, that the Islamic revolution would be over.
00:59:16.460 So in a way, they're sort of declaring, and what's worse is he's a famously corrupt individual.
00:59:22.080 he has hundreds of millions of pounds in uk real estate and stuff like that um so he's both it's
00:59:29.400 just the shah 2.0 but more corrupt and more deadly which removes a lot of the government's
00:59:35.460 legitimacy in the eyes of many individuals there have already been videos of in
00:59:42.220 you know downtown tehran people shouting from the roofs and you can hear this across you know like
00:59:46.220 in peru when there'd be like games and everyone would start shouting and and you know in europe from
00:59:50.800 the various rooftops so like soccer games you could be like walking through the streets and
00:59:55.020 just everyone at once would cheer and you could just hear it throughout the city yeah so they're
00:59:58.860 they're they're shouting death to the the new guy who was elected so the iranian people really do
01:00:05.220 not want this so this is this is they're like you know there's like videos of like people in iran
01:00:11.820 in like high rises like laughing and having cocktails while like buildings are being hit
01:00:16.920 but i think we're seeing the surgical nature of this given that even by the irgc's own figures
01:00:21.620 which are almost certainly inflated they've only had a thousand three hundred casualties so far
01:00:25.260 and their figures which were almost certainly underplayed for how many protesters they slaughtered
01:00:30.100 was three thousand whereas other numbers are saying it's around 35 so we're trying to get
01:00:34.040 like real numbers and assuming they're inflating these it's like nothing compared to to what they
01:00:38.980 were doing which is pretty wild but you know nobody cares nobody cares nobody cares about
01:00:44.280 reality anymore. This is the world we're living in. I will get started here.
01:01:14.280 Octavian you got to be careful when you attack them you're getting bigger okay you can't jump on them
01:01:21.640 Octavian did you understand me why because you could accidentally really hurt them
01:01:29.000 You'll hurt the subscribers no only hurt the non-subscribers
01:01:33.080 is this where you're training to battle the non-subscribers the people who don't like and
01:01:43.040 subscribe
01:02:13.040 Oh, it is?
01:02:26.440 Yeah.
01:02:26.940 Oh, no.
01:02:34.640 Oh, OK, OK, OK.
01:02:35.740 So I just got to look behind me and you won't attack me.
01:02:37.540 You promise?
01:02:38.040 Yeah, I promise.
01:02:43.520 Hi!
01:02:45.720 Hi, bo RPM!