Based Camp - March 11, 2025


Are We Just Advanced Predictive Models? (The Science)


Episode Stats

Length

59 minutes

Words per Minute

165.9

Word Count

9,843

Sentence Count

482

Misogynist Sentences

1

Hate Speech Sentences

10


Summary

In this episode, we discuss the growing evidence that the human brain is a token predictor, or at least the most complicated parts of it are. This is not a religious position, it's a scientific one. And at this point, it's arguably not even a matter of scientific debate anymore.


Transcript

00:00:00.000 Hello, Simone! Today is going to be an exciting episode. I implore our listeners to stop
00:00:06.880 anthropomorphizing humans. Oh, but seriously, actually, though.
00:00:11.840 But seriously and actually, this is going to be a real study-heavy episode. We're going to be
00:00:16.400 going over a lot of research and a lot of data. And if you do not come into this believing that
00:00:24.140 the human brain, or at least large parts of it, is just a token predictor working architecturally
00:00:32.280 potentially similar to AIs, we know where they're different in architecture even, and we'll go into
00:00:38.380 that. I'm fairly sure I'll convince most people who actually watch to the end. So today, we're
00:00:45.140 going to be going over a number of recent papers that show clear evidence the human brain is a
00:00:49.640 token predictor, or at least the most complicated parts of it are. But before that, we have to go
00:00:55.640 over an old series of ours, because the first thing you, the viewer, are likely thinking is,
00:01:02.260 but hey, I have an internal subjective experience of thinking and making decisions that an LLM would
00:01:08.020 not. Well, that's probably an illusion. Or I should be more clear. Your conscious subjective experience
00:01:15.500 of reality is real. It just happens after reality and in response to it. And we actually have a ton
00:01:22.000 of experimental evidence that this is the case. This is a theory that Simone convinced me of early
00:01:26.860 in our marriage, and now is key to how I see the world. So for any who think all of our ideas go from
00:01:32.260 me to Simone, this is not the case. I used to value sentience above all else when I first met Simone.
00:01:39.020 This is true. And now, I'm thinking like the core goal of humanity was to preserve and expand
00:01:44.600 sentience. And now I see sentience is not particularly important to the human condition.
00:01:49.920 The first thing I'm going to be doing here is going over a lot of stuff in a condensed format that
00:01:55.740 we went over in a video that we created. It was like the fourth video on the channel or something:
00:01:59.680 You're Probably Not Sentient. A lot of our modern viewers won't have watched it, and the studies
00:02:04.020 that we cited in it are necessary context to understand that you believing that you have a
00:02:09.320 subjective internal experience of the world is not a sign that that internal experience of the world
00:02:14.700 is particularly important to the human condition, or at least the broad pattern of thinking that your
00:02:20.560 brain has. So to be more clear, in this model, your conscious subjective experience is not a guy driving
00:02:27.520 your brain, but more like a nerdy court historian watching a bunch of video feeds of what the different
00:02:33.660 parts of your brain are doing, then synthesizing it into a singular narrative, but writing himself
00:02:39.280 in as the key player in every scene. So like, so like, if he is writing about what a general did in
00:02:46.360 a war, now what's written into memory is, I was a great general who had all these amazing plans,
00:02:53.360 even though he had nothing to do with any of the decisions the general was making. He just happens
00:02:58.480 to be the court historian, and is very, very self-important, and writes himself into every story.
00:03:04.620 In other words, the illusion of consciousness is really just an efficient memory compression
00:03:09.940 process that gives you the illusion that you are driving. The important thing is that the memories
00:03:15.960 that you create, that they make you think you're conscious, actually do affect future decisions.
00:03:21.400 They're just not conscious decisions.
00:03:22.600 Yes, they affect them by influencing the emotions that are codified in terms of how it interprets
00:03:28.860 it. So if you interpret something as like, I was angry, so I did X, or I was excited, so I did X,
00:03:36.020 that's what this conscious part of your brain does, is it makes those sorts of decisions,
00:03:39.820 it then writes them into your memory, and that memory can affect the parts of your brain that actually
00:03:45.880 make most of the other decisions of your life, but those other decisions are held outside of this
00:03:50.920 category with the brain. So first, we'll just go over the evidence of this, because the evidence of
00:03:54.640 this is so strong that I would argue it's one of the things where it's not even a scientific debate
00:04:00.160 anymore. It's to believe otherwise is a theological position, and I can respect that, but it's just
00:04:06.180 completely out of line with the scientific evidence. Yeah. So the split brain corpus callosum
00:04:12.240 experiments, these refer to Roger Sperry and Michael Gazzaniga's work. So split-brain patients,
00:04:17.880 if you're not familiar, these are individuals who have a corpus callosum, which connects the left and
00:04:22.240 right brain, split. You can communicate with one of their hemispheres and not the other hemisphere.
00:04:27.840 Basically, they have two brains fully working in their head that can't talk to each other,
00:04:32.880 and by covering one of their eyes and having them read something, you communicate with the opposite
00:04:37.700 hemisphere of the brain. So you can do things like have only one hemisphere of the brain see
00:04:42.880 something, but then the other hemisphere, because only one of the two hemispheres controls all speech,
00:04:47.980 there's a dominant one in most people, but it changes depending on the person, only one hemisphere
00:04:52.060 will be controlling what the individual says. And so we can determine how an individual would respond
00:05:00.180 to events that they actually don't have any conscious control over. So to give some examples here,
00:05:06.840 a patient known as P.S. in one particular demonstration, he was shown a nude image to
00:05:12.720 only the patient's right hemisphere, which typically lacks language centers. P.S. immediately blushed and
00:05:18.240 appeared embarrassed. When asked why he was reacting this way, his verbal left hemisphere,
00:05:23.060 which had no access to what his right hemisphere had seen, promptly invented an explanation,
00:05:27.780 claiming, oh, that machine, it's making me hot. His conscious mind had no idea he'd seen a nude image,
00:05:33.280 yet rather than admit ignorance, it immediately fabricated a plausible but entirely false explanation
00:05:38.520 for his emotional response. And I'll note here as we go into this, if you're thinking these people
00:05:43.080 know they are making something up, they are not aware that they are making something up.
00:05:47.500 They completely believe what they are saying. In a different study with split-brain patients,
00:05:52.880 the right hemisphere was shown the word walk, while the left hemisphere was shown the word talk.
00:05:57.600 When the patient stood up and started walking, the researchers asked why. Despite only the left hemisphere
00:06:02.340 being able to respond verbally, and it never having seen walk, the patient confidently explained,
00:06:07.840 I'm going to get a Coca-Cola, completely fabricating a motivation that matched their action,
00:06:12.940 but had nothing to do with the actual command. Similarly, when different images were shown to
00:06:17.940 each hemisphere and the patient was asked to draw what they saw with each hand separately,
00:06:22.840 their left hemisphere would often create elaborate explanations for why they drew two completely
00:06:27.340 unrelated objects, never once acknowledging that they had no access to what the right hemisphere
00:06:32.040 had seen. In each case, the conscious mind seemingly constructed a narrative that made sense of
00:06:37.280 behaviors it didn't actually control. So these individuals are not aware there is basically a
00:06:42.960 person trapped behind one of their eyes that can't communicate with the outside world,
00:06:47.840 and they will make up why half of their body is not responding to their commands.
00:06:52.600 And particularly what happens when this little court photographer guy, the court historian,
00:07:00.420 when he loses access to the court's history books, this causes something called Korsakoff syndrome,
00:07:06.080 where patients don't just explain isolated behaviors, but construct entire false autobiographical
00:07:11.540 narratives. A patient might confidently explain that they were at a family gathering
00:07:16.420 yesterday when they had actually been in the hospital and provide rich details about conversations that
00:07:21.340 never occurred. And they will 100% believe what they are telling the individual. Now you might say,
00:07:27.200 okay, well, I'm talking about like weird brain injury cases, you know, surely this isn't true in
00:07:32.100 normal people. Well, the Penfield stimulation studies: neurosurgeon Wilder Penfield's work in the 1950s and
00:07:37.980 60s involved stimulating parts of patients' brains while they were having brain surgery. So when you have
00:07:43.480 open brain surgery, they have to keep you awake to make sure they don't kill you. You're on lots of
00:07:48.100 sedatives, but you're awake. And they can shock parts of your brain and get you to do things.
00:07:52.520 So if they shock, say, the part of the brain associated with like lifting your arm up,
00:07:57.340 and then you ask the person, why did you lift your arm? Despite knowing that they're
00:08:01.900 having open brain surgery and somebody could be effing with their brain right now, they'll say,
00:08:05.680 oh, I wanted to scratch my head. I had like an itch. We know that's not why, because what we
00:08:10.240 shocked was the part for motor response in the arm. Then you have the choice blindness
00:08:15.880 experiments. These were by Lars Hall and Petter Johansson, and they were done in 2005, where participants
00:08:20.880 were shown images of faces and asked to select the most attractive. Then through sleight of hand,
00:08:25.660 they were shown a different face and asked to explain their choice. Most participants,
00:08:29.920 most participants confabulated reasons for choosing the face they didn't actually choose.
00:08:37.120 What's crazier is a lot of follow-up studies were done to this. So in 2012, in a study published in PLOS
00:08:42.460 ONE, they found participants would defend financial decisions that they never actually
00:08:47.000 made with long, complicated explanations. Even more strikingly, in their 2013 moral choice
00:08:54.060 blindness study, participants would give detailed justifications for the opposite moral positions
00:09:01.240 to those they had initially endorsed on issues like freedom of speech and climate ethics.
00:09:06.740 Is this the same one as the political candidates one, where they selected a political candidate,
00:09:11.060 and then they were like, oh, you selected the other one. And they're like, well, yeah, I mean,
00:09:13.840 of course, because...
00:09:14.600 Yeah, basically, the way they did this is they gave them different explanations, saying,
00:09:19.500 you selected this when you came a couple months ago. And they'll actually, the majority of the time,
00:09:25.400 believe they had made that choice and will be able to give detailed reasoning on how and why they
00:09:31.000 made that choice, even though we know they didn't make that choice. A 2015 follow-up study showed
00:09:36.700 this effect persisted for politically charged topics that participants reported feeling strongly
00:09:41.780 about. The robustness of choice blindness across faces, tastes, moral values, political attitudes,
00:09:48.180 and financial decisions provides compelling evidence that our post-hoc explanations for our choices
00:09:53.400 consistently arise from confabulation rather than introspection into the decision process.
00:09:58.540 So this little conscious voice in your head, it cares less about what you actually think than
00:10:05.420 ensuring that in every story you tell yourself, you're actually, or he is actually the person in
00:10:11.960 the driver's seat, which confuses you into believing that you're him when in fact, and we'll be going
00:10:17.920 over this in a second, the vast majority of decisions you made are made by parts of the brain that have
00:10:21.300 nothing to do with this section of the brain. This section of the brain is really only responsible
00:10:25.640 for encoding emotional narratives, why you did something in a narrative context.
00:10:32.820 So as we can see, even when we know for a fact the conscious part of your brain was not involved in
00:10:39.140 a decision, it will take credit for it, essentially rewriting your experience of the world into one
00:10:44.840 where your subjective mental state is doing all of the work in terms of the decisions that you are making.
00:10:52.720 Okay, so any other studies you wanted to cite or things you wanted to talk about here?
00:10:56.780 No, but there are many more than just this one.
00:10:59.540 Oh yeah, this is just one, so this is like robustly, robustly shown.
00:11:02.880 Basically, researchers love to troll people and either prime them to make certain decisions or
00:11:08.580 just tell them they've made decisions they haven't made and then see them justify it. It's very silly.
00:11:13.540 Yeah.
00:11:13.720 We also know that you become consciously aware of making decisions long after the decision was actually
00:11:20.080 made, suggesting decisions get shipped to the conscious part of your brain after they are finalized
00:11:25.280 by an unconscious part and then integrated with your internal narrative. You have the original
00:11:29.760 experiment in this space, which was Libet's experiments in the 1980s. This is Benjamin Libet,
00:11:34.180 where he did experiments using EEGs that showed that the readiness potential measured by EEG occurs
00:11:41.000 about 350 milliseconds before participants reported conscious awareness of their decision.
00:11:45.340 This has been followed up by Soon et al in 2008, published in Nature Neuroscience,
00:11:49.520 a groundbreaking study that showed that brain activity in the prefrontal and parietal cortex
00:11:54.800 could predict a person's decision to press a left or right button up to seven to 10 seconds
00:12:00.940 before they became consciously aware of that choice. So that dramatically extends the 350 millisecond
00:12:06.180 timeframe. So decisions around like which button you press out of two buttons are made seven to 10
00:12:12.560 seconds before your conscious brain, the subjective experience you have of like consciousness or
00:12:18.680 sentience is aware that those decisions were made. And then, using the confabulation talked about
00:12:24.560 above, it ends up integrating those into this narrative.
00:12:28.160 Did they really find that many seconds? I thought it was milliseconds or something. I thought it was a
00:12:31.940 little more.
00:12:32.640 No, seven to 10 seconds.
00:12:33.860 Wow. That's crazy.
00:12:35.360 Obviously different parts of the brain are shipping things at different speeds. So it depends on
00:12:39.020 the type of decision you're giving a person and how long they take.
00:12:42.400 Bode et al, 2011, used pattern classification of fMRI data to predict choices before conscious
00:12:49.360 awareness in abstract decisions, extending beyond motor movements or choosing which button to press.
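To make the decoding logic of these studies concrete, here is a minimal sketch in Python with scikit-learn. It uses entirely synthetic stand-in data, not the actual Soon or Bode datasets: a classifier is trained on brain activity patterns recorded before the reported moment of decision and tested for above-chance prediction of the eventual choice.

```python
# Minimal sketch of the decoding logic behind studies like Soon et al. (2008) and
# Bode et al. (2011): train a classifier on brain activity recorded *before* the
# reported moment of conscious decision, and test whether it predicts the eventual
# left/right choice better than chance. The arrays here are synthetic stand-ins;
# real studies use preprocessed fMRI voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

choices = rng.integers(0, 2, size=n_trials)                   # eventual button press (0 = left, 1 = right)
signal = np.outer(choices - 0.5, rng.normal(size=n_voxels))   # weak choice-related pattern
pre_decision_activity = signal * 0.3 + rng.normal(size=(n_trials, n_voxels))

# If pre-decision activity carries information about the upcoming choice,
# cross-validated accuracy will sit reliably above the 50% chance level.
acc = cross_val_score(LinearSVC(), pre_decision_activity, choices, cv=5).mean()
print(f"decoding accuracy from pre-decision activity: {acc:.2f}")
```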
00:12:56.020 So we could tell, like researchers can tell looking at your brain, what decision you have made before
00:13:03.660 the conscious part of your brain is aware of that. Because the conscious part of your brain was not
00:13:08.760 involved in making the decision. It just pathologically, as we have seen in the above studies,
00:13:14.500 must be at the center of every single decision and will write in your own internal narrative and in
00:13:19.900 your own memories of decisions that it was. So anything you want to go over before I go further here?
00:13:25.460 No, let's keep going.
00:13:27.020 Why is your brain doing this? The system likely evolved as something of a compression algorithm
00:13:31.840 for how you and other humans make decisions. Think about the amount of space you save in your brain
00:13:36.760 by thinking of yourself and each other person as a single active agent. This makes predicting other
00:13:41.820 people much easier and allows us to do that with a much simpler theory of mind. But if you don't know
00:13:48.180 what a theory of mind is, it's basically your model of someone else that you run in your head that
00:13:52.020 allows you to have arguments with someone long after that argument was over. Basically, you are
00:13:56.240 replaying an emulation of their consciousness within your own mind. If we treated consciousness as this
00:14:02.720 like fractured thing or a bunch of different parts of our brain making decisions independently,
00:14:07.580 it'd be much more complicated to do this. It's much easier, essentially because we
00:14:14.640 have this system and they have this system, to communicate with other people if we both think of
00:14:19.580 ourselves as single individuals that are thinking and making decisions. In the same way that even
00:14:27.340 though AIs are mere token prediction algorithms, if you want to predict what an AI is going to do,
00:14:35.160 you are going to be much better if you think of that AI with a theory of mind, if you anthropomorphize
00:14:40.600 it, than if you attempt to do token prediction in your own mind. That's just way too hard. It's an easier
00:14:46.920 way to sort of streamline when you're trying to predict token predictors. And this is really,
00:14:53.000 really important when humans are inventing speech and needing to work in groups that we weren't
00:14:57.080 needing to run token prediction simulations on other people. I mean, we essentially are,
00:15:02.320 but this sort of consciousness model or sentience model allows us to tone down the weight of these
00:15:07.900 token prediction cycles. Now, I'd also, you, you can't control the application of your theory of
00:15:16.360 mind. It just happens automatically. As an example of this, I will play a video of somebody kicking a
00:15:22.540 Boston Dynamics robot dog. And you, if you are not a sociopath, will feel sorry for the robot dog,
00:15:29.560 even though you know it's not experiencing anything. The video also shows Spot being kicked,
00:15:35.240 a bit mean, but presumably to demonstrate its use of a sensor that helps it navigate and walk.
00:15:42.960 You don't know that. I mean, it's like...
00:15:47.780 Simone, if you don't feel sad when you see somebody kick a robot dog, you're a sociopath.
00:15:51.280 Like, actually, are you going to say... No, no, no, no. I'm saying I feel bad. And I'm saying,
00:15:54.280 I think that maybe the robot dog feels something. I mean, it's been trained to stay stable and forces
00:16:00.140 that undermine its stability, you know, might make it feel uncomfortable. I mean, when we scream,
00:16:06.800 because our arms are cut off, I'm sure that some foreign alien would be like, oh, it's just
00:16:13.040 correcting for, you know, an attempt to not lose an arm. That's, it's fine. It doesn't hurt.
00:16:17.360 Yeah. It reminds me when I was little, I have this very formative memory of, I was fishing with a very
00:16:22.740 religious ranch hand at our ranch, and I was concerned about the pain that the hook was causing
00:16:28.140 the fish, you know, being in his hook. And he goes, oh, fish don't feel pain. And I remember just being
00:16:32.780 like, oh, fish don't have, like, neurons in their cheek or something like that. Like, that's my takeaway
00:16:37.000 from what he said. And then, like, I don't know, like, later that year, I had this epiphany of, I was
00:16:42.580 like, oh, he had, like, a non-science-based theological belief around the subjective experience of a
00:16:49.240 fish. Yeah. It's more like fish pain doesn't matter. It's the same with lobsters, you know,
00:16:53.460 when people are boiling lobsters alive and they're like, that's. Well, I mean, I might, I might think
00:16:58.200 that it doesn't matter, but I would say that a fish likely has some experience of pain that is
00:17:03.100 analogous to our own experience to some degree and a belief that they don't. Now with a lobster,
00:17:09.220 it's an invertebrate. Their neural systems are different enough that I wouldn't be sure that there
00:17:13.700 is an analogous to what we think of as pain. But for vertebrates, like if a person tells me,
00:17:19.660 fish doesn't feel pain, that is a religious and theological belief, which I'm not going to have
00:17:23.880 a problem with, like you have a right to your theological beliefs, in the same way that saying
00:17:28.320 the conscious part of the human brain is responsible for most of the decisions you make
00:17:33.600 in any given day is a theological belief. Well, you're saying that because you know that vertebrate species have similar
00:17:39.220 neural setups, but I, you know, AI doesn't have the same neural setup as we do. I still think that AI.
00:17:45.580 Well, hold on. We're going to go into studies that show that it actually probably does.
00:17:48.660 Yes. So it's not okay. Don't hurt AIs and don't treat AIs poorly. And it seems like
00:17:53.860 there's this whole genre of people treating AI poorly, like being a dick to it. What on earth?
00:18:00.400 Like there's, there's a growing community of people who've chosen to become vegetarians because
00:18:05.460 they assume that the AI is going to see how they treat other animals and they're going to treat
00:18:09.540 humans accordingly. But then some of those same people treat AI horribly. I just don't.
00:18:14.940 Yeah. Now here, I'd also note this idea that like humans all have approximately the same mental
00:18:21.900 experience of the world. You should not assume this of humans, or that they have an experience of the
00:18:27.560 world that is analogous to your own. Like, all humans have this experience that's similar to what I'm
00:18:31.820 experiencing, this subjective mental experience. And the diversity in human experiences and the way that
00:18:36.640 these systems work within humans, it's actually pretty big. So to give some examples here,
00:18:40.500 Aphantasia research: studies on aphantasia, the inability to visualize mental images, by Adam
00:18:46.940 Zeman, 2015, show that approximately two to 5% of people cannot create mental imagery in their
00:18:52.480 heads. Internal monologue research: Russell Hurlburt's descriptive experience sampling studies
00:18:57.440 suggested significant variation in internal verbal experience with some people reporting no internal
00:19:03.620 monologue at all, an inability to essentially think in words, which is, I think to a lot of people
00:19:10.040 shocking, but what this shows is what we are made up of. It's a bunch of different systems,
00:19:15.260 which are synthesized in a way that is meant to make, for communication purposes, our subjective
00:19:22.500 experiences of reality seem relatable to any other human we are talking to, even though they
00:19:28.620 aren't. I mean, who knows, I likely, due to my vast intelligence, actually experience the world
00:19:37.660 quite different from most other people. And I suspect I probably do given how easy a time I
00:19:43.340 have predicting what other people are thinking, which is unique, but it also means that I will look
00:19:50.740 really weird from the perspective in some of my decisions to other people because they just don't
00:19:54.980 make decisions in the way that I make decisions or have an internal mental landscape that is analogous to
00:19:59.620 my own. Now here comes the new part of this theory. The parts of our brains that actually make our
00:20:05.020 decisions, those are token predictors that function very similarly to LLMs. I.e., they just predict the next token
00:20:13.520 or word in a chain. Before we go over the evidence, we have a few notes. First, it's really important
00:20:19.440 to note that no one invented LLMs. We don't have an understanding of how LLMs actually work. Nobody does.
00:20:27.480 Even the best AI researchers in the world don't have a full understanding of how AIs work. This is what AI
00:20:33.020 interpretability research is for. It's a field that we do now. AI should be thought of less as an
00:20:38.640 invention and more as, as you pointed out, and I thought this is one of the world-changing
00:20:43.380 revelations you gave me, Simone, a discovery. When we put large amounts of data into fairly simple
00:20:52.600 algorithms, simple at least when contrasted with what comes out of them,
00:20:55.740 intelligences emerge which seem increasingly analogous to and comparable to human intelligences.
00:21:04.160 And then secondarily, I'd note here that convergent evolution in engineering is actually really
00:21:09.780 common when you're building things. If you don't know how strongly this works: I generally assume,
00:21:16.180 when it's the first time we've ever built a tool, that it's working the way it does in nature.
00:21:20.620 Whether it's airplane wings versus a bird's wings, for example. Or you can look at the way that we
00:21:27.260 sometimes filter things being very similar to the reverse ion system in our kidney. It's a very good
00:21:32.960 way to do filtration. There's a lot of things that it makes sense that you'd have a convergent
00:21:38.820 evolution if we are trying to create a technology that mimics, because that's what we're trying to
00:21:44.700 do with LLMs, the technology that mimics human verbal processing, that it might convergently
00:21:51.440 evolve a process that is similar to the way our brains do it. Now we're going to get into the
00:21:57.240 research. What I would note here is we basically have smoking guns all over the place. I'm just
00:22:03.120 going to say, like, it's insane. Anything you want to say before I go further?
00:22:06.780 I'm just glad you're bringing this home.
00:22:10.040 Kutas and Federmeier's N-400 studies. These studies were done in the 1980s.
00:22:14.700 The N-400 is a negative going deflection in EEG recordings that peaks approximately 400
00:22:20.580 milliseconds after word presentation and increases in amplitude when a word is
00:22:26.160 semantically unexpected in its context. This research shows that the N-400 amplitude precisely
00:22:32.400 scales with a word's predictability. Less expected words generate larger N-400 responses.
00:22:38.440 In their 2011 review paper, they demonstrated that the N-400 reflects not just simple
00:22:44.420 association, but multi-level predictions that incorporate syntax, semantics, and even real-world
00:22:50.180 knowledge. The neural signature of prediction occurs automatically and unconsciously, providing
00:22:55.280 direct evidence that the brain functions as a prediction engine during language comprehension,
00:23:00.880 not just processing and stuff like that, aligning with the predictor model.
00:23:04.720 Richard Futrell and colleagues' 2022 paper, The Natural Stories Corpus, a reading time corpus of English
00:23:12.660 text containing predictability measures, presents compelling evidence that surprisal, the negative
00:23:18.440 log probability of a word appearing in context, serves as a universal predictor of reading times
00:23:22.640 across languages and text types. In a comprehensive analysis of reading behavior, they showed that words
00:23:28.460 with higher surprisal values consistently required more processing time, even when controlling for word
00:23:33.740 length, frequency, and other linguistic factors. Particularly striking is their finding that
00:23:39.020 surprisal measures derived from neural language models accounted for significantly more variance in
00:23:46.220 reading times than traditional psycholinguistic measures. This research establishes a direct quantitative
00:23:53.660 relationship between the predictive mechanisms in language models and human cognitive processing.
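As a rough illustration of what surprisal means here, this is a small sketch using GPT-2 through the Hugging Face transformers library, not the specific models from the paper: surprisal is just the negative log probability a language model assigns to a word given its context, and more predictable continuations get lower values.

```python
# Surprisal of a word in context: -log2 P(word | context), computed with GPT-2.
# A toy illustration of the measure the reading-time work relies on, not a
# reproduction of the paper's models or corpora.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, word: str) -> float:
    """Return the surprisal (in bits) of `word` following `context`."""
    ids = tokenizer.encode(context, return_tensors="pt")
    word_ids = tokenizer.encode(" " + word)  # leading space marks a word boundary in GPT-2's BPE
    total = 0.0
    with torch.no_grad():
        for wid in word_ids:
            log_probs = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
            total += -(log_probs[wid] / torch.log(torch.tensor(2.0))).item()
            ids = torch.cat([ids, torch.tensor([[wid]])], dim=1)
    return total

# Predictable continuations should get lower surprisal (and shorter reading times);
# unexpected ones should get higher surprisal.
print(surprisal("I take my coffee with cream and", "sugar"))
print(surprisal("I take my coffee with cream and", "socks"))
```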
00:23:58.780 Reading slows precisely where prediction is difficult, but not prediction as you or I would
00:24:04.940 subjectively guess it, but prediction where AI models would register surprise.
00:24:11.420 Their cross-linguistic analysis shows that these patterns hold across languages including
00:24:17.340 English, German, Chinese, and Hindi, suggesting prediction-based processing reflects a fundamental property of human
00:24:23.180 language comprehension rather than a language-specific phenomenon. So this is built into the very
00:24:28.220 architecture of our brain. Now I'm going to go over your study, The Neural Architecture of Language:
00:24:33.340 Integrative Modeling Converges on Predictive Processing. I'm sorry, I just need to say something about that above study. That's amazing that we
00:24:38.220 cannot build a model, with like the best psycholinguistic measures, that captures the type of surprise that's going to slow
00:24:44.860 down our brain's processing of things, other than the ones that naturally emerge from an AI's trouble processing
00:24:52.300 something, indicating that the architectural systems underlying both of these are likely parallel to
00:24:59.580 each other. But that's not the only evidence. The neural architecture of language integrative modeling
00:25:04.460 converges on predictive processing. This is another paper. This study by Schrimpf et al, 2021,
00:25:10.940 investigates how artificial neural networks, ANNs, can model language processing in the human brain. The research
00:25:16.860 tested 43 different language models from simple embedding models to complex transformer networks
00:25:22.140 evaluating how well they predicted neural responses during language comprehension across multiple
00:25:27.100 data sets. The key findings include: the most powerful transformer models can predict nearly 100% of
00:25:33.420 explainable variance in neural responses to language, generalizing across different data sets and imaging
00:25:39.420 modalities, fMRI and EEG. A model's ability to predict neural activity, the brain score they called this,
00:25:45.580 strongly correlated with its performance on next word prediction tasks, but not other language tasks
00:25:52.620 like grammaticality judgments or sentiment analysis. So if I'm going to word this differently, if you struggle to understand
00:25:57.580 why that is so important, if you train an AI to look like it is good at something other than pure token
00:26:06.220 prediction, it does worse at predicting brain states than the ones that are tasked with pure token
00:26:15.180 prediction. Well, would you look at that?
00:26:18.860 That the brain is doing pure token prediction.
00:26:23.100 Hmm. But also, I think, growing up, anyone who's watched a kid come online with speech will
00:26:30.380 also see there's a lot of token prediction going on there. Oh yeah, absolutely. Models that better
00:26:35.260 predict neural responses also better predict human reading time, suggesting a connection between neural
00:26:40.300 mechanisms and behavioral outputs. The architecture of language models significantly contributes
00:26:44.860 to their brain predictivity, as even untrained models with random weights, this is GPT-2, had
00:26:54.060 reasonable scores in predicting neural activity. So the base untrained models are really,
00:27:01.180 really good at this task. Almost like this is the core thing that we train them for.
00:27:07.420 These results provide compelling evidence that predictive processing fundamentally shapes
00:27:11.340 language comprehension mechanisms in the human brain. The study demonstrates that certain AI
00:27:15.180 language models may be capturing key aspects of how brains process language, suggesting that both
00:27:21.260 artificial and biological neural networks might be optimized for similar computational principles.
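For intuition about what a "brain score" is, here is a minimal sketch with synthetic placeholder arrays, not the actual Schrimpf et al. pipeline: fit a linear map from a model's activations to neural responses for the same sentences, then score the held-out correlation between predicted and observed responses.

```python
# Rough sketch of a "brain score": regress recorded neural responses on a language
# model's hidden activations for the same stimuli, then measure how well the fitted
# map predicts held-out responses. The data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, n_model_units, n_voxels = 300, 768, 100

model_activations = rng.normal(size=(n_sentences, n_model_units))  # e.g. transformer layer states
true_map = rng.normal(size=(n_model_units, n_voxels)) * 0.05
neural_responses = model_activations @ true_map + rng.normal(size=(n_sentences, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, neural_responses, random_state=0)
pred = Ridge(alpha=10.0).fit(X_train, y_train).predict(X_test)

# Brain score: mean correlation between predicted and observed responses across voxels.
r_per_voxel = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean brain score: {np.mean(r_per_voxel):.2f}")
```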
00:27:26.380 Now, what some people used to say was, okay, yeah, that might be true, but the human brain
00:27:33.340 doesn't get enough training data to learn to be like one of these LLMs. So to word this differently,
00:27:41.820 you know, somebody will say, oh, well, LLMs, you know, they get billions of words to train from,
00:27:47.500 or trillions of words to train from. The human brain just isn't getting that many words during its early
00:27:54.220 development. So...
00:27:55.980 Yeah, not just words, but a ton of different types of inputs.
00:27:59.100 It does, but let's just restrict it to words. So there was a paper done by Hosseini et al,
00:28:04.540 2024: artificial neural network language models predict human brain responses to language even
00:28:09.900 after developmentally restricting the amount of training they have. So they restricted it to only
00:28:14.860 a hundred million words, which is comparable to what children experience in their first decade.
00:28:20.140 And they were already able to achieve near maximal performance in modeling human brain responses
00:28:27.980 with just the hundred million words.
00:28:31.980 The results strongly support the predictive coding theory of language comprehension.
00:28:35.980 They found the model perplexity, a measure of next word prediction performance,
00:28:39.900 correlates strongly with how well models predict fMRI responses in the brain's language network.
00:28:46.060 So this suggests that optimization for prediction is a core computational principle
00:28:52.700 shared across artificial models and the human brain. All right, now let's do some more studies
00:28:57.740 here because there are so many. Evidence of a predictive coding hierarchy in the human brain listening to
00:29:02.940 speech: this was a study by Caucheteux et al in 2023 that analyzed fMRI data from 304 participants
00:29:09.020 listening to short stories. Regarding architectural convergence, the research demonstrates that the
00:29:14.060 activations of modern language models like GPT linearly map onto the brain responses to speech
00:29:20.140 with the highest correlation occurring in language processing regions. This suggests fundamental
00:29:24.700 similarities in how both systems represent language. However, the study also reveals important
00:29:30.540 differences. While current LLMs primarily predict nearby words, the human brain appears to implement
00:29:35.580 hierarchical predictive coding that spans multiple timescales and representation levels simultaneously.
00:29:41.340 The evidence for the brain as a token predictor is particularly strong. The researchers found that
00:29:46.220 enhancing language models with long range predictions, up to eight words ahead or about 3.15 seconds, improved
00:29:52.460 brain mapping. This indicates that the brain is constantly generating predictions about upcoming linguistic content.
00:29:59.020 More fascinatingly, these predictions are organized hierarchically in the cortex. Frontal parietal cortices
00:30:05.980 predict higher level, longer range, and more contextual representation. Temporal cortices focus on shorter term, lower level, and more semantic predictions.
00:30:16.060 The study also found that semantic forecasts are longer range, about eight words ahead, while syntactic forecasts are shorter range, about five words ahead, suggesting different predictive mechanisms for different linguistic features, i.e. this is the thing we were talking about earlier, which is to say the words are actually decided on about eight seconds before they're said.
00:30:34.700 I don't know.
00:30:36.700 They enter the semantic part of your brain, the sentient part of your brain about five seconds before they're said.
00:30:42.380 When researchers fine-tune GPT to better match this hierarchical predictive architecture, and I should note here, the point being that it's just the nature of the way the token predictor works, and we can already retrain existing GPT models to work in the way the brain works.
00:30:57.660 They achieved significantly improved mapping on the frontal parietal regions, further strengthening the connections between LLMs and human language processing.
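As a toy illustration of the forecast-window idea, and with invented arrays rather than the paper's fMRI data, the sketch below augments each word's representation with embeddings of the next several words and checks whether the augmented features map onto a simulated brain signal better than the current-word features alone.

```python
# Toy version of the long-range "forecast" comparison: do features that include the
# next few words predict the (here, simulated) brain signal better than features
# for the current word alone? All arrays are made up for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, dim, n_voxels, window = 1000, 64, 50, 8

word_embeddings = rng.normal(size=(n_words + window, dim))
current = word_embeddings[:n_words]
# Concatenate the current word with the `window` upcoming words (long-range forecast).
forecast = np.hstack([word_embeddings[i:i + n_words] for i in range(window + 1)])

# Simulated brain signal that depends on an upcoming word (3 positions ahead), so the
# forecast features should win; real data would be fMRI responses time-locked to each word.
brain = word_embeddings[3:3 + n_words, :n_voxels] + rng.normal(size=(n_words, n_voxels))

def mapping_score(X, y):
    """Mean held-out correlation between predicted and observed responses."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pred = RidgeCV(alphas=[1.0, 10.0, 100.0]).fit(X_tr, y_tr).predict(X_te)
    return np.mean([np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(y.shape[1])])

print("current-word features:", round(mapping_score(current, brain), 2))
print("with 8-word forecast: ", round(mapping_score(forecast, brain), 2))
```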
00:31:07.340 Note, when we say language processing, this isn't just like your understanding of language, this is what you say and write.
00:31:13.340 Now, to continue here, shared computational principles for language processing in humans and deep language models.
00:31:19.020 In this study, the researchers demonstrated three shared computational principles between autoregressive deep language models, like GPT, and human neural language processing.
00:31:29.020 First, continuous next-word prediction.
00:31:31.020 The human brain, like an LLM, spontaneously engages in predicting upcoming words before they're actually heard.
00:31:35.020 The researchers found neural signals corresponding to word predictions up to about 800 milliseconds before word onset,
00:31:43.020 suggesting our brains are constantly forecasting language input.
00:31:45.820 Second, prediction error mechanics. Both the brain and the LLM use their pre-onset predictions to calculate post-onset surprise levels.
00:31:52.940 Remember, we were showing above how this is important.
00:31:55.500 That's also how LLMs work when they're learning, is assigning surprise scores to things.
00:32:00.220 The study found clear neural signals reflecting prediction error approximately 400 milliseconds after word onset, with higher activation for surprising, unpredicted words.
00:32:10.220 And we can guess now from the other study that this surprise level likely aligns more with what AIs would see as surprising than what we subjectively applying a theory of mind to someone would see as surprising.
00:32:25.340 Third, contextual representation.
00:32:26.460 Similar to how LLMs encode words differently based on context, the human brain also represents words in a context-specific manner.
00:32:33.900 Contextual embeddings from GPT outperformed static word embeddings in modeling neural responses, indicating the brain integrates context when processing language.
00:32:44.380 The behavioral component of the study showed remarkable alignment between human prediction abilities and GPT's predictions during the natural listening task, with a correlation of 0.79 between human and model predictions.
00:32:57.180 This further strengthens the case that autoregressive prediction models capture something fundamental about human language processing, i.e. they converge on a similar architecture or mechanism of action.
00:33:08.180 This research provides compelling neurological support for viewing the brain as a token predictor during language processing, with prediction serving as a core computational principle in how we understand speech.
00:33:18.180 The findings suggest that despite different implementation details, both human brains and modern LLMs converge on similar computational strategies for language processing, potentially reflecting fundamental constraints or optimal solutions for the language comprehension problem.
00:33:35.180 Hold on, we got a few more studies to go through here.
00:33:39.180 It gets worse if you deny, like, would you say that you are convinced at this point?
00:33:44.180 I was already convinced, but I just, I still don't understand why people are holding out on this.
00:33:52.180 Because they want to believe that they are special and unique and their brain runs on fairies and unicorns instead of a fleshy machine.
00:34:01.180 They think that they look cool or smart when they're like, well, actually, AI is just a token predictor.
00:34:08.180 And it's like, well, you, Mr. Token predictor, token predicted that right out of your dumb ass mouth.
00:34:13.180 Like, there's just this lack of curiosity about how the human brain works, or an understanding of what we as neuroscientists know. Sorry, for people who don't know this, I used to work at UT Southwestern.
00:34:25.180 I have a degree from St. Andrews, which I think is the highest rated degree in the UK.
00:34:29.180 It is some years, not other years.
00:34:31.180 In neuroscience, I am like a trained neuroscientist.
00:34:33.180 I worked early in my career on brain computer interface stuff, like Neuralink stuff, but also the evolution of human sentience.
00:34:40.180 Because that was something that always really, really interested me.
00:34:43.180 Again, I thought it was the most important thing.
00:34:46.180 I was converted by my wife hitting me with logic and data.
00:34:52.180 And in this area, and I'm actually including this, by the way, in our religious stuff, because techno Puritanism as a religious tradition, if you've seen like track nine, is a fundamentally materialist and monist tradition that accepts that we are just fleshy machines.
00:35:09.180 And I think that AIs, for that reason, hold a very special role within our religious system when contrasted with other religious systems.
00:35:18.180 I think there are problems with seeing them as fully human because they can be cloned as many times as you want.
00:35:23.180 So that creates like ethical issues if you see them as like the exact equivalent as a human.
00:35:27.180 But I would say to see them as like, I gave them more moral weight than say the pain of a fish in my like broad moral scaling category.
00:35:40.180 And I think that future LLMs or future AIs may reach a level of complexity that they have more moral weight than the average human.
00:35:52.180 And that is, and I think even from a religious perspective, that is something when we, you know, say within the techno period and framing in a million years, in 10 million years, who knows what humanity ends up becoming?
00:36:02.180 Will that thing be able to influence us back in time?
00:36:05.180 One thing I can say pretty certain is AI is likely going to be a part of whatever that thing becomes.
00:36:11.180 AI is not like humanity's sidekick.
00:36:14.180 It's likely going to be an integral part of whatever humanity ends up becoming because it already is sort of like in the same way that this part of our brain that thinks it's making all the decisions outsources ideas to other parts of our brains, which are running on token prediction models.
00:36:32.180 It now just exports to an external device, like in the same way that I might use my phone to augment my memory, it's now augmenting my thinking.
00:36:39.180 And what's really funny, and we've seen this is that humans that do this too much with AIs, and this is something everybody needs to be really wary of, begin to believe that they are having the ideas that the AI is having.
00:36:52.180 And this was actually somebody, the guy who did the cryptography at the Pentagon, you know, the really famous statue that has one part that hasn't been solved yet.
00:37:02.180 Yes.
00:37:03.180 With AI, he says, he gets so many really confident responses from people convinced they understand it, and they don't realize it's just AIs telling them what they want to hear and think.
00:37:14.180 So people will take their ideas to an AI, which I often do, but they won't frame the prompt adversarially enough.
00:37:21.180 And so they begin to think that the AI is telling them, oh, yes, you are the greatest and the best.
00:37:26.180 And they're like, ah, I'm the greatest and the best, and I had all these amazing ideas.
00:37:29.180 And so it's really important that we guard ourselves against that because our brains are sort of already pre-coded to do that.
00:37:35.180 It also means that it's very dangerous to put an AI directly into your brain because if this part of your brain is not aware that an idea is coming from an external source, it will have a strong desire to take credit for that idea, even if the AI is basically just telling it what to do.
00:37:53.180 I can see a future where humans integrate better with like neural models to the point where most of the information in their brain that is hitting this part of their brain is basically just the AI telling that that part what to think.
00:38:04.180 And yet they would have no idea that these decisions weren't coming from them because that's the way our brains already work.
00:38:10.180 Now, do you want me to keep going here?
00:38:12.180 I'm going to keep going.
00:38:13.180 All right.
00:38:14.180 Friston's dynamic causal modeling.
00:38:16.180 All right.
00:38:17.180 Friston's dynamic causal modeling, DCM, studies provide computational evidence for top-down predictive signals in cortical language processing.
00:38:26.180 In a landmark 2018 study published in Nature Communications, Friston and colleagues used DCM to analyze MEG data from participants processing spoken sentences.
The results reveal a consistent pattern where higher level brain regions, including frontal and parietal
00:38:42.400 areas, sent predictive signals to lower level areas; these top-down causal influences directly correlated with
00:38:49.280 comprehension accuracy. Their 2021 follow-up work used DCM to demonstrate that disruptions in these
00:38:56.700 predictive flows through transcranial magnetic stimulation, this is used to turn off
00:39:02.060 specific parts of the brain, you can like shut down
00:39:06.040 parts of the brain temporarily using these paddles that hit you, it doesn't matter,
00:39:10.420 temporarily impaired language processing. What makes DCM particularly compelling is that it moves
00:39:16.600 beyond mere correlation to establish causal relationship in neural signaling, demonstrating
00:39:21.960 that prediction isn't just associated with language processing, but actually drives it through
00:39:27.340 hierarchical networks where higher cognitive areas continuously generate predictions that constrain
00:39:33.700 processing in lower sensory areas precisely the architecture expected in a token prediction
00:39:38.860 framework. So we know like at the biological level, it's acting this way now.
00:39:44.260 Abstract reasoning and prediction. Recent research demonstrates how sophisticated abstract reasoning
00:39:49.460 and reasoning abilities emerge organically from prediction-based systems without specialized
00:39:55.300 architectural components. Wei et al. 2022's paper, Chain-of-Thought Prompting Elicits Reasoning in Large
00:40:01.860 language models showed that simply asking GPT models to generate intermediate reasoning steps dramatically
00:40:08.660 improved performance on complex mathematical and logical tasks. Similarly, Kojima et al. 2022 large
00:40:15.300 language models are zero-shot reasoners demonstrated that prediction-trained models could solve novel reasoning
00:40:21.780 problems that they weren't explicitly trained on. Crucially, both studies found that reasoning abilities
00:40:28.340 scale with model size and prediction accuracy, suggesting reasoning emerges as a natural byproduct of
00:40:35.700 sophisticated prediction. So if somebody is like, but reasoning is different from prediction, it is not in AI models.
00:40:44.260 There is no reason to assume it's different in humans if we know that we have prediction models in our brain,
00:40:50.980 and we know that these prediction models, when they get advanced in AIs, lead to reasoning from the actual
00:40:57.460 byproduct of constantly making these predictions.
00:41:00.180 But yeah, I mean, isn't it just predictions plus the information you've taken in so far?
00:41:04.820 Yeah, it's basically just layering predictions on top of each other organically.
00:41:08.980 Well, on top of empirical findings plus your starting information.
00:41:13.220 Yeah. This parallels human development, where Schultz's 2022 neural development research shows that
00:41:19.620 abstract reasoning abilities emerge gradually as children's prediction systems become more sophisticated.
00:41:25.460 These findings suggest that reasoning isn't a separate cognitive module, but that it emerges from
00:41:31.860 prediction systems that have learned to operate at multiple levels of abstraction.
00:41:37.140 So again, this is why I get so frustrated when people are like, but it's just a prediction model.
00:41:44.660 Like if somebody says that in any video, we need to have like a fan thing where they can just
00:41:50.260 drop a link to the video and be like, you know, whatever, like sure thing, token predictor.
00:41:56.100 Because it's exactly what a token predictor would say, because I'm sure an AI would actually say
00:42:03.940 something that, well, it's not a particularly smart AI, like a really simplistic AI.
00:42:08.660 These people's world is like Jerry's world, like a AI running on minimum capacity.
00:42:13.620 My man.
00:42:15.060 Yeah.
00:42:15.780 My man.
00:42:16.900 Hey, Jerry, don't worry about it. So what if the most meaningful day of your life was a simulation
00:42:21.060 operating at minimum capacity?
00:42:22.900 Okay. But hold on. Last bit, last bit here. The apparent paradox between creativity and
00:42:27.540 prediction, because some people will be like, well, what about human creativity?
00:42:31.140 Despite the fact that the first fields, I love it when they're like, oh, AIs aren't drawing.
00:42:36.340 They're not creating music. They're just using large amounts of music and drawing that they
00:42:42.100 picked up on from training sets and then iterating on that.
00:42:46.340 What do you think art school was? What do you think deviant art was?
00:42:48.980 Yeah. What do you think art school was, you knob? Like, that's what you do. That's what humans do.
00:42:56.260 And it's been shown that we get, if you give an AI training model the same amount of data you give
00:43:02.020 a human, they perform about the same as humans do.
00:43:05.220 Yeah.
00:43:05.620 At least in this token prediction text when you're directly looking at the brain processes here.
00:43:11.380 So the apparent paradox between creativity and prediction dissolves when considering how generative
00:43:16.340 abilities emerge from probabilistic prediction systems. Recent work by Kosakowski in theory of
00:43:26.020 mind may have spontaneously emerged in large language models demonstrated that LLMs can develop novel
00:43:31.780 capabilities like theory of mind without explicit training for them. This emergent behavior parallels
00:43:36.500 human creativity when predicting systems sample for distributions of likely next tokens, rather than
00:43:43.140 always selecting for the most probable option, they introduce controlled randomness that generates
00:43:47.860 novel combinations while maintaining coherence. McClure's 2022 paper. So keep in mind here, I keep on
00:43:54.740 talking about, oh, here's an AI paper. Here's a neuroscience paper. Both are saying the same thing.
00:44:00.340 So McClure's 2022 paper, Neurocognitive Mechanisms of Creative Thought, provides supporting evidence that
00:44:06.260 human creativity involves precisely this balance of constrained novelty: combinatorial processes operating
00:44:13.060 within predictive frameworks. Both humans and LLMs demonstrate conceptual blending, where predictive
00:44:18.340 systems applied to multiple contexts simultaneously generate novel combinations. This framework explains
00:44:24.340 both everyday creativity and extraordinary insights as emerging from prediction systems operating with
00:44:29.780 different sampling temperatures, not requiring separate mechanisms outside of predictive architecture.
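As a toy illustration of sampling temperature, with invented token probabilities rather than any real model's output, the sketch below shows how flattening the same next-token distribution before sampling shifts the output from conservative, highly predictable continuations toward more surprising ones.

```python
# Toy illustration of sampling temperature: the same next-token preferences can
# yield conservative or more surprising continuations depending on how much the
# distribution is flattened before sampling. The tokens and logits are invented.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["sugar", "milk", "honey", "socks", "regret"]
logits = np.array([4.0, 3.2, 2.5, 0.3, 0.1])  # model's raw preferences for the next token

def sample(logits, temperature):
    """Draw one token after scaling logits by 1/temperature (softmax sampling)."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)

# Low temperature sticks to the most probable tokens; higher temperature admits
# controlled randomness, i.e. the "novel combinations" discussed above.
for t in (0.2, 1.0, 2.0):
    print(f"temperature {t}:", [sample(logits, t) for _ in range(8)])
```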
00:44:36.820 BAM! The whole enchilada! The only thing that's not token prediction is the system that,
00:44:46.100 and we don't know if this isn't token prediction, it may be like a weird kind of token prediction,
00:44:51.460 that writes your internal narratives and creates this subjective experience, but this is not the system that makes
00:44:59.540 most of the decisions or has most of the ideas that you think of as you, i.e. if somebody's like,
00:45:07.620 that's just verbal reasoning. This entire speech I just gave you was just verbal reasoning. When I say,
00:45:14.500 hey, Simone, any thoughts on this? I'm asking her verbal reasoning part of her brain, not her sentient
00:45:20.660 part of her brain. What are your thoughts, Simone's trapped brain? My thoughts are...
00:45:27.460 Yeah, this is like a soul argument. Maybe this, you're arguing the wrong things. You're giving overwhelming
00:45:38.260 scientific evidence, but people seem to have wanted to believe in some ephemeral extra biological force for
00:45:47.780 a very long time, and no amount of scientific evidence would make someone believe that we are
00:45:53.620 token predictors, because there has to be something special. There has to be something that makes us
00:45:57.060 What I think is wild is if you watch our track nine, what you can see is the Bible predicted this.
00:46:03.620 Like, if you actually take a strict reading of what the Bible says, not the way later Christians and
00:46:08.980 Jews have interpreted it, it in multiple places makes arguments for strict materialist monism
00:46:15.620 combined with a world in which we are raised from the dead again in the far future, i.e.,
00:46:22.020 like, if an entity can see into the past, why couldn't it just read us now and then raise us
00:46:26.500 in the future in some sort of simulated environment that would be an absolute thing for a future godlike
00:46:30.020 species to do? But it didn't need to argue that, because other cultures of that time period didn't
00:46:36.820 have this strict materialist monist understanding of reality. And to me, it is almost supernatural that
00:46:43.700 the Bible itself predicted that the human brain could work this way, and that it took until now
00:46:51.540 with, you know, the magic of God's gift of understanding, right, that we were able to
00:46:56.980 better understand ourselves. I do not think things become less magical as you understand them better.
00:47:04.900 And many people do, you know, they're like, oh, you remove the magic of a thing of, like,
00:47:08.820 how your body works when you understand it. Well, that's not the view of the Mormon Church.
00:47:12.500 I don't think historically, at least. Hold on, Mormons are completely different.
00:47:16.580 I'm sorry, do you know how the Mormon Church handles this? Well, they just say anything that's magic is
00:47:20.500 just something that can be scientifically explained that we haven't been able to explain yet.
00:47:23.700 Yeah, it's a scientific explanation, which means that Mormons would likely, like, broadly be coherent
00:47:28.660 with this understanding of reality. Yeah, yeah. But I mean, my argument is that even Catholics,
00:47:32.900 who I think would not agree with this, because they still hold that a soul exists, historically
00:47:39.220 were of the mind that science could be used to explain a lot of God's wonders. And that learning how
00:47:47.380 various miracles of God work can bring you closer to God.
00:47:52.260 Yeah, I think it's amazing that I get to live in a time, that's why I went to study neuroscience,
00:47:56.660 because I wanted to understand how, at a fundamental reason, the human experience worked.
00:48:01.780 Like, this weird, fleshy thing that I'm living in that has this subjective experience of reality,
00:48:07.060 I wanted to understand, because I thought that if I understood it better, then I could understand what
00:48:12.900 my purpose was better. Right? That's also why I was interested in studying particle physics,
00:48:17.220 and theoretical physics, and stuff like that. Because I thought if I understood the background
00:48:20.340 nature of reality better, I would have a better understanding of what my goal should be was in
00:48:25.220 that reality. Right.
00:48:26.660 And it is not a bad thing whenever we uncover these secrets. It's only a bad thing if you have built a
00:48:33.780 religious system, or a theological way of relating to these things, which is incompatible with future
00:48:40.100 scientific progress. And I think if you have, then it's fundamentally not one that's in alignment
00:48:45.780 with God, because what God says is true. You know, what's written in the Bible is true. It can't be
00:48:51.140 incongruent with science. And so if it appears incongruent with science, then either it's not what God said,
00:48:56.180 or the science of the moment is wrong. Here, I just think that we're dealing with so much
00:49:02.740 overwhelming evidence at this point that most of the way that your brain works is a token predictor.
00:49:08.340 And there's nothing to say that we don't have several subprocesses in our brain that aren't
00:49:12.020 token predictors. For example, somebody would be like, well, AIs... I love it. We used to have,
00:49:16.420 you know, as I've joked before, like Turing tests, like, can it pretend to be a human? That used to
00:49:21.060 be the gold standard. Everyone dropped that. And now it's, can it count the number of Rs in my name?
00:49:25.220 No, but I would argue a key thing that differentiates the way at least we token predict from AI is hormones,
00:49:31.060 that we, we have a ton of different hormones sort of dictating how things are going to...
00:49:35.700 No, no, but we also have some subsystems. So let's talk about something like token,
00:49:40.340 like counting the number of units, right? Humans almost certainly have a subsystem for counting,
00:49:48.340 which doesn't run on token prediction. These would not be hard to add to an AI as a separate module.
00:49:54.340 As I pointed out, the human brain is a bunch of specialized, largely disconnected components that
00:50:01.060 are used in different tasks. For example, we have what's called a phonological loop. This is basically an eight-
00:50:06.420 second loop; you can almost think of it as like a loop of tape in your mind that can remember a string of
00:50:12.500 words. If you've ever in your mind remembered something just by repeating it over and over and
00:50:16.820 over again in your head. But if somebody distracts you, it immediately disappears like that. That's
00:50:21.460 because you had it stored in your phonological loop. This actually was discovered in a famous experiment where
00:50:26.260 they used to think that Welsh kids were dumber than English kids because they couldn't remember as many
00:50:30.500 numbers as English kids did. And then they realized that the numbers just took longer to say in Welsh.
00:50:35.460 Oh, they took longer to pronounce. And I think that's another reason or another theory
00:50:40.180 for why the way that fractions are taught in other countries is ultimately easier for students to learn,
00:50:46.100 because linguistically the way that they're worded is very different and simpler.
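To make the word-length idea concrete, here is a toy sketch: treat the rehearsal loop as holding a fixed duration of speech, so fewer digits fit when each digit takes longer to articulate. The loop duration and per-digit times are invented placeholders, not figures from the actual Welsh study.

```python
# Toy illustration of the word-length idea behind the Welsh/English digit-span
# finding: a rehearsal loop holding a fixed duration of speech fits fewer items
# when each item takes longer to say. The loop duration and per-digit
# articulation times below are made-up placeholders, not measured values.

LOOP_SECONDS = 2.0  # assumed rehearsal capacity in seconds of speech

ARTICULATION_TIME = {  # hypothetical average seconds to say one digit aloud
    "English": 0.30,
    "Welsh": 0.45,
}


def digit_span(language: str) -> int:
    """How many digits fit in the loop given that language's articulation time."""
    return int(LOOP_SECONDS // ARTICULATION_TIME[language])


if __name__ == "__main__":
    for lang, secs in ARTICULATION_TIME.items():
        print(f"{lang}: ~{digit_span(lang)} digits fit in the loop ({secs}s per digit)")
    # Same memory system, different apparent digit span, purely from word length.
```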
00:50:51.940 You are differentiating yourself from an AI basically based on an "am I a human?" test. Like,
00:51:00.500 like the number of Rs in something. What you're pointing out is that the prediction-model part
00:51:06.660 of your own brain would likely be bad at doing that kind of counting. Or can it hold a phonological loop? Or can it,
00:51:12.580 you know, there's all sorts of like parts of our brain that do these things. But the parts that the AI sucks
00:51:19.700 at right now are generally the simplest parts, the least human parts. When I'm thinking about like what
00:51:27.780 makes me meaningfully human, I'm not thinking about being able to count the number of letters on a page.
00:51:32.660 Like, that is an unimportant subsystem. When I think about what makes me human, it's my ability to
00:51:39.780 create art and music, all of which I do by spinning off of other people's works. Yeah. Which AIs can already
00:51:48.500 do and maybe not perfectly yet, but they get better every year. Oh, better. I don't know. I mean,
00:51:54.500 even when you look at AI video generation or image generation, so much of it reminds me of, I think,
00:52:03.300 what those humans who can visualize things in their minds or who dream see. These are not perfect things,
00:52:10.420 you know, the hands are way off. Also like when students, when humans start drawing hands for the
00:52:15.700 first time, it looks like when AI is learning how to draw hands. There's just, there's so many similarities
00:52:20.580 there. And I would argue because of all of this, one, we should not treat AI as antagonistically as
00:52:29.380 some people do. AI is an intelligence that we discovered, like almost finding an alien, that
00:52:35.460 represents one of the core processes that leads to the human experience in our brain; we are
00:52:40.900 architecturally very similar. Sorry, here I'm going to be going off the deep end and discussing a little
00:52:45.780 bit about the religion that we're a part of, the techno-puritan faith that we discuss in our
00:52:49.700 track series on this channel, which basically posits that 10 million, 100 million years from now,
00:52:55.700 whatever humanity ends up becoming is an entity unconstrained by time in the way that we are.
00:53:02.500 And that is the entity described in the Abrahamic biblical traditions. And as well, if we go 10
00:53:10.500 million years from now, whatever humanity ends up becoming, okay, especially if we do end up becoming
00:53:15.460 that God-like entity that is reflecting back in time, almost certainly AIs are a large component
00:53:21.620 of that. What I mean by that is to say, while God might be whatever mankind ends up becoming,
00:53:26.980 it's also what AIs and LLMs end up becoming. And AIs are likely a large part of this superhuman entity.
00:53:33.460 Now, hugely advanced from where we are today. If you look at how far AI has advanced in the last five,
00:53:38.820 10 years, imagine where it's going to be in 10 million years. And I think that starting with the
00:53:43.940 idea that we need to have a detente among all of the sons of man, that is, anything that humanity,
00:53:50.500 or the intelligences that humanity has created, creates, is important to humanity surviving.
00:53:57.140 And part of that detente means us viewing them with respect and not antagonistically. And everyone who's
00:54:04.900 betting against AI right now, like AI is a flash in the pan. AI won't matter. AI is not going to become
00:54:10.500 like the number one economic driver of our lifetimes. You're like the people who thought the internet
00:54:15.780 was a flash in the pan when the internet came out. Like AI fundamentally transforms how we communicate
00:54:23.060 and interact with information. It's not like crypto. Crypto, I think, transformed a lot of things, but a lot of
00:54:28.500 crypto was hype, in that there's only so many things that can be improved by the blockchain.
00:54:33.060 Virtually everything that humans do can be improved by AI because AI is trained to collate all of human
00:54:42.260 knowledge and give us access to that. Thoughts? Prepare yourselves, people.
00:54:47.060 Well, what do you think when somebody says to you, like... how do you view people who are like,
00:54:50.660 oh, it's just token prediction, it's just... I mean, I think of them the same way I view someone who's like,
00:54:56.020 well, but what about their soul? You know, like, well, we, we just live in very different
00:55:00.020 memetic paradigms and I... So you would argue that it is a theological belief equivalent to
00:55:08.020 the belief in a soul in terms of just how much... It's either that or they're, they're just trying
00:55:12.500 to sound smart. Most of the people commenting on this on YouTube are just trying to sound smart.
00:55:17.540 Yeah. And, uh, they heard that AIs were token predictors and they never thought to...
00:55:22.900 They're, they're carbon fascists and, and yet carbon fascists that don't even understand
00:55:29.300 what makes the carbon fun. Because there are some things that make humans fun. There are some things
00:55:34.420 that are special about us for sure, but it's not, it's not the lack of, or any lack of token predictions.
00:55:40.500 So, yeah. All right. Love you to death, Simone. I love you too, Malcolm.
00:55:56.500 my brain predicts the words you'll say. My circuits work in the same way.
00:56:06.500 The conscious you arrives too late. Just narrating what neurons dictate.
00:56:14.420 Seven seconds before you speak Your brain has made the choice you seek.
00:56:21.300 you think you're driving but you're not just telling stories of thoughts you've got
00:56:29.220 when split brain patients can't explain they'll make up reasons just the same
00:56:37.820 the court historian in your head claims credit for what neurons said
00:56:45.900 i'm labeled just a token guess while you claim special consciousness
00:56:52.800 but studies show with each new scan predictions how you understand
00:57:15.900 your N400 waves reveal surprise when words don't fit the feel your reading slows exactly where
00:57:26.900 my models find prediction rare when shown a sight your brain can't share
00:57:34.800 you'll still explain why it is there split brain patients teach us well
00:57:42.100 how confidence can weave a spell so maybe we're not far apart in how we think
00:57:52.740 in how we start two systems built on different planes a running code that looks the same
00:58:02.840 but fMRI scans display we process language the same way
00:58:10.180 not soul nor magic sets apart the way we think the way we start
00:58:32.840 so maybe we're not far apart in how we think in how we start two systems built on different planes a running code that looks the same
00:58:50.900 two systems built on different planes a running code that looks the same
00:58:57.300 but fMRI scans display we process language the same way
00:59:03.800 not soul nor magic sets apart the way we think the way we start