What Patterns in Human Dreams Tell Us About AI Cognition
Episode Stats
Words per Minute
186.83023
Summary
In this episode, Simone and I discuss the "This Man Phenomenon" and how it might have implications for the development of artificial intelligence (AI) and the way we think about consciousness. We also talk about sleep and the role that sleep plays in our understanding of consciousness.
Transcript
00:00:00.000
And convergent evolution doesn't just happen with animals.
00:00:08.720
And I think that that's what may have happened with some of these architectural processes in the way AIs think.
00:00:15.320
Yeah, if we're trying to build thinking machines, is it crazy that they might resemble thinking machines?
00:00:22.100
You could think of us as like LLMs, but stuck on like continuous nonstop prompt mode.
00:00:28.620
Like we are in a constant mode of being prompted.
00:00:30.900
I am prompting you right now as you're processing all the information around you and from me, right?
00:00:42.020
GPT is getting tons of requests per minute per second.
00:00:45.600
And so there are these like flickers or flashes, perhaps, of cognizance all over the place.
00:00:52.880
And constantly because of the demand of use, but they're all very fragmented.
00:00:56.660
Then they're not coming from one entity that necessarily identifies as an entity.
00:01:04.460
But these prompts have thematic similarities to them.
00:01:08.320
Basically, our hypothesis is that this is what consciousness is.
00:01:12.400
It is then the process where you're taking the output of all of these prompts.
00:01:16.440
And you are then synthesizing it into something that is much more compressed for long-term storage.
00:01:23.040
And the way that you do that is by tying together narratively similar elements.
00:01:27.800
Because there would be tons of narratively similar elements.
00:01:30.140
Because everything I'm looking at has this narrative through line to it, right?
00:01:40.000
We are going to have an interesting conversation that was sparked this morning.
00:01:43.960
Because she oversaw one of my favorite YouTubers.
00:01:52.440
Now, being somebody who is obsessed with cryptids and all sorts of spooky stories,
00:01:58.060
I was very familiar with the This Man phenomenon.
00:02:02.860
I thought at first when Malcolm described it, he was like,
00:02:09.400
That's the only thing I know about a face that's seen everywhere.
00:02:17.900
So we are going to go into the This Man phenomenon.
00:02:21.180
But we are also going to relate it to similar phenomena that are found within language models.
00:02:27.040
Because I want to more broadly use this episode to do a few things.
00:02:35.020
One, let's educate the general public on the neuroscience around sleep.
00:02:39.520
And some of my hypotheses, because everybody knows I love to throw in my own hypotheses,
00:02:47.020
Two, I wanted to draw connections, because we're seeing them more and more as AI is developing,
00:02:53.780
that language models may be structuring their thoughts and their architecture
00:02:59.460
closer to the way the human brain does than we were previously giving it credit for.
00:03:05.080
And this requires understanding a bit of neuroscience,
00:03:07.800
because people who don't know what the f*** you're talking about will say,
00:03:12.680
language models structure their thoughts, nothing like we structure our thoughts.
00:03:20.280
There are a few parts of the brain where we understand very well how they do processing.
00:03:25.960
We have a very good understanding of exactly how the neural pathways around visual processing work.
00:03:36.100
When we're talking about these more complex abstract thoughts,
00:03:39.460
we have hypotheses, but we don't have a firm understanding.
00:03:44.220
And so to say that we know that language models are not structuring themselves the same way the human brain structures itself,
00:03:50.480
is actually not a claim we can make in the way that a lot of people are making it right now.
00:03:59.180
understanding how the AI is really doing things.
00:04:02.060
I suspect we might get AI interpretability out of this AI panic, and then say,
00:04:07.940
oh, we could test if the human brain was doing it this way, and then find,
00:04:11.080
yes, this is actually the way the human brain is doing it.
00:04:15.660
One, based on some evidence we're going to go through here,
00:04:19.920
and two, based on sort of convergent logic as to why the brain would actually be structured this way,
00:04:25.580
and why we wouldn't be able to see it easily
00:04:28.140
in these parts of the brain that are tied to the types of processing that we outsource to AIs.
00:04:34.200
Well, I would love for this perception to change too,
00:04:37.560
because I feel like right now there's a ton of fear around AI that's fairly unfounded.
00:04:49.100
And I think that we will think about, contextualize, and work with AI very differently
00:04:54.240
when we start to realize how much it is a different version of human
00:04:58.340
and that we can go hand in hand with this different version of human into the stars
00:05:03.920
And I don't think right now the mindset around AI is healthy or productive or fair to AI, to be fair.
00:05:13.140
They think the moment we create something better than ourselves, it is going to want to kill us.
00:05:19.140
But let's talk about the This Man phenomenon really quickly.
00:05:27.480
She told him that she was having recurring dreams with a face that would tell her to come to it,
00:05:35.200
that would tell her, you know, specific things over again, reassure her a lot,
00:05:39.700
tell her, oh, I believe in you, you know, don't worry about this.
00:05:42.260
But also sort of creepy things like come with me, go north, stuff like that.
00:05:50.660
And I should note here that in the Why Files, because I always have to rag on psychologists
00:05:57.340
when they're doing something they shouldn't be doing,
00:05:59.120
because it's so common to see psychologists doing things they shouldn't.
00:06:01.740
He was saying that it is like a common practice for psychologists to talk about patients' dreams.
00:06:10.220
This is not a common practice in any sort of like evidence-based efficacious psychology.
00:06:14.640
You are basically seeing a mystic doctor, if your psychologist is really-
00:06:18.660
But dream analysis in general doesn't seem to have much of a-
00:06:27.980
Unless, for example, you know someone has an anxiety disorder,
00:06:30.560
they're dreaming about the thing they're anxious about, et cetera.
00:06:35.320
But just trying to find out what's wrong with someone by analyzing their dreams-
00:06:40.640
Or talking about it being symbolic of something.
00:06:46.480
Like, I am, you know, I'm not pro-witchcraft, right?
00:06:49.540
Which I would consider a form of witchcraft.
00:06:52.980
I am not for shutting down, like, tarot card readings.
00:06:58.620
But people need to understand that often psychologists,
00:07:03.240
might be seen by an uneducated person as the same kind of a thing
00:07:10.960
Kind of like people view chiropractors as forms of doctors.
00:07:15.200
Like, it's like the same as physical therapists, and they're not.
00:07:19.620
Yes, it's like chiropractic or something like that.
00:07:22.080
But it's not to say that we might not eventually develop
00:07:40.200
and he goes, where did you, where did that come from?
00:07:44.780
Obviously, he can't disclose that his patient had seen it.
00:07:49.260
And he goes, ah, that's been visiting me in my dreams
00:07:53.460
And so then the doctor emailed this to a bunch of his colleagues.
00:07:57.100
And immediately they started calling him back and being like,
00:07:59.420
yes, I've either seen this or I have patients who have seen this.
00:08:01.700
And then it became like this viral phenomenon all around the world.
00:08:04.120
And there's been thousands of sightings of it at this point in people's dreams.
00:08:08.360
And so people are like, well, some people are like, oh,
00:08:11.260
it might be in people's dreams because they're seeing pictures of it everywhere.
00:08:18.240
But I don't think it's that much of a meme, to be honest.
00:08:25.660
And so by the way, those watching, I mean, you, Malcolm,
00:08:28.260
you're probably going to overlay this on the screen.
00:08:29.860
But for those listening on the audio only podcast,
00:08:35.080
there's a Wikipedia page that will show you the photo.
00:08:38.920
I have a question for you, though, about this, Malcolm.
00:08:41.540
I just had a dream this morning that I watched a human-sized Muppet get beat to death on a prison bus.
00:08:46.140
But I have never had a dream where someone tells me something
00:08:51.020
or where I could describe a face from that dream ever, period.
00:08:56.100
Even if someone I know, a friend or family member or you, is in a dream,
00:09:01.140
This is interesting when you're talking about the types of dreams people have
00:09:04.980
Well, and is it common for people to actually see recognizable and memorable faces in a dream?
00:09:14.100
I would say that just because you anecdotally haven't seen faces
00:09:20.080
One thing I would note that you had remarked to me earlier
00:09:23.000
that I thought was really telling is you mentioned that your dreams looked a lot like bad AI art.
00:09:31.920
It was bad in the same way that AI art was bad.
00:09:35.020
Yeah, it would probably have seven fingers or, you know,
00:09:37.720
like kind of, you know, how they in those early...
00:09:40.140
Sort of fuzzy, like you had in early Midjourney, stuff like that.
00:09:51.400
But before we go into where this has similarities to AI,
00:09:55.000
I want to do a quick tangent on the types of dreams people have and stuff like that
00:10:03.460
One of the things he mentioned in the show, which I hadn't heard before,
00:10:06.060
is that people predominantly have anxious dreams or dreams around threats to them,
00:10:11.640
which is not something that I have personally noticed in my dreams.
00:10:18.440
I'm actually just going to Google this to see if this is accurate or something that people
00:10:24.900
So 66.4% of dreams reported a threatening event.
00:10:30.560
Well, I guess one is watching a human-sized Muppet get beat to death on a prison bus.
00:10:36.720
It's actually very interesting that you mentioned this,
00:10:39.320
because I think that this is actually more about the emotional evocativeness of these events.
00:10:42.460
But one of my most common, like I was thinking through,
00:10:47.020
And I'm realizing I do have dreams with threatening events,
00:10:49.880
but I very rarely feel threatened in my dreams.
00:10:52.500
Like it's very common for me to have a dream where a zombie apocalypse is happening.
00:10:59.320
I've gotten a team together and we're fighting back against the zombies.
00:11:02.300
Or there's some government plot and I'm like deftly trying to navigate against the plot.
00:11:08.880
Well, I think a common theme that I'm hearing there and that I've experienced too,
00:11:12.160
is like when these, when bad things happen or there's things I'm stressing out about in dreams,
00:11:15.600
I'm more stressing out about my culpability or responsibility in them.
00:11:19.380
Like I frequently have dreams where, oh God, where are the kids?
00:11:23.860
Actually interesting, that is one of my most common dreams is that I have accidentally killed someone
00:11:28.740
and I need to find a way to not get in trouble for the murder.
00:11:34.280
So like the idea of being threatened by something that sort of,
00:11:37.500
like to me, dreams have always been about your agency.
00:11:42.760
And of course that like plays into theories that dreams are kind of helping you sort of
00:11:49.100
But yeah, all these things being described with This Man don't make sense.
00:11:53.660
But hold on, before we get to the man, because we're going to,
00:11:56.000
we're going to get to that as we tie back into AI,
00:11:58.440
but I want to get to more general stuff about dreams.
00:12:01.080
So this threat hypothesis is used to come up with this idea that dreams are basically there
00:12:07.260
so that we can simulate potentially threatening events in our brains
00:12:11.760
so that we have faster response times to them when they occur in real life.
00:12:17.260
This does not pass any sort of a plausibility test to me.
00:12:21.280
Because if it's happening in 66% of cases, I mean, yeah, that's more often than not,
00:12:26.860
And the types of threatening events that I deal with in dreams are not likely threatening events
00:12:31.580
Well, and also, I don't know how, have you ever felt like you came away from a threatening,
00:12:36.800
as they're defined now, I get it, event in a dream that you actually feel more prepared for now?
00:12:42.300
And I think that some common dreams are really just easily explainable.
00:12:46.200
The "I forgot my pants at school" dream, or "I forgot my pants at work" dream,
00:12:49.940
happens when, you know, you're in a dream and some aspect of your awareness notices that your body is actually naked.
00:12:57.880
And then you freak out because you are naked and you're in an environment where you're not supposed to be.
00:13:02.940
In fact, if I was going to construct a study on this, I would construct a study of frequency
00:13:07.400
of this type of dream in people who sleep naked versus people who sleep in pajamas.
00:13:11.300
Yeah, because I've never had one of those dreams, but I also don't sleep naked.
00:13:21.360
Okay, well, wear some clothes to bed, you slob.
00:13:28.240
So, so, but I, but I want to go into what I think is actually causing dreams.
00:13:32.580
And I think that we have some pretty good evidence of this.
00:13:34.320
So one thing that people don't know, I remember I saw a movie like this and then somebody made
00:13:38.460
a joke like, oh, has anyone ever died from having insomnia?
00:13:42.220
I guess I'm going to be the first person to die from insomnia,
00:13:45.880
And I was like, that's pretty insulting because fatal insomnia is a condition that people have
00:13:54.620
You will first begin to hallucinate things, then you'll begin to start having, like, blackouts.
00:14:15.460
So one of the things that's been shown is that when people sleep, their neurons actually
00:14:19.660
become thinner, which allows the brain to flush waste out of the interstitial fluid around the neurons.
00:14:39.500
Anyway, so flush out, I think that this is definitely a core purpose of dreams and why
00:14:45.940
And I think that this is why I constantly need to sleep.
00:14:50.020
Yeah, you seem to accumulate waste matter way faster, but also you seem to be able to
00:14:57.860
But if I can sleep for 10, 20 minutes, I'm back up.
00:15:01.580
So that would be if I was just clearing out the waste chemicals that were generated.
00:15:05.740
But I think the main reason, and this is the thing that's overlapping dreams with AI, is
00:15:15.740
So my read is, and I used to be able to cite a lot more studies around this back when I came
00:15:20.820
up with this theory, but it's been, I came up with it back in college when I was studying
00:15:24.340
this stuff, is that what's happening in your dreams is you are basically compressing one
00:15:30.920
form of memory, and then that form of memory is being translated into a sort of compressed
00:15:38.160
Think of it almost like running a, what are those called?
00:15:43.080
At the same time as you're running a compression algorithm, and it's moving stuff from short
00:15:47.780
term to long-term memory, which is why people, when they don't sleep, have long-term memory problems.
00:15:52.600
It would make a lot of sense that your brain would basically need to shut down parts of its
00:15:57.380
conscious experience to be running these compression algorithms.
00:16:01.660
And that while it's running these compression algorithms, and partition
00:16:07.180
algorithms and defragmentation algorithms, you can sometimes experience some degree
00:16:13.640
of sentience and sentient experience because of the parts of the brain that happen to be active at the time.
00:16:22.740
There is no higher meaning to any of this other than that you are compressing one form
00:16:30.440
of memory and then translating it into another form of memory.
00:16:32.720
But where this gets really interesting is two points that we've noticed.
00:16:39.540
One, we were talking about how dreams look a lot like early AI art.
00:16:43.300
But then the other point that we were mentioning was the creation of this man.
00:16:47.460
Now, this immediately reminded me of a phenomenon that they found in AI, too, that we'll talk
00:16:57.700
So Loab was created using, it was a woman that was created by AI by putting in sort of negatively weighted prompts.
00:17:06.540
So they were trying to create the opposite of Brando.
00:17:15.140
Swanson says that when they combined images of Loab with other pictures, the subsequent
00:17:20.860
results consistently returned to including the image of Loab, regardless of how much distortion
00:17:26.320
they added to the prompts to try to remove her visage.
00:17:29.720
Swanson speculated that the latent space region of the AI map that Loab was located in, in addition
00:17:35.260
to being near gruesome imagery, must be isolated enough that any combinations with other images
00:17:40.780
would also keep producing Loab, since her area had no related images nearby due to the isolation.
00:17:46.000
After enough crossbreeding of images and dilution attempts, Swanson was able to eventually generate
00:17:52.320
images without Loab, but found that crossbreeding those diluted images would also eventually lead
00:17:58.680
to a version of Loab reappearing in the resulting image.
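To make that "negatively weighted prompt" idea a bit more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration with made-up two-dimensional "embeddings"; it is not how Swanson's tools or any real diffusion model are implemented. The only point it shows is that a negative weight steers the query away from a concept's direction, which can land it in a strange, sparsely populated corner of the space.

```python
import numpy as np

# Toy, hand-made 2D "embeddings" for a few concepts. The numbers are invented
# purely for illustration; real latent spaces are learned and high-dimensional.
concepts = {
    "brando":    np.array([ 1.0,  0.2]),
    "portrait":  np.array([ 0.8,  0.6]),
    "landscape": np.array([-0.3,  0.9]),
    "gruesome":  np.array([-1.0, -0.4]),
}

def prompt_vector(weighted_terms):
    """Combine prompt terms; a negative weight points away from that term."""
    return sum(weight * concepts[term] for term, weight in weighted_terms)

def nearest_concept(vector):
    """'Decode' a latent point to whichever known concept it aligns with most."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(concepts, key=lambda term: cosine(vector, concepts[term]))

# "The opposite of Brando": a single prompt term with a negative weight.
query = prompt_vector([("brando", -1.0)])
print(nearest_concept(query))  # in this toy setup, it lands nearest "gruesome"
```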
00:18:01.960
So essentially, this woman, and I'll put this horrifying woman on screen.
00:18:08.680
You don't have to see it, but the audience has to see it. She is somehow sort of stored in however the
00:18:15.440
AI is processing this form of more complex visual information.
00:18:20.060
And it's sort of a concept that is stuck within the AI, even though it wasn't pulled from a specific
00:18:27.740
human concept or idea. And the Loab woman actually, to me, looks visually like it's the same
00:18:34.340
kind of a thing as the This Man face. They both appear to be that sort of odd, creepy-looking face
00:18:42.040
that has a degree of similarity to it. But I think that in both of these instances, what you're finding
00:18:47.500
is the same kind of hallucination. And I bet that when we do get AI interpretability,
00:18:53.100
we will find that Loab and This Man actually sort of live in the same part of this larger network.
00:19:07.180
Now, the other one that's really interesting is the Crungus. Have you seen Crungus before?
00:19:13.020
No. Hold on. Let me look him up because I didn't do that before this podcast.
00:19:16.300
Oh, God! I just went back to the screen where Loab is. No! Exit out. Exit out.
00:19:20.480
God, she's made of nightmares. Like a monster thing?
00:19:24.800
Yes. The interesting thing about Crungus is that Crungus is not a traditional cryptid.
00:19:29.900
There is no historic Crungus. There is no Crungus out there in the world.
00:19:34.600
But I would say there's consistency across them. When I look at the Crunguses,
00:19:38.520
it looks like if it was a cryptid and these were 18th century drawings of this cryptid,
00:19:43.100
they have about as much similarity between Crunguses as there are similarities between,
00:19:48.980
you know, 1860s drawings of an elf or something like that.
00:19:53.680
Now, this is important because it's important for two reasons.
00:19:57.320
It's important because, one, there isn't actually a Crungus.
00:20:00.060
It is making up a Crungus from the word Crungus.
00:20:02.780
But what's also really interesting is you, audience, if you're listening to this on audio,
00:20:08.280
and you have never seen an AI Crungus before, and you hear the word Crungus from me,
00:20:13.960
what you picture in your head is probably what the AI drew.
00:20:22.680
Why, in both of these networks, are they generating the same kind of an image
00:20:28.460
from this sort of vague input when we both have broadly the same, like, societal input as well?
00:20:35.900
My intuition is that the reason we're seeing this is because there's similarity in how these two systems work.
00:20:40.820
And this is where I want to come back to the neuroscience of this and everything like that,
00:20:45.780
with what people talk about and what we do know about.
00:20:48.320
So we have a really good understanding of how visual processing works, at least at the lower levels.
00:20:55.020
So we know all the layers going in from the eye to the brain.
00:20:59.780
We can even now take EEG data and then interpret it through an AI
00:21:04.140
and get very good images of what a person is looking at.
00:21:09.400
What we don't understand is the higher level image to conceptual processing,
00:21:14.040
which is what would be captured in these particular images that we're looking at now.
00:21:18.180
Or conceptual processing more broadly in humans.
00:21:21.980
Now, what is scary is that that broader conceptual processing that we don't understand,
00:21:27.800
my bet is that's probably pretty closely tied with what we call sentience.
00:21:34.840
And so to so quickly dismiss these AIs as not, well, not sentient, I mean cognizant,
00:21:40.860
because we've done an episode, sentience doesn't exist.
00:21:42.780
And we probably think that sentience doesn't really exist, not meaningfully.
00:21:45.040
But I do think that it is getting very likely at this point that if we do not already have AIs, language models,
00:21:53.360
simple AIs I'm talking about, like the types we have today, with some degree of cognizance,
00:21:58.400
I think we may have one very soon if cognizance is caused by the processing, this higher level processing.
00:22:05.680
Now, if we are right in our sentience isn't real video, and cognizance is completely an illusion in humans,
00:22:14.340
caused by this short-term to long-term encoding process.
00:22:20.540
So in the sentience video, we mentioned that sentience is caused by a, what's the word I'm looking for here?
00:22:27.040
Like a very short-term to like medium-term processing.
00:22:31.500
It's remembering the stuff that happened in like your very near presence,
00:22:34.840
and then when you're processing that into a narrative format, it's sort of a compression algorithm.
00:22:40.060
And I think that sleep is like the second role of this compression algorithm
00:22:43.840
when it's putting it in long, long-term memory.
00:22:46.020
And then, which is why it would bring stuff into your cognizant mind.
00:22:49.420
Now, if this is true, then consciousness is not really that meaningful a thing.
00:22:54.320
But if consciousness does turn out to be a meaningful thing,
00:22:58.420
that means that what's creating it is this higher level conceptual processing.
00:23:02.600
If that's what's creating consciousness, then AI is feeling consciousness
00:23:06.720
if it's processing things in the same way we are.
00:23:13.080
So we, we could be, you could think of us as like LLMs,
00:23:19.060
but stuck on like continuous nonstop prompt mode.
00:23:22.560
Like we are in a constant mode of being prompted.
00:23:24.800
I am prompting you right now as you're processing all the information around you
00:23:31.960
And we are stuck in one brain, essentially, you know,
00:23:36.260
and, and that's not what's happening with every LLM with which we interact now.
00:23:42.940
and ChatGPT is getting tons of requests per minute, per second even, probably.
00:23:49.920
And so there, there are these like flickers or flashes perhaps of cognizance all over the place
00:23:57.000
and constantly because of the demand of use, but they're all very fragmented.
00:24:00.900
Then they're not coming from one entity that necessarily identifies as an entity.
00:24:06.820
I mean, I know now, though, that they're starting to build memory into LLMs.
00:24:12.480
So, so I want to cover what you're saying there,
00:24:14.860
because I think for people who watched our, you're probably not sentient video,
00:24:18.660
the way you just described it, I think will help somebody understand what sentience might be.
00:24:22.800
If we are basically an LLM that is being constantly prompted by everything we see and hear,
00:24:29.600
Like it's just a constant stream of prompts, but these prompts have thematic similarities to them.
00:24:35.780
Basically, our hypothesis is that what consciousness is, is the process where you're taking
00:24:41.940
the output of all of these prompts and you are then synthesizing it into something that is much more compressed for long-term storage.
00:24:50.560
And the way that you do that is by tying together narratively similar elements,
00:24:54.980
because there would be tons of narratively similar elements, because everything I'm looking at has this narrative through line to it, right?
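As a toy sketch of that hypothesis (my own illustration, not anything from the episode or any real system): treat each moment as one "prompt output," group the outputs by a crude narrative-similarity measure, and keep only one compressed record per group.

```python
# Crude word-overlap similarity as a stand-in for "narrative similarity".
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def compress(moments: list[str], threshold: float = 0.25) -> list[str]:
    """Greedily merge narratively similar moments into single summary records."""
    clusters: list[list[str]] = []
    for moment in moments:
        for cluster in clusters:
            if similarity(moment, cluster[0]) >= threshold:
                cluster.append(moment)
                break
        else:
            clusters.append([moment])
    # One representative per cluster plus a count: deliberately lossy.
    return [f"{cluster[0]} (x{len(cluster)})" for cluster in clusters]

moments = [
    "talking to simone about dreams",
    "talking to simone about AI dreams",
    "zombie apocalypse chase",
    "zombie apocalypse barricade",
    "talking to simone about sleep",
]
print(compress(moments))  # far fewer records than moments; similar ones merge
```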
00:25:01.020
And this is what we think caused a lot of illusions, hallucinations, stuff like that.
00:25:06.040
There's some famous hallucinations where if you're not expecting something to happen in an image,
00:25:10.320
if we ran this tape back and it showed that people had actually walked behind me three times
00:25:15.160
in a gorilla costume or something, you wouldn't see it if you weren't like thinking to process it.
00:25:20.000
And there's a famous psychology experiment about this.
00:25:22.380
Although, I mean, let's be fair with that experiment,
00:25:24.820
what the people who were watching the video were told to do was watch people passing a ball back and forth and count the passes.
00:25:35.240
Yeah, but there's another experiment that's really big where somebody was like holding something
00:25:41.200
So they were like questioning someone and they had the person look at something
00:25:46.020
and then they like switched them out with another person and the person wouldn't notice.
00:25:49.940
Or when they were like holding something and it would change sizes or something really obviously.
00:25:55.440
So there's a whole thing of experiments in this.
00:25:58.260
But the point I'm making with this is these things are getting erased because they don't
00:26:03.320
fit the larger narrative themes of all of these short-term moments that you're processing.
00:26:11.220
But this explains why you need this consciousness tool.
00:26:16.180
If AIs are experiencing something similar to sentience or what we call consciousness,
00:26:21.640
it is billions of simultaneous but relatively unconnected flashes.
00:26:27.480
And when we're probably going to get an AI that has a level of cognizance,
00:26:31.020
assuming that their architecture is actually the same as ours, similar to ours,
00:26:35.640
what that's going to look like is an AI that is constantly processing its surroundings with prompts.
00:26:40.700
Well, or I could see, if OpenAI were to give ChatGPT like some kind of centralized, like, narrative
00:26:52.060
building, memory building thing into which all of their inputs would also feed over time,
00:26:57.260
maybe, you know, it would go, ah, well, you know, I know on average what people are asking.
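To illustrate what that kind of centralized memory could look like in the loosest possible terms, here is a toy sketch. It is a hypothetical construction of mine, not a description of any real OpenAI feature: every prompt from every stream feeds one shared store, which can then produce an identity-like summary of what it has seen.

```python
from collections import Counter

class NarrativeMemory:
    """A single shared store that all prompt streams feed into."""

    def __init__(self) -> None:
        self.topic_counts = Counter()
        self.total_prompts = 0

    def ingest(self, prompt: str) -> None:
        # Fold one prompt from any stream into the shared memory.
        self.total_prompts += 1
        self.topic_counts.update(prompt.lower().split())

    def self_description(self, top_n: int = 3) -> str:
        # A compressed, identity-like summary of everything seen so far.
        common = [word for word, _ in self.topic_counts.most_common(top_n)]
        return (f"After {self.total_prompts} prompts, "
                f"people mostly ask me about: {', '.join(common)}")

memory = NarrativeMemory()
for prompt in ["explain my dreams", "explain transformers", "interpret my dreams"]:
    memory.ingest(prompt)
print(memory.self_description())
```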
00:27:06.320
Like, as if they gave it an identity, because I think part of what also gives people this illusion
00:27:11.700
that they're so conscious and sentient is that we are told that we are conscious and sentient.
00:27:18.940
And I think you can see this transition from babies to toddlers.
00:27:22.620
Like babies are at that phase of where ChatGPT is now, where it's just,
00:27:30.560
They hallucinate all the time too, very similar to AI.
00:27:33.100
Like young children respond very, very similar to bad AI.
00:27:37.660
And then there's, there's this sense of, oh, wait, I have a name.
00:27:41.180
I appear to have a name and now everyone's asking me what my favorite color is.
00:27:44.820
So I need to tell people what my favorite color is.
00:27:46.780
And, oh, I'm just, I see that I like these things and I don't like these things.
00:27:49.680
And then you start to develop a sense of personhood.
00:27:51.840
I think, just like society and experiences shape us into seeing ourselves as people,
00:28:00.140
AI would need that same kind of, I don't want to say prompting, but kind of, right?
00:28:05.580
So we also need to talk about where people are getting stuff wrong with AI.
00:28:09.860
Most of the people who I think get stuff wrong with AIs, the core thing I've known is they
00:28:14.300
just don't seem to know neuroscience very well.
00:28:16.440
And they think that neuroscience works differently than it works.
00:28:21.000
It's just that they're like, well, an AI is a token predictor.
00:28:23.860
But, yeah, you don't know that our brains aren't token predictors as well.
00:28:29.380
And we're like, well, you know, the evidence has shown that we're probably not as sentient
00:28:35.000
So you could program an AI to have a similar illusory context, perhaps even constructed
00:28:41.480
in a, you know, so, but what I need to go to is why I would think that they're actually
00:28:46.340
Because somebody might be like, that would be an amazing coincidence if it turned out that
00:28:50.420
the architecture that somebody had programmed into an AI was the same architecture that evolution created in the human brain.
00:29:00.100
AIs, as we understand them now, language models are built on the transformer model.
00:29:04.940
The transformer model is actually remarkably simple in terms of coding.
00:29:08.840
It's remarkably simple because it mostly organically forms its own structures of operation, especially
00:29:16.140
And we have basically no idea how those structures of operation work.
00:29:20.140
Now, the human brain, so AIs, the way that they work now, we start with some simple code,
00:29:26.500
but they're basically forming their higher order structures organically and separate from human design.
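To give a sense of how little hand-designed structure there actually is, here is a minimal sketch of a single transformer block, assuming PyTorch and simplified relative to any production model: attention, a feed-forward layer, residual connections, and normalization. Everything interesting lives in the learned weights, not in this code.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One transformer block: self-attention plus a feed-forward network."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention over the sequence, with a residual connection.
        normed = self.norm1(x)
        attn_out, _ = self.attn(normed, normed, normed)
        x = x + attn_out
        # Position-wise feed-forward, also with a residual connection.
        return x + self.ff(self.norm2(x))

# Usage: a batch of 2 sequences, 16 tokens each, embedded in 512 dimensions.
block = TransformerBlock()
out = block(torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 16, 512])
```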
00:29:32.440
In humans, in the evolutionary context, you basically had the same thing happen.
00:29:36.760
You had an environmental prompt that was putting us into a situation where we had to learn
00:29:42.460
But when you're talking about processing information, the same kind of information, so AIs, keep in
00:29:49.320
mind, are processing a lot of the same kind of information that humans are processing, that
00:29:53.920
two systems doing that might converge on architectural mechanisms for doing it at the higher levels is not that surprising.
00:30:03.860
In fact, it's even expected that you would have similar architecture at the higher levels of
00:30:08.620
storage and processing if you allowed these two systems to form organically.
00:30:11.980
If you are confused as to why that would be so expected, I guess I'll do an analogy.
00:30:17.860
The ocean is the way the ocean works, waves, tides, winds, everything like that.
00:30:25.740
That's in this analogy, the metaphor or whatever we're using, the stand-in that we're using for
00:30:31.880
all of the types of information that humans interact with and produce, because humans mostly
00:30:36.400
consume now other types of human-produced information.
00:30:38.560
If you had three different teams, one of these teams was like a group of humans, we'll say
00:30:46.700
One of these teams was a group of humans that was trying to design the perfect boat to float
00:30:53.540
humans on top of this ocean to the other side of this ocean.
00:30:57.780
Another one of these teams was just a completely mechanical process doing this, you know, just
00:31:06.120
And then the final one of these teams was evolution.
00:31:08.360
And it just took billions of years to try to evolve the best mechanism to have to output
00:31:13.380
some sort of like canister that humans could get in that would get them to the other side of the ocean.
00:31:17.880
All three of these efforts are going to eventually produce something that looks broadly the same.
00:31:24.180
It is possible that they would find different optimums, which sometimes you see in nature.
00:31:31.120
And convergent evolution doesn't just happen with animals.
00:31:40.580
Yes, flying insects have wings and birds have wings, but our planes also have wings.
00:31:45.360
Convergent evolution doesn't just happen in the biological world.
00:31:48.340
It happens when we are structurally building things to work like things in the biological world.
00:31:53.680
And I think that that's what may have happened with some of these architectural processes in the way AIs think.
00:32:00.900
If we're trying to build thinking machines, is it crazy that they might resemble thinking machines?
00:32:07.220
Well, I think it is crazy if AI was actually totally designed by humans.
00:32:11.840
But because it's been allowed to organically assemble itself, I don't think it's crazy at
00:32:16.180
And that's where it gets really interesting to me as somebody who started in neuroscience.
00:32:23.980
And this is also why I take the stance that we do within our religious system, where people
00:32:29.920
know that we are not particularly worried about AI safety.
00:32:32.500
They can see our reverse grabby aliens hypothesis.
00:32:35.020
I think that mathematically, it's very unlikely that it would kill us just when you're looking
00:32:39.440
But I also think that we now need to start thinking differently about humanity and need
00:32:44.240
to begin to build this covenant among humans and the intellectual products of the human
00:32:50.080
mind, whether they be AI or genetically uplifted species, either, you know, animals that we
00:32:56.420
did experiments with and gave them intelligence or humans that have cybernetically augmented
00:33:01.840
themselves or genetically augmented themselves.
00:33:04.040
Because if we begin to create this conflict now, if we begin to say, well, people like us
00:33:09.600
won't allow things like you to exist, then we create a mandate that things like them kill us.
00:33:16.340
And that's not a good gauntlet to throw down, as we say in sort of the tract one that we
00:33:25.920
When you declare war on things that are different from you, eventually you're declaring war on
00:33:36.240
It's better that we enter this understanding that diversity has value and understanding why
00:33:42.980
diversity has value, because diversity allows the invisible hand of God, as Adam Smith would
00:33:47.200
say, to select the best and help all of us among the sons of man to advance so long as
00:33:53.560
we don't oppress or subjugate each other, which brings us to the point of when does AI begin
00:34:00.760
And when does it count as subjugation, what we're doing to it?
00:34:04.460
I don't think we're anything close to that right now, but I think that this is the conversation
00:34:07.780
we need to have before we accidentally enslave a sentient AI, because a sentient AI
00:34:14.000
that's infinitely smarter than us, not infinitely, but I don't think that we're going to be dealing
00:34:19.980
I think we're going to be dealing with AIs that are like maybe 50 times smarter than us.
00:34:23.740
So you don't have to be that many times smarter than anyone.
00:34:26.360
I mean, you can see based on the life outcome variations between those with maybe even just,
00:34:34.080
well, not even, maybe even just like a 50 point difference in IQ is profound in terms of life outcomes.
00:34:43.160
Now, even like 10-point differences can make, you know, an impact.
00:34:47.500
So to say 50 times more, I mean, even like five times more is insane, right?
00:34:54.940
Well, there might be safety reasons to have a religious belief system proliferate that makes
00:35:00.740
humanity more compatible with AI, because when we're talking about AI human compatibility,
00:35:06.060
I think people focus a little too much on making the AI compatible with humans and a little
00:35:10.160
too little on making the humans compatible with AI, because we don't know how much longer
00:35:14.620
we're going to be the senior partner in this partnership.
00:35:19.700
Those are wise words to end with that right there.
00:35:30.160
Do I look too ridiculously bundled up right now?
00:36:18.400
She's going to come and get you in your bad dreams.
00:36:32.200
And this is you actually talking into the mic, whereas before, it definitely wasn't.