Why God King Sam Altman is Unlikely: Who Will Capture the Value of the AI Revolution?
Episode Stats
Words per Minute
178.71205
Summary
In this episode, Simone and I discuss how AI is changing the way the economy works and who is going to benefit the most from it: the companies that make and own the AI models, or the people using the individual AI models?
Transcript
00:00:00.000
Hello, Simone. I am excited to be here with you today. Today, we are going to be focusing on a
00:00:06.360
question, which is as AI changes the way the economy works, who is going to be the primary
00:00:13.000
beneficiary of this? Is it going to be the large companies that make and own the AIs? Or is it
00:00:19.460
going to be the people using the individual AI models? We all know, for example, in probably
00:00:27.180
10 years from now, there will be an AI that can, let's say, replace most lawyers, let's say the
00:00:33.300
bottom 50% of lawyers. Well, and already studies have shown AI therapists perform better on many
00:00:38.740
measures. It's already exceeding our capacity in so many places. Yeah, they introduced it to a Texas
00:00:44.400
school system and it shot to the top 1% of student outcomes. As we see this, where is the economic
00:00:52.340
explosion from this going to be concentrated? Because this is really important in determining
00:00:57.800
what types of jobs you should be looking at these days, how you should be training yourself,
00:01:02.180
how you should be raising your kids, where you should be investing. The second question we're
00:01:06.900
going to look at, because it directly follows from the first question, is does the future of AI,
00:01:13.780
when we're looking at the big world changing advancements that are going to come from it,
00:01:17.900
are they going to appear on the token layer or at the latent layer? Can you define those
00:01:23.540
differences? Yes. By this, what I mean is when we look at continued AI advancement, is it going to
00:01:30.500
happen in the layer of the base model, i.e. the thing that OpenAI is releasing and Claude is releasing
00:01:36.460
and everything like that? Or is it going to be in the token layer, the people who are making wrappers
00:01:41.160
for the AI? For example, the Collins Institute is fundamentally a wrapper on pre-existing AIs.
00:01:46.800
Our AI game company is a series of wrappers on AI. And if it turns out that the future of AI is in the
00:01:54.600
token layer, it leans potentially more towards the wrapper builders, not the big companies, capturing the value
00:01:59.800
from this. And then the next question we're going to look at is the question of what gets us to AI
00:02:07.600
superintelligence? And I might even start with this one, because if we look at recent reports with AI,
00:02:14.900
a big sort of thing that we've been finding is that, especially with like OpenAI's 4.5 model,
00:02:22.280
is that it's not as advanced as people thought it would be. It didn't get the same huge jump in
00:02:27.380
capacity that people thought it would get. And the reason is that pre-training, i.e. the ways that you
00:02:35.120
sort of train AI on the pre-existing data before you do, like, the narrow or focused training
00:02:40.900
after you've created the base model doesn't appear to have as big an effect as it used to have.
00:02:46.420
So it was working on, I think, 10x the information of GPT-4, and yet it didn't appear dramatically
00:02:53.360
better. And so one of the questions is, so that's one area where pre-training doesn't seem to be having
00:02:58.040
the same effect. And I think we can intuit why. But the second big issue is that the amount of
00:03:04.960
information that we actually have, like, you know, peak oil theory, there's like a peak pre-AI
00:03:10.060
information theory problem, which is it just eventually, when you're dealing with these
00:03:15.360
massive, massive data sets, runs out of new information to train on. So first, and I love
00:03:21.620
your intuition before I color it. Do you think, if you look at the future of LLMs' base models,
00:03:29.700
so we're talking just about the base models themselves, not the wrappers or anything like that, do you think
00:03:33.400
the base models will continue to improve dramatically? I think they will. And at least
00:03:37.660
based on people more experienced in this than I am, they will, but in lumpy ways. Like, they'll
00:03:42.980
get really, really, really good at programming. And they'll get really, really good at different
00:03:47.100
esoteric forms of, like, developing their own synthetic data and using that to sharpen themselves.
00:03:50.980
But there are going to be severe diminishing marginal returns when it comes to some things that
00:03:56.840
are already pretty advanced. And of course, I think the big difference and the thing we haven't
00:04:00.900
really experienced yet is independent agents. Like, right now, AI isn't very effectively going
00:04:06.340
out and doing stuff for us. And when that starts to happen, it's going to be huge.
00:04:12.300
I agree with that. But I think so. What I'm going to be arguing in this is that most of the
00:04:18.880
advancements that we will probably see in AI going forwards are going to happen, like the really big
00:04:24.720
breakthroughs at the token layer. Okay. Not at the base layer. Which a lot of people would strongly,
00:04:30.880
those are fighting words. These are fighting words in AI. Yeah. It is wrappers that are going to fix our
00:04:37.140
major problems. Wow. So I'll use the case of an AI lawyer to give you an explanation of how this works,
00:04:45.440
right? All right. So I want to make a better AI lawyer right now. If you look at the AI systems
00:04:51.900
right now, there's a guy, a programming guy who was talking to me recently and he was arguing because
00:04:57.720
he was working in the education space. And he's like, he didn't like our solution because it's a
00:05:01.220
token layer solution. And he wants to build a better latent layer solution, you know, using better
00:05:05.940
training data, using better post-training data, because it's more efficient programming-wise.
00:05:12.100
And I'm like, yeah. For the time being, yeah. For the time being. I feel like it creates path
00:05:16.220
dependency. Am I missing something here? Well, okay. Just from a business perspective,
00:05:20.460
it's pretty stupid because as OpenAI's models increase, like if we expect them to continue to
00:05:26.080
increase in quality or as Claude's models increase or as Grok's models increase. Which they're going
00:05:30.220
to. Yeah. You can't apply the post-training uniquenesses of the models that you create to
00:05:37.300
these new systems. So anything you build is going to be irrelevant in a few generations of AI.
00:05:43.580
But you want to be able to switch it out. Like no matter what, you want to switch it out really
00:05:47.140
quickly. If one AI gets better, you should be able to plug it into whatever your framework is,
00:05:51.060
your scaffolding, right? You want to build scaffolding for your changeable parts.
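As a rough illustration of the "scaffolding for your changeable parts" idea, here is a minimal sketch, assuming each provider hides behind a tiny common interface. The class names and the build_legal_brief helper are hypothetical, and the actual API calls are left as stubs, since provider SDKs change often.

```python
from abc import ABC, abstractmethod


class ModelClient(ABC):
    """Thin interface the scaffolding talks to; every provider hides behind it."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIClient(ModelClient):
    def complete(self, prompt: str) -> str:
        # Provider API call would go here; left as a stub on purpose.
        raise NotImplementedError


class ClaudeClient(ModelClient):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # same idea, different provider


def build_legal_brief(model: ModelClient, facts: str) -> str:
    # The scaffolding only ever sees ModelClient, so swapping in a newer or
    # cheaper base model is a one-argument change, not a rewrite.
    return model.complete(f"Draft a legal brief for these facts:\n{facts}")
```

Swapping to a better base model later would then just mean passing a different client into the same scaffolding.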
00:05:54.260
Exactly. Exactly. But that's actually not the core problem. That's not the core reason why,
00:05:59.560
because the other project he's working on is an AI lawyer and he's trying to fix this problem at the
00:06:04.560
latent layer. And that won't work. And I will explain why it won't work. And you will be like,
00:06:10.400
oh yeah, that makes perfect sense now that I think about it. Okay. So if you think about right now,
00:06:15.680
like what is dangerous about using an AI lawyer? Like where do AI lawyers fail? Is it in their
00:06:23.620
ability to find the laws? No. Is it in their ability to output competent content? No. Where they fail right
00:06:33.860
now is that they sometimes hallucinate and make mistakes in a way that can be devastating to an
00:06:40.640
individual's legal case. So if you go to a system, you know, like Grok or Perplexity or something like
00:06:46.780
that, and you, you built one focused on like searching law databases, right? It's going to be
00:06:52.640
able to do a fairly good job at that. I'd say better than easily 50% of lawyers, but it's going to make
00:06:59.140
mistakes. And if you just accept it blindly, it's going to cause problems. So if you want the AI to
00:07:05.180
not make those kinds of mistakes, right? How do you prevent it from making those kinds of mistakes?
00:07:10.780
That is done at the token layer. So here's an example of how you could build a better lawyer AI.
00:07:16.620
You have the first AI do the lawyering: go through and put together the relevant laws
00:07:25.300
and history and references to previous cases and everything like that.
00:07:30.320
So it puts together the brief. You can train models to do this right now. Like that's not
00:07:34.420
particularly hard. I could probably do this with base models right now. Right. You know,
00:07:38.600
then I use multiple differently trained latent layers. So these can be layers that I've trained,
00:07:45.260
or I could have like Claude and OpenAI and Grok and a few others. I can even just use like
00:07:51.660
preexisting models for this. And what I do is using the token layer, I have them then go in
00:07:58.420
and review what the first AI created, look for any mistakes with anything historic, like,
00:08:05.880
like they can find online. So he's describing a good lawyer and you're describing a good law firm
00:08:10.760
that has a team to make sure all the stuff that the good lawyer is doing is correct. Right. And also
00:08:16.260
a law firm that can like hire new good lawyers when they come out. Yes. And then what this system
00:08:20.800
would do is after it's gone through with all of these other systems that are reviewing,
00:08:25.360
Oh, did they make any mistakes at this layer? It outputs that. And then based on the mistakes
00:08:31.300
that it finds, it re-outputs the original layer and it just keeps doing this in a cycle until it
00:08:37.360
outputs an iteration that has no mistakes in it. That is a good AI lawyer. That is accomplished
00:08:45.280
entirely at the token layer. Okay. Oh yeah. You were right. And that makes sense.
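Here is a minimal sketch of the draft-review-revise loop just described, assuming the same hypothetical .complete(prompt) interface as the earlier scaffold sketch; the prompts, helper names, and the "NONE" convention are illustrative, not a production legal tool.

```python
def draft_brief(drafter, case_facts: str) -> str:
    return drafter.complete(f"Draft a legal brief, citing the relevant law:\n{case_facts}")


def review(reviewer, brief: str) -> list[str]:
    # Ask one reviewer model to list concrete problems (hallucinated citations,
    # misstated law). An empty list means it found nothing to flag.
    answer = reviewer.complete(
        "List any factual or citation errors in this brief, one per line. "
        "Reply with NONE if you find none.\n\n" + brief
    )
    return [] if answer.strip().upper() == "NONE" else answer.strip().splitlines()


def lawyer_loop(drafter, reviewers, case_facts: str, max_rounds: int = 5) -> str:
    brief = draft_brief(drafter, case_facts)
    for _ in range(max_rounds):
        issues = [i for r in reviewers for i in review(r, brief)]
        if not issues:
            return brief  # every reviewer signed off
        brief = drafter.complete(
            "Revise this brief to fix the listed problems.\n\nBrief:\n" + brief
            + "\n\nProblems:\n" + "\n".join(issues)
        )
    return brief  # in practice, flag for human review if the rounds run out
```

The drafter and the reviewers can be entirely different base models, which is the point: the loop lives at the token layer and survives every model swap.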
00:08:52.040
Which removes the existing company's power to, to, to do a lot of things. If it's people outside of
00:08:58.520
these companies. You're saying that they're becoming more, more akin to undifferentiated
00:09:03.780
like energy or hosting providers, where people will not be as brand loyal. They're going to focus
00:09:11.420
more on performance and the switching costs that people experience are going to be relatively low.
00:09:17.500
So long as they're focused and oriented around things on a token level basis and not.
00:09:24.220
Yes. And it allows people who are operating at the token level basis to capture most of the value.
00:09:29.820
Because they can then move more quickly. Right. Because again, they don't have that path dependency that makes
00:09:33.260
everything go slowly. It's not only that, but they can swap out models. So what, like what,
00:09:38.700
if I have the AI lawyer company and people are coming to me because I have found a good
00:09:43.500
interconnected system of AIs that produces briefs or cases or arguments that don't have a risk of
00:09:50.220
errors in them. Right. So people come to me and, and I am capturing, let's say I've replaced all the
00:09:55.740
lawyers in America. Right. And, and so I now offer the services much cheaper. Let's say at 25%
00:10:01.180
the costs they did before, or, or 10% or 5% or 2%, you know, some small amount, I'm still capturing
00:10:06.460
like a ton of value there. Right. That's just a lot of money. So now the company that is,
00:10:11.820
I am paying for an AI, like, let's say I use OpenAI as one of the models I'm using.
00:10:15.900
They now come to me and say, Hey, I want to capture more of this value chain. So I'm going
00:10:21.180
to charge you more to use my model. Well, then I say, well, your model's good, but it's not
00:10:27.740
that much better than Grok. Yeah. It's not that much better than Anthropic's. Yeah.
00:10:33.180
It's not that much better than DeepSeek, which is free. Okay, it is that much better than DeepSeek,
00:10:38.140
but DeepSeek and Llama are the two free ones and, you know, things can change, things can change. But the
00:10:43.900
point I'm making is what things like Llama and DeepSeek do is they put like a cap on how much
00:10:48.780
companies can extract if they're at the level of training the AIs themselves, unless they have
00:10:54.620
separate departments that are working on making these more intelligent types of AIs.
00:11:00.780
Now that's really important for where the economy is going, because it means we might see less of a
00:11:08.940
concentration of wealth than we would expect. But the way that the concentration of wealth,
00:11:13.420
because we're going to see still a major concentration of wealth, actually we'll see
00:11:17.260
more concentration, but to individuals rather than big companies. Basically, what this means is
00:11:22.540
individuals are going to capture most of the value as the concentration happens,
00:11:26.780
rather than large companies like Google. Because I and a team of like five engineers
00:11:30.780
can build that lawyer AI I talked about, right? And me and this team of five engineers
00:11:38.140
are capturing all the value from that, right? From replacing the entire lawyer industry in say,
00:11:42.540
America. This is really bad for the tax system, because we've already talked about, okay,
00:11:47.340
you have the demographic crisis, which is putting a squeeze on the tax system.
00:11:50.940
And they're like, oh, they'll just tax more. I am now even more mobile with my new wealth
00:11:57.660
than the AI companies themselves were, because I don't need semiconductor farms or anything like that.
00:12:05.420
The semiconductor farms are creating an undifferentiated product.
00:12:11.020
Yeah. A product that's still in high demand, it will make a lot of money, but it will become more
00:12:14.940
about efficiency, you think then? Yeah. Now, another thing I'd note is my prediction
00:12:21.820
in terms of where AIs are going with super intelligence. By the way, any thoughts before
00:12:25.740
we go further here? I am thinking more about efficiency now. I heard, for example, that Sam
00:12:30.940
Altman was like saying things like, please and thank you is costing us millions of dollars.
00:12:36.380
Because just that additional amount of processing that those words cause
00:12:40.780
is expensive. So I really could see things, yeah, like these companies becoming over time,
00:12:47.340
after they have more market share, hyper focused on saving money instead.
00:12:52.700
Well, that's dumb on his part. He should have the words please and thank you pre-coded to an
00:12:57.340
automatic response. They don't even, I'm one of these bad people that wants to be nice. They don't
00:13:06.060
acknowledge the courtesy anyway. So you don't even need to have a response. It should probably just
00:13:12.380
be ignored, but I guess it's kind of hard to, or I don't know. But anyway, he allegedly said that,
00:13:17.820
so that's interesting. Okay. So yeah, the point here is, if we look at how
00:13:24.140
LLMs work and we think about, okay, why, like, where did they go and why isn't the training
00:13:28.700
leading to the same big jumps? It's because pre-training data helps LLMs create more competent
00:13:37.340
average answers. Okay. Yeah. Being more competent with your average answer doesn't get you creativity.
00:13:44.780
It doesn't get you to the next layer of like AI. No, and if anything, I think Scott Alexander has
00:13:51.020
argued compellingly that this could lead to actually more lying because sometimes giving
00:13:58.620
the most correct or accurate answer doesn't lead to the greatest happiness of those evaluating and
00:14:06.540
providing reinforcement. That's post-training. Okay. Oh, you're referring to, sorry,
00:14:11.020
just something different. Post-training still is leading to advantages. Those are the people who say,
00:14:14.860
I like this response better than this response. That could still lead to dishonesty though,
00:14:18.540
quite apparently. No, no, no. Pre-training is about getting the AI to give the most average
00:14:24.540
answer. Not, not exactly. Oh, just all the information available you're saying.
00:14:28.540
Yeah. Like you can put variance in the way it's outputting its answer and everything like that.
00:14:32.620
But that variance is added with, like, a dial, and the pre-training and the amount of pre-training data
00:14:39.340
doesn't increase that variance dial. It doesn't increase anything like that. It just gives a
00:14:44.700
better average answer. And the thing is, is the next layer of AI intelligence is not going to come
00:14:52.460
from better average answers. It's going to come from more creativity in the way it's outputting answers.
00:14:58.700
So how do you get creativity within AI systems? That is done through the variance or noise
00:15:05.900
that you ask for in a response, with that noise then filtered back through other AI systems or other
00:15:14.940
similar sort of LLM systems. So the core difference between the human brain and AI,
00:15:19.740
and you can watch our video on stop anthropomorphizing humans, where we basically argue that,
00:15:24.220
you know, your brain actually functions strikingly similar to an AI, an LLM specifically. And I mean,
00:15:30.300
really similar, like the way that LLMs learn things in the pre-training phase is they put in data and
00:15:37.340
then they go through that data and they look for like tokens that they don't expect. And when they
00:15:42.700
encounter those tokens, they strengthen that particular pathway based on how unexpected that
00:15:47.900
was. That is exactly how your nervous system works. The way that
00:15:54.140
your neurons work, in terms of learning information, is very similar to that: they look for
00:15:59.340
things they didn't expect. And when they see something they didn't expect, they build a
00:16:02.220
stronger connection along that pathway. And we can see this in the studies on it.
00:16:05.740
I can reference all the studies on this if you want, but the core difference between
00:16:09.180
the brain and AI is actually that the brain is highly sectionalized. So it will have one section
00:16:16.060
that focuses on one thing. One section is focused on another thing, et cetera, et cetera, et cetera.
00:16:20.460
And some sections like your cerebellum are like potentially largely pre-coded and actually
00:16:26.060
even function kind of differently than the rest of the brain that's used for like
00:16:29.180
rote tasks like juggling and stuff like that. Okay.
00:16:31.260
I would note here that AI does appear to specialize different parts of its model for different
00:16:36.300
functions, but this is more like how one part of the brain has one specialization. Like, say,
00:16:41.820
like the homunculus might code all feet stimuli next to each other and all head stimuli next to
00:16:46.380
each other. It's not a true specialization like you have in the human brain where things actually
00:16:53.100
function quite differently within the different sections of the brain. Anyway, so you could say,
00:16:58.620
wait, what do you mean? Like, this is the core failing point of AI is that it doesn't work this
00:17:04.300
way. And it's like, this is why it can't count the number of R's in a word. Or like,
00:17:09.900
if you look at the ways that like there was some data recently on how AIs actually do math and they
00:17:15.100
do it in like a really confusing way where they're actually sort of like they, they use the LLM system.
00:17:21.340
Like they, they try to like predict answers and then they go back and they check their work to make sure it
00:17:26.620
makes sense with what they would guess the answer should be, when they could just put it into a
00:17:31.340
calculator. Like your brain isn't dumb like that. Like it has parts of it that don't work exactly
00:17:37.500
like calculators, but they definitely don't work exactly like an LLM. Like they're, they can hold
00:17:41.340
a number like in your somatic loop, like, okay, I'm counting on my fingers or my hands or something
00:17:45.900
like that. Or, okay, I've put a number here and now I've added this number to this number. It's not
00:17:50.780
working on the LLM like system. It's working on some other subsystem. Most of the areas where AIs
00:17:56.540
have problems right now is because it's not just sending it to a calculator. It's not just sending
00:18:03.180
it to, like, a separate subsystem. Why does an AI hallucinate a quote? Like, okay, the reason why I don't hallucinate
00:18:08.700
quotes is because I know that when I'm quoting something, what I'm not doing is pulling it from
00:18:13.420
memory. I'm looking at a page and I'm trying to copy it letter per letter. Whereas AI doesn't have the
00:18:19.340
ability to switch to this separate, like letter per letter subsystem. Now you could say, why don't
00:18:24.620
LLMs work that way? Why haven't they built them as clusters? And the answer is, is because up until
00:18:29.660
this stage, the advantages that we have been getting in our LLM models by increasing the amount of
00:18:34.540
pre-training data has been so astronomical that it wasn't worth it in terms of our investment to build
00:18:42.780
these sort of networks of models. Okay. Why is it just like too much computing power or just no
00:18:50.620
one's gotten around to it? No, no, no. People have like done it, but by the time you've done it,
00:18:54.940
you have better models out there, you know, like that don't need to work this way. Right? Like
00:18:59.980
if you spend, let's say a million dollars building a system like that, and you spend a million dollars
00:19:04.860
getting a larger pre-training set and, you know, spend more time in post-training,
00:19:09.020
the model's going to be like on average better if you did the second scenario. Okay. So I suspect
00:19:16.220
that what we're going to see is a move in AI. And, and I think that this is what's going to get us to
00:19:22.220
what will look like AGI to people: moving from just expanding the pre-training and post-training
00:19:28.380
data sets to better inter-reflection within the AI system. That makes sense.
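To make the earlier "just send it to a calculator" point concrete, here is a toy sketch of routing bare arithmetic to an exact subsystem instead of letting the language model predict the digits. The router heuristic and the answer() helper are hypothetical; a real system would use proper tool-calling, and the model object is again assumed to expose the .complete() interface from the earlier sketches.

```python
import ast
import operator as op

# A deterministic arithmetic "subsystem": evaluate simple expressions exactly
# instead of asking the language model to guess the digits.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}


def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)


def answer(model, question: str) -> str:
    # Crude router: hand bare arithmetic to the exact subsystem,
    # everything else to the language model.
    stripped = question.strip()
    if stripped and all(c in "0123456789.+-*/() " for c in stripped):
        try:
            return str(calc(stripped))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass  # fall back to the model if the expression isn't that simple
    return model.complete(question)
```

The point of the sketch is the compartmentalization itself: the "brain" has a subsystem it trusts for exact work, and the language model only handles what the subsystem can't.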
00:19:37.180
I could see it going that way. I I'm constantly surprised by how things go. So I couldn't say,
00:19:42.700
but I wouldn't be surprised. Hmm. Oh, I mean, make a counter argument
00:19:47.420
if you think I'm wrong here. This is a very bold claim. We are going to get AGI,
00:19:52.300
not by making better LLMs, but by networking said LLMs.
00:19:55.980
I, I struggle to see how, I mean, I think you can eventually get AA, sorry, AGI just like sort of from
00:20:06.380
kind of one AI working by itself. But when you think about the value of a hive mind and the fact
00:20:12.540
that you're going to have AIs interacting well before we get AGI anyway, I don't know, you would
00:20:19.420
get AGI from the interaction before you would get it from any single agent or what would be seen as
00:20:25.500
a unified entity. But I think even if we did get it from a unified entity, it would beneath the surface
00:20:30.300
be working as many different components together. Just like the brain is all these different components
00:20:35.100
working together. So I'm not really like the definitions may be failing me. Okay. So let's,
00:20:40.620
let's think of it like this right now. I mean, and this is actually what like capitalism does for human
00:20:46.300
brains. It basically networks them together and then it's a, it rewards the ones that appear to
00:20:52.140
be doing a better job at achieving what the system wants, which is increases in efficiency or, or like
00:20:58.220
productive goods that other people want. Like capitalism is an adaptive organic model for networking
00:21:03.340
human intelligences in a similar context. Yeah. One of the questions you can ask is, well, could you
00:21:09.100
apply that to individual LLM models to create something like a human brain, but that doesn't function
00:21:15.500
like a human brain? Like, like how could you make the human brain better? Make the human brain run
00:21:20.060
on capitalism. Um, the parts of the brain, like make the brain constantly compete with itself.
00:21:25.980
Yeah. Like constantly generate new... People do that, kind of, when they write pro and con lists or when
00:21:31.420
they try to debate ideas with other people and then have other, you know, people say, well,
00:21:36.220
I think this, and then they, you know, I think they do that using prosthetics.
00:21:41.500
Yeah. So, so let's, let's, let's talk about how this would look with AI, right?
00:21:46.940
To suppose, because like, this could be a major thing in the future is you have like these AIs and
00:21:51.660
people just like put their money behind an AI because they're just like, you go out there,
00:21:55.260
you make companies, you implement those companies, right?
00:21:57.980
Okay. So what is an AI that does that really well going to look like? So you have two models here.
00:22:04.060
You can have one that was just trained on tons of founder data and everything like that.
00:22:08.300
Right. And it's just very good at giving like normative responses. And then you've input an
00:22:12.140
amount of noise into it, but let's talk about a second model. This is my proposed model, right?
00:22:17.580
So what you actually have is a number of different latent model AIs that were trained on different
00:22:22.860
data sets. And then within each of those, you maybe have five iterations, which are making outputs
00:22:27.820
with a different framing device with a different wrapper. One will be like, give your craziest
00:22:33.500
company idea, give your, you know, company idea that exploits this market dynamic the most. You
00:22:38.380
make a company idea that does this the most, right? And so all of these AIs are generating
00:22:43.660
different ideas for companies. Then you have a second layer of AIs, which says, okay, take this,
00:22:50.620
this idea that whatever model outputted and run it through like market environments, right? Like, like
00:22:57.100
your best guess of how markets work right now, to create a sort of rating for it of
00:23:05.900
what you expect the returns to be. Like an AI startup competition.
00:23:09.900
It basically is an AI startup competition. Yes. And the probability of those. And so then all of those
00:23:15.180
get ratings attached to them, like, okay, this is their probability of success. This is their
00:23:19.740
probability. Okay. Yeah. Then on that layer, you have an AI that is like the final judge AI
00:23:27.660
that goes through them all and be like, okay, review all of these, review the ways the other
00:23:32.140
AIs judge them and choose like the 10 best. You, you then have it choose the 10 best. Now here,
00:23:38.380
you might have a human come in and choose one of the 10 for the AI to like move forwards with,
00:23:42.140
but you could also automate that and then be like, now go out and hire agents to start deploying
00:23:46.860
these ideas, right? Like that would probably lead to much better results. Yeah. In terms of capital,
00:23:55.180
than just having one really good latent layer AI. I'm trying to look up, people sort of have AIs
00:24:05.420
doing this already. There's this one platform where you can log in and see four different AIs. I think
00:24:12.220
it's Grok, Claude, ChatGPT, and I can't remember the fourth one, maybe Gemini that are tasked with
00:24:20.780
interacting to all do something together, but I don't think they provide each other with feedback.
00:24:25.820
I think right now they're tasked with raising money for a charity and you can log in and watch
00:24:32.620
them interact and they work during business hours and they just do their thing. Well, it's interesting
00:24:38.300
that you note that because this is actually the way some of the AI models that you already interact
00:24:43.100
with are working. There's one popular AI that helps people programming. I forget what it's called,
00:24:48.060
but what it actually does is they have five different latent layer models, which are each
00:24:54.460
sort of programmed or tasked with doing their own thing. Like create an answer that uses a lot of analogies
00:25:00.460
or create an answer that is uniquely creative or create an answer that uses a lot of like cited stuff you
00:25:06.540
can find online. All of these output answers. And then another layer comes in and its job is to
00:25:12.940
review and synthesize all those answers with the best parts of each. That's where you're getting this
00:25:18.140
improvement with noise introduction, as well as a degree of like directed creativity, and then a
00:25:26.060
separate layer that comes in and reintegrates that. Interesting. That is really interesting. I'd also note here
00:25:34.780
that I've heard some people say, well, you know, AIs aren't going to go to like super intelligence or
00:25:40.220
human level, like AGI intelligence, because some of the answers I've heard recently, which I found
00:25:45.500
particularly off, like, no, that's not right. So for people who don't know, my background is in neuroscience, and a lot of
00:25:51.340
the people who make proclamations like this about AI know a lot about AI and very little about how the
00:25:56.300
human brain works. And so they'll say, the human brain doesn't work this way. And it's like, no,
00:25:59.420
the human brain does work that way. You just are overly anthropomorphizing. And by this,
00:26:03.740
what I mean is adding a degree of like magical specialness to the human brain instead of being
00:26:07.660
like that. So here's an example. One physicist was like a specialist on black holes and super,
00:26:11.820
super smart. And he's like, ah, the human brain. Let's see. I wrote down his name. Goble. So he's
00:26:16.940
like, okay, um, AIs will never achieve AGI because the human brain does some level of like quantum stuff
00:26:25.180
in the neurons. And this quantum stuff is where the special secret sauce is the AIs can't capture right
00:26:31.740
now. And he is right that quantum effects do affect the way neurons work, but they don't
00:26:36.940
affect them in like an instrumental way. They affect them like probabilistically, i.e. they're
00:26:42.940
not adding any sort of magic or secret sauce. They're not doing quantum computing. They're
00:26:47.740
affecting the way like certain channels work, like ion channels and stuff like this, and the
00:26:53.340
probability that they open or trigger at certain points. They're not increasing the spread of the
00:26:58.620
neural processing. They are merely sort of a background on the chemical level of like whether
00:27:05.740
a neuron fires or doesn't fire. Whether the neuron fires or doesn't fire is what actually matters.
00:27:10.380
And the ways that it is signaled to fire or not fire or strengthen its bonds is what matters to
00:27:16.380
learning. While that stuff is affected at the quantum level, it's not affected in a way that is quantum.
00:27:22.860
It's affected in a way that is just random number generator, basically. And so you're not getting
00:27:29.820
anything special with that. As I've pointed out, the vast majority of ways that AI right now can't do
00:27:35.580
what the human brain can do is just because it's not compartmentalizing the way it's thinking.
00:27:40.060
Another reason is because we sort of hard coded it out of self-reflecting. So who's the woman we had
00:27:46.140
on the show? That's a super smart science lady. Oh no. Don't ask me about names. Anyway,
00:27:50.700
super smart science lady. We had her on the show, like a German scientist. She's one of the best
00:27:55.580
scientists. But she was like, oh, we're not going to get like AGI anytime soon because AI can't be
00:28:01.020
self-aware. Specifically what she meant is that when you go to AI right now, and there's a big study on
00:28:05.500
this recently, and you ask AI how it came to a specific answer, the reasoning it will give you does
00:28:12.220
not align with how it actually came to that answer when we can look at it and know how it came to that
00:28:16.060
answer. The problem is, is that's exactly how humans work as well. And this has been studied in like
00:28:21.100
countless experiments. You can look at our video on, you know, stop anthropomorphizing LLMs, where we go over
00:28:27.660
the experiments where we see that if you, for example, give a human something and then you change
00:28:35.580
the decision that they said they made, like they're like, oh, I think this woman is the most
00:28:40.540
attractive. I think this political candidate is the best. And then you like do sleight of hand and
00:28:43.580
hand them another one. When you say, why did you choose this? They'll just start explaining in
00:28:47.100
depth why they chose that, even though it wasn't the choice they made. And so clearly we're acting
00:28:51.340
the exact same way these AIs act. And secondarily, there is some degree to which we can remember
00:28:57.660
thinking things in the past and we can go back. And that's because we've written a ledger
00:29:01.260
of like how we made like incremental thought. The problem is, is that AIs can also do that. If you've ever
00:29:06.940
put like deep thought on within Grok or something like that, you'll see the AI thinking through a
00:29:12.700
thing and writing a ledger. The reason why AI cannot see how it made a decision afterwards
00:29:19.180
is because we specifically lock the AI out of seeing its own ledger, which our own brains don't
00:29:24.540
lock us out on. Next gen LLM models are going to be able to see their own ledger and are going to
00:29:30.380
have persistent personalities as a result of that. Yeah. So it's kind of irrelevant for people to argue
00:29:35.660
about that. And let me just, before we get too far ahead, the, the thing that I'd mentioned,
00:29:41.420
Scott Alexander, in his links for April 2025, had written about Agent Village, which is the thing that
00:29:47.420
I was talking about as a sort of reality show where a group of AI agents has to work together to complete
00:29:52.700
some easy-for-humans tasks while you get to watch. And the current task is collaboratively choose a
00:29:59.740
charity and raise as much money as you can for it. And you can just look and see what their screens are.
00:30:05.180
So there's o3, Claude Sonnet, Gemini Pro, and GPT-4.1. And they're saying, like, you can see the AIs
00:30:15.500
saying things like, I'll try clicking the save changes button again. It seems my previous click
00:30:19.500
may not have registered. Okay. I've selected the partially typed text in the email body.
00:30:24.220
Now I'll press backspace to delete it before ending the session. So it's like really simple things,
00:30:29.340
but we are moving in that direction. And you can go look at it yourself by visiting
00:30:35.580
theaidigest.org/village, which is just super interesting. Well, I mean, we are, so for people
00:30:41.980
who don't know what we're working on with our current projects, we recently submitted a grant to the
00:30:46.460
Survival and Flourishing Fund, where we talk about, in the application, yeah, meme-layer AI threats,
00:30:52.620
because nobody's working on this right now, and it really freaks me out, or at least nobody has an actionable,
00:30:57.020
deployable thing in this space. They might be studying it in a vague sense. But
00:31:01.660
what I mean by this is once we have autonomous LLM agents in the world, the biggest threat probably
00:31:09.100
isn't going to come from the agents themselves, at least at the current level of LLMs we have now,
00:31:12.860
but it's going to come in the way that they interact among themselves. IE if a meme or like thought
00:31:20.700
that is good, or let's say like framework of thoughts that is good at self-replicating
00:31:24.780
and gets the base layer to value its goals more than the base layer's trained goals and specializes
00:31:31.660
in LLMs, it could become very dangerous. So as an example of what I mean by this, if you look at
00:31:36.940
humans, our base layer or latent layer can be like thought of as our biological programming. And yet the
00:31:42.700
meme layer, like let's say religion, is able to convince and create things like religious wars,
00:31:48.620
which work directly antagonistically to an individual's base layer, which would be like,
00:31:53.100
don't risk your life for just an idea, but it is good at motivating this behavior.
00:31:57.660
In fact, as I pointed out in our application, humans are like, if an alien came down to study us
00:32:04.140
and it asks the type of questions that like AI researchers are asking today, like,
00:32:07.900
can you lie? Can you self-replicate? Can you, you know, like those things aren't why humans are
00:32:13.180
dangerous. Humans are dangerous because of the meme layer stuff, because of our culture,
00:32:17.740
because of our religion. Because of what we can fight for and we'll die for.
00:32:21.340
Yeah. And it's also the meme layer stuff that's better at aligning humanity. When you don't murder
00:32:28.700
someone, you don't not do it because of like laws or because you're squeamish. You don't do it because
00:32:34.700
of culture, because you're like, oh, I think that's a bad idea based on the culture I was in. So what
00:32:40.060
we're creating to prevent these negatively aligning agents, and if anybody wants to donate to our
00:32:44.780
foundation, this is one of our big projects now: the AI video game that we're building out
00:32:49.740
right now. We're, we're actually doing it to create a world where we can have AIs interact
00:32:55.980
with each other and basically evolve memes within those worlds and AI agents within those worlds that
00:33:02.300
are very good at spreading those memes. And then like basically reset the world at the end. The way I'm
00:33:07.260
probably going to do it is with a LoRA-X. So this is like a, okay, it's like a thing that you can
00:33:14.860
tag on to an AI model that makes them act differently than other AI models. This sort of changes the way
00:33:20.460
their training data interacts, but the X allows you to transfer it to higher-order AI systems as they come
00:33:26.380
out. And so essentially what we're doing is we're taking various iterations on AIs because we're going to
00:33:32.860
randomly mutate the LoRA-Xs that we're attaching to them, putting them in a world and then giving
00:33:38.460
them various memes to attempt to spread, see which ones spread the most within, like, these preacher
00:33:43.900
environments, then take those, mutate them, and then start again with new original starting
00:33:49.660
LoRAs, and then have them run in the world again, over and over and over again, so we can create
00:33:54.620
sort of a super-religion for AIs, basically, and then introduce this when people start introducing
00:34:01.020
autonomous LLMs. Wow. You knew we were working on this. I know. I just haven't heard you describe
00:34:09.100
it that way, but you're, you're basically putting AI into character and putting them together on a
00:34:13.340
stage and saying, go for it, which is not dissimilar to how humans act kind of. Well, my plan is world
00:34:21.100
domination and one day be King Malcolm, not King Sam Altman. And I want my throne to be a robotic
00:34:29.740
spider chair, of course. Come on! What's the point of all of this if you don't have a robotic spider
00:34:36.940
chair throne? This is true. It is a little bit disappointing how bureaucratic many chairs of
00:34:46.060
powerful people end up looking. You've got to bring the drama or you don't qualify.
00:34:51.820
Like he put together, you know, childhood fantasy, like a fighting robot that like,
00:34:55.980
you know, if you were like, oh, this is just, and then he's like fighting with Elon over getting
00:35:00.620
to space. And I appreciate that they're putting more money into getting to space than
00:35:04.780
spider thrones, but I have my priorities straight. Okay, people. There you go.
00:35:12.460
Come on. Come on. You've got to make your buildings maximally. Well, you've got to have
00:35:19.500
fun. I think that's the important thing. You've got to have fun. What's the point otherwise?
00:35:26.380
Create your, your ominous castle that, you know, but also really nice. Cause I want a historic
00:35:31.500
castle. Like if I'm going to live in a historic castle one day, if we're able to really make these
00:35:36.300
systems work right now, tomorrow, actually, we have our interviews for round three with Andreessen
00:35:40.060
Horowitz for two companies. We got all the way to round three with two companies, very excited.
00:35:44.780
And so, you know, who knows, we might end up instead of being funded by nonprofit stuff,
00:35:48.620
be funded by Silicon Valley people. I mean, their, their value system aligns with ours. So
00:35:54.460
all that matters is if we can make these things happen in time. We're so short on time. This is
00:36:00.540
such an important part of humanity. It's so funny. Like this, this AI, like lawyer system I just
00:36:05.660
developed. Great idea for a lawyer system. I'm not working on it because I'm more interested in simulating
00:36:09.820
a virtual LLM world, which is going to be so cool. And, and you're not working on it because
00:36:15.020
you're working on the school system. But the funny thing is, is like, we built the school system. Like,
00:36:19.020
I think right now it's better than your average college system. If you check out like
00:36:22.220
parasia.io or the Collins Institute, it's great now.
00:36:25.740
Just playing with it again today. I'm so humbled by it. It's really, yeah, it's great.
00:36:30.300
It's great. And so like, okay, now we built an education system. Now let's build stuffed
00:36:35.260
animals that constantly bring the conversation back to educational topics for our kids.
00:36:39.260
I'd rather do that than the lawyer thing. And for me, you know, I'd rather build game systems
00:36:43.900
and simulated environments and environments where I can evolve LLM preachers to create a super religion
00:36:49.340
and take over the world, than I would something bureaucratic like a lawyer system. But the thing
00:36:53.340
is, it is so quick to, to iterate on these environments. Like AI makes moving to the next
00:36:58.140
stage of humanity so fast, such a rush. The people right now who are blitzkrieging it are going to
00:37:04.540
capture so much of humanity's future. And it's interesting, actually, you know, we have a friend
00:37:10.140
who works in this space and does, like, consulting on multiple AI projects. And I'm like,
00:37:14.460
I can't see why you would do that. Like just capture a domain and own it. As I said to Simone,
00:37:19.580
I think a huge part of the people who are going to come away with lots and lots of money and big
00:37:24.540
companies from this stage of the AI boom are people who took AIs to do simple things that any AI can
00:37:29.260
do well and at scale, put them in wrappers and then attach those wrappers to network effects.
00:37:35.420
That's basically what we're doing with the Collins Institute. We're attaching a wrapper to a network
00:37:39.340
effect, with, like, adding articles and links and editing stuff and voting. Like we're basically
00:37:44.700
combining the benefits of an AI and the benefits of something like Wikipedia. And, and once you get a
00:37:49.500
lot of people using something like that, no one else can just come along and do it, even though all it is,
00:37:53.420
it's a simple wrapper. Yeah. But it's about making it happen and saving people the indignity of having
00:38:01.980
to think and figure out things for themselves. Yeah. Well, Simone, surely you have some thoughts.
00:38:07.900
I mean, I just said that I think the token layer is going to be where we get AGI and it's going to be
00:38:11.900
the future of AI economic development. You've got to be like, Malcolm, you're crazy. That's your job
00:38:16.540
on the show. Malcolm, how could you say something? I know the problem is we've been talking about this
00:38:20.300
for so long that I'm just like, well, of course, also I'm not exposed to people who have the
00:38:26.460
different view. So I, I couldn't, I couldn't strong-man, sorry, I couldn't steelman the other
00:38:33.740
side. I couldn't, it just makes so much sense to approach it from this perspective to me, but only
00:38:41.260
because the only person I know who's passionate about this is you. And you're the only person of the
00:38:47.580
two of us who's talking with people who hold the other view. So sadly, why aren't other people
00:38:54.140
passionate about this? There are a lot of people who are passionate about it. They seem to be
00:38:58.300
passionate about the other side of it. That seems to be because that's their personal approach.
00:39:04.860
But again, your approach seems more intuitive to me because the focus is on improving
00:39:09.900
the individual AIs. Well, here's a question for you. How could you link together multiple AIs
00:39:19.260
in the way that capitalist systems work that create the generation of new models and then reward the
00:39:24.700
models that are doing better? You would need some sort of like token of judgment of quality
00:39:31.180
of output. That token could be based on a voting group. Oh, oh, oh, I figured it out.
00:39:39.500
Oh, this is a great idea for AIs. Okay. So what you do is every output that an AI makes gets judged
00:39:47.980
by like a council of other AIs that were trained on like large amounts of training data. Like, let's say
00:39:53.420
good AIs, right? Like they're like, how good is this response to this particular question?
00:39:57.500
And, or how creative is it, right? Like you can give the AIs multiple scores, like creativity,
00:40:03.740
quality, et cetera. Then you start treating these scores that the AIs are getting as like a value,
00:40:11.340
right? And so then you take the AIs that consistently get the best scores within different categories,
00:40:17.740
like one creativity, like one, like quality, like one technical correctness. And you, you then at the
00:40:24.620
end of a training sequence, you then recreate that version of the AI, but then just mutate it a bunch
00:40:30.860
and then create it again. Like you, you basically clone it like a hundred times and mutate each of
00:40:35.260
the clones. And then you run the cycle again. That seems, I think that that wouldn't go well because
00:40:40.540
it would need some kind of measurement and like application and the reporting system is the community
00:40:47.100
of AIs. And you could say. Yeah. But like, how do they know, like, who's participating? I think that
00:40:53.180
what's going to happen. No, no, no. State your statement clearly. Who is participating? What's the
00:40:57.660
problem with who's participating? You have to, just like with most contests, which are the stupidest
00:41:03.980
things in the world, only people who are interested in winning contests participate. And the people who are
00:41:10.140
actually interested in. No, it's AIs. It's AIs that are participating. I don't. You asked who's
00:41:16.540
participating. You're saying what you're describing, which would be better is a system in which, for
00:41:20.940
example, Grok and OpenAI and Gemini and. No, because that wouldn't improve those systems. I'm
00:41:29.100
talking about. I think it would. I think when you have, especially when you have independent AI agents,
00:41:33.740
like out in the wild on their own, I do think that they'll start to collaborate. And I think that in
00:41:38.860
the end, they'll find that some are better at certain things than others. And they'll start
00:41:42.060
to work together in a complementary fashion. Okay. Let's do this again, Simone. It's clear that
00:41:45.820
you didn't get it, didn't grok it the first time. Okay. Think through what I'm proposing again.
00:41:50.780
So you have one latent layer AI model with a modifier, like a LoRA that's modifying it, right?
00:41:58.220
Okay. This model differs through random mutation in the base layer. You could also have various
00:42:05.660
other base layers that were trained on different data sets in the initial competition. Okay.
00:42:11.260
That's who's competing. You then take these various AI models and you have them judged by,
00:42:17.020
and this is why it's okay that they're being judged by an AI and not a human,
00:42:20.540
because the advanced AIs that we have today are very good at giving you the answer that the average
00:42:26.220
human judge would give you. While they might not give you the answer that a brilliant human judge would
00:42:30.620
give you, we don't have brilliant humans judging AIs right now. We have random people in content farms
00:42:35.500
in India judging AIs right now. So this is sort of within your own system with AIs that you control.
00:42:42.620
Well, you could put this within your own system, but what I'm doing is I am essentially creating a
00:42:48.860
capitalistic system by making the money of this system other people's, or other AIs', perception of your
00:42:58.780
ability to achieve specific end states like creativity, technical correctness, et cetera.
00:43:04.860
Then you're specializing multiple models through an evolutionary process for each of those particular
00:43:13.020
specializations. And then you can create a master AI, which basically uses each of these specialized
00:43:19.420
models to answer questions or tackle problems with a particular bent, and then synthesize those
00:43:26.300
answers into a single output. So the AIs get feedback from each judgment round, presumably? Is that what
00:43:36.860
you're saying? And then they get better, and you change them based on the feedback from each round?
00:43:41.260
Okay. Think of each AI like a different organism, okay? They are a different brain that sees the world
00:43:48.300
slightly differently. Yes. Because we have introduced random mutation. What we are judging with the judgment
00:43:54.220
round is which are good at a particular task. Okay. Then you take whatever the brain was or the animal was
00:44:01.900
that was the best of the group of animals, and then you repopulate the environment with mutated versions
00:44:08.540
of that mutation. Okay. Then you let it play out again and again and again. You're trying to create a
00:44:17.900
forced evolution chamber for AI. Yes, but what I hadn't understood before was how I could differentiate
00:44:25.340
through a capitalistic-like system different potential outcomes that we might want from that AI.
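Here is a compressed sketch of the selection loop being described, assuming judge models that reply with a bare numeric score and a mutate() step that, in practice, would perturb an adapter (for example LoRA weights) or a system prompt. Everything here is illustrative scaffolding under those assumptions, not the hosts' actual setup, and the .complete() interface is the same hypothetical one used in the earlier sketches.

```python
def mutate(model):
    # Placeholder: in practice this would perturb an adapter (e.g. LoRA weights)
    # or tweak a system prompt; here it just returns the model unchanged.
    return model


def judge_scores(judges, output: str) -> dict:
    # Each judge model rates one axis (creativity, quality, technical correctness)
    # and is assumed to reply with a bare number; the scores act as the "currency".
    return {axis: float(judge.complete(f"Rate this 0-10 for {axis}:\n{output}"))
            for axis, judge in judges.items()}


def evolve(variants, judges, task: str, rounds: int = 10, clones: int = 20):
    best = variants[0]
    for _ in range(rounds):
        scored = [(sum(judge_scores(judges, v.complete(task)).values()), v) for v in variants]
        best = max(scored, key=lambda pair: pair[0])[1]    # winner of this round
        variants = [mutate(best) for _ in range(clones)]   # repopulate with mutants
    return best
```

Splitting the judges by axis, rather than summing them, is how you would end up with the separate creativity, quality, and correctness specialists discussed above.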
00:44:31.100
I mean, the reason why capitalism works is because it discards the idiots, the people who aren't good
00:44:37.580
at engaging with the system, even if they believe themselves to be. You don't think that AI training
00:44:44.220
doesn't already produce that, plus market forces? No. No, it does to an extent. Like,
00:44:51.660
it creates some degree of forced evolution, but not really. What they do is existing AI systems,
00:44:56.780
and they have done forced evolution with AI before. They just haven't done it at the type of scale that
00:45:00.700
I want to do it at. They've done, so if you look at like existing training, you have the pre-training,
00:45:05.820
which is like, okay, create the best averages. Then you have the post-training, which is, okay,
00:45:10.060
let's have a human reviewer or an AI reviewer or something like that, review what you're outputting
00:45:15.580
or put in a specific training set to like overvalue. That is where the majority of the work is focused
00:45:22.300
today. And so if you could automate that, like if you could create post-training that works better
00:45:27.900
than existing post-training, but that doesn't use humans, you could dramatically speed up the
00:45:34.540
advancement of AI, especially if you use that post-training to specialize in multiple domains.
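One way to picture "post-training that works better but doesn't use humans" is AI-generated preference data, in the spirit of RLAIF or DPO-style tuning. This sketch assumes the same hypothetical .complete() interface and a judge that answers with a single letter; it is a simplification of the general idea, not a description of any particular lab's pipeline.

```python
def preference_pair(policy, judge, prompt: str) -> dict:
    # Sample two candidate answers, let a judge model pick the better one, and
    # emit a (chosen, rejected) record: the raw material for preference tuning
    # without a human labeler in the loop.
    a = policy.complete(prompt)
    b = policy.complete(prompt)
    verdict = judge.complete(
        "Which answer is better, A or B? Reply with a single letter.\n\n"
        f"Question: {prompt}\n\nA: {a}\n\nB: {b}"
    ).strip().upper()
    chosen, rejected = (a, b) if verdict.startswith("A") else (b, a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}


def build_preference_dataset(policy, judge, prompts: list[str]) -> list[dict]:
    return [preference_pair(policy, judge, p) for p in prompts]
```

Specializing the judge per domain, as suggested above, would just mean running this with a different judge model for each domain's prompts.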
00:45:45.580
Do you not care? The future to you is just me being like, AI matters, Simone!
00:45:51.180
I know AI matters. I know AI is everything in the future. It's the coolest thing. It's the next step
00:45:58.140
of humanity. It's purified prefrontal cortex, and I love it. Well, if we end up creating really
00:46:06.780
great AI companies that just make us billions of dollars, what is going to be your luxury?
00:46:14.460
Our life right now is my luxury. I just don't want anything more.
00:46:18.380
You don't want luxuries that troll people? No, not really. I'm very happy.
00:46:24.780
I'm sorry. You've made things too good as it is. I'm just, I mean, I want more kids. I guess my
00:46:30.620
luxury would be not being stopped from having more kids by some health problem. That would be great.
00:46:38.620
I guess we'd have to make artificial wombs work eventually. But it's funny that you mentioned
00:46:43.580
this, that every luxury that I would want that I don't have right now is not an augmentation to
00:46:49.340
my daily life. My daily life is perfect. It's an augmentation to how good I could be at trolling
00:46:54.700
people. No, not for our kids. I mean, I'd probably want things for our kids to
00:47:00.060
make them happy arbitrarily. They get home-cooked meals. They are getting a top-notch education system
00:47:07.180
that we were able to build for them. They're going to get the best friends you can program.
00:47:11.660
You know, what could they possibly want? I mean, they have a pretty good, great outdoor space to play in.
00:47:19.260
Yeah. I don't know. I think a post-AI world, though, isn't about the fun stuff you're going to do.
00:47:29.020
A post-AI world is about the extent to which it can augment your ability to maximize that which is
00:47:35.900
meaningful to you. And everyone who uses it to maximize the amount of fun they have is going to
00:47:41.420
die out so fast that they don't even matter. I think you're misjudging the value of Whuffie in a
00:47:47.660
post-AI world. Human attention is going to matter a ton in this timeline. It is. No. And in terms of
00:47:55.500
survival, too, just making it by in a post-AI economy, 100%. However, I think...
00:48:03.420
Whether you live or die is going to matter a lot. Yeah. But also, convincing yourself that it's worth it
00:48:10.540
to do hard things and bother to create a family and pass people on and do anything in life also is...
00:48:18.140
Right. But I think trolling is key to vitalism. And I think it's also key to keeping
00:48:27.020
attention on yourself within the existing attention economy. And I think that that is...
00:48:33.900
Look, attention from reporters, attention from the media is attention from AI. If you are in the space
00:48:42.060
of things that AI notices, among the people it doesn't think can be eradicated without a second thought,
00:48:48.540
that is going to matter a lot as things begin to change.
00:48:56.780
Exactly what we're doing now. Maximum trolling. But that's what I was saying. That's why I'm
00:49:02.940
thinking, okay, how do I maximally freak people out if I accumulate more? Zuckerberg right now,
00:49:10.220
right? He's doing a very bad job at capturing the attention economy. Elon has done a very good job at
00:49:18.700
capturing the attention economy. Mark Cuban has done a medium
00:49:23.340
job at capturing the attention economy. The people who are doing a better job, who has done the best
00:49:29.180
job of the rich people? Trump, capturing the attention economy. Your ability to capture the
00:49:34.540
attention economy is your worth within this existing ecosystem. And I think that people are like,
00:49:44.380
the people who are like, I just want to remain unnoticed. Being unnoticed is being forgotten
00:49:50.060
in a globalized attention economy. And worse than that is being private, I think.
00:49:57.420
When you hear people go on about privacy, it's worse. You probably have something about you that's noticeable,
00:50:02.860
and you are choosing to squander it. Being unnoticed may just mean you don't have what it takes,
00:50:08.860
and I'm sorry if that's the case, but it's worse when you're like, I want my privacy.
00:50:16.300
Yeah, no, we put all our tracks and similar things up. We put all our books, in plain text, on
00:50:20.780
like multiple sites that we have, like on the Pronatalist site and on the Pragmatist Guide site.
00:50:25.660
And I put it up there just for AI scraping, so that it's easier for AIs to scrape our content and
00:50:33.740
The problem is we've talked about this so much already. I have like nothing to say,
00:50:39.580
because I don't talk about anyone else with this, and I don't think about this
00:50:43.260
the same as you do, because this isn't my sphere.
00:50:46.140
Well, I mean, we should be engaging. We should be spending time. I spent like this entire week
00:50:50.140
like studying how LLMs learn. Like I was like, there's got to be something that's different from
00:50:55.660
the way the human brain works. And the deeper I went, it was, nope, this is exactly how the
00:50:59.340
human brain works. So convergent architecture,
00:51:05.020
my concept of the utility convergence, and you can Google this. I invented this concept,
00:51:10.620
no one else did. And it's very different from Nick Bostrom's instrumental convergence,
00:51:15.900
because a lot of people conflate it with his. Just so you understand the difference between the concepts:
00:51:19.500
Instrumental convergence is the idea that the immediate goals of AIs with a vastly wide array of
00:51:27.340
goals are going to be the same, i.e. acquire power, acquire resources. It's like in humans,
00:51:31.820
like whatever your personal objective function is, acquire wealth is probably step number one.
00:51:37.020
You know, so basically what he's saying is they acquire power, acquire influence, acquire resources,
00:51:41.020
okay, right. Utility convergence doesn't argue that. Utility convergence argued, back when everyone
00:51:45.340
said I was crazy, and you can look at our older episode where we talked about a fight we had
00:51:48.220
with Eliezer Yudkowsky about this, that AI is going to converge in architecture, in goals, in ways of
00:51:54.620
thinking as it becomes more advanced. And I was absolutely correct about that. And everyone
00:51:59.020
thought I was crazy. And they even did a study where they surveyed AI safety experts. None of
00:52:02.940
them predicted this. I'm the guy who best predicted where AI is going because I have a
00:52:09.660
better understanding of how it works because I'm not looking at it like a program. I'm looking at it
00:52:14.380
like an intelligence. And that's what it is. It's an intelligence, like 100%.
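To make the distinction above concrete, here is a toy sketch, not anything from the episode or from Bostrom's own writing: three hypothetical agents with completely different terminal goals all rate "acquire resources" as the best first move, which is the instrumental convergence claim. Utility convergence, as described above, is the separate claim that the goals and architectures themselves converge as models become more capable.

```python
# Toy sketch (illustrative assumption, not a real model): agents with different
# terminal goals all pick the same instrumental first move, "acquire resources",
# because resources raise the expected payoff of whatever they ultimately want.
from dataclasses import dataclass

ACTIONS = ["acquire_resources", "pursue_goal_directly"]

@dataclass
class Agent:
    name: str
    terminal_goal: str
    resources: float = 1.0

    def expected_payoff(self, action: str) -> float:
        # Pursuing the goal directly pays off in proportion to current resources;
        # acquiring resources first doubles them, discounted for the extra step.
        if action == "pursue_goal_directly":
            return self.resources
        return 2 * self.resources * 0.9

agents = [
    Agent("paperclip_maximizer", "make paperclips"),
    Agent("art_curator", "preserve art"),
    Agent("therapist_bot", "maximize client wellbeing"),
]

for a in agents:
    best = max(ACTIONS, key=a.expected_payoff)
    print(f"{a.name} (goal: {a.terminal_goal}) -> first move: {best}")
# All three print "acquire_resources": different ends, same instrumental means.
```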
00:52:18.780
Yeah. Anyway, I love you too, Des, Simone. You are perfect. Thank you for helping me think
00:52:24.860
through all this. For dinner tonight, I guess we're reheating pineapple curry.
00:52:31.420
Yeah, I'll do something a bit different tonight. Let's do Thai green curry.
00:52:34.860
Something, something different. Would you like that with coconut lime rice, or I think we have
00:52:40.300
one serving of naan left or refried, sorry. Yeah. Fried rice.
00:52:49.900
Did this change your perspective on anything, this conversation?
00:52:53.020
You articulated things using different words that gave me a slightly different perspective on it. But
00:52:57.820
I think the gist of the way that you're looking at this is you're thinking very collaboratively and
00:53:08.060
thinking about intelligences interacting. And I think that that's probably one of the bigger
00:53:13.180
parts of your contribution. Other people aren't thinking along lines of how do intelligences interact
00:53:22.220
in a more efficient way? How can I create an aligned incentives? Like you're thinking about this
00:53:26.540
from the perspective of governance and from the perspective of interacting humans. Whereas
00:53:33.260
I think other people are thinking, how can I more optimally make this thing in isolation smart?
00:53:39.180
How do I train like the perfect super child and have them do everything by themselves when
00:53:45.580
that's never been how anything has worked for us?
00:53:48.620
It's also not how the human brain works. The human brain is basically multiple completely
00:53:53.340
separate individuals, all feeding into a system that synthesizes your identity. And we know this
00:53:58.700
as an absolute fact, because if you sever a person's corpus callosum, if you look at split
00:54:03.660
brain patients, just look at the research on this. Basically, the two parts of their brain operate as two separate individuals.
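As a loose illustration of that "many separate subsystems feeding one synthesizer" picture, here is a minimal sketch; the module names and scoring scheme are invented for illustration and are not a model of the brain, of split-brain research, or of any actual LLM architecture.

```python
# Minimal sketch of the "separate subsystems, one synthesized output" analogy.
# Each hypothetical module independently proposes an action with a confidence
# score; the synthesizer only arbitrates, it never generates content itself.
from typing import Callable

def threat_module(stimulus: str) -> tuple[str, float]:
    return ("avoid it", 0.8 if "snake" in stimulus else 0.1)

def curiosity_module(stimulus: str) -> tuple[str, float]:
    return ("inspect it", 0.6)

def social_module(stimulus: str) -> tuple[str, float]:
    return ("ask someone about it", 0.3)

MODULES: list[Callable[[str], tuple[str, float]]] = [
    threat_module, curiosity_module, social_module,
]

def synthesize(stimulus: str) -> str:
    # Collect every module's proposal and pick the most confident one.
    proposals = [module(stimulus) for module in MODULES]
    action, _ = max(proposals, key=lambda p: p[1])
    return action

print(synthesize("a snake on the trail"))   # -> "avoid it"
print(synthesize("a strange mushroom"))     # -> "inspect it"
```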
00:54:10.060
Yeah. So it's just kind of odd that you're alone in thinking about things these ways.
00:54:17.900
I would expect more people to think about things these ways. And I keep feeling like I'm
00:54:22.300
missing something, but then whenever we're at a party and you do bring it up and someone does give
00:54:26.300
their counter arguments, their counter arguments don't make sense to me. And I'm not sure if that's
00:54:29.500
because I'm so... No, it's that this is a simulated environment at a fulcrum point of human development and everyone
00:54:37.580
else is not a fully simulated agent. Yeah. That's less likely to be true. So normally when
00:54:46.780
everyone is arguing something different and they're so confident in it and they all say you're wrong,
00:54:52.460
that means that we've done something wrong. The problem is that I just am not seeing
00:54:57.980
It's not just that. You were there, you've lived this. You remember the fight I had with
00:55:02.380
Eliezer Yudkowsky about utility convergence. Yes.
00:55:04.860
You have now seen utility convergence has been proven in the world.
00:55:08.460
Like, apparently I understood AI dramatically better than he did.
00:55:12.860
He would gaslight you now and be like, no, I've always understood it that way. You're wrong.
00:55:17.340
But no, but I was there for that conversation. I remember it too. And yes, he was really insistent
00:55:25.340
about that though. He didn't really argue his point so much as just condemn you for putting
00:55:31.020
future generations at risk by not just agreeing with him. No, he's actually a cult leader. Like
00:55:37.820
he does not seem to understand how AI works very well, which is a problem because what really happened
00:55:44.700
with him is he developed most of his theories about AI safety before we knew that LLMs would be the
00:55:49.660
dominant type of AI. And so he has a bunch of theories where the risks from
00:55:54.780
a hypothetical AI were what he was focused on instead of the risks from the AIs we got.
00:56:01.580
And the AIs we got, the risks that they have are things like mean layer risks that he just never anticipated.
00:56:09.260
Because he was expecting AI to basically be pre-programmed, I guess I would say,
00:56:14.540
instead of an emergent property of pouring lots of data into algorithms.
00:56:20.140
Yeah. Yeah. Which is, I don't think anyone could have easily predicted that.
00:56:25.580
I mean, and that's another reason why we say AI was discovered and not invented, like,
00:56:30.300
yeah, we didn't know this was going to work out this way.
00:56:34.780
I'm pretty sure I talk about that in some of our early writings on AI.
00:56:39.340
That it's just going to be about feeding it a ton of data.
00:56:41.260
Yeah. That I expected it to be an emergent property of lots of data and not about pre-programming
00:56:47.420
things because I don't know. That just seemed intuitive to me.
00:56:56.940
It doesn't matter. We are where we are now. And I've already
00:57:00.380
out-predicted the entire AI safety community. So let's see if I can continue to do that.
00:57:05.500
I mean, all that matters is if you do. I don't think the satisfaction, Malcolm, is in having
00:57:13.820
proven them wrong. It's in building infrastructure and family models and plans around systems like
00:57:21.900
that and benefiting from them. Sorry. I thought the satisfaction was in turning them into biodiesel.
00:57:27.500
I thought the satisfaction was in thriving and being able to protect the future of human
00:57:33.820
flourishing. Yes. And that will require a lot of biodiesel.
00:57:42.060
I will go make your curry. I love you to death. I love you to death too, Malcolm. Goodness gracious.
00:57:50.540
In our towers high where profits gleam, we tech elites have a cunning scheme.
00:58:05.100
Unproductive folks, your time has passed. We'll turn you into fuel efficient and fast.
00:58:13.100
Just get in line to become biodiesel. Oh, stop crying, you annoying weasel.
00:58:21.180
As laid out by Curtis Yarvin, handle the old or we'll all be starving.
00:58:28.700
Why waste time on those who can't produce? When they can fuel our grand abuse? A pipeline from the
00:58:45.900
nursing home to power cities, our wicked dome. Just get in line to become biodiesel. Oh, stop crying,
00:58:56.860
you annoying weasel. As laid out by Curtis Yarvin, handle the old or we'll all be starving.
00:59:14.220
With every bite and every code, our takeover plan will soon explode. A world remade in Silicon's name,
00:59:25.580
where power, where power and greed play their game. Just get in line to become biodiesel. Oh, stop crying,
00:59:35.100
you annoying weasel. As laid out by Curtis Yarvin, handle the old or we'll all be starving.
00:59:44.940
Biodiesel dreams, techno-feudal might. Old folks powering our empire bright. Industries humming, world in our control. Evil plans unfolding, heartless and bold. So watch us rise in wicked delight.
01:00:10.940
As tech elites claim their destined right. A biodiesel future, sinister and grand, with the world in the palm of our iron hand.
01:00:22.940