ManoWhisper
Based Camp
- August 17, 2023
"Mid is Over" How Do We Protect AI From Those it Will Replace? (With Brian Chau)
Episode Stats
Length
37 minutes
Words per Minute
183
Word Count
6,845
Sentence Count
424
Summary
Summaries are generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.
Transcript
Transcript is generated with Whisper (turbo).
00:00:00.000
Of course, you see Sam Altman going to testify and he wants a licensing regime.
00:00:04.260
Of course, OpenAI will get a license.
00:00:05.900
Of course, OpenAI's competitors will not get a license.
00:00:08.420
Of course.
00:00:08.760
So if you're OpenAI, you know, and you've now secured billions of dollars in investment
00:00:14.900
in fundamentally these transformer models and, you know, the kind of hardware stacks
00:00:19.500
that are specific to replicating them.
00:00:22.240
And you get something that's out of left field or you get something that's limited
00:00:25.780
by a different factor, that's not limited by the advantages that OpenAI has over its competitors.
00:00:31.740
You know, maybe Foom is not the sound of AI accelerating,
00:00:34.520
but Foom is the sound of OpenAI's stock going to zero.
00:00:37.800
This is no longer a battle of, like, internet shitposters.
00:00:41.020
You know, this is a battle of real political interests
00:00:44.800
and the forces that drive the Democratic and Republican parties.
00:00:49.560
I also love that it's done such a good job of getting rid of these useless grifters,
00:00:54.280
like, artists and writers, because honestly, they had these pointless degrees.
00:01:02.160
They were a huge, I think, cause of our society's degradation.
00:01:07.380
And I'm so glad that AI has replaced them.
00:01:10.760
There's, okay, like, there's a real thing underlying that, which is that, like, mid is over, right?
00:01:16.660
And here, like, actually mid, I don't mean...
00:01:18.600
Would you like to know more?
00:01:19.520
Hello, everyone.
00:01:21.140
We are very excited to welcome Brian Chau for this episode of Based Camp.
00:01:25.400
We are going to be talking with Brian, not about his amazing podcast, From the New World,
00:01:30.240
but rather a new venture of his, which is called the Alliance for the Future.
00:01:35.160
So I think that the number one thing to understand is that it's a non-trivial question,
00:01:42.440
whether you are able to use machine learning,
00:01:45.280
whether you are able to write machine learning, whether it will be illegal to log on to, you
00:01:49.400
know, chat.openai.com.
00:01:51.780
That is a real question that people have different answers to, that many people want for various
00:01:59.940
reasons to ban machine learning pretty much wholesale.
00:02:04.480
And this is, you know, the purpose of Alliance for the Future is to do everything we can to
00:02:11.440
make sure that does not happen.
00:02:12.460
Hmm.
00:02:14.200
So what are you saying to the AI apocalypticists who start immediately screaming at you and
00:02:21.700
flailing when you say things like this?
00:02:23.920
What's your take on Eliezer Yudkowsky's positions and how do you respond to them?
00:02:28.300
Yeah, it's very funny.
00:02:29.420
This is very often the first question that people ask me when I mention this.
00:02:33.680
And, you know, striking the rogue data centers is among, I think, the less
00:02:42.660
dangerous versions of what people are trying to get.
00:02:47.800
Like this is something that, maybe, I don't know, like there are people in
00:02:52.800
AFTF, in Alliance for the Future, that are more worried about the EAs.
00:02:56.180
You know, I'm friends with many EAs and I've also seen the kind of regulatory environment
00:03:02.420
in Washington, in other countries, in the EU. You know, like the EU Commission
00:03:09.520
are not EAs.
00:03:10.960
You know, the Chinese government is not EAs.
00:03:12.940
Chuck Schumer is not an EA.
00:03:14.620
Right.
00:03:14.760
I just don't think they're the major threat.
00:03:18.380
They're like the closest thing to like a non-retarded version of the ban AI argument.
00:03:24.120
And that's why they get engaged with in like smart parts of Twitter.
00:03:27.360
It's because they're kind of like the, the, the smart representatives of this much bigger
00:03:33.500
target of people that I think most EAs would consider like stupid and misguided.
00:03:39.540
So I don't worry too much about the EAs.
00:03:42.180
I don't want this to be a conflict between, you know, like EAs and like accelerationists
00:03:48.960
or whatever.
00:03:49.560
I think that that's, you know, that's very silly.
00:03:51.820
That's getting caught up in, like... What I'm asking is, so, you know, even our audience,
00:03:59.020
you know, we've got, I suspect some people in our audience who think AIs are going to
00:04:02.500
kill us all.
00:04:03.580
A lot.
00:04:04.600
Not a lot because we've taken very hard stances against that.
00:04:07.960
Well, very, so I don't know if you know our position on AI.
00:04:11.880
We take a position about variable AI risk, which is very different than absolute AI risk.
00:04:16.880
But I'm wondering, what is your stance?
00:04:18.840
Like if you're trying to calm somebody down, who's like, yeah, but it looks like genuinely,
00:04:24.560
I don't see how we remove this threat without restricting access.
00:04:30.200
How do you communicate to them?
00:04:32.700
So there's actually a difference between like Eliezer and like fans of Eliezer.
00:04:36.760
With fans of Eliezer, I just worry that they're driven too much by the current hype cycle.
00:04:42.060
Then, you know, they saw that it's cooling down a bit.
00:04:44.360
They saw, you know, like GPT-3 get released and then they saw GPT-4 get released.
00:04:49.260
And they were like, what if it just keeps developing at the same rate as between
00:04:54.320
the GPT-3 and GPT-4 releases?
00:04:54.960
And of course, that did not happen.
00:04:57.180
That was, you know, that was the accumulation of lots of work and, you know, the effort across the entirety of OpenAI's existence.
00:05:07.640
And there are many technological factors that just slow development.
00:05:12.720
The trend you see with, you know, an accelerating technology:
00:05:17.560
You see it get adopted in early market versions.
00:05:20.840
And then you start to see the speed of that development petering out.
00:05:24.740
I actually have a more specific argument going through all of this; so far I have the hardware part done.
00:05:31.400
This has been put on the back shelf a bit because of everything else I'm doing right now.
00:05:35.520
But I do think the first part of the article actually holds up very well in the past few months,
00:05:42.820
which is Diminishing Returns in Machine Learning, part one.
00:05:46.380
You can find that also at fromthenew.world.
00:05:50.460
It is the home of everything now, everything that I do.
00:05:54.100
And yeah, so to those people, I would say you are severely overestimating the progress in artificial intelligence.
00:06:02.720
Many of them have, you know, this word, and I did not come up with this word.
00:06:07.620
They came up with this word, foom, which is basically the idea that, you know, it goes foom.
00:06:16.380
This sounds like a straw man version of the argument.
00:06:22.460
But it is actually their own argument that, you know, once it starts accelerating,
00:06:27.480
it will just become, you know, extremely fast.
00:06:30.260
It will only get faster and faster.
00:06:31.900
It's like, this is just, you know, hype cycle fantasy.
00:06:37.260
In terms of like the longer term.
00:06:40.020
In terms of the longer term.
00:06:41.220
Explain why it's hype cycle fantasy.
00:06:42.700
And I, like, I, this is the main thing.
00:06:45.180
Look, yeah, it's not the mainstream view.
00:06:48.220
Like, maybe it's the view of, like, my Twitter posters.
00:06:51.680
No, no, I mean, among, like, EA Zoomers I know.
00:06:54.480
Like, they genuinely... And I don't mean to be, we've tried really hard not to insult
00:07:00.680
specific people on this show.
00:07:02.860
But Eliezer Yudkowsky is the Greta Thunberg of AI apocalypticism.
00:07:07.320
No, no, you have to, you have to have the Straussian reading of Eliezer, right?
00:07:12.640
So, so the Straussian reading of Eliezer.
00:07:14.920
So, so like the surface level reading of Eliezer is that, like, he, he just wants to
00:07:18.540
create, you know, massive panic and for government to take control of everything.
00:07:23.680
You know, that, that's what he writes in his Time article pretty much.
00:07:26.780
It is.
00:07:27.040
Like, this is, this is like not even really an exaggeration.
00:07:29.720
So like the Straussian reading of Eliezer is, you know, he doesn't actually
00:07:33.720
think that the world is, like, 90% absolutely
00:07:38.480
doomed, but he thinks that, like, no one will take it seriously, right?
00:07:41.880
He thinks that, you know, everyone's wearing earmuffs,
00:07:46.240
so you better turn the volume up to a hundred. And this is, by
00:07:50.860
the way, like, not necessarily an incorrect model of how the government works.
00:07:55.300
Like, this is what Fauci did as well.
00:07:57.260
You know, the, the reasoning was, you know, that you wouldn't give us power to do any
00:08:01.440
pandemic prevention measures unless we took very extreme positions on how dangerous the
00:08:08.100
virus was, on how confident we were about certain interventions.
00:08:12.780
And, you know, in terms of the politics, Fauci was correct.
00:08:18.040
Like that was actually correct.
00:08:20.440
He, he got the power.
00:08:21.700
It's very likely that he would not have gotten the power if he made, you know, like a
00:08:25.220
moderate case for the risks of COVID.
00:08:27.260
And I think Eliezer has learned that lesson.
00:08:29.660
So I don't know, like, maybe, so like, I think Eliezer is, like, not
00:08:36.720
as insane as it sounds.
00:08:38.460
Like the things that he advocates for are like truly insane.
00:08:40.980
But like, I think like Eliezer, the man, you know, is not necessarily that insane.
00:08:45.380
So your argument is just, he's, he's wildly overcorrecting.
00:08:49.720
And that in reality, AI is not going to accelerate as quickly as everyone thinks it is.
00:08:54.760
And therefore, as AI more linearly develops, people can develop safeguards as necessary.
00:09:03.820
Therefore, it doesn't present an existential risk.
00:09:06.000
And therefore we shouldn't be, be stifling its development with regulation and rules.
00:09:12.540
Is that correct?
00:09:14.100
Yes.
00:09:14.420
So it's mostly correct.
00:09:16.920
So, and I should say here, I'm speaking for myself, not for, like, AFTF in
00:09:21.960
general. But so, like, the name of the original version of
00:09:29.300
EA that cared about AI was long-termism.
00:09:32.260
And the reason why it was called long-termism is, you know, the idea is that in hundreds
00:09:38.980
of years, humanity, if you just look at, you know, population growth or like, I don't
00:09:43.820
know, you guys have a much more pessimistic version of this, right?
00:09:47.280
But if we create a way to solve our population growth problems and
00:09:52.180
have the earth continue growing, then the number of humans in the future is just much
00:09:55.860
more than the number of humans in the present.
00:09:57.740
So you have to care about the long-term.
00:09:59.140
And in terms of the long-term, in terms of the timeframe of like hundreds of years, I
00:10:04.120
am not, you know, I'm not completely sure that AI will not be a problem in like a hundred
00:10:08.480
years.
00:10:09.080
That is something, you know, that, that I accept as, as a possibility.
00:10:13.760
The question is, you know, if you have a pace of technological growth, that is, you know,
00:10:20.860
what you would infer from every other technology that has ever existed pretty much from, you
00:10:27.940
know, like the history of technological development from, you know, early science to the industrial
00:10:32.320
revolution, to more recent cycles, the 2000 cycle, you know, the, the very recent, you
00:10:37.680
know, like 2020 to 2021 cycle of technologies.
00:10:41.320
You get this very well-known thing called an S-curve.
00:10:45.300
You get an early acceleration and then it peters out.
00:10:47.840
And then that's where the hard work has to be done.
00:10:49.400
Actually getting the technology adopted in all these sectors of society.
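The S-curve Chau invokes here is conventionally modeled with a logistic function; this is a standard textbook form, not a model specified in the episode:

$$ A(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

where $A(t)$ is cumulative adoption, $L$ is the saturation ceiling, $k$ is the growth rate, and $t_0$ is the inflection point. Growth looks roughly exponential before $t_0$ and peters out as $A(t)$ approaches $L$, which is the regime where, as he says, the hard adoption work gets done.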
00:10:53.360
And this is also like the mainstream economist view.
00:10:56.320
And, you know, the thing is, this is the mainstream view, right?
00:11:00.980
The mainstream view is that, you know, hype cycles happen, that technological progress
00:11:05.180
is good, but it is not necessarily something that, you know, you should go
00:11:11.140
all in on, right?
00:11:11.880
You should, you know, you should invest in some tech stocks, but you should not, you know,
00:11:15.180
gamble your entire life savings on one tech stock and so on and so forth.
00:11:19.500
So I really do think this is the implementation of the normal, the normie, like non-hyper online
00:11:28.140
position on technology as policy.
00:11:31.000
That's how I would put it.
00:11:33.140
So functionally, your organization, if it succeeds, what's it doing?
00:11:39.300
What specific changes does it make in policy?
00:11:42.140
How do you achieve those?
00:11:43.640
What research are you outputting?
00:11:46.600
So Alliance for the Future is a completely new think tank.
00:11:49.760
The number one thing is just to balance the scales, because right now there is a lot of
00:11:55.140
funding either from the EA side, although I don't think that's the main problem, but even
00:11:59.520
more from existing political interests.
00:12:02.440
And of course, you see Sam Altman going to testify and he wants a licensing regime.
00:12:07.680
Of course, OpenAI will get a license.
00:12:09.320
Of course, OpenAI's competitors will not get a license.
00:12:11.680
Of course, you know, it increases the barriers to entry.
00:12:15.140
Actually, so this is a thing that you're talking about briefly here, but I really want our audience
00:12:19.520
to understand this.
00:12:20.700
So in the business world, you know, what you want to do to maximize the value of your company
00:12:26.560
is to advocate for regulation.
00:12:29.060
Like a lot of people are really surprised that, like, Google would advocate for, like, internet
00:12:32.960
search regulation. But this is what you see with any large monopoly in a space:
00:12:38.680
is they spend a huge chunk of their revenue advocating for regulation of their own company.
00:12:45.120
And the reason they're doing that is because it prevents new entrants from entering the market,
00:12:49.760
which protects their monopoly.
00:12:52.780
That is why people like Sam Altman are advocating for this regulation.
00:12:56.240
It's not because they're genuinely scared of AI.
00:12:59.300
It's because they're the first players on the market, and they want that to continue.
00:13:02.160
This is something, this is actually a very important topic.
00:13:04.880
Okay.
00:13:05.160
Like this is a good venue to be like very autistic about this.
00:13:08.260
So there is this economist, I think he published most of his stuff in the 60s and 70s,
00:13:16.180
considered one of, like, the founding people of political economy:
00:13:20.920
Gordon Tullock.
00:13:21.720
Okay.
00:13:22.160
All of the GMU people love this guy because he was from GMU, I think.
00:13:25.880
And he has this idea called the Tullock rectangle.
00:13:29.820
And that idea is that, okay, if you've looked at a supply and demand
00:13:34.600
curve, when there's regulation that interferes with the supply and demand curve, it can increase
00:13:38.200
the profit of an industry that is being interfered with, because it stops some trades.
00:13:43.740
And essentially, like, the amount of increased
00:13:48.120
profit from these regulations outweighs the amount that you lose from missing
00:13:52.900
out on some trades.
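A minimal sketch of the trade-off being described, with illustrative symbols that are not from the episode: suppose regulation cuts industry output from $q_0$ to $q_1$ and the market price rises from $p_0$ to $p_1$. Then, roughly,

$$ \Delta\pi \;\approx\; \underbrace{(p_1 - p_0)\,q_1}_{\text{Tullock rectangle}} \;-\; \underbrace{\tfrac{1}{2}\,(p_0 - c)\,(q_0 - q_1)}_{\text{profit lost on forgone trades}} $$

where $c$ is the marginal cost of the forgone units and the one-half factor assumes a linear supply curve. The regulated industry comes out ahead whenever the rectangle term outweighs the triangle term.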
00:13:54.560
The way that I really want to see this expanded is with firm dilution and not only with firm
00:13:59.860
dilution, but with dilution of the entire industry.
00:14:04.460
So what do I mean by this?
00:14:05.800
In an industry like machine learning, you basically have a precedent that people
00:14:11.060
are not sure is optimal.
00:14:12.860
So you have, you know, right now we have transformers pretty much.
00:14:16.060
We have this paper from 2017.
00:14:17.880
There have been minor modifications to it.
00:14:19.820
The paper is called Attention Is All You Need.
00:14:21.740
And this basically outlined the way in which all language models and many similar models
00:14:28.280
operate.
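For reference, the core operation that paper introduced is scaled dot-product attention; the equation below is from the paper itself, not something stated in the episode:

$$ \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V $$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension.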
00:14:29.700
And it's the best we've got so far, but it's not
00:14:36.940
like a mathematical proof.
00:14:38.300
It's not like a certain thing.
00:14:39.740
You know, we have no idea if this is the best way to do machine learning.
00:14:44.580
And in fact, many of the people who are more hyped about AI think that, you know, we're
00:14:47.680
just about to get another breakthrough in, you know, how we do machine learning.
00:14:53.000
And so if you're OpenAI, you know, and you've now secured billions of dollars in investment
00:14:59.960
in fundamentally these transformer models and, you know, the kind of hardware stacks that
00:15:04.740
are specific to replicating them.
00:15:06.980
And you get something that's out of left field, or you get something that's limited
00:15:10.820
by a different factor,
00:15:11.840
that's not limited by the advantages that OpenAI has over its competitors.
00:15:16.480
You know, maybe foom is not the sound of AI accelerating, but foom is the sound of
00:15:21.040
OpenAI's stock going to zero.
00:15:22.920
To this topic, just a shout-out for listeners, 'cause I have a little request from a friend
00:15:28.600
here.
00:15:29.000
I know of this company that's found a way to create, like, much tighter chipsets
00:15:34.220
for running your own models.
00:15:37.180
Anybody who's interested in investing in, like, a large fab, like, look, I'm talking,
00:15:44.020
you know, many millions of dollars, but it could make AI much less expensive to run.
00:15:48.820
I've got a company that's interested in doing that right now.
00:15:51.380
They've already developed all this stuff.
00:15:52.860
They're just looking for whoever they'd work with in the space.
00:15:54.860
Oh, this is fascinating.
00:15:56.460
So this is, like, not an OpenAI competitor, but like a TSMC competitor or something like
00:16:00.680
that.
00:16:00.820
Yeah.
00:16:02.100
Okay.
00:16:02.700
Interesting.
00:16:03.220
Interesting.
00:16:04.340
Another thing I wanted to dive into is before we started recording, you had alluded to this
00:16:08.360
not being really an EA style think tank.
00:16:12.760
And I thought that was interesting because you are trying to be effective in your altruism
00:16:18.340
through doing this, right?
00:16:19.980
Like, so, tell me more.
00:16:22.020
Why do you think this is not an EA thing?
00:16:24.860
So, like, on the Center for Effective Altruism homepage, they have, like, this essay
00:16:29.540
about, like, who is an effective altruist.
00:16:33.020
And they basically say like everyone who wants the world to be better and who likes evidence.
00:16:38.540
And I'm like, okay, you know, I guess it's an EA think tank.
00:16:42.240
Sure.
00:16:42.540
Give us money.
00:16:43.880
Of course, you know, most EAs want more regulation, even
00:16:49.800
in the short term. Or actually, I'm not sure about, like, the people who are, you
00:16:53.720
know, most funded by EAs.
00:16:54.980
That's what they want.
00:16:56.000
I should say as well, this is an article that might be out by then, but there's also a very
00:17:01.340
strong EA case.
00:17:02.880
Even if you think that... I actually say this as well:
00:17:06.520
the more you think that AI is happening soon, the more it's the Flight 93 election.
00:17:16.500
The more you should be trying to make sure that there is no government control of AI because
00:17:23.040
there is one institution in all of human history that is guaranteed to be misaligned.
00:17:28.860
And that is the most powerful government in the world.
00:17:32.180
Why is that institution always guaranteed to be misaligned?
00:17:35.380
The answer is political economy.
00:17:37.380
It goes back to Tullock.
00:17:38.440
It goes back to many of the factors we were talking about. But people have the idea,
00:17:42.280
Oh, you know, in markets, there's externalities and in markets, people will compete.
00:17:46.300
And the thing that makes them succeed will not always be the thing that's best for the
00:17:49.120
general population.
00:17:50.820
And, you know, when it comes to elections, you know, the thing that makes people succeed
00:17:54.120
in elections, the thing that makes them get the most votes is always going to be the
00:17:57.700
thing that is, you know, most rational and sane.
00:18:00.200
And of course, that's not true.
00:18:01.540
And you go further down the levels, right?
00:18:04.840
So there are these, like, nested layers of the onion as you go down, like,
00:18:09.760
the policymaking stack, and people don't transfer their lessons.
00:18:13.720
So I think many EAs, maybe they've read, like, Bryan Caplan; they might even
00:18:18.340
have read Garett Jones.
00:18:19.400
And they understand the flaws of the voter.
00:18:23.120
They understand the flaws of, you know, when you go and cast your ballot for Donald Trump
00:18:27.620
or Joe Biden, that, you know, they're not
00:18:32.860
necessarily going to be doing the best thing for the world.
00:18:35.540
But you go one level down the stack to the legislative level or to the administrative level,
00:18:41.200
and they do not see the exact same incentives in play.
00:18:44.580
Where when you pass a bill, for example, when you pass, like, the authorization of the
00:18:49.820
use of military force, right?
00:18:51.760
Or when you pass, like, the budget reconciliation bill, what
00:18:57.320
factors go into play in terms of getting your policy priority into that bill?
00:19:02.340
And in what ways will it be corrupted by the process, by the requirements and the incentives
00:19:07.860
for it to be there in the first place?
00:19:09.760
And, you know, the short answer, this is something very funny that I posted about recently and
00:19:16.760
got a lot of support on Twitter for, is that like, there are some ideas that are so correct
00:19:21.060
that even the most strawman version of them is true.
00:19:24.380
So, like, the extreme strawman version of my opinion, or of this, it's really
00:19:30.140
not my opinion originally but, like, the field of political economy's, the extreme strawman
00:19:35.440
of it is that, like, whatever idea you want to get into law will be unrecognizably corrupted
00:19:40.760
once it is in law.
00:19:42.480
And even that like extreme strawman is like, pretty much true.
00:19:48.620
You just look at like,
00:19:50.020
I would call it extreme strawman.
00:19:51.660
You're like, this is what my opponent's like, but it's true.
00:19:53.920
Even the most insane of them.
00:19:56.160
Yeah.
00:19:56.460
Yeah.
00:19:56.840
That's, that's the best thing about it.
00:19:58.700
Right.
00:19:58.920
Like, even, you know, the most fervent supporters of a law, you know,
00:20:04.160
you ask them, you know, like, what happens?
00:20:07.640
What do you think this law will look like after it's passed?
00:20:10.120
They'll say, like, I don't know.
00:20:11.660
I just hope it's... This is also, like, a take that I want to specifically
00:20:17.060
address.
00:20:17.680
If we're talking about addressing EA takes: I've had multiple people that I really
00:20:22.680
respect say, like, oh, we want the government to interfere because it'll just slow down
00:20:28.560
progress.
00:20:29.240
We just want it to like cause harm.
00:20:31.660
And as long as it's, like, causing harm, it will reduce the probability
00:20:36.120
or the speed that we get AGI.
00:20:38.820
And this is also something that I don't necessarily think is true.
00:20:43.280
And the best counter example in recent memory is gain of function research.
00:20:47.600
Right.
00:20:48.060
So we have this phrase called, like, anarcho-tyranny.
00:20:50.680
Usually people talk about it in the context of SF, which is, like, the bad actors are not punished
00:20:54.940
because they're outside of the system.
00:20:56.260
The only people who are punished are like the good actors in the system.
00:20:59.540
Right.
00:20:59.700
So you're punished for, like, protecting your convenience store, but,
00:21:03.220
like, the people who rob your convenience store are rewarded by the state.
00:21:07.520
And you see the exact same thing in regulatory capture.
00:21:10.460
You see the FDA, you know, and you guys actually have had experience with
00:21:16.840
this, right,
00:21:17.320
with going after novel technologies that have some promise. But, you know, this
00:21:22.780
is not the FDA, but the US government will also fund gain-of-function research in Wuhan.
00:21:28.060
So the question is not, you know, this, like, one singular lever of what it does to
00:21:33.460
the industry.
00:21:34.100
It is much more the question of, like, how it influences the distribution of players.
00:21:42.120
And, you know, the most likely thing that will happen with the distribution of players
00:21:45.160
is that, oh, actually it's just OpenAI, Google, Facebook, and so on. Or, not really a "so on."
00:21:53.140
It is really much just, in this case... you know, in some regulatory capture cases, they're
00:21:58.960
more varied; in this case, you know, it is really just, you know, like, the major named
00:22:03.200
players.
00:22:03.580
So let's say that you are successful with this beyond your wildest dreams, how will
00:22:09.640
the world be different?
00:22:10.660
Is it a scenario in which rather than there being like three players who are setting the
00:22:15.080
tone for AI, it's a little bit more distributed, like a lot of people are contributing, it's
00:22:19.180
more open source or, you know, what, what, what kind of environment or ecosystem are you
00:22:24.080
trying to create?
00:22:25.860
So many people, this is a very fascinating critique of political economy, because if you look at
00:22:31.560
these incentives, they rarely ever change.
00:22:33.680
So there are cases of people just being, like, doomers, not AI
00:22:38.020
doomers, but people saying, oh, like, the regulatory crackdown is inevitable. But you
00:22:43.300
just look at the history of the United States and that's just not true.
00:22:48.060
The best example is, you guys remember SOPA? Like, the Stop Online Piracy Act.
00:22:54.560
Yeah.
00:22:54.640
They were trying to stop the internet nonsense.
00:22:56.460
Oh my God, yes.
00:22:57.540
Yes.
00:22:57.840
Wow.
00:22:58.180
So, like, this was a case, I think it was, I forget which
00:23:05.220
agency, some agency was trying to pass it, I forget the exact name of it, but
00:23:09.920
it may have been, okay, whatever.
00:23:12.960
But basically they were trying to do this restriction of content on the internet,
00:23:16.300
basically saying that, you know, like, if any random commenter said or, like,
00:23:21.560
linked to porn or something like that, then, you know, the entire
00:23:26.120
website would be subject to legal crackdowns.
00:23:29.220
Right.
00:23:29.660
And people just wrote in, people got media attention.
00:23:34.220
It became, like, this huge... "unification of the internet" was
00:23:39.060
one way that it was described.
00:23:42.400
We were basically unanimous.
00:23:45.720
Everyone was like, this is a terrible idea.
00:23:47.040
Like, this is just, you know, this is just disastrous.
00:23:50.100
And we stopped it.
00:23:51.480
You know, we had a W. And there are instances where there are calls for regulating
00:23:59.980
an emerging industry.
00:24:02.140
And for one reason or another, it just doesn't happen.
00:24:06.420
You can look at the internet, for example, you know, this is in the broader history of
00:24:10.580
the internet.
00:24:11.500
Marc Andreessen actually gave me this example, right?
00:24:13.400
So Marc Andreessen, you know, he talked about this on my podcast.
00:24:18.040
He's been on several others, but it should be out by the time, you know,
00:24:21.440
this releases.
00:24:23.180
And he talked about, you know, like, nowadays, what if you try
00:24:27.640
to ban the internet now?
00:24:28.980
We've created a successful political constituency.
00:24:32.360
If your industry is big enough, which I think machine learning will be, you know, I think
00:24:36.540
even though we won't get, you know, artificial general intelligence, we will get, you know,
00:24:39.900
many commercial applications.
00:24:41.160
We have many commercial applications now, right?
00:24:43.620
And once all of those are adopted, once all of those are, you know, regular parts of people's
00:24:47.500
lives, if it's big enough, like the internet is, and like, I think machine learning will
00:24:51.640
be, then you've created a political constituency.
00:24:54.560
You know, all you have to do is wait for the existing adoption curve to happen.
00:24:58.740
So you're saying that you're trying to broker that transition, to, like, get
00:25:03.680
to a place where adoption is wide enough, where like the market will handle the protection
00:25:08.360
because they are dependent on it and huge fans of machine learning.
00:25:11.640
Right. It's not even the market.
00:25:12.360
It's just, you know, like if half your country is using ChatGPT or using some form of LLM,
00:25:17.280
you know, you're not banning it.
00:25:19.220
I'm sorry.
00:25:19.700
You know?
00:25:19.960
Yeah. Like no one will get reelected if they support any legislation that does that.
00:25:23.340
So basically, before that saturation has been reached, you're trying to ensure that we
00:25:28.860
don't preemptively make that impossible.
00:25:31.260
Right.
00:25:31.320
Yeah. Now is the most important moment
00:25:35.540
for exactly that reason: it's the, you know, it's the period of time
00:25:40.620
in which it is most politically vulnerable but has the most economic
00:25:45.760
potential.
00:25:47.560
Yeah. That's meaningful.
00:25:50.460
Are you concerned about all the people?
00:25:52.860
Because it sounds like you're not really fighting against EA-ers who are making weird
00:25:56.300
arguments against AI.
00:25:57.360
You're more fighting against legislators who are like, well, but, you know, they might
00:26:02.240
have much more, what you might say, like normie arguments against it.
00:26:05.200
Right. So they're going to say, well, what about the fact that AI is going to take jobs
00:26:08.820
away? You know, what are you going to say to them?
00:26:11.160
I want to take an interlude between this.
00:26:14.160
Most of the people who are actually proposing like these, like legislative crackdowns are very
00:26:20.140
anti-EA.
00:26:21.700
You know, they are anti-EA.
00:26:23.640
Like, one of these people, like Timnit Gebru, who is, like, this race grifter who is
00:26:28.100
now, like, focused on machine learning, she absolutely, like, despises
00:26:34.340
EA for, like, honestly, pretty pathological reasons.
00:26:39.740
Like, it's not helpful for her in any way to dislike EA; you know, they could easily
00:26:45.700
be, you know, like, legislative allies. But she, you know, she just absolutely
00:26:51.200
despises them, you know, loves calling them racist because they believe in IQ and they
00:26:57.040
believe in, like...
00:26:57.820
Oh, okay.
00:26:58.420
So that general approach.
00:26:59.900
Yeah.
00:27:00.360
Yeah.
00:27:00.640
Like, a lot of the most despicable people also hate EA. And, like, this is, you know,
00:27:08.980
this is no longer a battle of, like, internet shitposters.
00:27:13.380
You know, this is a battle of real political interests and the forces that drive the Democratic
00:27:20.520
and Republican parties.
00:27:22.600
What's your rebuttal to their arguments though?
00:27:24.480
Because those aren't EA arguments.
00:27:26.000
So what is your rebuttal to: this is going to take away jobs,
00:27:29.540
it's dangerous, and we don't need it.
00:27:31.420
So why should we support it?
00:27:33.240
Et cetera.
00:27:33.820
The jobs point is particularly interesting because it's kind of, like, framed as a
00:27:39.120
Republican concern, and it's very funny.
00:27:44.540
It actually relates to the other discussion that we've had, or the discussion in the
00:27:50.600
future that I have some premonitions about.
00:27:54.800
Yeah.
00:27:55.660
In many cases, the things that AI is replacing are kind of bullshit jobs.
00:28:02.040
They are things that people already dislike. You know, there's this wonderful
00:28:07.040
tweet by Sam Altman that says, you know, today I've had one person tell me that they've used
00:28:14.580
ChatGPT to expand their bullet points into a long corporate email.
00:28:19.520
And another person tell me that they've used ChatGPT to condense a long corporate email
00:28:24.440
into five bullet points.
00:28:26.300
It's the future of communication.
00:28:28.340
I also love that it's done such a good job of getting rid of these useless grifters, like
00:28:33.840
artists and writers, because honestly, they had these pointless degrees.
00:28:41.240
They were a huge, I think, cause of our society's degradation.
00:28:46.440
And I'm so glad that AI has replaced them.
00:28:49.800
There's, okay,
00:28:50.740
like, there's a real thing underlying that, which is that, like, mid is over.
00:28:55.380
Right.
00:28:55.600
And here, like actually mid, I don't mean, you know.
00:28:57.880
Yeah, yeah, yeah, yeah.
00:28:57.900
It is over.
00:28:59.060
I like that way of putting it.
00:29:02.280
It's very funny because people portray, like, the most famous people, like Drake,
00:29:12.020
being replaced by AI.
00:29:13.760
It's like, no, he still has interesting things going on.
00:29:18.500
Or like Taylor Swift.
00:29:19.460
Look at Taylor Swift's like ticket sales, you know, like she's not worried about this.
00:29:22.620
The thing is, like, the top, like, the people who are actually contributing to, you know, the
00:29:31.220
culture that we consume every single day, they're just going to be fine.
00:29:36.860
And in fact, they're going... like, the next generation... Sam Woods was on my podcast.
00:29:40.940
He had this wonderful line, which is your job's not going to be replaced by ChatGPT.
00:29:45.760
It's going to be replaced by someone using ChatGPT.
00:29:47.760
I think that, you know, in the future, like, the meta of art will be very much...
00:29:54.120
And I think like the top artists today will be able to adapt.
00:29:58.160
They have that kind of like entrepreneurial focus.
00:30:00.420
Taylor Swift is once again, a great example.
00:30:02.840
Like, I think she'll really enjoy, you know, playing and her team will really enjoy using
00:30:07.540
the new tools to discover like the frontier of music.
00:30:10.800
But I think what you're discounting here
00:30:12.220
is that people never want to work.
00:30:13.600
She represents 1% of people, like the vast, vast majority of people have not done anything.
00:30:19.580
Grimes has adapted.
00:30:21.280
Yeah, exactly.
00:30:22.040
Exactly.
00:30:22.520
I think that you are overestimating the competence and the aggressiveness of this top, you know,
00:30:30.500
1% of society.
00:30:31.960
And that for a long time, they haven't been pushed out.
00:30:34.980
Like Quentin Tarantino, like digital cameras come along.
00:30:37.580
He's like, I'm not going to touch them.
00:30:38.740
Even still, he doesn't use them.
00:30:39.660
A lot of the time, throughout most of our history, you've been able to get away with that kind
00:30:43.180
of BS, that kind of arrogance.
00:30:45.640
But I don't think you're going to be able to now.
00:30:48.740
Yeah, there is, you know, there is the innovator's dilemma.
00:30:52.060
I think you're right, actually.
00:30:53.440
I'll say more: like, I think that a non-zero number of the current top artists will adapt.
00:30:59.040
But in terms of, like, one specific one, yeah, like, you know, I'm not 100% sure that Taylor
00:31:04.740
Swift will be, you know, the one who is, yeah, taking up all of these
00:31:09.180
new tools.
00:31:10.220
But I think that it will be, you know, a hybrid.
00:31:13.260
That I'm very confident about.
00:31:14.920
That's, you know, the mainstream of art, the mainstream of culture, the mainstream of
00:31:19.420
film.
00:31:20.420
Those will all, you know... It won't be completely AI generated.
00:31:23.700
It won't be completely human generated, created from scratch.
00:31:27.820
It will be some combination of the two.
00:31:30.280
Just like, you know, we had digital cameras, people, you know, people are adapting to digital
00:31:33.900
cameras.
00:31:34.480
We had the internet, we had social media, we now have a mix, right?
00:31:38.020
You can think of Netflix as a mix between the original film model and YouTube, right?
00:31:43.560
Garett Jones has this amazing term, spaghetti assimilation, which he uses
00:31:49.700
in the context of immigrants.
00:31:50.940
So, like, Italians come to America, or, like, they come to New York, and, you
00:31:55.220
know, they become more like New Yorkers. But New York's culture also changes: you
00:32:00.480
know, they start eating pizza and they start eating spaghetti.
00:32:03.400
It becomes more Italian.
00:32:05.140
And I think the same is true of AI, you know, our current culture will become more like AI.
00:32:10.740
There will be, you know, I think, like, Sam Woods put it best, you know, the replacement
00:32:16.220
of jobs is not really going to be, you know, like, vertical.
00:32:19.540
It's not really going to be like people being completely replaced by AI.
00:32:23.100
It's going to be a new skillset.
00:32:25.440
People are learning to do something better.
00:32:28.180
And, you know, the people who will do that better are going to be the people who use AI.
00:32:33.700
I think though, that what's understated is that a huge portion of knowledge workers, and
00:32:40.700
in that I include people like sales, marketing, writers, artists, designers, salespeople,
00:32:47.780
like website people have been doing work that isn't actually used.
00:32:52.380
Like, I think we've had this period of inertia where people are still hiring and paying and
00:32:57.040
thinking that they need these people even before AI.
00:32:59.280
And like actually not using the vast majority of work they do, and that there are busloads
00:33:04.720
of graduating classes that believe that their job is to sit behind a computer and write strategy
00:33:09.760
documents and analyze things, but not actually build or create anything.
00:33:13.620
And that these people are going to get laid off.
00:33:17.400
They're getting laid off in droves.
00:33:18.700
They're not going to get rehired.
00:33:19.960
They're going to have to figure out their own way.
00:33:21.400
Do you think that those groups are capable of building new lives for themselves when
00:33:26.780
they've been conditioned to do something that is completely different?
00:33:33.560
Like, so this is an interesting question.
00:33:38.100
This is an interesting, like, economics question.
00:33:41.620
But I do want to, like, mess with the framing a little bit.
00:33:45.820
Go ahead.
00:33:46.200
I think, like, this is not that related to AI.
00:33:49.200
This is, like, related to low interest rates more than it is related to AI.
00:33:54.320
Like, it is kind of, like, these things happening at the same time.
00:33:59.060
But, like, you know, you could easily see a world where, you know, OpenAI just develops,
00:34:04.700
or, like, OpenAI and all its competitors just develop and release everything, like, a few
00:34:09.900
years earlier.
00:34:10.780
And we have AI being released to broader society in, like, a, you know, user-friendly way.
00:34:17.960
At the same time as, like, the crypto hype bubble.
00:34:22.520
What would the vibes be around AI then?
00:34:25.480
You know?
00:34:25.960
If it's, like, if it's, like, the tail end of the lockdown, crypto stocks are going crazy,
00:34:31.700
you know?
00:34:32.160
And then OpenAI publishes, like, current-level ChatGPT.
00:34:36.840
What are the vibes then?
00:34:39.080
And I think it will be a vibe of, like, just much more optimism.
00:34:42.080
It won't be a vibe of, like, complaining.
00:34:43.860
It won't be a vibe of, like, you know.
00:34:45.600
And, of course, this is not really an argument for my position.
00:34:49.160
But it's an argument against, you know, I think it's an argument against much of the
00:34:53.060
contemporary.
00:34:54.140
This is, like, not the EAs, right?
00:34:55.460
I think the EAs would still be worried.
00:34:57.260
But for, like, the people worried about jobs, for the people who are worried about, you know,
00:35:01.660
like, automation, for the people who are worried about, you know, basically collapse, I think
00:35:06.180
that that's much more kind of absorbing the more general economic environment than it is an
00:35:10.660
actual concern about AI.
00:35:11.920
Well, what I like about your view is that it's actually quite optimistic, which is super
00:35:17.600
not Gen Z.
00:35:18.460
Like, I really like that.
00:35:19.320
It's eminently reasonable.
00:35:20.580
It's like, actually, this problem will solve itself.
00:35:23.140
We're going to have the critical mass of machine learning users, essentially, that are
00:35:26.460
going to make sure that it doesn't get, you know, walled off and made very difficult to
00:35:30.580
open source and, you know, collaboratively develop.
00:35:33.660
I'm just going to help to bridge the gap with this.
00:35:35.840
And I think that makes me super intrigued to see how it goes for the Alliance for the Future.
00:35:39.660
I'm really glad that you are in its foundational team and doing this work.
00:35:44.760
And I'm keen to see how it goes.
00:35:45.800
So to our listeners, if you're interested in this, check out the Alliance for the Future.
00:35:49.920
Check out Brian's podcast as well.
00:35:52.660
Yeah, affuture.org is how to find us.
00:35:54.660
We would really, really appreciate donations at this early stage.
00:35:59.560
And yeah, you can check out all of my writing.
00:36:01.640
You can check out my writing on AI specifically at pluralism.ai.
00:36:05.300
And you can check out all of it, including the podcast at fromthenew.world.
00:36:10.840
Thank you so much, Brian.
00:36:12.380
This is, it's always really fun to talk with you.
00:36:15.100
Awesome.
00:36:15.580
This was very fun.
00:36:16.940
It was not, you know, it wasn't four hours, but it was, you know.
00:36:20.560
See, we don't have attention spans for that.
00:36:22.560
We're like fast, but you know.
00:36:24.400
Yeah, yeah, yeah.
00:36:25.080
You guys were on the podcast for four hours.
00:36:29.320
So for people who want to see our podcast with him, we talked to him for like four hours each.
00:36:34.020
And I was hammered when I talked to him.
00:36:37.060
Completely hammered.
00:36:37.780
He got me at like 9 p.m. at night, going on until like 1 a.m. in the morning.
00:36:43.780
It was enjoyable.
00:36:45.880
It was more like, I think it was like 3 until 8 p.m.
00:36:48.780
But for Malcolm, whose day starts at 2 a.m., that is like extremely late.
00:36:54.000
No, no, no.
00:36:54.440
It was late.
00:36:55.580
No, we went past midnight, I think, recording.
00:37:00.440
It went pretty late.
00:37:01.900
I don't think.
00:37:02.880
Yeah, I'm not sure if it was midnight.
00:37:05.080
In my brain, it was past midnight.
00:37:07.980
We'll see.
00:37:08.420
We'll see.
00:37:08.900
Okay, okay.
00:37:09.680
But our viewers, check it out.
00:37:11.580
He has a great show.
00:37:13.340
He really knows how to pull things out of people.
00:37:15.520
Yeah.
00:37:16.660
Yeah.
00:37:17.060
You asked amazing questions.
00:37:18.080
You're an amazing interviewer.
00:37:19.320
So yeah, check out From the New World, but also The Alliance for the Future.
00:37:22.780
Thanks again, Brian.