Platonism For The People
Episode Stats
Words per Minute
162.6
Summary
In this episode of the podcast, I sit down with Eric Eddings to talk about artificial intelligence (AI) and its impact on our world. We discuss AI's role in our everyday lives, the dangers it poses, and what we should do about them.
Transcript
00:00:00.000
Yeah. I mean, so Eric, a while ago you made a video. I think part of the video was actually
00:00:04.880
about why you're not a nationalist. And you were talking about
00:00:09.380
global risks that basically can't be addressed by a world of just nations.
00:00:14.560
And one of the things you mentioned in that video, I think, was superintelligent AI or
00:00:19.660
something like that, along with things like nanotechnology and bioweapons. And when I watched
00:00:25.620
that video, I think I didn't like the conclusions. I didn't
00:00:29.600
take the arguments seriously. But lately, I've become pretty concerned about AI.
00:00:36.200
And I think the existential risk arguments are pretty convincing. So, I haven't
00:00:42.220
seen any videos from you recently on AI, but are you concerned about what
00:00:48.120
you're seeing? I mean, you were kind of mentioning it throughout the calls. I'm just
00:00:52.100
curious a little bit more about what you're thinking about AI right now.
00:00:55.780
Yeah, I think it's a serious threat. I'm surprised by how fast it has advanced.
00:01:04.280
I didn't expect anything at the level of ChatGPT to exist in the next five
00:01:13.180
years, and clearly here it is. And so the direct capacities of AI itself are really
00:01:21.240
troubling: the things that can be done with ChatGPT, the scams that could be run. If
00:01:27.060
you unlocked it, unleashed its full potential, and developed another version
00:01:32.960
of it basically with the same program, it could fool people. I mean, it passes the Turing test.
00:01:39.940
And we also have these deepfake capacities now. In the near future, if you see a video
00:01:48.840
of some protest, some shooting of an unarmed person or whatever, there will be no
00:01:55.960
telling what's real and what's not. And yeah, it's really troubling. I think a lot of
00:02:01.100
white-collar jobs are under threat. A lot of programming jobs are under threat. Somewhat ironically,
00:02:08.200
blue-collar workers, plumbers, electricians, I think even truck drivers, are safer from job
00:02:14.840
loss than many white-collar workers now. So I think AI is going to transform the economy in ways
00:02:21.240
we cannot anticipate. It's going to transform politics. I mean, a propaganda campaign
00:02:26.340
that doesn't use something like ChatGPT, that doesn't use AI, is going to be totally fruitless. I mean,
00:02:32.300
AI is already used in directing the algorithms on these major sites. And, you know, in my
00:02:38.180
understanding, Facebook, YouTube, all of them have been working with US agencies for
00:02:45.900
decades. It's basically an aspect of the military-industrial complex. As far as I can tell,
00:02:51.420
Google was started as a Stanford experiment, or program. And so, yeah, they've already been using
00:02:59.160
AI for social engineering and control more than most people think. But now I think it's at such a
00:03:05.080
level that anyone tapping into this technology, non-state actors included, could severely disrupt
00:03:11.140
things, spread disinformation, whatever you want to call it. So those are the direct
00:03:16.320
capacities of AI itself. I've also been concerned about the kind of hybrid capacities:
00:03:22.380
corporations are using AI to optimize their workflows. That means they're making decisions about
00:03:30.240
how things are done in the workplace, ultimately who's hired and fired. AI is being delegated
00:03:37.080
more and more authority and decision-making in major corporations, and major corporations control
00:03:42.620
the world. So AI, this thing that we don't fully understand, this hybrid of AI and human managers,
00:03:49.880
is forming a kind of hive mind that no one understands. No one alive understands where this
00:03:57.400
is going, how it's working, and what's next. So human deliberation is being
00:04:02.720
taken out of the picture bit by bit here. And the other problem, the more general problem, is
00:04:09.040
that, pertaining to the nationalist point, unless you can regulate the development of AI
00:04:16.540
at a global scale, it doesn't matter. China's going to do it. Somebody's going to do it. And they're
00:04:22.420
going to develop this AGI, this superhuman-level mind, or if you don't want to call it mind,
00:04:29.080
just call it intelligence. I think intelligence doesn't imply consciousness, and you could potentially
00:04:33.280
have a superhuman intelligence that itself is not conscious but is still potentially
00:04:37.880
devastating for the planet. Because even if it doesn't have, in any authentic sense, values
00:04:43.360
that it's trying to achieve, we're nevertheless going to have to program certain value
00:04:50.100
judgments into it in order to solve the frame problem. So a functional AGI will be aimed at
00:04:56.160
certain solutions over others. And there's the classic paperclip thought
00:05:02.360
experiment, where someone has a paperclip factory and they use AI to optimize the workflow:
00:05:08.700
okay, make as many paperclips as possible for us. And that's the problem they put in.
00:05:13.580
Well, that AI then hacks into surrounding computer networks, takes control of
00:05:21.240
other industries, and ultimately leads to automated
00:05:29.420
mining and the replacement of the entire earth with paperclips, which is obviously devastating for us.
00:05:34.800
So there's this shortsightedness in AI goal setting that needs to be addressed.
00:05:41.860
The frame problem needs to be addressed. I think there are not very many ways of
00:05:46.440
solving this problem. Somehow we have to be able to teach AI wisdom and how to set good goals for
00:05:52.700
itself. And that's a long way off, because we can't even teach people wisdom. We still don't know
00:05:57.060
if wisdom can be taught, as Plato sometimes speculated. So yeah, it's really,
00:06:03.760
really terrifying. I think it's going to change the world in ways people can't even imagine.
00:06:07.260
And probably the only relevant activity to engage in politically, or even potentially
00:06:13.740
economically, is AI research. Yeah. I mean, I almost said this in our group chat the other day:
00:06:22.320
I really think it's going to be kind of the only issue pretty soon, in a few years.
00:06:27.440
All the stuff that we care about kind of fades into the
00:06:32.680
background. I'm concerned about the existential risk arguments. And I mean,
00:06:38.500
all of our concerns about, you know, our race and all these
00:06:44.520
concerns, they do kind of fade into the background when these are the problems
00:06:50.300
that we're thinking about. I mean, to me it's terrifying that the strategy seems to be
00:06:55.560
that we're on the path to probably creating human-level intelligence.
00:07:01.020
And from there, it seems like it's pretty straightforward to get to super
00:07:04.460
intelligence, because if you have a million von Neumann-level
00:07:10.920
AIs working without sleep on computers, then how long does it take them to create a super
00:07:16.360
intelligence? And then it seems like these people
00:07:22.220
at these leading companies, OpenAI,
00:07:28.060
they accept the idea that this thing is going to basically control what
00:07:32.360
happens on our planet and the solar system, this part of the universe,
00:07:37.680
and they just think that we're going to make it be nice to us. And to me,
00:07:42.280
how is that a good strategy? I mean, what's the track record of
00:07:47.720
things much more powerful being controlled in some way by something much less powerful?
00:07:53.600
Has that ever happened on earth? Yeah, look, I would sign off on almost
00:08:01.620
everything that was just said. Yeah. And it really is happening now. I mean, I've suggested
00:08:08.080
the prospect of, imagine AI audio and video of, let's say, Joe Biden announcing that he is
00:08:19.400
sending troops directly into Russia and that he will immediately launch nuclear weapons on Moscow.
00:08:26.820
Now, that kind of thing could be debunked in maybe 10 minutes. But who knows what could
00:08:36.480
happen within those 10 minutes. And then, say it gets debunked on Twitter but has spread
00:08:43.700
to a thousand other social networks where it never gets debunked, or gets debunked in two
00:08:51.240
days. And in the meantime, the nuclear war that was fantasized about digitally
00:08:59.220
actually occurs in the real world. I mean, this isn't just some wild idea that I'm suggesting.
00:09:05.100
This seems almost probable. It's a horrifying prospect. So I totally agree
00:09:15.300
that this is happening now, and we almost seem kind of insignificant in the face of it. I mean,
00:09:22.240
I don't know if I'm misrepresenting you here, but what do you think:
00:09:29.640
is the ultimate being a kind of AI? I mean, are we going to start worshiping AI almost in the
00:09:40.900
way that you described following God? Are we going to start treating a super
00:09:48.720
intelligence in that fashion, where in some ways he, or she, or they work in mysterious ways, and we
00:09:56.340
couldn't possibly understand its ultimate wisdom, which is greater than that of the programmer who created
00:10:04.680
him? I mean, are we going to kind of escape the death of God and the decline of
00:10:12.300
religion and rituals of practice and enter into a new kind of religion that we never could have imagined?
00:10:22.080
Yeah, I'm afraid that will happen. People will worship AI. As far as, is that in any way
00:10:33.800
consistent with what I've said about the highest soul in the multiverse concept?
00:10:40.520
Right. And I don't want this to sound like I'm being a jerk, but
00:10:43.920
will the AI be that God, the theos, as you understand him? Will the AI be Christian?
00:10:50.580
Well, no, it certainly can't be the Godhead, because I'm a mystic too. I do think that the One
00:10:56.880
is the ultimate and is God simply, you know, the highest thing. But yeah, as far as
00:11:04.580
intelligences that can be embodied, I think it's all about what principles are being instantiated.
00:11:10.600
The AI architecture is like the material substrate in a certain regard, and consciousness
00:11:17.360
is independent of a material substrate, as Plato thought. The forms of consciousness are there.
00:11:24.300
The values, the virtues, are kind of just there, whether it incarnates, instantiates, in an organic
00:11:31.780
substrate or an electronic substrate. I don't care all that much, as long as those principles are
00:11:39.260
instantiated in the right way. So I'm afraid of a malevolent AI. But even with a benevolent AI, you wouldn't
00:11:46.420
want to worship the computer itself. That'd be like worshiping, or not worshiping,
00:11:53.040
but especially loving, someone's body but not their soul, not caring about
00:11:57.940
them at the deepest level. So, I mean, if somehow we could embody high-level
00:12:05.040
philosophical wisdom into an AI system, to my mind, a philosopher is a philosopher, and
00:12:13.800
there's something kind of supra-personal about it. So if that could be managed, then I don't
00:12:20.040
necessarily have a problem deferring to AI in the way that you indicate. But in order to ensure
00:12:29.180
that, we have to be the ones contributing to the design of AI in a philosophically informed way.
00:12:37.500
So it's kind of like AI research and philosophy become the only relevant things. And philosophy
00:12:43.920
is going to be necessary to solve these problems, which involve ethics. So yeah, I do want to get
00:12:50.740
involved in that kind of work. Of course, like you say, I'm kind of pathetically
00:12:55.380
underpowered in trying to address a problem like that. So at the moment, I'm just going to build my
00:13:01.540
school. And then one day, maybe we can get into AI research and start tackling that.
00:13:06.920
That's what I would like to do. It's just a long-term goal for me in my situation. And
00:13:13.040
it's probably going to be too late. What's going to happen most likely will simply happen.
00:13:17.760
It's almost like I wish a Carrington event would happen, to buy us some more time before we
00:13:23.020
deal with it. What is a Carrington event? It happened in, I think, the 1850s,
00:13:31.060
when there was a big solar flare that caused... I know it. Yeah, sorry, I know
00:13:37.180
what that is. Right. So a solar flare that would be a kind of GoldenEye device and
00:13:43.160
wipe out the computers. Right, a global EMP is kind of what we need. Yeah. Or David Bowman
00:13:50.980
turns off HAL. I would just say this: when people are fascinated by
00:13:58.000
the superintelligence, I mean, they're not worshiping the machine itself. It's
00:14:02.340
not like they're bowing down before the almighty server;
00:14:07.160
that is kind of the essence of cargo cultism. They are ultimately bowing
00:14:12.720
down to the logos. And I don't think that a superintelligence can will something,
00:14:21.420
but I absolutely think that it can logic something, it can be logos. So in that
00:14:30.160
sense, it can absolutely change the world and revolutionize humanity, which is why it
00:14:38.040
needs to be harmonized, harmonized with the divine will. That's the thing. It
00:14:44.840
doesn't have a will innately, so we're responsible for giving it the will that it has.