Will Trump's Tech Alliance End in AI Disaster? | Guest: Joe Allen | 1/31/25
Episode Stats
Length
1 hour and 2 minutes
Words per Minute
164.3
Summary
In this episode, I'm joined by Joe Allen, a regular fixture on Steve Bannon's War Room and author of the book "Dark Aeon: Transhumanism and the War Against Humanity," to discuss President Trump's executive order dedicating the United States government to being a leader in artificial intelligence.
Transcript
00:00:00.160
Hey everybody, how's it going? Thanks for joining me this afternoon. I've got a great
00:00:04.380
stream with a great guest that I think you're really going to enjoy. Of course, many of
00:00:09.280
us are ecstatic at the opening that the Trump administration has had. So many positive changes,
00:00:16.660
so many important executive orders signed and started to be executed. Deportations are
00:00:21.820
happening, all kinds of great stuff. But in that mix, one that is often passed over is
00:00:28.040
Trump's executive order about artificial intelligence and dedicating the United States
00:00:32.620
government to being a leader in that field. Somebody who thinks a lot about the implications
00:00:37.820
of artificial intelligence is Joe Allen. He has a book on the subject, and of course,
00:00:43.500
he's also a fixture over on Steve Bannon's War Room. Joe, thanks for coming on, man.
00:00:48.960
All right. I really appreciate it, man. Thank you for having me.
00:00:51.500
Absolutely. I know that you have recently written a piece about the Stargate program
00:00:57.180
that Trump has initiated. And of course, we also saw the big news with the DeepSeek AI coming out
00:01:05.760
of China. I want to talk about all of that. But before we do, guys, let me tell you a little bit
00:01:09.900
about today's sponsor. How far will a teacher go to save a kid on the brink of losing everything?
00:01:15.700
From Angel Studios, the studio behind Sound of Freedom, comes Brave the Dark, an inspiring
00:01:21.640
true story about a troubled teen struggling to survive in a world that lets him down.
00:01:27.180
Haunted by torturous childhood memories, Nate Williams finds himself engulfed in darkness.
00:01:33.420
When his drama teacher, Mr. Dean, bails him out of jail and takes him in,
00:01:38.000
Nate must confront his past before it leads him to his own destruction.
00:01:42.900
Brave the Dark reminds us that one meaningful connection can change everything.
00:01:47.980
This powerful film will leave you uplifted and inspired as it shows the strength of compassion
00:01:53.540
and the impact of never giving up on someone. I encourage you to see Brave the Dark in theaters
00:02:00.360
now. Get your tickets today at angel.com slash Oren. That's angel.com slash Oren.
00:02:08.720
All right, Joe. I think most people know at this point that the kind of Silicon Valley tech bro
00:02:16.000
alliance is a significant faction inside the Trump administration. In many ways, it was incredibly
00:02:22.380
beneficial. It granted Trump a large amount of funding that he needed. It granted him the reach
00:02:27.680
that he needed. A free platform or at least semi-free platform like Twitter had a huge impact on the news
00:02:34.180
cycle and what was available. And of course, just the social cachet and kind of the elite
00:02:40.540
influence that comes with a guy like Elon Musk or David Sacks and some of these other guys
00:02:46.360
was really critical, I think, in ensuring that Trump got that second term. And as is always the
00:02:53.220
case with coalition politics, once things are over, then you need to start making good on some of
00:02:58.300
those promises. You need to go ahead and give people what they want in order to keep their support.
00:03:03.400
We saw a little bit of a clash between the different wings of the coalition with the
00:03:07.520
populists and the tech bros when it came to the H-1B situation over Christmas, that kind of thing.
00:03:14.180
But in general, we see a lot of the populists, a lot of the base getting what they want when it comes
00:03:18.500
to deportations and birthright citizenship, possibly, and all of these other executive orders
00:03:23.500
that Trump is issuing. However, you raised an alarm when it came to kind of the Stargate project
00:03:30.240
and Trump's executive order on AI. Could you give people a little bit of an idea of what
00:03:36.140
the executive order was and what this project entails?
00:03:41.340
Well, the executive order itself is simply a declaration that regulations would either be
00:03:49.940
lifted or not be put on tech development. This has been part of the narrative in Trump's campaign.
00:03:57.080
It shouldn't really come as any sort of surprise. There was one phrase used
00:04:04.780
in that EO that really stuck out: that artificial intelligence should be used for
00:04:10.520
human flourishing. And I'm very skeptical of the power of artificial intelligence to do that,
00:04:18.160
but to the extent it can, what has to be taken into account is the degree to which AI is going to lead
00:04:25.640
to human atrophy. And it already has, you know, I'm sure your listeners are well aware of the effect
00:04:32.260
of becoming Google brained or wiki brained so that you know very little about any one thing
00:04:38.780
and just enough about everything to kind of get you through your day. But ultimately you become
00:04:43.900
basically a host for an algorithmic parasite, a kind of channel through which information flows.
00:04:50.600
So the EO is, again, it's just a declaration that the U.S. will not constrain AI. And this is
00:04:59.320
largely at the behest of people like Marc Andreessen, people like Peter Thiel, who have long argued that
00:05:06.600
tech regulation is strangling tech development and breakthroughs in the U.S. And, you know,
00:05:13.700
I don't know if I believe fully that the Biden administration planned to create a kind of cartel
00:05:19.960
system or, you know, a series of monopolies with Andreessen telling Joe Rogan that you would only
00:05:27.320
have three big AI companies. I suppose that would be OpenAI, Google, and perhaps Microsoft. I don't
00:05:35.360
remember any specifics stated, so forgive me on that. But it was very clear that the way
00:05:43.200
the Biden administration, and what would have been a future Harris administration, would have approached
00:05:49.080
it would be to use AI safety as a way to bolster the power of big tech and suppress the freedom of
00:05:57.760
little tech companies. Now, why is any of that a problem? I think that the Stargate project is more
00:06:04.700
important as a symbol, at least right now, than it is as actual technological development. You had
00:06:13.760
Trump's second day in office, he had a press conference in which he brought out Sam Altman
00:06:21.440
of OpenAI, which really shocked me. Maybe I just wasn't keeping up. Larry Ellison of Oracle and Masayoshi
00:06:28.580
Son of SoftBank. And Trump is basically endorsing the project, the half-trillion-dollar
00:06:35.800
project to build out data centers right now in Abilene, Texas, and certainly elsewhere.
00:06:42.540
And this, the goal is to keep the U.S. ahead of China and any other competitors on artificial
00:06:50.720
intelligence. I get the strategic element of it. But to me, what I see, especially in my interactions
00:06:58.360
with populists and deeply religious traditionalists, is that there is a direct conflict with the vision
00:07:05.840
that's held by big tech and little tech: the idea that artificial intelligence will yield artificial
00:07:13.800
general intelligence, which is basically a flexible superhuman mind, and then ultimately to artificial
00:07:20.960
superintelligence, which we heard Masayoshi Son say there at the press conference. These are inherently
00:07:28.340
religious ideas. And I think any deeply religious person, even to the extent that you believe that
00:07:35.020
these are just tools that could be used perhaps for the benefit of religion or spiritual depth,
00:07:41.700
or even just simply to protect human beings from the predations of the outside world,
00:07:48.360
you have to reckon with the ultimate vision that these people have, including Elon Musk,
00:07:54.360
and understand that it's quite possible that you aren't going to use them, but that they are going to use you.
00:08:02.340
Yeah, this is something that I always try to explain to people. You know, when you talk about AI
00:08:08.780
and the possible dangers, it's so difficult to get a handle on what is actually going on because
00:08:16.060
you'll have one expert say, don't be ridiculous. AI basically can't tell you anything that you don't
00:08:22.360
teach it to tell you. It can't gain its own intelligence of any type. You know, this is a pipe
00:08:29.780
dream that this will ever be any kind of danger. Then you have other people saying, oh no, at any moment,
00:08:34.360
this is going to reach some level of, you know, sentience, it's going to, like, take over the world,
00:08:38.320
these kinds of things. But the point as you're making right here is whatever turns out to be
00:08:43.700
true about AI, whether it becomes just a mediocre kind of thing that vomits out bad emails, or if it
00:08:50.840
turns into something that actually runs the globe, the problem is the way that it trains you to think,
00:08:56.720
right? You know, exactly what you're talking about: I taught high school and I saw the
00:09:01.900
Google brain all the time. You know, the phrase was, we want to teach kids how to think, not what
00:09:08.380
to think. We want kids to be able to find where the answers are, not necessarily memorize all the
00:09:13.320
answers. And what actually happens in that scenario is children have no fundamental kind of substrate
00:09:20.540
from which to reason. They have nothing to collect together and actually bring forth when you ask them
00:09:26.680
a question. All they know how to do is go and retrieve the information that Google feeds them.
00:09:32.120
And so they take this as the gospel. Like, this is how you understand; this is their
00:09:37.600
epistemology, right? This is where knowledge comes from. And so there's a fundamental
00:09:42.860
break from the way that you and I think. And, you know, I don't know how old you
00:09:47.960
are, but I'm 40; this is a relatively early generation that had computers in the
00:09:53.640
home for the first time. But even then we still remember, you know, going to the library, looking
00:09:59.500
up information, you need to find books, you need to, you know, memorize a certain amount of base
00:10:03.900
knowledge. But when you have this machine, it just, it trains you to rely on that algorithm to deliver
00:10:10.920
whatever information that you're looking for and trains you to treat this as a reliable source of
00:10:16.760
truth. And so whatever the end goal of AI is, whatever it actually turns out to be,
00:10:22.860
the danger ultimately, I feel, is the social engineering and the training of humans to
00:10:29.460
think more like machines than vice versa. Yeah, I couldn't agree more. I understand the logic of
00:10:37.020
the AI safety or effective altruist community with the idea that artificial general or artificial
00:10:44.100
super intelligence will evade human control and begin to wreak havoc as it gains control
00:10:52.740
of all of the digital and mechanical robotic infrastructure around us. The logic makes
00:10:58.500
sense. That's not my main concern. It's not really much of a concern at all for me, at least not right
00:11:04.980
now, certainly not with the level of performance that we see. The main concerns that I have are really
00:11:11.320
threefold. The first is that threat of human atrophy, that human beings, as they become reliant on these
00:11:21.040
technologies, lose their ability to reason, lose their research capability, oftentimes lose touch with
00:11:29.000
reality entirely. And you didn't need AI for that; that threat was plenty present with
00:11:36.180
print media, with radio, television, with the internet, with social media and smartphones.
00:11:43.260
It's just accelerating it. It's adding more capabilities and more risk. And it is something
00:11:49.040
that for right now, it's a matter of consumer choice. If you don't want to do it, for the most part,
00:11:54.040
you don't have to, although it's becoming more difficult as government agencies adopt it. As we saw
00:11:59.500
with the recent announcement of the rollout of ChatGPT Gov, which means that OpenAI will be
00:12:06.600
providing AI for government employees to basically have the AI do their research and thinking for
00:12:13.960
them. But also in various companies; Salesforce is very popular. Many other AI companies
00:12:19.400
are being pushed into different corporations and even smaller businesses so that if you want to work
00:12:25.960
there, you need to be skilled in AI. And one of the big risks, I think too, is also in medicine.
00:12:33.340
There's a lot of promise, I'm sure, with AI used for diagnosis or for surgeries, AIs used to control
00:12:42.900
robotic equipment in order to conduct all sorts of sensitive surgeries. But we already have a huge
00:12:50.020
problem with doctors and nurses being incompetent, and the more AI or any other sort of technology is
00:12:57.560
held out as a solution to this, the worse that'll get. The second, also part
00:13:04.760
of a continuum, is surveillance and psychological manipulation. Sam Altman at OpenAI and Mustafa
00:13:13.380
Suleyman make this very explicit. Mustafa Suleyman is head of Microsoft AI, which obviously there's a
00:13:19.400
direct connection there because Microsoft and OpenAI are partnered. And they talk about the use of AI
00:13:25.160
agents as an extension of oneself. And that would mean that the AI, and this is their way of thinking,
00:13:33.580
their words, the AI would know everything about you, every link you click, every website you visited,
00:13:39.860
how long you visited it. It would know every file on your computer. It would listen to every phone call.
00:13:45.180
It would read your every email. It would know all of your contacts and would function as an extension
00:13:50.580
of yourself. This is the ultimate in surveillance: a piece of software, think
00:13:58.920
Microsoft Recall, that is literally made to watch everything you do for your own good. And from there,
00:14:07.020
it's very easy to manipulate people. The more you know about someone, the easier that person is to
00:14:12.040
manipulate. And you don't have to be super cynical to know that when someone has that power, more than
00:14:17.560
likely they're going to use it for their benefit and not for yours. And the last is what I call the
00:14:23.880
greater replacement. You don't have to have an AI that's better than a white collar worker or a robot
00:14:33.140
that's better than a blue collar worker in order for jobs to be lost and the value of a worker in any
00:14:40.580
capacity to be undermined. All you need is enough enthusiasm from corporations or educational
00:14:49.100
institutions or just simply other people for the idea that you will be replaced. And the entire impetus behind
00:14:59.160
the development of artificial intelligence and robotics, as is expressed by Elon Musk, as is expressed
00:15:04.520
by Sam Altman, Dario Amodei of Anthropic, and on and on and on. They talk about this in terms of human
00:15:12.440
replacement, the greater replacement. It may not be possible. It may not pan out as they dream,
00:15:18.340
but you have to understand that in their minds, as they're developing it, they're developing it in
00:15:22.900
order to replace you. That alone should throw up all the red flags in the world to know that even if
00:15:30.000
they're benefiting you now in the short term, in the long term, that is not the vision that they hold.
00:15:36.160
Now, the problem that you always hear, and with the, again, the emergence of this DeepSeek
00:15:43.500
technology from China, those fears seem to be a little more realistic. The problem you always
00:15:49.860
hear from people is, well, this is the arms race problem, right? If we don't build the tank,
00:15:55.240
someone else will build the tank. If we don't build the atomic bomb, someone else is going to build the
00:15:59.160
atomic bomb. If we don't build AI, someone else is going to build AI, and we will be ruled by the
00:16:04.400
people who built AI in the same way we have ruled the people who didn't make it to the bomb in time,
00:16:09.200
right? So yeah, we understand the dangers, but ultimately, if we do not involve ourselves in
00:16:15.720
this process, then we're going to lose out in the long run. Could you talk a little bit about
00:16:22.440
DeepSeek? Because I think that's something that kind of popped up out of nowhere. A lot of people didn't
00:16:25.740
know, like, is that actually better? Is that actually some kind of Sputnik moment for the
00:16:31.000
United States when it comes to the race for AI? And ultimately, what do you think of this kind of the
00:16:35.400
tank problem argument about, you know, we have to be involved in the arms race, or we'll simply be
00:16:40.240
conquered by it? Well, DeepSeek R1 is certainly cheaper because it's free. And it's open-sourced,
00:16:48.020
and you can download it through Ollama or any other platform and run it on your own private
00:16:53.640
server, meaning that the surveillance issue is sidestepped. Although when it hit number one
00:16:59.480
on the Apple App Store in the US, that indicates that most people are not going to do that. Most
00:17:04.540
people are simply going to use the app. And that means that just like with TikTok, your data is
00:17:09.160
being hoovered up by China. And China, for all of the concern, I think the threat of China is oftentimes
00:17:15.820
very much overblown. Certainly they do have deep impacts on US politicians, US companies,
00:17:22.420
and wider US sentiment. So it's certainly a concern. And with DeepSeek too, you saw that just
00:17:28.380
public perception has a huge impact so that investors got scared and suddenly you had this
00:17:35.400
massive tanking of Nvidia stock. Now I've had a lot of conversations with Ed Dowd about this,
00:17:40.660
about how overvalued Nvidia was and how that was somewhat inevitable. And this was kind of a
00:17:46.320
catalyst. But in the long run, what it also shows is the kind of willingness of US citizens to adopt
00:17:56.100
any of these technologies because of the enthusiasm behind them. You've got this idea AI is going
00:18:02.380
to be this super useful tool in my life. And without it, how am I going to be able to compete? So much so
00:18:07.920
that they're willing to use the Chinese knockoff in order to stay up with the race and also oftentimes
00:18:14.460
just be entertained. So as far as its capabilities, it does meet a lot of the benchmarks set by, say, OpenAI
00:18:22.480
or, you know, Claude from Anthropic. It does meet a lot of those benchmarks, and in some ways maybe
00:18:28.280
better, but ultimately it was based off of OpenAI's model. As for how much they poured into
00:18:37.900
it, you know, the reporting is that it took $5 to $10 million to train it and that
00:18:44.920
it was trained on lower-grade H800 Nvidia chips. And so, you know, it was super efficient to
00:18:53.500
create it. I don't think there's any good reason to believe those numbers. You had Alexandr Wang of
00:18:59.580
Scale AI at Davos over a week ago saying that DeepSeek most likely has about 50,000 H100 Nvidia chips.
00:19:11.800
And even if that's not, you know, keeping up with someone like OpenAI or the
00:19:16.660
Colossus supercluster in Memphis under Elon Musk's xAI, I think there's every reason
00:19:23.980
to believe that the Chinese are not ahead of the U.S. in the technology. Everyone I know in AI,
00:19:32.600
when they read Chinese papers, and there's way more Chinese papers published in the journals
00:19:37.380
than U.S.-produced papers, the quantity is enormous.
00:19:44.680
The quality really isn't. The same with patents. So that race, that race dynamic is in some ways
00:19:51.440
also inflated. Not to say that you want the Chinese to get ahead of us, just to say that it's in many
00:19:58.480
ways a talking point. It's a way to leverage enthusiasm and investment and government cooperation
00:20:05.660
with U.S. companies against a foreign adversary that isn't necessarily as big a threat as it's made
00:20:11.760
out to be. Just real quick, you know, to round off this notion of like, what does this all mean as far
00:20:17.400
as the ongoing AI arms race, which is absolutely real? With the DeepSeek situation, the crisis that
00:20:27.440
it created earlier this week, Monday, you've got three main camps that you have to look at here.
00:20:34.780
Big tech, so that would be Google, that would be OpenAI and Microsoft, that would be Meta and Amazon.
00:20:40.760
You've got little tech, as Marc Andreessen would say, that would be the startups. And I would also put
00:20:45.380
Elon Musk in that camp, even if xAI or SpaceX or Tesla are not little companies, he's still so
00:20:52.920
adversarial with the bigger companies that I think it's safe to put him over in little tech.
00:20:56.960
And then you've got China, right? You've got DeepSeek, you've got ByteDance with TikTok,
00:21:01.740
you've got Baidu and Tencent and Alibaba. So what the DeepSeek release did was undermine big tech.
00:21:08.660
It undermined the claim that you need OpenAI in order to drive this forward, that you need Microsoft,
00:21:14.380
that you need Anthropic and these US companies. It showed that you can do it cheaper,
00:21:22.460
that they're overvalued, so on and so forth. That benefited little tech. And Marc Andreessen was
00:21:28.320
among the first to point out, you know, hey, salute to you, DeepSeek, you know, for creating this
00:21:33.620
fantastic model and kind of upping the value of DeepSeek. I would imagine if I could get into his
00:21:39.380
head that he knows that undermines the big tech companies and allows the little tech companies
00:21:44.080
that are, you know, being invested in by Andreessen Horowitz and others to get a foothold
00:21:49.960
where they wouldn't otherwise be able to. The price of NVIDIA chips is probably going to drop
00:21:54.380
significantly because of this, allowing startups in. And then China has their own reasons,
00:22:00.640
obviously. So it's kind of a complicated web in which there's a lot of cooperation,
00:22:06.080
but there's also a lot of competition. Again, it's an AI arms race. So, you know,
00:22:11.540
an unethical player will simply do whatever they can to get ahead in it.
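As an aside for anyone who wants to try what Joe describes above, running DeepSeek R1 on your own machine through Ollama rather than through the app, a minimal Python sketch follows. It assumes a stock local Ollama install listening on its default port 11434 and that you have already pulled a model under the tag deepseek-r1; the tag, the prompt text, and the helper name are illustrative assumptions, not anything specified in the episode.

import json
import urllib.request

# Minimal sketch: send one prompt to a locally hosted model through Ollama's
# REST API. Because the model runs on your own hardware, the prompt never
# leaves your machine, which is the surveillance point made above.
# Assumes `ollama pull deepseek-r1` has already been run (the tag is an
# assumption; use whatever model tag you actually pulled).

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
    # Build a non-streaming request so the server returns a single JSON object.
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The "response" field of the reply holds the generated text.
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("In one sentence, what is an open-weight model?"))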
00:22:43.480
Now, I wrote a book on the effects of scale, largely. And I think a big part of what we're
00:22:53.200
seeing now is the need to find a way around, kind of, as Nick Land calls it, the monkey business
00:23:02.700
problem of technology, right? Like we cannot scale efficiently any more than we really
00:23:10.340
have without running into very serious problems. In fact, right now, the United States is largely
00:23:15.600
trying to debug its kind of managerial class, right? It got full of this wokeness stuff,
00:23:22.280
you know, malfunctioned, caused a serious problem in kind of the functioning of the larger system.
00:23:28.840
And so I get the feeling that a large driver of this AI rat race, and you tell me if I'm wrong about this,
00:23:38.160
but I get the feeling that a lot of guys like Elon Musk and others and Marc Andreessen,
00:23:43.760
who ultimately don't really like their dependence on the, you know, managerial class,
00:23:49.380
are functionally trying to replace it with AI. That this will kind of stand in and do many of those jobs,
00:23:55.900
or, you know, at least a facsimile of these tasks that will allow them to continue to operate without
00:24:02.920
necessarily wielding a large staff of managerial guys. On top of this,
00:24:09.240
it also allows for more effective, as we have both pointed out, social engineering, which is, again,
00:24:16.220
critical for scale. And so I feel like one of the reasons that AI has become so central, one of the
00:24:23.820
reasons that that is something that they've encouraged Trump to support, and that they
00:24:28.000
themselves are trying to kind of break open on several fronts, is that ultimately, this is the
00:24:32.860
way to evade a consequence of trying to stack complexity inside human systems. What do you
00:24:40.420
think about that idea? Do you think that that ties into their motivation?
00:24:43.640
Absolutely. You look at Marc Andreessen's recent interview with the New York Times, and he talks
00:24:51.840
about the need to kind of overcome the woke mind virus, as it's often said, in order to actually
00:25:04.600
get ahead as a company, as a CEO. And a lot of that runs through our state bureaucracy. Also,
00:25:12.740
the regulatory system itself, which is suppressing these different technological
00:25:17.340
advancements or slowing them down, that managerial state is a huge obstacle. And Peter Thiel
00:25:26.180
oftentimes speaks of this too, right? And has for a long time. But as far as the replacement of that same
00:25:36.240
sort of system, whether it's decentralized or centralized control, it's still an apparatus
00:25:44.440
that you, the average person, aren't going to have a whole lot of say in, and it's going to be this
00:25:48.440
behemoth that rolls over you. But in this case, more driven by corporate power than government power. I
00:25:56.240
think that's a very real danger. And it's not really one or the other, the way it's happening. You have all of these
00:26:02.400
different partnerships between these companies and the US government and across the world. So
00:26:08.120
the real concerns, again, are everything from human atrophy to surveillance and manipulation
00:26:14.820
to the undermining of human value. And so with the replacement by algorithms and, eventually on down
00:26:21.320
the road, more and more automation and robots, you've got kind of the same problem in different
00:26:27.960
flavors of it. Some of these guys, and this isn't to say that, I'm not trying to pin this necessarily
00:26:35.500
on Marc Andreessen or Peter Thiel or someone like that, but some of these guys really do see a world
00:26:40.220
in which algorithms are more effective at governance than human beings, by and large. There is this
00:26:49.460
notion of an algocracy in which it's not just human beings who are in power, who are deploying these
00:26:57.340
algorithms or AIs in order to suppress or manipulate or whatever they're trying to do to
00:27:03.120
the population below them. But one in which the algorithm itself is responsible for decision making
00:27:08.940
at the highest levels, because it is less fallible than a human being. And it doesn't necessarily matter
00:27:17.600
if any one of these guys is taking on that sort of algocracy model in their ambitions to build AI companies
00:27:28.500
or robotics companies or anything else. What matters is that the more it's normalized, the more that AI is seen
00:27:34.660
as a source of authority, the closer you get to that on down the road as other people who are much more
00:27:40.540
interested in that model come to replace the current crop. I'll just make one point on this. Like with
00:27:47.660
Elon Musk, Musk has his upsides, obviously, right? And Musk in many ways carried Trump across the finish
00:27:56.880
line to this victory, which gives us Americans tremendous leverage, tremendous power to make our own
00:28:04.400
decisions, closing the border, eliminating affirmative action, DEI. Now, all these different
00:28:10.560
things allow Americans to start making these decisions and put pressure where we couldn't
00:28:15.380
otherwise do it. But in Musk's long-term vision, you know, I go into it in the article "The Stargate Is
00:28:21.180
Open," as he expressed it at the Consumer Electronics Show earlier this month, in January. And he's been
00:28:30.540
saying this forever, that in the near future, and in his timeline, it's more and more aggressive.
00:28:38.360
You're going to have in the next few years, artificial general intelligence, meaning an AI that can do
00:28:44.740
any cognitive task that you can do, but better, meaning it wipes out the entirety of the white
00:28:51.420
collar class. And that in the not so distant future, decade or so, in his vision, again, this is an
00:28:59.540
advertisement. It's not the burger itself, right? But in his vision, you'll have a three-to-one
00:29:05.920
ratio, a four-to-one ratio, of humanoid robot to human. So this is the biggest mass immigration
00:29:12.020
in history. So drawing from the platonic mathematical realms, you're flooding the planet with these
00:29:19.560
humanoids, in his estimation, some 30 billion of them that can do any task you can do. And what's his
00:29:25.100
response? It's kind of a chuckle. And he says, well, maybe it'll be kind of like retirement.
00:29:31.540
And what faith can you have that people who are thinking like this really do have some kind of
00:29:41.400
human-centered philosophy that will benefit you in mind? I am very, very skeptical of that. And I think
00:29:48.120
that whatever direction it goes, whether it's governmental power, whether it's corporate power,
00:29:52.780
or the public-private partnership that we're hearing so much about in recent years, it is
00:29:59.380
inherently anti-human. And even if it benefits elite humans, it is undoubtedly, for most of the people
00:30:10.000
Yeah, I agree with the majority of that. And I guess that only makes this more concerning,
00:30:16.840
because honestly, when I look at what's happening, I don't really see any brakes on this train,
00:30:23.120
right? Like, as you point out, now that this technology is more available, that you don't
00:30:29.100
have to be a Microsoft or a Google to wield it, then that's going to make it harder and harder
00:30:34.580
to stop, right? Again, to draw a parallel to nuclear proliferation: you can't just control
00:30:42.660
the uranium so that, you know, other people can't develop it. The problem is everywhere and
00:30:47.540
simultaneous and more and more available. Also, I think that the history of humanity pretty much
00:30:55.540
points to the fact that we are fundamentally incapable of resisting
00:31:01.920
the metaphysics of the washing machine, as Alexander Dugin calls it. You know, we are drawn to these
00:31:08.280
developments. We are very bad at denying them to ourselves. Those that have resisted are considered
00:31:16.280
basically, you know, ascetic monks. You know, like, that's the level of discipline required to
00:31:20.900
kind of withdraw from these kind of things. And so, while I think that all of the warnings that
00:31:26.720
you're giving are there, and I say this all the time, you know, people will, you know, chat with
00:31:30.660
AIs. I'm like, you should stop talking to demons, and they'll just laugh at me. And I'll be like,
00:31:34.880
no, that's not funny. But as much as I get mocked for this, I do think that we are on a rendezvous
00:31:43.220
with this kind of one way or another. I think that whether you welcome the acceleration or are
00:31:50.540
terrified of it, it's coming either way. Capital will emancipate itself from humanity one way or
00:31:57.880
another. And so, I guess, in some way, are we just Cassandra at this point? Or is there anything
00:32:04.680
that can actually be done? Again, in a free country like America, which is freer than others,
00:32:14.440
there are choices that you can make personally. Nobody's forcing you as a private citizen
00:32:21.740
to use AI for much of anything, or any electronics for that matter. Now, when you're competing against
00:32:29.720
others, and if you're talking about what's normal and not normal in society, it was once very weird
00:32:35.360
for somebody to pull out a smartphone. Now, it's very weird if you don't have one. And that happened
00:32:39.860
very, very fast. So, there are definitely trade-offs to rejecting technology, but the possibility is still
00:32:46.800
there on a personal level, and even on a small communal and institutional level. The real issue
00:32:53.700
is that kind of inevitability that you're talking about. One thing that does seem pretty certain,
00:33:00.260
even if it's in fits and starts, is that technological development is upward and therefore somewhat
00:33:07.840
inevitable. But what's also inevitable is that the predictions around technology are oftentimes very
00:33:16.120
wrong, or at least they're only half correct, meaning that whatever one's vision of that inevitable
00:33:22.740
future is, it's not necessarily that specific future, such as Ray Kurzweil's singularity, that is
00:33:30.260
what is inevitable. Simply the abstract fact of development is what's inevitable. So, there are a lot
00:33:37.000
of different paths it can go. Just to take one example, it's quite possible that in order for the U.S.
00:33:43.340
to maintain dominance geopolitically and economically, that you would have AI, the AI arms race focused on
00:33:53.100
the military, focused on intelligence, outward focusing intelligence agencies like the CIA in theory,
00:33:59.940
and that these technologies would be used to surveil foreign adversaries and would be used to apply
00:34:07.380
military pressure on them to stay out of our business and stay out of our way without turning those same
00:34:15.680
technologies on the population itself and making it either necessary or mandatory or semi-necessary for the
00:34:23.420
population. And there's a good argument for that because we've already seen all these unintended consequences of
00:34:29.180
digital technology, the Google brain, the mass surveillance, the mass manipulation, the deterioration
00:34:36.000
of traditional culture. And maybe the best analogy to my mind, because it's so in your face, is the
00:34:43.500
progress in the creation of more and more advanced synthetic opioids. Okay, that was maybe in some way
00:34:50.200
inevitable, but it was not inevitable that you had the biggest pharmaceutical companies in the world
00:34:55.600
pushing it through compliant doctors onto the population, leading to the death of many people
00:35:01.380
that are very close to me and many people that are probably close to those listening. This irresponsible
00:35:07.200
and reckless use of it is not necessarily inevitable. It didn't have to go that way. And just on a final
00:35:15.860
note, the promise of technology is not always the actual technology itself. You don't have to believe
00:35:23.660
that AI is a demon in order to know that it is an entity in the world that is not there for your
00:35:32.080
benefit so much as the creator's benefit. And you don't have to believe that AI will become a god
00:35:37.520
to know that there are already people who believe that it is a kind of low-level god building power and
00:35:45.880
that in the future, AI will be the god that these people believe never existed. And we're not talking
00:35:53.060
about basement dwellers. We're talking about Sam Altman. We're talking about Elon Musk. And Bill
00:35:59.360
Gates was actually really conservative on this, but more and more he's in line with this thinking.
00:36:03.400
We're talking about Bill Gates, talking about Larry Page, the wealthiest men on earth with the most
00:36:09.580
powerful corporations on earth, funded or supported by the most powerful governments and militaries on
00:36:16.280
earth. So it has to be grappled with. I'm not blackpilling. I am not hopeless. We've faced
00:36:22.460
so many different things as a species, as a country, as a race. I don't blackpill on this,
00:36:29.600
but I do think that in order to confront that future just in your own personal life and to the
00:36:34.760
extent you can control wider policy as a country, then you have to be realistic about what the motives
00:36:41.800
are behind the creation and deployment of these technologies and what the actual effect is.
00:36:47.580
Is this a miracle cure, an elixir, or is this simply an addictive substance that will maybe
00:36:54.900
not kill your body, but absolutely kill your mind?
00:36:58.920
Now, I think a lot of people look at Trump as the return of human agency and governance
00:37:08.940
to a certain extent. Maybe they wouldn't voice it that way, but they act as if that's the case,
00:37:15.920
right, that we have been ruled by systems for a long time. You know, the Constitution itself is an
00:37:21.720
attempt to kind of put a system above human rule, right, where it's the rule of law, not the rule
00:37:27.980
of men. And then the managerial regime is like another abstraction of that, right? We don't make
00:37:36.000
decisions directly. There's not direct accountability. It's a process. It's a system. We're outsourcing
00:37:41.420
our responsibility to kind of a hive mind intelligence, though mostly human directly in
00:37:47.060
that scenario. Now you're pointing to the desire to move that decision-making process into the digital
00:37:53.560
realm to put the artificial intelligence in charge. It becomes our new constitution. It becomes our new
00:38:00.120
managerial class. It's, you know, I don't know what level of hyperreality that puts us in, but pretty deep
00:38:06.560
in the stack. Do you find a certain level of irony that at a time in which people are choosing
00:38:15.040
politically to embrace human accountability and action more directly than they have in a very long
00:38:21.940
time with Trump and his desire to dismantle much of the managerial state, that simultaneously we are
00:38:27.540
working as rapidly as possible to then remove human agency from the government's process?
00:38:32.420
Well, it's a good thing that we do at least have Trump in there, even if he is right now
00:38:39.200
functioning as a kind of salesman for the singularity. And even if in the end, we'll
00:38:43.880
remember this era as Trumpian transhumanism, it gives us a foothold. And if nothing else,
00:38:52.020
Trump and his team have been very responsive to the voice of the people, the voice of our people,
00:39:00.760
as it were. So I do have a lot of hope there. It's not naive hope. And it's certainly not the
00:39:08.780
direction it's going right now. But, you know, I see what Trump is doing in closing off the border
00:39:15.460
as something that is very pro-human in its essence, because there's nothing more anti-human
00:39:19.780
than flooding a country with foreigners and then expecting everyone to act like it's normal.
00:39:25.840
You are not being treated as a human being if you're forced to watch your town's demographics shift
00:39:32.000
overwhelmingly and have to act like that's a good thing. That is inherently anti-human.
00:39:38.500
He's making space for real human instincts: territorialism and a desire to define and drive
00:39:46.260
one's own destiny. Same thing with repealing DEI programs and hiring practices in the government. The same
00:39:52.800
thing with what seems like putting pressure on universities to do the same. There's some things
00:39:59.540
that I don't like, right? I don't like the idea of clamping down on criticizing various excesses by
00:40:07.200
Israel, for instance, and in some ways criminalizing anti-Semitism. But that isn't, I don't think,
00:40:15.880
the defining momentum of the Trump administration. I think that it is giving us more space to make
00:40:24.640
these decisions at a critical time. And the more we use that leverage, the more we use the voice that
00:40:31.880
we have and the power that we have to direct it in a way that at least leaves us intact, the better off
00:40:39.200
we are. You know, where are we at now? It's less than two weeks. You know, there's no way to know
00:40:45.680
exactly where everything will go. And all it will take is a massive crisis to throw everything into
00:40:50.800
question and anything might be on the table. But I am personally more hopeful with Trump being in
00:40:57.840
place than Kamala Harris and the various shadowy forces around that administration taking control,
00:41:05.240
because that would mean we basically had no outlet other than things that are best not spoken of on
00:41:11.720
YouTube. So I think it's hopeful, but it will require something of those of us who
00:41:19.100
don't share this vision of a transhuman or a posthuman future, as outmoded and
00:41:26.740
passé as those terms may be, those of us who do not want a singularity and do not necessarily even
00:41:33.640
believe such a thing is possible. This is our opportunity to put certain stakes in the ground,
00:41:41.160
define our territory, define our cultural boundaries, and hopefully certain legal boundaries,
00:41:48.040
personal privacy, data ownership, so on and so forth. But ultimately, this is all
00:41:55.040
hanging in the balance. I do hope that people will make the wisest decisions possible.
00:42:00.480
Let me play a little bit of devil's advocate in this case, perhaps literally. One of the promises,
00:42:09.520
like I said, of AI is the possibility that it could defeat the problem of scale. At this point,
00:42:17.760
we have had to build our human systems as inhumanly as possible. We've had to socially engineer people
00:42:23.840
radically in order to scale our societies to the point we have them now. And if you don't do that,
00:42:29.520
then you're going to lose, right? This is always the threat. You know, that's why one way or another,
00:42:33.560
you're going to get AI because it's the next step in kind of doing this. And yeah, maybe the Chinese
00:42:39.400
get one version of social credit score and we get another one, but everyone ends up with social
00:42:44.000
credit scores in a panopticon because that's what the modern total state needs to operate.
00:42:49.560
But one of the possibilities for AI is it could, you know, crash us out of that paradigm
00:42:55.120
in time and allow smaller states to function and compete with states that are just trying
00:43:01.600
to wield mass amounts of humanity. And so, you know, a lot of people have pointed out that,
00:43:07.520
well, you know, unfortunately right now we still have the tech bros trying to get as many H-1B visa
00:43:12.600
holders into the United States as possible to build something like this. Once it's built,
00:43:17.360
you might not need the kind of immigrant labor that you have now, you might not need the
00:43:22.620
managerial state that you have now. You might not need this kind of vast network.
00:43:28.140
Again, that kind of demand for scale on every government, every state. And it could actually
00:43:33.120
allow us to return to something much smaller, something like a city-state, that I think
00:43:38.380
would be much more conducive to human flourishing rather than trying to build these giant
00:43:43.500
global empires and smash them against each other on a regular basis. If it could do that,
00:43:49.260
if it could collapse the need for, you know, rampant immigration and managerial regimes and
00:43:55.180
allow us to scale society down to a manageable level while keeping the same level or an increased
00:44:00.520
level of complexity, would that not be a possible trade-off that could be worth it?
00:44:05.760
I think that people who are operating with those sorts of ideals in mind are certainly better allies
00:44:11.700
than those who foresee, you know, some total cyborgization of humanity and ultimate replacement
00:44:19.500
by algorithms. Balaji Srinivasan's idea of the network state, for instance, or even,
00:44:28.220
well, let's just stick with that. This idea of kind of a replacement of the
00:44:37.680
modern and outmoded nation state and the current geopolitical arrangement with different kinds of
00:44:45.360
opt-in modes of government and economies. I think that holds out a better model than some
00:44:54.680
of the more draconian and basically culturally, and sometimes biologically, genocidal ideas as to where
00:45:03.140
this could all go. And in the short term, those tools will need to be used, right? Like if all of
00:45:10.020
us right now decide to become monks or to become Amish, then it's pretty clear who will inherit
00:45:17.240
the earth. So it's very, very complicated. You know, I know
00:45:24.320
we're running short on time, so I can't go into this too much, but I do think about this in terms of
00:45:29.640
religion. In fact, it's the only way that I really know how to think about it. And all
00:45:35.620
throughout history, especially Christian history, you have all these compromises that are made with
00:45:40.920
worldly power. And sometimes that's for the best. And sometimes that's for the worst. Uh, so long as
00:45:46.920
people do hold higher spiritual ideals in their hearts and in their motivations, it's at least going
00:45:54.800
to be better as they get their hands dirty and sometimes bloody out in the world. And so you can't
00:46:01.820
necessarily step back and become that ascetic in order to survive, especially not if you are a
00:46:08.720
programmer, right? But you can at least put that distance there and shield your own heart from this
00:46:15.200
nightmarish vision that's for the most part shared by the most powerful players in this. And that can
00:46:24.080
be positive, more positive than it would otherwise be. So, you know, I'm not a
00:46:30.680
total purist, mainly because you can't afford to be. I'm not one who says that if you are a programmer,
00:46:36.380
or even if, even if I give you hell for it, even if you use these AIs to help
00:46:42.620
you with your writing and research or whatever, I'm not one to say, okay, we just need to push all of
00:46:46.960
it out and become purists. If you can afford to do that, great. If you can't, well, then you live in the
00:46:52.720
real world with the rest of us, just like we're using this technology now. But I do think that
00:46:57.620
those kinds of alternative models, just to return briefly to Balaji's notion of
00:47:03.620
the network state. He has another way of thinking about things that I think is really instructive,
00:47:07.860
this idea of the gray tribe and then the red tribe and the blue tribe, right? Like down here on earth,
00:47:12.880
you have the red tribe, the conservatives, traditionalists, so on and so forth. And then the
00:47:17.240
blue tribe, the progressives, the, you know, tranny baby makers, all that sort of stuff.
00:47:21.000
And then you have this gray tribe that exists above it in many ways, kind of outside of it,
00:47:26.700
this group of genius autists who will bolt onto either one of those as needed,
00:47:35.180
but still have their own motivations. I think that that way of thinking about things is going to be
00:47:39.920
really important to see it, not as necessarily these people are like us so much as these people
00:47:46.720
will actually be able to help us on occasion. And that includes Peter Thiel, Elon Musk,
00:47:52.420
Marc Andreessen. But as long as you understand where their motives are, even if it does get
00:47:59.100
out of our hands and into theirs, at least you know where it was going. And hopefully though,
00:48:03.180
I have a lot of faith in human nature that hopefully humans, most humans will be able to
00:48:10.400
maintain their humanity, their cultures, their spiritual traditions, and their continuity over
00:48:15.840
time. It's going to be very complicated, nothing that you can sum up in just a few
00:48:20.720
minutes, but the short answer to your question is: those sorts of models, those sorts of ambitions
00:48:27.900
to kind of break the back of the managerial state or break the back of the leftist dominance,
00:48:37.280
they can very well make space for you, even if you don't necessarily adhere
00:48:43.920
to the kind of higher ideals of the technologists. Yeah, funny enough, I've just started
00:48:51.020
writing my next book, which is going to be on posthuman politics, because I think that is where
00:48:57.140
we're going, whether we like it or not. You know, hyper agents have been with us since the
00:49:01.620
beginning of time. But the technological ones I think are about to mingle with the spiritual ones
00:49:06.660
in ways that are going to be rather exciting. But either way, these
00:49:12.160
questions are not going away. They are going to be with us for a long time. Like you said,
00:49:17.760
you know, personally, I go out of my way to make sure I don't use AI when I'm writing or anything like
00:49:22.780
that. I want my voice to be my own. I want my thoughts to be my own as much as possible,
00:49:26.200
but I think at this point, if we aren't aware that everything we look at, everything we're
00:49:31.700
consuming, everything we're involved in is letting in another awareness, another thing,
00:49:37.160
a possible hyper agent, on a regular basis that's going to influence and compel us into
00:49:41.280
specific actions, then we're missing a pretty big piece of the pie of kind of how power
00:49:47.260
and authority will be ordered in the coming years. And I think that we have
00:49:52.320
to maintain that awareness. I think, like you said, it's important to not completely retreat
00:49:57.620
from the battlefield, knowing that if we leave entirely, then those that wield the weapons
00:50:02.760
will ultimately win it. But at the same time, to also guard the hearts of ourselves,
00:50:06.760
our families, the people that we care about, against total possession by kind of things
00:50:11.580
that we don't yet understand. And again, I know people are, you know, mocking me right now. Oh,
00:50:15.940
what an idiot. You don't understand. You don't get it. That's fine. You
00:50:19.860
don't have to be a skeptic. But at the same time, I think those realities are
00:50:25.080
pretty critical, ultimately, for us to continue to invest in. Joe, we are going to
00:50:29.920
go to a couple of questions of the people real quick. Before we do, is there anything you want
00:50:34.020
people to check out, a last word that you have, anything you want people to find that you're
00:50:38.560
working on? Just to jump off of what you just said, for the mockers in the crowd: ELIZA,
00:50:45.860
the chatbot, has been around since the sixties. Let's just imagine for a moment that
00:50:51.240
these chatbots are not really any more advanced than ELIZA. The big difference is that people
00:50:56.500
believe they are. And if you don't think that human belief shifts the currents of history,
00:51:02.060
I don't know what to tell you. You clearly have not looked into the history of religious
00:51:07.660
belief and the way in which it has directed human ideals and human societies. But this ain't
00:51:13.340
ELIZA, and anybody who says that it is ain't keeping up. You know, I get it, garbage
00:51:17.660
in, garbage out, all that, but this ain't your grandpappy's chatbot. Joebot.xyz, latest
00:51:26.460
article "The Stargate Is Open," one coming out in the very near future that's kind of
00:51:32.400
more about the nuts and bolts of AI for the layman, and my book, Dark Aeon: Transhumanism
00:51:40.700
and the War Against Humanity. Signed copies available at darkaeon.xyz or my site, joebot.xyz.
00:51:48.500
All right, guys, I definitely recommend that you go check out Joe's work. So make sure that you do
00:51:53.840
that. Let's go to the questions of the people real quick. Death says: I think that China
00:52:05.040
is ahead of the US because I noticed that there has been a massive mainline
00:52:12.160
investment in blocking/censoring and social engineering what AI can learn,
00:52:18.760
for geopolitical multicultural reasons. Do you think that China is ahead strictly because it
00:52:24.040
did not have the kind of politically correct safetyism the United States was imposing?
00:52:24.040
You know, that term "politically correct" really bugs me, because
00:52:30.840
right now it's taken to mean that you can't use a racial slur or call a transgender
00:52:37.520
person by their appropriate pronoun. But there's a lot of different kinds of political correctness.
00:52:43.520
If you're in a fundamentalist church, it's very politically incorrect to talk about evolution
00:52:50.940
and the facts behind it. And if you are in the Chinese communist government, it's very politically
00:52:57.400
incorrect to talk about Tiananmen Square, to talk about the Uyghur encampments, to talk about
00:53:05.140
the Sharp Eyes surveillance program and so forth. And if you go to DeepSeek's R1 and start asking
00:53:11.920
such questions, it will censor you. It might give you semi-accurate IQ statistics. It might give
00:53:20.020
you semi-accurate descriptions of human sexual biology, but it is absolutely censored. So, you
00:53:28.960
know, I kind of reject that out of the gate. I think that in many ways, just like with human
00:53:34.540
beings and human societies, there is no real free speech anywhere. And to the extent that
00:53:41.900
somebody is going to say something really horrific about children or about, to be honest, things that
00:53:47.760
I hold sacred, I don't think that unbridled free speech is necessarily an ideal. Freer is
00:53:54.820
better, but there are many things that you shouldn't say. And if you do say them, you
00:54:00.060
should at the very least have your ass kicked. Yeah. China has its own set of taboos, and they might be
00:54:06.660
different from our political orthodoxy, but that doesn't mean that they are not imposing their own
00:54:11.940
restrictions in a similar manner. Let's see here. The Metal Mystic says:
00:54:20.720
Musk said, quote, "the only thing we won't have is meaning" at bot utopia. Zero self-awareness
00:54:29.460
of implications, proud of that in fact. Bots everywhere, humans with no meaning, building mass
00:54:35.560
despair. Go Elon. Yeah. I'm not familiar with that specific quote, but yeah, to kind of dovetail
00:54:42.100
with what you said: with Marc Andreessen, he's just like, well, yeah, nobody might have jobs anymore,
00:54:46.480
but everything will be super cheap, right? So it all kind of works out. It's like, no, people
00:54:50.760
need meaning in their lives. That's actually far more important than, you know, securing the cheapest
00:54:55.980
plastic junk from China. That's actually far, far more important at the end.
00:55:00.980
Yeah. I'd say the Metal Mystic has it right on. I believe I detect sarcasm with the "go Elon,"
00:55:07.380
unless the Metal Mystic is staring into the void and loving it, like Nihilist Arby's. Yeah,
00:55:14.920
I agree. I mean, that was a really important point. You know, he's talking about
00:55:20.280
how, and a lot of these guys are talking about this, that it won't be universal basic income.
00:55:25.180
It'll be universal high income. And you already see, on a very superficial level,
00:55:31.440
if you want an image of something, you don't have to learn to draw and paint. You don't even have to
00:55:35.820
learn how to use graphic design software. You simply have to ask the bot to create it. And
00:55:40.560
maybe it's not exactly what you wanted, but it is there for you. Same thing if you want an essay,
00:55:47.000
same thing if you want the answers to your questions, even if they're not accurate and even if
00:55:50.760
they're not well written. AI is kind of a genie, a cognitively impaired genie.
00:55:59.560
And I think that they just see it becoming smarter and smarter. But without a doubt,
00:56:03.200
even if, let's say, these pipe dreams come true, and just like Satan promised Jesus in
00:56:08.580
the desert that he could turn the stones to bread, they literally take sand, make it think,
00:56:13.640
and produce unlimited quantities of bread and wine and opiates, robots, sex bots, everything you could
00:56:21.620
possibly want. Right? Where is the meaning then? What are you striving for? What are you even
00:56:26.500
doing? There's really not much of any, right? Except just to kind of enjoy the ride
00:56:32.380
of the singularity, this sort of post-human ascent. And, you know, Musk, yeah,
00:56:38.680
I get it, man. He's a complex figure. I don't mean to demonize anybody, just
00:56:43.420
simply to say that they may be full of some demons. Aren't we all? But I think that
00:56:48.120
the question of meaning, as the Metal Mystic just pointed out, is another
00:56:53.060
really important thing to think about. There are tons of people on earth right now already without
00:56:59.080
meaning. And the cultural push should be towards reaching out to those people and helping
00:57:07.300
them find ways towards meaning and usefulness, not to demoralize them and not to,
00:57:15.240
in actual reality, push them out of the picture so that they are ultimately meaningless. So,
00:57:21.180
yeah, to me, this demoralization right now, because these technologies can't just produce
00:57:28.400
everything you want, right? We don't have that radical abundance. We can't turn stones into bread.
00:57:34.040
And so that demoralization is extremely dangerous, because, yeah, it would be bad if you
00:57:40.580
had some kind of singularity scenario where we were rendered obsolete, but also, and perhaps even
00:57:46.900
worse would be if you demoralized an entire generation and told them that the only thing they
00:57:52.300
would be good for is being a prompt engineer. And if it turns out that these AIs and the robotics
00:57:58.380
are more like ELIZA than they are God, then you've basically crippled them. And so, yeah,
00:58:05.420
that's gotta be a really important emphasis to try to build humans up and build up human
00:58:11.520
excellence now, no matter what they're promising. Yeah, it's a kind of interesting thought to
00:58:17.880
pursue. What if AI is like an asymptote that you keep approaching, but you're just going to
00:58:23.280
weaken humanity without ever actually reaching it? And so in the process, you just kind of
00:58:27.380
break down America's and the wider world's ability to function while chasing
00:58:33.580
a singularity you never quite reach. Yeah, that's an interesting one
00:58:39.200
to think through. Let's see here. A non-reviewer says, if AI gives us ninja
00:58:45.140
robots, at least we won't be bored. Very true, though again, nightmares could possibly ensue.
00:58:53.280
And then the Metal Mystic also says, not a fan of Claremont. They do
00:58:59.160
advocate a digital bill of rights. Seems like worthwhile allies. Archeofuturism
00:59:05.580
seems like a plausible alternative positive vision. We must put forth something.
00:59:12.920
Yeah. Archeofuturism, Guillaume Faye, if I'm pronouncing his name correctly.
00:59:19.900
I never read Archeofuturism. I've read passages that people have quoted.
00:59:26.640
And so I can't speak to it with any depth, but I heard Guillaume Faye speak one time.
00:59:34.300
It's quite a funny story; I'll leave the funny stuff aside. His vision,
00:59:42.140
as he explained it then, was first off cultivating a kind of white ethno-state
00:59:50.480
alliance of the global North against the predations of the global South, but doing so
00:59:55.680
using technology outwardly. But also, and I fear I'm going to mangle
01:00:02.540
this, what I took from it some decade-plus ago was that certain technologies
01:00:09.900
are going to be necessary. Certain technologies are going to go forward, but it's not going to be
01:00:14.340
a techno-utopia. And so you have to have some kind of redundancy. You know, he's French,
01:00:19.100
so I imagine that would involve very sophisticated, hands-on
01:00:24.420
agriculture, and I imagine also a very deep and rich cultural production
01:00:30.380
that doesn't necessarily involve all these technologies. Again, I don't
01:00:35.440
want to get too far out over my skis on a book that I've never read and a concept that I haven't
01:00:40.120
contemplated that much. But the overall vision is of technology, especially
01:00:47.200
damaging technologies, being directed outward and not inward, so that you don't have the military
01:00:53.120
technologies that went into the smartphone being pumped into the brains of children every day.
01:00:57.600
I think the overall view is that you need some sort of redundancy in the civilization. You
01:01:04.320
need people to keep knowing how to grow crops and raise animals and slaughter them. You need
01:01:10.640
people to know how to hunt. You need people to know how to talk to other people. You need
01:01:14.740
people to know how to do machining. You need people to do all of these things. And you especially
01:01:18.540
need people who know how to teach the next generation how to be people, and how to catechize
01:01:24.540
them. Yeah, that element I agree with, but again, I am not endorsing
01:01:30.940
Guillaume Faye. I'm probably not even pronouncing his name correctly. And I don't pretend
01:01:36.420
to speak for the vision of Archeofuturism, but I do think redundancy is going to be essential.
01:01:43.080
All right, guys, well, we're going to go ahead and wrap things up, but Joe, it has been great
01:01:47.420
talking with you. Like I said, if anybody has not checked out your work, they should most definitely
01:01:51.400
head that direction after we're done. And of course, if this is your first time on the channel,
01:01:56.260
go ahead and subscribe, click the bell notifications, all that YouTube stuff, so the algorithm can
01:02:01.060
program you with this show instead. Make sure that if you would like to get these broadcasts
01:02:05.680
as podcasts, you subscribe to the Auron MacIntyre Show on your favorite podcast platform.
01:02:11.260
And if you would like to pick up my book, The Total State, you can do so both in print and now
01:02:16.880
in audiobook as well. Thank you, everybody, for watching. And as always, I will talk to you next time.