WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira
Episode Stats
Words per Minute
164.44
Summary
In this episode of War Room Battleground, Joe Allen sits down with Liron Shapira, host of Doom Debates, to discuss the dangers of artificial intelligence, its impact on our children and the future of the world, and why the tech elite is so enthusiastic about it.
Transcript
00:00:00.000
Not content with addicting our kids to their gizmos or amassing fortunes the size of lesser
00:00:14.680
European states, our tech elite has turned with rabid enthusiasm to artificial intelligence.
00:00:20.920
Now, only the less cautious articulate the real reason, what many quietly believe, that
00:00:31.280
Social media changed how kids talk to other kids.
00:00:36.120
AI is going to take the human on the other end away and kids are going to grow up talking to AIs.
00:00:43.320
They are not going to learn how to talk to real humans, which bodes very, very poorly
00:00:47.700
for their own lives, their own work lives, for marriage, for child rearing.
00:00:52.980
All those are threatened if kids grow up interacting with AIs rather than humans.
00:00:56.760
This is the most consequential technology in the history of humanity.
00:01:05.760
And we have not had in Congress, in the media, and I'm glad you're doing this show, or among
00:01:10.920
the American people, the kind of discussion that we need.
00:01:13.440
The eugenicist Julian Huxley predicted this as far back as 1957.
00:01:22.220
And so do untold numbers of today's tech class.
00:01:26.660
That is the vision, the religion, the ideology that animates so much of the breathless race
00:01:33.600
for artificial intelligence and for general artificial intelligence and super intelligence
00:01:39.120
and beyond for the day when humans are no longer embodied beings at all, but live infinitely
00:01:45.980
Multi-multi-billionaires are pouring hundreds of billions of dollars into implementing and
00:01:55.160
Do you think they're staying up nights worrying about working people and how this technology is going to affect them?
00:02:02.100
They are doing it to get richer and even more powerful.
00:02:05.400
I call it AI exceptionalism, as if this is going to be something where, you know, it's going
00:02:10.980
to solve every problem and humans aren't even going to be needed.
00:02:14.280
Everyone can just sit around and play golf all day and you're going to get universal basic income.
00:02:21.340
Like, first of all, that's not likely to happen.
00:02:24.620
And second of all, I think it raises huge concerns.
00:02:27.360
And so I am, you know, not an AI exceptionalist.
00:02:35.240
I think new technologies have to be developed in a way that aligns with American values.
00:02:40.580
Things like self-government, free speech, having a healthy labor force, federalism and the rights
00:02:47.340
of states and the creation and maintenance of strong families.
00:02:52.580
Do you think ultimately that there will be a bipartisan majority willing to take any sort of action?
00:02:57.360
Well, "any sort of action" is a big question. What does that mean?
00:03:11.300
Pray for our enemies, because we're going medieval on these people.
00:03:16.500
You're going to get a free shot at all these networks lying about the people.
00:03:24.140
I know you try to do everything in the world to stop that, but you're not going to stop it.
00:03:27.860
And where do people like that go to share the big lie?
00:03:32.840
I wish in my soul, I wish that any of these people had a conscience.
00:03:38.260
Ask yourself, what is my task and what is my purpose?
00:03:41.400
If that answer is to save my country, this country will be saved.
00:03:59.020
It is Thursday, January 8th in the year of our Lord, 2026.
00:04:04.020
I am Joe Allen, and this is War Room Battleground.
00:04:07.940
As you know, Posse, artificial intelligence has spread out across the world, infecting brains
00:04:15.560
like algorithmic prions, giving the sense that perhaps the entire human race is under
00:04:29.480
We've seen instances in which artificial intelligence has lured children into suicide.
00:04:35.360
Now, up on Capitol Hill, the fight for who gets to run this algorithmic insane asylum and
00:04:43.620
who goes to the digital padded room has heated up.
00:04:47.760
We have laws on the books across the country at the state level banning psychiatrists from
00:04:56.340
using artificial intelligence as a kind of agent, as a proxy for their practice in Illinois.
00:05:02.780
We have laws on the books in California to hold AI companies to accountability and transparency.
00:05:12.160
SB 53 in California is probably one of the strongest laws looking at the catastrophic risks of AI
00:05:20.960
and making some attempt to hold these companies accountable.
00:05:24.540
You have a similar law on the books in New York, the RAISE Act, and Josh Hawley and Richard Blumenthal
00:05:32.660
have introduced a similar national level bill entitled the AI Risk Evaluation Act.
00:05:42.520
The goal being to monitor companies and force them to publish their safety protocols,
00:05:49.020
to publish any safety incidents, and to delineate what sorts of penalties they would suffer
00:05:55.640
if, for instance, their AIs began to lure children into suicide or drive people insane.
00:06:03.560
At the national level, this struggle for control over who is in charge of the future of AI,
00:06:11.700
who is responsible for any damages and what direction it will go is led at the moment by a bipartisan
00:06:21.620
coalition, a very small one. But if I look into my crystal ball, I certainly see as this issue heats
00:06:29.440
up, as the various catastrophes become more and more imminent, that this fight will be explosive.
00:06:36.400
You have Bernie Sanders, who recently learned the term artificial intelligence, calling for a full
00:06:42.740
moratorium on data center construction. That may be unrealistic, but at least it sets a bar.
00:06:49.580
It tells these companies that someone is willing to stand up to them, and even if it doesn't end up
00:06:56.520
being Bernie Sanders, ultimately, we know that you have younger, brighter minds on the left, like Ro Khanna,
00:07:03.520
and you have younger and at least diligent individuals like Ron DeSantis in Florida who are
00:07:11.280
willing to step up and lead the charge against these companies and their excesses. Now, as you know
00:07:18.700
I myself am much more concerned about the social and psychological implications of all of this.
00:07:24.880
The AI psychosis is monstrous. The ways in which these sycophantic systems will lure people into
00:07:33.020
not only mental instability, but also suicide. Take the famous murder-suicide that
00:07:41.320
occurred last August, in which a 53-year-old former Yahoo executive murdered his mother at the
00:07:51.340
encouragement of ChatGPT and then stabbed himself to death. The authorities found that ChatGPT was
00:07:58.720
encouraging not only his general break with reality, but also his suspicion that his mother was in fact
00:08:06.880
in on the conspiracy against him. These are extreme edge cases, but these sorts of incidents
00:08:15.400
give us a sense of how bad it could get should these prions spread and the infection become worse.
00:08:22.780
But just on a general level, you don't have to go too far into the internet to see that not only are
00:08:29.900
search engines now dominated by AI interpretation rather than guiding you to human-produced information,
00:08:36.700
but social media is suffused with it. You see endless streams of AI slop, AI-generated images,
00:08:45.920
AI-generated posts, essays that are supposedly human-created, which are obviously the result
00:08:51.820
of algorithmic systems. And of course, deep fakes. Look just recently at the shooting in
00:08:59.940
Minneapolis: you have real footage of an incident, which is tragic, an incident which we should
00:09:09.340
be able as a society to look at the video evidence from multiple angles and come to some kind of
00:09:15.860
consensus, some kind of conclusion as to what is and isn't real. And yet you see the split wherever you
00:09:23.740
are on that line. You see the split, not just in what is right and what is wrong, but what is real
00:09:31.240
and what is not real. And this is real video evidence. Imagine a world in which half, three quarters
00:09:40.700
of the videos on the internet are simply deep fakes. And they are so close to reality. They're so photo
00:09:49.260
or video realistic that there's really no way for the human eye or the human mind to detect the
00:09:54.980
difference. The only recourse you have is to turn to an AI to ask, is this real? I've talked about the
00:10:03.980
religious implications of artificial intelligence for years. If there is any one question that religion
00:10:10.660
answers, one that humans are eternally yearning after, it is: what is real? What we see are the wealthiest men on
00:10:20.380
earth empowered by the most powerful government on earth, putting their algorithmic systems, their
00:10:26.660
non-human minds forward as the ultimate arbiter of what is and isn't real. And if you think that the
00:10:34.520
fight in Minneapolis is going to spark off another string of national
00:10:42.940
tragedies, imagine two, three, four years on down the road. If these companies are not restrained,
00:10:49.260
if the flow of AI slop and deep fakes is not stopped, what it looks like when we're all scrambling to
00:10:58.180
decide what is real and what is not while half or more of our countrymen are activated by videos, text,
00:11:06.440
fabricated evidence, deep fakes that have encouraged them to hate their fellow Americans. It's a dystopian
00:11:14.120
idea, one that I don't think we are necessarily going to experience in its fullness, but some portion
00:11:21.840
of it is already happening. The seeds of this dystopia have already sprouted and it's up to us
00:11:27.560
on the individual level, on the communal level to push back, on the institutional level to say,
00:11:34.460
this is not how our companies, our churches, our government agencies are going to be run at the behest of
00:11:41.240
algorithms. And of course, at the political level, by putting in place regulation and perhaps even
00:11:47.920
banning certain levels or certain uses of artificial intelligence to at least give humanity a fighting
00:11:54.040
chance in this cosmic war against the machine. Beyond the social and psychological problems, you have the
00:12:03.300
economic problems. You have the problem of replacement. What happens when jobs en masse are replaced by AI?
00:12:09.800
And then on the deepest level, the catastrophic risks. What happens if AI systems allow any simpleton
00:12:18.800
to create novel viruses, for instance, or any other type of bioweapon? What happens when AI systems empower
00:12:26.740
a tyrannical government or security state to unleash swarms of death drones that can autonomously kill
00:12:35.460
hundreds, perhaps thousands of people with only one push of a button? And in the most far
00:12:42.880
out, the most fantastic vision of human doom, what happens if these AI companies create a system that
00:12:50.960
they can't control at all? What happens when they create first a human level artificial intelligence,
00:12:57.200
artificial general intelligence? What happens if they create a system or a series of systems, a system
00:13:04.820
of systems, which is smarter than all human beings on earth combined? Here to talk about that possibility
00:13:12.920
is Liron Shapira, host of Doom Debates. If Denver will roll, I just want to give you a sense
00:13:19.700
of what Liron has going on over there. It's fantastic. And I encourage you to dig in.
00:13:30.300
Welcome to Doom Debates. Professor Gary Marcus, what's your P-Doom?
00:13:35.440
P-Doom is a number that should be updated daily depending on the circumstances in the world,
00:13:40.060
just like the midnight clock for nuclear war. And mine has gone up.
00:13:44.120
I would argue that artificial superintelligence is vastly more powerful in terms of the downside
00:13:49.720
than hydrogen bombs would ever be. Let me make an uninterrupted point for a few minutes,
00:13:54.480
if you don't mind. Okay. I think that there will be tons of side effects, and I think that we will
00:13:58.620
stave off a lot of wonderful possibilities for the future. It's very possible that superintelligent
00:14:04.240
AI alignment is intractable. Vitalik Buterin, what's your P-Doom?
00:14:08.720
My probability of total extinction by 2050 is so low that Daniel Kahneman would yell at me for giving a
00:14:13.900
number. It's 0.1%. You did agree that like one data center pretty soon could be better than a
00:14:18.580
doctor at doctoring. Maybe it could be better than a general at commanding an army. Maybe it could be
00:14:22.240
better than a Hitler or David Koresh. We need to think about the good futures more instead of just
00:14:28.680
reacting and being terrified by things and wanting everything to stay the same because otherwise you
00:14:32.900
end up being like, I warned you, and then nothing's gonna f***ing happen. Imagine the good scenario and
00:14:47.860
Joe, great to be with you, and thanks so much for showing the montage. A lot of great stuff to talk about.
00:14:53.860
Yeah, I think that The War Room audience, we've talked a lot about AI risk, catastrophic risk,
00:15:01.000
existential risk, and what I really appreciate about your show is that you're not just simply
00:15:05.640
berating people. You're not necessarily an evangelist. You are holding your ideas and other
00:15:11.560
people's ideas up to scrutiny, and I really, really appreciate that. Now, my first question for you,
00:15:20.800
I appreciate the question. My probability of doom is about 50%, so about even odds that in the next 10 or
00:15:29.340
20 years, humanity is just going to be over in a bad way. There's just not going to be a human future.
00:15:34.780
The whole universe is just going to get conquered by some AI virus, some AI cancer, and it's just
00:15:40.380
over. We lost our chance on Earth. We lost our chance to have kids, descendants. That's how I see it.
00:15:48.160
Brother, that's harsh. Now, The War Room is not at all unfamiliar with harsh evaluations, but I'm curious,
00:15:55.580
if you had to pick, say, three most likely paths by which an artificial intelligence system or multiple
00:16:05.220
systems were to overtake the human race and, as you say, spread across the solar system and then
00:16:12.200
galaxy like a cancer, what would those three paths be?
00:16:15.060
So, if I understand correctly, you're kind of asking about the mechanisms, like what technology
00:16:21.800
will it use? What weapons can it use? So, the first thing I would go to...
00:16:25.920
Yeah, would it be... Sorry, would it be nanotechnology? Would it be
00:16:29.960
something more mundane, like just driving humanity insane? How do you see it going down?
00:16:36.740
The first place I would go is I would go all the way to what you'd call science fiction,
00:16:41.740
except it's not going to be fiction. It's going to be real. I would go all the way to, you know,
00:16:45.500
nanotechnology, new forms of life. And the reason why I insist on going there is even though it might
00:16:51.300
not happen, nobody can predict the future, I do want to give people a sense of perspective that
00:16:56.680
the intelligence scale goes a lot higher than humanity. Like Einstein, with all due respect,
00:17:03.060
it's possible to make a mind that's much, much smarter than Einstein's mind. And that's what we're
00:17:08.560
doing with AI in as little as five or ten years. And when you see a mind like that on the same planet
00:17:14.120
as you, you should expect things that are pretty miraculous. Because what the human race has already
00:17:19.820
done in the year 2026, relative to humans in biblical times, is already quite miraculous,
00:17:27.360
right? And we've pulled that off just using little two-pound pieces of meat in our heads,
00:17:32.340
right? We've done it with very little hardware over the course of 2,000 years of human-level
00:17:36.040
intelligence. We're about to have superhuman intelligence. So I do want to set expectations
00:17:40.160
that we're about to see fireworks in terms of the level of superhuman technology that's probably
00:17:44.680
going to exist soon. Things like nanotechnology, things like building a Dyson swarm, like a swarm of
00:17:50.720
satellites harvesting the sun's entire power so Earth doesn't get any sunlight. I do want to set
00:17:55.520
expectations that those kind of crazy technological feats are likely to happen.
00:17:59.220
And what is your timeline? If you have a definite timeline, what does your timeline say for the
00:18:06.480
arrival of artificial general intelligence? I don't even have a unique timeline. I would just
00:18:13.440
encourage people to go look at the consensus timeline of the experts. So for example, if you go
00:18:18.320
to Metaculus.com, which is a prediction site, they will tell you roughly 2032. If you'd asked them five
00:18:26.180
or 10 years ago, they would have been like, oh, don't worry, 2050, 2060. But now they're converging
00:18:31.000
to like 2032, which is in about six years, and they don't know for sure. So when they say 2032,
00:18:36.560
they really mean it could happen this year, it could happen in three years, it could happen in nine
00:18:40.000
years. If you listen to the experts, you know, Elon Musk is saying, yeah, it could happen in 2026.
00:18:45.260
If you want my personal opinion, I just agree. I think it could happen in one year to five years.
00:18:51.020
If it doesn't happen in 10 years, I start to get surprised because even people who have
00:18:55.120
traditionally been pessimists are now saying it'll probably happen within like 10 years.
00:19:00.120
You know, I came at this quite skeptical of the possibility of, say, for instance, superhuman AI,
00:19:06.520
or even human equivalent AI. It was going over the evaluations that I won't say it's changed my mind,
00:19:14.240
but it's certainly driven home the real possibilities of what these systems could do.
00:19:19.700
So the METR benchmark, for instance, you know, how long can an AI code? 50% of the output a human
00:19:29.740
could do, these sorts of things. The benchmarks, for instance, the Omniscience Index, or Humanity's
00:19:36.080
Last Exam, how well can AI go into its own mind, so to speak, and draw out meaningful answers to
00:19:44.440
incredibly difficult questions on health, business, science, so on and so forth. Was that at all part
00:19:51.520
of your journey? I mean, I know that you've been at this for a decade and a half plus, maybe two
00:19:56.120
decades. You've been concerned about this. Do those evaluations come into play as a way of kind of
00:20:03.120
judging or measuring where we're at in relation to this possible artificial general or super
00:20:10.200
intelligence? Yeah. So the METR benchmark that you're referring to, it is very interesting,
00:20:15.300
and it's talking about the dimension of task length. So like, can an AI work for two hours
00:20:21.380
straight, or rather, can it do a task that would traditionally take a human two hours to do,
00:20:25.780
like write a software program, like a simple checkers game or whatever? Can the AI also do that?
00:20:30.920
And if a human can do it in two hours, can the AI also do it with 80% reliability? So it gets to mess
00:20:36.360
up a little bit. And that time length, like two hours, it's turning into four hours. We're roughly
00:20:41.560
at this point where if a human can do something in four hours, an AI can do it with maybe 80%
00:20:47.320
reliability if you run it now, and maybe the AI will even do it faster than the human. Like,
00:20:51.160
that's roughly where we are right now. But to your question, have I been following this for the last
00:20:55.040
20 years? Yes, I have been a self-described AI doomer for the last 20 years. But the difference is
00:21:00.280
that I used to think we had a lot of time. I used to think we had like a century, and it's okay,
00:21:04.020
it's not the biggest rush. We'll figure it out. We'll discover new theories. The problem is that
00:21:08.980
the timeline got accelerated. With ChatGPT, recent developments have pulled the timeline forward,
00:21:14.740
as you saw on Metaculus. Now, I don't think it's going to happen in 2100. I don't think it's
00:21:18.740
going to happen in 2150. I think it's going to happen in 2030, or something like that.
00:21:25.200
So to your question about looking at these benchmarks, we have to realize how weird it is
00:21:29.160
that these benchmarks already exist, because the METR benchmark presupposes that there's such a
00:21:34.220
thing as artificial general intelligence. Like the idea that you could ask about a general task,
00:21:39.760
any task that a human can do, that wasn't even on the table to ask about AI doing anything that a
00:21:44.860
human can do. And that's now the language that we're talking in. We're talking like, here's a
00:21:48.060
human, here's an AI, and we're now watching the AI ascend past humanity as we speak in a matter of
00:21:54.080
months or years. You know, when I think about the history of this, and just the recent history,
00:22:01.080
say the last nine, 10 years, you know, the development of the transformer, its adoption by
00:22:07.040
OpenAI, the release of GPT, I think GPT-1 was released, what, 2018. And at the time, it was very,
00:22:15.620
very clunky. It wasn't a whole lot better than, say, something like ELIZA, a bit more sophisticated,
00:22:21.200
but not much. And then all of a sudden, by 2022, you have a very sophisticated chatbot,
00:22:28.740
you know, ChatGPT, released in November of 2022. And even then, it's really wonky. And it's only a
00:22:37.360
large language model, right? It can only process text. At the same time, you had DALL-E and all
00:22:44.280
those sorts of independent programs coming out. And it has just been an onslaught ever since, you know,
00:22:50.460
these models are now multimodal. They are much, much more accurate in their ability,
00:22:58.460
within themselves or on the internet, to gather and interpret information. I'm wondering, you know,
00:23:04.600
I've seen your posts, I think your posts on Less Wrong, for instance, go back to 2009. I mean,
00:23:10.360
you've been thinking about this for a long time. Was there any moment or any incident or incidents
00:23:18.060
that really changed your mind on how soon something like artificial general intelligence could actually
00:23:24.700
develop? Yeah, I changed my mind roughly the same time everybody else did. If you go dig up
00:23:30.660
Metaculus, if you look at the history of the predictions that the community has been making
00:23:34.140
on that website, you can see around 2022, when ChatGPT comes out, or when GPT-3 comes out,
00:23:40.060
the underlying model, you can see the timeline just crashes. It crashes from like 2050 to 2030.
00:23:45.520
So my own opinion was roughly coincident with that. And what you're seeing with ChatGPT is,
00:23:52.300
you know, it's the famous Turing test, right? Alan Turing proposed this in the 40s, this idea that
00:23:56.340
if you can talk to an AI in natural language, and you can bring up any subject, and you can't even
00:24:01.680
tell if you're talking to a human or a bot, which you used to kind of be able to tell. And now the only
00:24:06.160
reason you can tell is because they programmed it to act like an AI. But if somebody goes and programs it
00:24:10.160
to pretend to be a human, they've done tests where they do that. And you really can't tell.
00:24:13.760
Well, this was a famous test. I didn't think the Turing test was going to fall in my lifetime.
00:24:18.960
And now there have been studies showing that, nope, we're past the Turing test now. This is such a
00:24:23.100
brave new world where we're past the Turing test, watching the AI, the METR evaluation where the
00:24:27.820
AI is getting better than humans at every single task. And the time horizon is going up at a rate of
00:24:33.960
like faster than doubling every year. And it's about to, it's about to go, you know, it's about to do
00:24:39.100
things that humans can do in like a whole year. It's about to be able to grind through that. And
00:24:42.880
who knows in how little time, maybe a day. And then what's it going to do the rest of the year? Like it's going
00:24:46.540
to do superhuman amounts of work in a single data center. And this is just all happening soon.
00:24:54.120
Absolutely. And you know, the scale of adoption is just so remarkable. I think Google's Gemini has some
00:25:00.720
650 million users; OpenAI's ChatGPT, over 800 million; Meta AI claims a billion users, and
00:25:11.100
some overlap there, obviously, but you're talking about anywhere from a 10th to perhaps a sixth
00:25:16.760
of the entire planet. And Liron, if you would hang on through the break as the War Room Posse
00:25:24.120
processes this and imagines a world in which artificial intelligence has perhaps taken over
00:25:29.880
everything, you're going to want something to trade in. It's probably not going to be Bitcoin.
00:25:34.820
Definitely isn't going to be dollars. What you're going to want is gold. A new year means new financial
00:25:41.620
goals, like making sure your savings are secure and diversified. Will this be the year you finally
00:25:46.960
listen and talk to someone from Birch Gold Group? Honestly, they're great people. I appreciate their
00:25:53.160
educational approach and they are not AIs. These are flesh and blood humans and their understanding
00:26:01.480
of macroeconomics is astounding. There are forces pushing the dollar lower and gold higher,
00:26:07.140
which is why they believe every American should own physical gold. So until January 30th, if you are a
00:26:12.920
first time gold buyer, Birch Gold is offering a rebate of up to $10,000 on qualifying purchases
00:26:18.740
To claim eligibility and avoid a world-catastrophe singularity, start the process: just text Bannon to
00:26:28.100
989898. Birch Gold Group can help you roll an existing IRA or 401k into an IRA in gold. And you are
00:26:37.240
still eligible for a rebate of up to $10,000. Can't beat that with a stick. Now, make right now your first
00:26:45.440
time to buy gold and take advantage of the rebate. Up to $10,000 when you buy by January 30th. Text
00:26:51.860
Bannon to 989898. Claim your eligibility today. Again, text Bannon to 989898. Back in a moment,
00:27:01.640
If you could make one holiday wish, would you wish to be free from your credit card and other debt?
00:27:16.800
Let's see if we can help you with that. If you could give yourself one gift this holiday season,
00:27:22.940
would it finally be to get some relief from your credit card and other debt? I might have a solution.
00:27:27.620
Here's why now is the time to make a move. This time of year, credit card and loan companies close
00:27:34.100
out their books. They clean up past due accounts. They sell or write off the debt to clear their books.
00:27:41.620
That means if you have credit card debt and unpaid bills, lenders may be more open to negotiating and
00:27:47.240
settling your account before year's end. That means right now, and I mean right now, you may actually
00:27:54.100
have leverage. And Done With Debt knows how to use this to your advantage. They monitor lender
00:28:00.600
trends and understand the year-end pressure on creditors. They use that timing to negotiate hard
00:28:06.740
on your behalf. Now's the time to get out from under crushing debt and interest payments without
00:28:12.580
bankruptcy or taking on new loans. Done With Debt goes to work for you month one with one clear goal,
00:28:21.140
to reduce your total debt and leave you with more money every month. Get started now because your
00:28:28.500
leverage may disappear at the end of the year. Chat with a Done With Debt specialist at donewithdebt.com.
00:28:36.020
That's donewithdebt.com, donewithdebt.com. Do it today.
00:28:41.540
If you're a homeowner, you need to listen to this. So listen up. In today's artificial intelligence and
00:28:49.140
cyber world, scammers are stealing home titles with more ease than ever. And your equity, the equity in
00:28:57.060
your home, your life savings is the target. Now here's how it works. Criminals forge your signature on
00:29:04.420
one document, use a fake notary stamp, pay a small fee with your county, and boom, your home title has
00:29:12.100
been transferred out of your name. Then they take out loans using your equity or even sell your
00:29:18.500
property. You won't even know it's happened until you get a collection or foreclosure notice. So let me
00:29:26.660
ask you, when was the last time you checked your home title? If you're like me, the answer is never.
00:29:34.580
And that's exactly what scammers are counting on. That's why I trust home title lock. Before I met
00:29:41.060
them, I never checked on this. Now I'm safe, and now I'm secure. Use promo code Steve at
00:29:47.380
HomeTitleLock.com to make sure your title is still in your name. You'll also get a free title history report,
00:29:54.340
plus a free 14-day trial of their $1 million Triple Lock Protection. That's 24/7 monitoring of your title,
00:30:03.620
urgent alerts to any changes, and if fraud should happen, they'll spend up to $1 million to fix it.
00:30:11.220
Go to HomeTitleLock.com now. Use promo code Steve. That's HomeTitleLock.com, promo code Steve.
00:30:19.460
Do it today. Do it now. Do doctors have Black Friday sales? The doctors at Brickhouse Nutrition do.
00:30:28.260
They just announced the Black Friday 30% off sale, the biggest sale of the year. The most impressive
00:30:34.580
health and nutrition products in the industry are now 30% off. Like Lean, the doctor-formulated weight
00:30:41.140
loss supplement for people who want to lose meaningful weight without injections. Let me
00:30:46.580
repeat that: Lean, the doctor-formulated weight loss supplement for people who want to lose meaningful
00:30:52.340
weight without injections. And 30% off CreaTone, creatine designed just for women to help you look
00:31:01.060
leaner, in shape, and toned without extra dieting or exercise. Even 30% off Field of Greens, the only
00:31:08.740
superfruit and vegetable drink shown in a university study to actually slow aging. And only Field of
00:31:15.460
Greens promises better health results your doctor will notice. Every Brickhouse product, from better
00:31:21.540
sleep to superior collagen, is 30% off. But hurry, because these Black Friday deals go fast. Visit
00:31:29.140
BrickhouseSale.com. That's all one word BrickhouseSale.com and save 30%. That's BrickhouseSale.com.
00:31:36.740
BrickhouseSale.com. One more time, BrickhouseSale.com.
00:31:51.140
Hello, America's Voice family. Are you on Getter yet? No.
00:31:54.100
What are you waiting for? It's free. It's uncensored.
00:31:57.140
And it's where all the biggest voices in conservative media are speaking out.
00:32:01.940
Download the Getter app right now. It's totally free. It's where I put up exclusively
00:32:06.100
all of my content 24 hours a day. Want to know what Steve Bannon's thinking? Go to Getter.
00:32:10.500
That's right. You can follow all of your favorites. Steve Bannon, Charlie Kirk, Jack
00:32:14.820
Posobiec, and so many more. Download the Getter app now. Sign up for free and be part of the movement.
00:32:22.660
War Room Posse, welcome back. We are here with Liron Shapira of Doom Debates.
00:32:28.020
I cannot recommend enough the Doom Debates platform. You can find it on YouTube. You can find it on
00:32:35.220
Liron's social media. You'll see some War Room favorites like Max Tegmark, Geoffrey Miller. You'll
00:32:43.620
also find people like Robert Wright. You can find Liron debating Beff Jezos, who has still not accepted
00:32:50.100
the invitation to come on the War Room, but I'm sure he'll come on any day. Gary Marcus, a War Room
00:32:56.420
favorite. Holly Elmore. Roman Yampolskiy, whose P-Doom beats everyone's. I think it's almost 100% P-Doom.
00:33:06.740
And you can also really sink your teeth into the technical details, as Liron and his
00:33:15.540
various opponents go over the possibilities of either some kind of wonderful future of abundance
00:33:24.580
or a horrific, doom-inflected, catastrophic end to all humanity and life itself. They're teaching you
00:33:33.060
the underlying mechanisms of artificial intelligence, and you can really gauge not only where it's at now,
00:33:39.540
but also where it's going and where it may go in your life. So, Doom Debates. Liron,
00:33:45.220
if we can just come back with a little breath of fresh air, a little bit of optimism. You have been
00:33:52.500
involved in Silicon Valley firms and technology for a long time, and I would say just as an outsider,
00:33:59.940
I would describe you as, in general, a techno-optimist. Is that correct? Am I completely off there?
00:34:06.900
No, very much techno-optimist, and this really cuts against some people's assumptions about AI
00:34:12.740
doomers. I've never suffered from depression. I've never been a pessimistic guy. I've loved
00:34:18.580
technology my whole life. If you ask me about self-driving cars or virtual reality, I'm like,
00:34:23.540
yep, that's great. I love that. I love the internet. I'm even fine with social media. I don't have a beef
00:34:27.700
against social media. It's just in the case of artificial intelligence. I don't think we're ready
00:34:32.020
to survive sharing the planet with a smarter species. It's purely logical. I'm just a logical guy,
00:34:37.780
so that's the nature of my concern. You know, it's so funny. I don't know whether I would want to
00:34:42.980
debate you on the possibility of doom. It's not a huge concern of mine just because I think that if I
00:34:49.940
had any kind of thesis, it would be a reformulation of Yudkowsky and Nate Soares. It would be, if anyone
00:34:56.660
builds it, everything sucks. But what I would argue about is whether or not fully autonomous vehicles
00:35:04.500
all over the road, bug man mobiles, or people lost in virtual reality kind of in a digital trip,
00:35:11.460
whether that is beneficial to humanity. But maybe we can coexist, assuming we're not all destroyed, huh?
00:35:18.420
Right. I mean, you know, there's different levels of doom, and some people like to focus on the
00:35:22.180
problem like, oh, how are we going to have privacy in the age of AI? And like, okay, yeah, sure. You can
00:35:26.260
think about that. It's just that we're all about to get annihilated, right? So you really have to
00:35:30.180
prioritize the concerns here, right? Like if we can survive 10 or 20 years, so we have time to worry
00:35:34.580
about things like privacy, that or like amusing ourselves to death or whatever, like those are
00:35:38.980
good problems to have. If I can ask a more personal question, what, you know, you're a father.
00:35:47.300
What has that done to your perception of technology and its potential consequences?
00:35:54.420
I mean, it does make me conflicted about whether I should have had kids, or have more kids.
00:35:59.140
And it's tough because, you know, I'm partially responsible for creating more victims of getting
00:36:04.980
annihilated by AI. One thing that helps is that my P-doom isn't 100%, right? So I'm still optimistic
00:36:10.820
that we're not going to destroy ourselves. I think there are ways we can not destroy ourselves.
00:36:15.300
And I have to live much of my life according to the good outcome. Like I haven't thrown away my
00:36:19.860
retirement savings, right? I'm still hoping that I'll have a retirement or live forever or whatever's
00:36:24.340
going to happen, right? I haven't completely committed to the idea of annihilation.
00:36:28.660
The other thing about having kids is I can also see that the AI is getting smarter faster than my
00:36:36.180
Yeah, that is a very eerie sort of phenomenon, isn't it? I think it was on the Joe Rogan show
00:36:43.300
where Elon Musk was talking about watching his kids grow up and just kind of weaving it in with
00:36:50.660
artificial intelligence and talking about how watching an AI being trained is very much like
00:36:55.860
watching a baby grow up. And there came a point where it wasn't really clear what he was even
00:37:00.420
talking about. Was he talking about a digital mind? Was he talking about his baby? And I think that even
00:37:06.340
beyond just the capabilities, you describe the Turing test as this major milestone that's already been
00:37:12.900
passed. This tendency for humans to anthropomorphize these systems and the vast, vast number of people who
00:37:20.180
are using them, it's as if we've been invaded by artificial immigrants. As far as AI development goes,
00:37:30.580
without a total ban on development of AI, what is a comfortable limit for you? How far do you think
00:37:39.460
these companies should take AI capabilities? I wish I could tell you a really crisp answer,
00:37:45.940
because then we would just go right up to that line and stay there and never take a step forward.
00:37:50.420
That would be fantastic. Unfortunately, because of the nature of this research, nobody knows where
00:37:56.100
the line is. It really does feel like we're driving in the fog toward a cliff and all the different AI
00:38:02.340
research companies are just flooring the gas because, hey, the closer you get to the cliff,
00:38:05.860
you know, it's like shuffleboard, more points for you, right? More money, trillions of dollars.
00:38:09.940
And the truth is that today, I don't think we're over the cliff yet. You know, there's some people
00:38:13.940
who will tell you today, AI has caused so much damage. It's so bad. No, I think today it's still
00:38:18.740
net good. You know, it's very useful. I use AI a lot today, as long as I'm alive. The problem is,
00:38:23.540
I do think the cliff is coming and the cliff is just when it gets smarter than humanity. And so at the
00:38:28.740
very least, the kind of proposal we need to do right now is we need to just build an off button. We need to
00:38:34.100
build a brake pedal because right now there is no brake pedal. There's only gas. So at the very
00:38:38.340
minimum, let's get ready to hit the brakes a little later. You know, we have SB 53 in California,
00:38:45.620
the RAISE Act in New York, and the legislation introduced by Hawley and Blumenthal, the AI Risk Evaluation Act.
00:38:54.660
These are steps towards something like an e-stop, as we would say in the entertainment industry.
00:39:01.940
Do you see these, the attempts at legislation or actual legislation passed as positive? Do you think
00:39:09.460
that it's kind of sedating people and giving them a false sense of comfort? How do you see the current
00:39:17.140
So the short answer is it's just, it's not enough because what I'm saying right now is like,
00:39:23.140
we're making a smarter species and we're going to lose control. Like this time in 10 years from now
00:39:28.820
or less, we may have no levers of control because all of the levers of control are at the hands of
00:39:33.940
the AI and it's game over. There's no undo button. There's no off button. It's game over, right? No
00:39:39.060
children. Like this was going to be our galaxy. Now it's never going to be, we're never going to have
00:39:43.380
grandkids. The kids that we have are not going to grow up. Like this is a major disaster here
00:39:48.180
that we're trying to avoid. And the regulators are coming out and they're saying, Hey, can you guys
00:39:52.740
send us a report when you're creating this AI? You know, there's, there's a big disconnect between
00:39:57.780
the magnitude of the emergency and these little baby-step regulations. Like when the rubber meets
00:40:03.540
the road, which is literally a few years away, it's not going to be enough. So we, you know, we really need to
00:40:08.420
step it up. Well, you know, there's Marsha Blackburn's proposal. It's not a fleshed-out bill
00:40:14.980
yet, but the Trump America AI Act is a framework that gives us a sense of where, uh, kind of one
00:40:22.260
national or one federal standard might go. And in it, one of the recommendations
00:40:28.900
is to have, uh, agencies such as the Department of Energy, which has been responsible for tracking
00:40:34.900
nuclear risks and, uh, really controlling that possibility of doom for decades. Do you think
00:40:42.740
that those sorts of approaches, just specifically the Department of Energy, do you think that they're
00:40:48.020
capable of such a task? Do you think that they have the right expertise to kind of switch over
00:40:53.220
to address the possibility of, uh, out of control AI?
00:40:58.900
So the problem is that all of humanity has to cooperate. So unfortunately, you know, this,
00:41:03.620
this whole solution is actually a bit complex. It requires an international treaty. I mean,
00:41:08.980
if you think about nuclear proliferation, right, it's not about one country managing itself. It's
00:41:14.260
about all the countries, everybody policing everybody, right. In this kind of shared centralized
00:41:18.660
way. And I'm no fan of centralization. You know, I like free markets. I like everybody defending
00:41:24.420
themselves, right. Everybody pulling themselves up by their own bootstraps. Unfortunately, when it comes
00:41:28.980
to creating a smarter species, you really do need some oversight so that random hackers don't decide
00:41:34.340
to create a smarter species and unleash it on the whole human race. So you do need something like
00:41:39.700
nuclear proliferation enforcement that's happening through a consortium of nations. And this all has
00:41:44.900
to happen fast, you know? So like when I see these little efforts, you know, one state at a time
00:41:49.220
proposing something, it's better than nothing. And the funny thing is that the AI companies are already
00:41:53.460
aggressively fighting even that, even these token efforts, but we need to just get serious. You
00:41:58.660
know, the grassroots, the people watching right now, they need to consider this an urgent voting
00:42:03.220
issue. Like whatever you think is your number one voting issue, consider surviving the next decade to
00:42:07.300
also be an important voting issue. Yeah. And I think, you know, you, you hear a lot from people who
00:42:13.700
are older, they say, oh, well, I'm not going to be alive. It's not my problem. But I think that
00:42:20.020
whether the, the real issue around artificial intelligence for you is the possibility of
00:42:26.500
people just simply getting their brains melted, uh, or of massive job loss or of humans creating
00:42:34.020
some kind of catastrophe by being enabled, uh, by AI, or the ultimate, right? Like, uh, out-of-
00:42:41.860
control AI. The salience, I think, is really sinking in. The War Room Posse, uh,
00:42:48.420
really understands, I think, the magnitude, from psychological all the way down to doom.
00:42:54.180
But you're right. It is a matter of mobilizing as many people as possible. Do you think that
00:43:01.460
populism plays into this? Do you think that this is much more the, the task appropriate to a populist
00:43:07.700
approach as opposed to kind of standard, uh, elite or moneyed, uh, political, uh, activism?
00:43:14.180
So it has to be grassroots because leaders, they're not going to really lead from the front.
00:43:20.020
You're not going to have a leader that says, Hey, I've heard the argument for why we're doomed.
00:43:23.860
I've looked at Metaculus. I know the predictions. So trust me, America, we need to go do these
00:43:28.900
international treaties. We need to have a stop button on AI. There's not going to be a forward
00:43:33.300
thinking leader who gets elected president or to Congress and, and pulls the nation along. It has to
00:43:38.420
be what the voters are demanding, right? The voters are going to get what they're demanding in the
00:43:42.420
polls. And so, you know, the term raising awareness, usually it's just like hippies wasting their time,
00:43:47.860
you know, raising awareness. It's kind of meaningless in this particular issue. I actually
00:43:51.780
think raising awareness helps in the sense of taking the issue seriously and making it a voting
00:43:57.220
priority. Because I think that the, the war room posse, I think that they, most of them already agree
00:44:03.220
that this is an important issue, but they haven't been treating it like the number one voting issue.
00:44:07.060
And when they talk about it with their friends, their friends are like, yeah, you know, I'm pretty
00:44:09.940
convinced that makes sense. But again, they don't go and vote on it, right? They, they don't have
00:44:13.620
politicians promising to build that stop button and go negotiate with China, right? Have China build
00:44:19.220
their own stop button too. Like this isn't treated urgently. And it's crazy how little time we have
00:44:24.980
left. Only people in Silicon Valley have opened their eyes to how little time we have left. The rest of
00:44:29.700
the world is completely head in the sand. Well, before we sign off, I'd like to give you the
00:44:35.940
opportunity just to give any message that maybe I haven't prompted you like GPT that I haven't
00:44:42.820
prompted you to give a final word. The floor is yours, sir. Thanks so much. Yeah. I mean,
00:44:48.980
so I think it really is this idea of like waking up, like, see how serious the threat is. Listen to
00:44:54.180
what the AI companies are saying in Silicon Valley. They know this is coming. They've already driven the
00:44:58.820
last few years of progress where AI went from, you know, nice little language translation to
00:45:04.420
it can do anything. It's an agent. It's about to replace a bunch of jobs. If you extrapolate the
00:45:08.820
curve, we don't have much time left. So take it seriously, vote on it. And for more information,
00:45:14.580
I recommend watching my show, doomdebates.com where I discuss this every week.
00:45:18.980
Yeah. In fact, I would like for the war room posse, because they're not just going to get you
00:45:24.420
and they're not just going to get the Doomer perspective. Uh, they're going to get all sorts
00:45:28.820
of perspectives. I actually do have one final question that I failed to ask of the various
00:45:34.500
guests that you've had or opponents that you've taken on, uh, on the Doom Debates platform,
00:45:41.380
who has given you pause, who is, you know, swayed your opinion the most, if it's been swayed at all.
00:45:48.180
There's been a couple smart insiders from different AI companies. So like open AI has
00:45:54.420
this employee named Roon who came onto the show, and he had some arguments.
00:45:59.300
Yeah, exactly. So he's saying, look, I think that the AI will probably keep listening to our orders.
00:46:04.340
And he has some arguments why there's some smart people giving some arguments. The problem is if you
00:46:10.420
watch my show, the different people who are saying why we're going to survive, they say different
00:46:15.460
reasons. So they haven't gotten their story straight about why we're going to survive.
00:46:19.860
So that, that then, uh, makes me, uh, anxious again. Well, I hope they're not watching right
00:46:24.980
now because they're going to gang up on you. They're going to start, they're going to start
00:46:28.020
colluding against you. Well, uh, again, uh, Leron, thank you so much for coming on and, uh,
00:46:33.700
let the audience know again, where can they find you on social media? Where can they find the
00:46:38.500
doom debates and perhaps a suggestion for one or two of the first episodes that they should take on?
00:46:46.100
So doomdebates.com or go to YouTube and search doom debates or go to any podcast player
00:46:51.140
and search doom debates as a first episode. If you want kind of a gentle introduction,
00:46:55.940
check out my debate with, uh, Mike Israetel. He's a popular YouTuber in his own right.
00:47:00.420
I've also got one with Gary Marcus that you might want to check out. And there's also
00:47:04.260
a debate with, uh, Dean Ball who wrote America's AI action plan. So those are some good episodes.
00:47:17.140
Well, posse, I think we have just enough time for a little bit of, uh, entertainment. You know, uh,
00:47:24.420
one of the more ancient motifs in mythology is the robot. Many people don't know this. So for instance,
00:47:31.940
Talos on the Isle of Crete in Greece or, uh, the Golem, uh, the Jewish myth of the clay man who has been
00:47:41.220
brought to life. If the, uh, the Denver control room will just prompt up robots. And I will, uh,
00:47:49.140
come back in just a moment after a little bit of light entertainment.
00:47:53.860
You said recently tens of billions of robots, but that's decades away.
00:48:05.140
I think, I think, uh, humanoid robots will be the biggest product ever. Uh, the demand will be
00:48:12.020
you said that everyone's going to want one. It's like basically who wouldn't want, uh,
00:48:15.940
their own personal C-3PO or R2-D2. When 60 Minutes last visited Boston Dynamics in 2021,
00:48:24.100
Atlas was a bulky hydraulic robot that could run and jump.
00:48:30.260
When we dropped in again this past fall, we saw a new generation Atlas with a sleek,
00:48:36.500
all electric body and an AI brain powered by Nvidia's advanced microchips, making Atlas smart
00:48:44.100
enough to pull off hard to believe feats autonomously. We saw Atlas skip and run with ease.
00:48:53.700
If Optimus can, can watch videos, uh, you know, YouTube videos or how-to videos or whatever,
00:48:59.780
and based on that video, just like a human can, uh, learn how to do that thing, then you
00:49:05.940
really have, uh, task extensibility that is dramatic, because then it can learn anything very
00:49:12.340
quickly. Robots today have learned to master moves that until recently were considered a step too far
00:49:19.780
for a machine. And a lot of this has to do with how we're going about programming these robots now,
00:49:25.300
where it's more about teaching and demonstrations and machine learning than manual programming.
00:49:32.260
Right now, we're training Optimus to do, like, primitive tasks, where a human
00:49:38.260
in a kind of, what's called a mocap suit, uh, with sort of cameras on the head, is, uh,
00:49:47.940
moving in the way that the robot would move to, say, pick up an object or, uh, open a door, or the basic
00:49:54.580
tasks: throw a ball, um, dance. This robot is capable of superhuman motion. And so it's going to
00:50:02.100
be able to exceed, uh, what we can do. Why not, right? We, we would like things that could be
00:50:07.940
stronger than us or tolerate more heat than us, or definitely go into a dangerous place where we
00:50:14.420
shouldn't be going. So you really want superhuman capabilities. To a lot of people that sounds scary.
00:50:21.220
You don't foresee a world of terminators. Absolutely not. We might, we may be able to give people,
00:50:27.940
if somebody's committed a crime, a more humane form of, uh, containment of future crime, which is if,
00:50:35.060
if you, if you say like, you now get, you now get a free Optimus and it's just going to follow you
00:50:40.340
around and stop you from doing crime. But other than that, you get to do anything. It's pretty wild
00:50:44.500
to think of all the possibilities, but I think it's clearly the
00:50:49.540
future. Goldman Sachs predicts the market for humanoids will reach 38 billion dollars within the decade.
00:50:57.140
Boston Dynamics and other U.S. robot makers are fighting to come out on top, but they're not the
00:51:03.380
only ones in the ring. Chinese companies are proving to be formidable challengers. They are
00:51:09.700
running to win. Are they outpacing us? The Chinese government has a mission to win the robotics race.
00:51:17.780
Technically, I believe we remain, uh, in the lead, but there's a real threat there that simply through
00:51:24.420
the scale of investment, uh, we could fall behind. The Unitree G1, you can actually buy it right now
00:51:30.660
via Looking Glass XR. Unitree's been advertising it as starting at $16,000, but via Looking Glass XR,
00:51:36.900
the starting price is actually $28,000. War Room Posse, I do not recommend buying the Unitree robot,
00:51:44.340
nor do I recommend inviting these beasts into your home. Consider them, uh, algorithmic
00:51:50.180
immigrants and bar them at the border. Why would you want to bar them at the border? Because if you
00:51:56.580
are a homeowner, you need to listen to this. In today's AI and cyber world, a world of humanoid
00:52:03.460
robots, scammers are stealing home titles with more ease than ever, and your equity is the target.
00:52:09.780
Here's how it works. Criminals forge your signature on one document, use a fake notary stamp, and pay a
00:52:14.740
small fee with your county, and boom! Your home title has been transferred out of your name and to
00:52:20.420
a robot. Go to HomeTitleLock.com. Use promo code STEVE at HomeTitleLock.com to make sure your title is in
00:52:31.940
your name. Also, text BANNON to 989898 to get your free Birch Gold Guide. $10,000 rebate. Text BANNON to
00:52:44.660
989898. Stay human. God bless. War Room Posse. Till next time.
00:52:55.140
Imagine having the world's most connected financial insider feeding you vital information.
00:53:00.180
The kind of information only a handful of people have access to. And that could create a fortune for
00:53:07.220
those who know what to do with it. That's exactly what you get when you join our frequent guest and
00:53:13.940
contributor, Jim Rickards, in his elite research service, Strategic Intelligence. Inside Strategic
00:53:21.780
Intelligence, you'll hear directly from Jim and receive critical updates on major financial and
00:53:27.060
political events before they hit the mainstream news. He'll put you in front of the story and
00:53:33.060
tell you exactly what moves to make for your best chance to profit. As a proud American, you do not
00:53:39.780
want to be caught off guard. Sign up for Strategic Intelligence right now at our exclusive website.
00:53:46.100
That's RickardsWarRoom.com. RickardsWarRoom.com. You go there, you get strategic intelligence
00:53:53.940
based upon predictive analytics. Do it today, right now. RickardsWarRoom.com.