INTELLIGENCE WILL WIN - HUMAN OR MACHINE?
Episode Stats
Words per Minute
155.8
Summary
Senators Ron Johnson and Chuck Grassley have their own investigation into the alleged corruption of the Biden family. The Navy has hired a drag queen to be their new "Digital Ambassador" to attract new Navy recruits. And the man credited with killing Osama bin Laden gives his thoughts on the Navy's new recruitment strategy.
Transcript
00:00:00.000
Hello, everybody. I'm Lou Dobbs, and welcome to The Great America Show. Great to have you with us.
00:00:05.360
A beautiful day in America. Chairman James Comer's investigation into the Biden family
00:00:11.300
moving full speed ahead. Senators Ron Johnson and Chuck Grassley also have their own investigation
00:00:17.140
underway. The two senators are homing in on Secretary of State Antony Blinken and his wife and their
00:00:24.900
contact with Hunter Biden relating to Burisma. Burisma is the Ukrainian energy company that
00:00:31.980
paid Hunter Biden and his associates $11 million. Joe Biden was Obama's vice president for four of
00:00:39.900
the five years that Hunter Biden sat on Burisma's board. In December of 2020, Blinken sat for a
00:00:47.620
transcribed interview with Senate investigators in which he was asked, did you ever talk with
00:00:53.820
Hunter Biden on the phone? To which he replied, not that I recall. He was then asked, did you have
00:01:00.240
any other means of correspondence with him? Emails or texts? Blinken replied, no. Emails uncovered on
00:01:09.000
the infamous Hunter Biden laptop show that Hunter Biden in fact did urge Blinken through his wife to
00:01:15.540
speak with executives of a Democrat consulting firm representing Burisma. The firm is called Blue Star
00:01:22.600
Consulting, and Burisma hired them to improve their image amid allegations of
00:01:28.380
corruption. It's unclear what role, if any, Blinken's wife played in that scheme. But at that time, she was
00:01:36.500
also working for the State Department. She was the Assistant Secretary of State for Educational and Cultural
00:01:43.160
Affairs. And turning now to our woefully woke military that is falling far short of its recruiting goals,
00:01:50.980
the Navy has responded and confirmed that they are now using a drag queen influencer as their so-called
00:01:59.080
digital ambassador to attract recruits to the Navy. That ambassador is Joshua Kelley, U.S. Navy Yeoman,
00:02:08.440
second class. He goes by the stage name Harpy Daniels. This should be, at best, an utter embarrassment to
00:02:16.360
Secretary of Defense Lloyd Austin and an insult to all who serve in the U.S. Navy. And speaking of great
00:02:25.000
Americans who served, we asked Navy SEAL Robert O'Neill, the man credited with killing Osama bin Laden,
00:02:31.520
for his take on the Navy's new recruitment strategy. Here he is. I don't have a problem with what anyone
00:02:38.200
does when the doors close behind them. But coming out front here and showing a drag queen as being
00:02:45.800
the head of our Navy, a yeoman, as the face of the toughest Navy in the world. China is literally
00:02:52.640
laughing at us. Russia is laughing at us. Why are there so many wars going on in the world? The world
00:02:56.680
is a better place when America is strong. We're supposed to be ferocious, not fabulous.
00:03:01.920
Rob O'Neill, a great American. He'll be joining us next week here on the Great America Show.
00:03:08.620
The NSA cybersecurity director has a warning for Americans. He says, buckle up for generative
00:03:15.420
artificial intelligence. Director Rob Joyce says the National Security Agency expects generative
00:03:22.920
AI to be the source for a lot of scams and problems to come. To get a better sense of where we are right
00:03:29.760
now with artificial intelligence, we bring in our guest, James Barrat. James is a journalist,
00:03:36.560
a documentary producer, and author. He's also one of the first people to sound the alarm
00:03:42.260
on artificial intelligence. James, great to have you with us here on the Great America Show.
00:03:48.240
And let's start with your judgment about where we are right now as we're caught somewhere between
00:03:53.400
GPT-4 and 5 and a nation that doesn't truly understand what's happening with artificial
00:04:00.460
intelligence. Thank you. Thank you. It's great to be here. You know, I hate to be right all the time,
00:04:08.200
but most of the things I predicted in Our Final Invention have come to pass. And right now,
00:04:13.120
we're experiencing this, you know, collective kind of head rush and disorientation as, you know, no sooner
00:04:21.600
do we get LaMDA and DALL-E than GPT-3, 4, and Auto-GPT are thrown at us. And it's restarting this cycle of
00:04:32.200
shock and acceptance. But now this is the new normal, the exponential development of thinking machines.
00:04:38.400
And of course, they have huge commercial potential. They have reckless emergent properties, and they're
00:04:45.380
shaking everybody up. Well, I'm amongst those. I will tell you, I'm shaken up by thinking
00:04:53.380
machines that can outthink me and produce papers in a matter of moments that would take any
00:05:02.120
grad student at least a couple of weeks to do a similar job. We're watching the potential
00:05:09.240
here of replacing just about everything we accept as normal, deep research, time taken.
00:05:19.880
There are multipliers of all kinds, whether in depth, whether in time, in knowledge. It is just
00:05:27.900
striking to think that I've got an app on my phone, that I can ask it any question and will get an
00:05:34.440
intelligent answer. It may not be the right answer, and it may not be the one I sought. But it's
00:05:39.720
remarkable what we can do just simply with an app right now. What in the world lies ahead?
00:05:46.960
Well, you know, I'm a skeptic of AI, but I'm just as fascinated as you are. I play with
00:05:53.980
ChatGPT all the time. I don't really use it for writing, but I ask it questions, and it's an
00:05:59.760
excellent tool. If you ask it to connect two ideas, and I'm just making something up here, like
00:06:06.820
crow mating patterns and climate, it will write a persuasive and well-thought-out essay.
00:06:15.960
The danger, I think, that we all have is when we receive writing or speech from something, we
00:06:25.400
perceive that there's a mind behind it. And that's just because, you know, we've never
00:06:31.220
talked to a mind that wasn't a human mind. And so we give it a lot of authority. And as
00:06:37.340
you pointed out, ChatGPT is wrong about 25% of the time. I did an experiment with it where I asked
00:06:44.600
a question, and I said, provide citations. And then I looked up the citations, and they were
00:06:50.020
completely made up: authors, book titles. It was amazing. They were just
00:06:57.140
completely wrong. So even the head of OpenAI said, don't trust this for anything important.
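The experiment Barrat describes, asking a model for citations and then looking each one up, amounts to a simple verification loop. A minimal, hedged sketch of that idea in Python follows; the catalog, titles, and authors below are invented placeholders, and a real check would query a library database or DOI resolver instead:

```python
# Sketch of the citation-checking experiment described above.
# Everything here is a stand-in: a real verification would look each
# citation up in an authoritative catalog, not a hard-coded set.
trusted_catalog = {
    ("A. Author", "Real Book About Crows"),
    ("B. Writer", "Field Guide to Corvids"),
}

# Pretend these came back from a chatbot's "provide citations" prompt.
model_citations = [
    ("A. Author", "Real Book About Crows"),    # exists in the catalog
    ("C. Nobody", "Imaginary Raven Studies"),  # fabricated
]

def verify(citations, catalog):
    """Split citations into (confirmed, fabricated) lists."""
    confirmed = [c for c in citations if c in catalog]
    fabricated = [c for c in citations if c not in catalog]
    return confirmed, fabricated

confirmed, fabricated = verify(model_citations, trusted_catalog)
```

In the experiment recounted here, every citation landed in the fabricated bucket: the model optimizes for plausibility, not existence.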
00:07:06.960
You know, being wrong is one thing, but I've seen a number of accounts with those who are
00:07:13.940
spending considerable time and have great expertise, talking about the hallucinations of AI
00:07:21.740
in various forms. And it doesn't seem to matter which machine, which example of AI, is being
00:07:31.900
interacted with. It is apparently a common response, endemic to AI at this stage.
00:07:45.380
At this stage, yes, absolutely. Because, you know, these large language models,
00:07:51.620
they use deep learning techniques and massive amounts of data to summarize and generate and
00:07:59.520
predict new content. They're prediction machines. They predict what is the next best word,
00:08:06.000
what is the next best sentence. And they base that on the prompts that you give them, like,
00:08:12.120
you know, what are the mating habits of ravens and crows? The trouble is, they don't actually know
00:08:19.920
anything. There's nothing inside. There's no mind. There's no sense of self. There's nothing like
00:08:25.280
consciousness. They have the trappings of language, but not any of the understanding. So
00:08:29.700
you know that you can put a cup on your table and you can cup your hands.
00:08:35.780
These large language models don't know what a cup or a table or hands are.
00:08:40.780
They're very excellent mimics. And that's why, to them, saying something that's untrue carries absolutely
00:08:48.280
no demerits. It's not looking for truth or falseness. It's
00:08:55.200
scanning the vast amount of text and books and articles, the
00:09:03.240
whole contents of the internet. And it's looking for a statistical alignment of words,
00:09:09.640
but it doesn't care if it's true or not. And it has no way, no mind, to evaluate its truth.
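The "prediction machine" picture Barrat gives can be made concrete with a toy model. This is an illustrative simplification, not how GPT-class systems are actually built: count which word follows which in a small invented corpus, then always emit the statistically most likely next word, with no notion of truth anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy "prediction machine": a bigram model that picks the statistically
# most likely next word. There is no representation of truth anywhere,
# only co-occurrence counts, which is exactly the point being made.
corpus = (
    "ravens and crows are corvids . crows are clever birds . "
    "ravens are large black birds ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, n_words):
    """Greedily emit n_words continuations from `start`."""
    out = [start]
    for _ in range(n_words):
        out.append(predict_next(out[-1]))
    return " ".join(out)
```

Real systems predict over subword tokens with learned neural weights rather than raw counts, but the objective is the same shape: the most plausible continuation, true or not.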
00:09:16.260
That separates it from what we think of as coding, programming, and producing
00:09:26.260
a result in human interaction with our machines. But the power, it seems, is
00:09:35.020
quickly slipping from the programmer, operator, and user to the machine itself, to the CPU that is becoming
00:09:42.740
something organic unto itself. It seems to be that way. But, you know, there's a
00:09:50.620
big gap. Now, what all these companies really want to do... they're
00:09:55.600
making these image generators and word generators, and they're really powerful and really
00:10:00.840
persuasive, but where they're headed is called AGI. As you probably know, artificial general
00:10:06.740
intelligence, and that's human-level intelligence in a machine. That's job number one for all these
00:10:11.420
companies: for Google, for OpenAI, for DeepMind, for Meta. And they've said as much,
00:10:17.740
they've said that they want to get there because it would be a huge product.
00:10:24.000
Um, but you know, they're getting there as fast as they can, but there's a lot missing in these large
00:10:31.200
language models before they can become smart. I mean, they have to know
00:10:37.920
about the world and they don't know anything about the world. Um, they need to be connected to what
00:10:43.580
they call a database of common-sense knowledge, or they need to be
00:10:51.720
embodied in a robot and learn about the world by going out and looking at it and seeing it and hearing it.
00:10:57.060
So for that, you can also use old-fashioned programming. They're not going
00:11:03.300
to get all the way there through neural networks, which is what these large language models are based on;
00:11:08.340
they need old-fashioned rule-based programming to complete the circle, to get to human
00:11:15.480
level intelligence. And I want to share with everyone the subtitle of your terrific book,
00:11:24.040
Our Final Invention. And I want to mention, if I may, the publication date. And by the
00:11:29.800
way, we recommend Our Final Invention, James Barrat's terrific book on AI, to you.
00:11:38.440
It's available on Amazon, but here is the subtitle. Are you ready, folks?
00:11:46.600
Artificial Intelligence and the End of the Human Era. Now that's an attention-grabber,
00:11:54.040
James. If you didn't pick up on Our Final Invention,
00:11:59.660
the end of the human era is a stone-cold inspiration to read it.
00:12:07.160
Give us your sense... you mentioned that many of your predictions
00:12:14.620
and forecasts have come to fruition, to reality. How likely is it that we will see artificial
00:12:22.920
intelligence at a sentient, independent level in the next decade?
00:12:31.500
I'd say it's very high. I used to say 2029, which happens to be Ray Kurzweil's prediction.
00:12:39.220
He said, by 2029, we should have, at the price of a computer, basically a human-
00:12:46.340
level brain in a machine. So I think we'll get to that, and to AGI, human-level
00:12:54.900
intelligence, within a decade. But the next step is the really sensitive one.
00:12:58.840
And we call that the intelligence explosion, and you can already see it forming. Now you have
00:13:04.220
GPT-4, and people have already said that GPT-4 will help program
00:13:10.520
GPT-5. And there's also another GPT called Auto-GPT, which improves its own
00:13:16.660
programming. And there's talk about, you know, how you improve the programming of
00:13:21.480
one of these models is you generate more data. And right now, believe it or not, we're running
00:13:26.800
out of data. So every question you ask ChatGPT becomes part of its data for training,
00:13:33.040
but if it could make its own data, if it could make its own unbiased data and, you know, good,
00:13:39.100
clean data, then it could improve its own intelligence. And that's the intelligence
00:13:43.940
explosion. It's an idea that's been around since the 1960s and says,
00:13:49.040
if we create machines that are as smart or smarter than us, they'll be able to do many things
00:13:57.240
that we do better than us. One of those things critically is artificial intelligence research
00:14:03.680
and development. So right now we're developing machines that are going to be good
00:14:09.440
at artificial intelligence research and development. And that's the recipe for the intelligence
00:14:14.860
explosion. Now, what happens after that? You know, do we come through okay? Because
00:14:19.700
one day we're playing with ChatGPT, or its much smarter cousin in five years,
00:14:27.100
and then the next thing we know, we're sharing the planet with something that's a thousand or a
00:14:31.560
million times more intelligent than we are. And we don't know how to do that. We don't know how to
00:14:38.100
share the planet with something smarter than us. You know, I was lucky enough to interview
00:14:42.940
Arthur C. Clarke, the science fiction writer years ago. Uh, people don't know he was also a scientist,
00:14:48.040
right? Um, but he said, you know, we steer the future,
00:14:53.260
not because we're the fastest creature or the strongest creature, but because we're the most
00:14:58.000
intelligent. When we share the planet with something more intelligent than we are, they will steer the
00:15:03.000
future. So that's where we're headed. That's where the end of the
00:15:08.860
human era comes from. We have a couple of really big problems to solve, or I'm afraid
00:15:15.540
it's going to be lights out for our species. You know, I hadn't heard Arthur C. Clarke's
00:15:22.860
name for a very long time. I had the opportunity to spend some time with him years ago,
00:15:29.580
whenever he came to town, to talk about all of his splendid writings, beautiful
00:15:37.860
writings, including, of course, 2001: A Space Odyssey and its computer, HAL. To think how prescient he was in so
00:15:47.740
many areas is humbling. But anyway, a great man, a great author.
00:15:55.920
And he absolutely thought that. One of the other things, when I interviewed him,
00:16:01.560
he said: intelligence will win out, in whatever form. Intelligence is the superpower of our
00:16:07.780
planet. And if it's machine or human, the greater intelligence will win out.
00:16:13.380
Well, let's continue this conversation with James Barrat. He is the author
00:16:19.920
of Our Final Invention: Artificial Intelligence and the End of the Human Era. And we're going to
00:16:25.720
find out more about the explosion of both intelligence and knowledge. And is there a
00:16:31.420
countervailing influence available, or imaginable, to AI and what looks to be an almost certain
00:16:41.280
path to its dominion over our planet? We're coming right back with James Barrat. We're back. We're
00:16:47.380
talking with author James Barrat, and his book is Our Final Invention: Artificial Intelligence and the
00:16:53.380
End of the Human Era. Welcome back. And James, let's talk about that intelligence explosion,
00:17:00.820
the ever-growing need on the part of AI to find more data, an
00:17:08.620
ever-greater knowledge base. Is there any possibility of a countervailing
00:17:15.460
influence against what will be the awesome intelligence and capacity of AI, in whatever
00:17:23.140
form it takes, let's say over the next 10 to 20 years? Sure. Fortunately, there is hope.
00:17:29.520
And I'm actually writing a book proposal for a follow-up to Our Final Invention
00:17:34.060
called The Intelligence Explosion. What has to happen is these companies have to...
00:17:41.440
you know, recently the Future of Life Institute released an open letter saying, let's
00:17:46.500
pause development for six months. Another prominent AI theorist,
00:17:51.620
Eliezer Yudkowsky, said, no, let's just stop it until we understand. Because here's the thing:
00:17:57.060
even the experts say we don't understand what's going on inside these large language models.
00:18:02.340
They're not programmed the way we used to program. They ingest information and they organize it
00:18:07.340
themselves. If you were to open one up and peer inside, you'd see a lot of decimals and random
00:18:13.060
marks. That's really the way it encodes the vast amounts of data it ingests. But it's not
00:18:20.580
programmed, and you can't just insert some code to make it safe. What that's
00:18:28.100
called is interpretability. We need to be able to interpret and to explain what's going
00:18:34.260
on inside these machines. And, you know, the head of OpenAI, and Stuart Russell, who's one of
00:18:40.740
the world's preeminent AI thinkers. He wrote one of the standard texts on artificial
00:18:47.300
intelligence, called A Modern Approach. He says, you know, we don't understand what's going
00:18:52.740
on inside, and we're headed for what he called a Chernobyl-sized disaster. Now, there are ways around
00:19:01.220
that conclusion. We can (a) slow down, and (b) insist that AI
00:19:12.900
makers are able to explain how the systems are working. And what that may mean is going back to
00:19:19.860
last generation AI techniques like expert systems, symbolic AI, case-based reasoning.
00:19:26.260
And these, these old techniques allow programmers to interpret and explain how the systems work,
00:19:33.300
making them more predictable, making them safer than models based on deep learning.
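The contrast Barrat draws between older techniques and deep learning can be made concrete. In a rule-based system of the kind he mentions, every answer comes with the rule that produced it, which is exactly the interpretability a black-box network lacks. A toy sketch, with rules invented purely for illustration:

```python
# Toy expert system: each rule is (name, condition, conclusion).
# Unlike a neural network's "decimals and random marks", the decision
# trace tells you exactly why the system answered as it did.
RULES = [
    ("R1", lambda f: f.get("has_feathers") and f.get("flies"), "bird"),
    ("R2", lambda f: f.get("has_fur"), "mammal"),
]

def classify(facts):
    """Return (conclusion, fired_rule) for the first matching rule."""
    for name, condition, conclusion in RULES:
        if condition(facts):
            return conclusion, name  # the explanation comes for free
    return "unknown", None

label, why = classify({"has_feathers": True, "flies": True})
```

A hybrid system of the sort mentioned next would pair this kind of auditable rule layer with a learned model, trading some raw capability for predictability.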
00:19:39.940
Um, some experts advocate a hybrid approach of both deep learning and older techniques.
00:19:45.620
So you don't lose all the awesomeness of the large language models.
00:19:50.740
That may offer a route to solving mankind's
00:19:57.620
problems. I mean, there are many benefits to artificial intelligence, but there are also
00:20:03.220
some harrowing risks. And what we need to do, as you said, is mitigate them.
00:20:08.900
And just knowing where to start with a reliable mitigation, a workable mitigation, is going to
00:20:17.220
be a fascinating enterprise all unto itself. You mentioned the pause that had been called for by
00:20:24.420
some 1,400 leaders in technology on moving from GPT-4 to 5, or wherever the
00:20:36.580
research is at this point. Bill Gates, I thought it was interesting, absolutely rejected the idea of that pause
00:20:43.700
on AI; he says it's impractical and impossible. I don't often agree with Bill Gates about
00:20:52.020
a number of things, but the reality is I can't see the CCP and China and Beijing saying,
00:21:00.020
you know, yeah, that's a good idea. Let's pause. We don't want to have additional advantages in our
00:21:05.380
technology to win over the world. Your thoughts about the nationalism that persists,
00:21:13.700
the idea that there's going to be a global governance, which will be rejected,
00:21:18.660
I think, by most Americans just by nature, and the efficacy of what would result from a pause
00:21:27.940
or from just plunging ahead? Well, you're right. Getting China to the table would be tough,
00:21:35.540
but there's a cultural thing about China that we need to keep in mind. Right now there's a chip
00:21:41.780
embargo, so the programmers in China don't have the chips to make a Chat
00:21:46.980
GPT. They don't have the Nvidia chips, and they're starting their own multi-billion-dollar chip
00:21:52.820
industry because right now they're dependent on us. So they won't be coming up with one of
00:21:57.180
these things soon. Secondly, there's a cultural thing where they don't want to be seen to do
00:22:03.980
anything that would threaten the premier. So they will not make something that is more
00:22:10.460
powerful. I'm not kidding. They will probably not make something, willingly or wittingly,
00:22:15.820
that is more powerful than the premier of China. So it's not a given
00:22:23.580
that they will eat our lunch, as people are afraid. We already seem
00:22:30.380
to have a lot of buy-in from the heads of some of the big tech
00:22:36.460
companies. So, you know, I'm not sure if six months is going to be enough, but China is
00:22:43.580
technologically extremely astute. They know what's happening and they
00:22:50.300
know what's coming. Now, boy, you know, maybe we can get them to the table, because they know that
00:22:56.540
what's best for China in AI is best for all of us. What's best for
00:23:04.220
slowing down the intelligence explosion, slowing down this development, is not just good for
00:23:09.980
us, but good for them. And another way to look at it is this: if China did win and we were suddenly
00:23:17.580
ruled by, you know, the guy with the biggest AI, well, would that be any worse
00:23:23.980
than being ruled by AI? You know, we'd be under a tyranny
00:23:32.060
one way or the other. So I'm trying to find in that, James, some cold
00:23:38.780
comfort, at least, but the choice you offered, you know, doesn't warm my heart.
00:23:46.460
And it's precisely what I think concerns most Americans. And that is, we have a
00:23:53.900
unique life in America, a lifestyle, a way of life, and it is decentralized. And
00:24:03.980
here we are. We thought that the CPU in all those desktops would be decentralizing and
00:24:10.140
democratizing to all the population. It hasn't quite worked out that way, but it's better than it
00:24:16.940
was. And it does open up avenues for far more people. But once we go to AI, we are talking about,
00:24:24.700
if not the singularity, highly dense levels of centralization of both
00:24:32.300
capacity and power. Well, yeah, absolutely. I mean, look at it this way. Our lives are in the hands
00:24:40.620
of about five CEOs, and they weren't elected, and, you know, I didn't vote for them. And what gives
00:24:47.340
them this power? There's a kind of metaphor that
00:24:52.380
people use a lot, and I like it. It's about getting on an airplane. Let's say
00:24:57.820
you're about to get on an airplane when you're told half of the engineers who designed it believe
00:25:02.460
it has a 10% chance of crashing. Furthermore, the airplane fails the explainability test.
00:25:08.780
It's a black box system and none of the engineers can explain how it works.
00:25:13.260
Then there's the problem of control. The longer the plane stays in the air, the more prone it is
00:25:18.300
to unpredictable, uncontrollable behavior. And the engineers themselves? Each has a sordid
00:25:24.860
rap sheet of rotten behavior, like lying to Congress, employing predatory business practices,
00:25:30.460
infringing on copyrights, and publishing incendiary fake news. Now, would you get on that airplane?
00:25:38.940
Well, since you put it in such inviting terms, I'm thinking probably
00:25:43.900
I'll catch the next flight. Yeah. Well, here's the bad news: you're on it already. And so is
00:25:49.500
everybody. And so is everybody you care about. We're on this plane. You know, we're
00:25:55.900
rocketing ahead at warp speed with these architectures, and we simply don't
00:26:05.180
understand how they work. Well, we're talking with author James Barrat, who has
00:26:11.020
offered us a thought experiment that I don't think leads to a lot of pleasant places,
00:26:16.300
but we'll find out more, and what we can do to keep this flight airborne,
00:26:23.340
when we continue with a brief message from our sponsors. Right after this, we're coming right
00:26:27.020
back. Stay with us. We're back with author James Barrat, documentarian as well.
00:26:35.100
His book is Our Final Invention: Artificial Intelligence and the End of the Human Era.
00:26:40.860
You've put together a challenging thought experiment. We're aloft. We have all of
00:26:49.900
the conditions that you laid out. Now, what do we do, James? Well, I'm
00:26:58.940
usually the last to recommend regulation. But, you know, right now there's a bill in front of
00:27:06.700
Congress called the Algorithmic Accountability Act of 2022, and that requires impact assessments
00:27:13.820
of AI systems to check for biases and effectiveness and presumably safety. These regulations are not
00:27:20.060
moving very fast. But, you know, we need some oversight. Even
00:27:28.860
the heads of some of these companies say we need oversight. I heard an
00:27:34.220
amazing thing from the CEO of Google, Sundar Pichai, who said society isn't prepared
00:27:41.900
for the rapid advancement of AI, but establishing common-sense precautions is not for a company to
00:27:47.980
decide. So he completely said, it's not our responsibility. Now, when we release
00:27:56.300
a new food, a new drug, we have the FDA that gives it a thorough checkout. When
00:28:02.780
we introduce a new airplane, the FAA has to approve it. The IAEA, the International
00:28:08.060
Atomic Energy Agency, has to approve any kind of development in nuclear energy or nuclear
00:28:14.700
weapons. But we have nothing. It's bizarre, given the stakes, that the industry has been left to self-
00:28:20.620
regulate. You know, there's an issue in that too, that
00:28:28.860
everyone's talking gently about, or talking around. And that is: how many minds do you think
00:28:37.180
are extant in American society right now who can fully comprehend and engage
00:28:46.060
the concept of AI, and who truly understand it scientifically and operationally?
00:28:54.620
Oh, you know, there are so few. I would say that you could fit them
00:29:01.260
in a phone booth. My problem with that is, James, I'm a populist. I really believe in the consent
00:29:08.460
of the governed. I believe in our constitutional Republic. And I don't believe that there is,
00:29:13.900
existing at this moment, another government system that comes anywhere close to assuring
00:29:21.260
individual rights as does the American governmental and political
00:29:28.380
system, as flawed and messed up as it is right now. I still think that, done correctly,
00:29:35.260
it's the best there is. But I don't know a way to preserve it if we have so few who can comprehend
00:29:41.740
and navigate the future. That's a really good point. I'm with you. I'm a populist
00:29:48.540
too. And that's why I wrote Our Final Invention to be read by anybody. I tell you, if
00:29:55.180
you stroll through a bookstore and you pick up an AI book, it's generally hard going. And I really
00:30:01.180
tried to make Our Final Invention not like that. I really targeted a ninth- or 10th-grade
00:30:09.180
reading level, which happens to be where The New York Times is, and a lot of other
00:30:15.180
papers. I'm doing the same thing with the follow-up to Our Final Invention,
00:30:21.420
because here's the thing. You're asking how we are going to change things, how are we
00:30:25.100
going to save ourselves? And I think it comes down to this: we have to save ourselves. That means we
00:30:29.740
have to put pressure on politicians. We have to, you know, vote with our votes. We have to
00:30:35.900
look at politicians. You know, the White House has an AI Bill of Rights. It's non-binding guidance on the
00:30:41.980
use of AI systems. It doesn't have any power. There's no money behind it. Just like there's
00:30:46.620
no money behind the Algorithmic Accountability Act. But we need to insist on that. You know,
00:30:52.860
we need to say, hey, we're the voters. We don't want the fate of our
00:31:00.620
lives to be in the hands of five, you know, CEOs, all of whom have getaway houses in the desert. And
00:31:07.180
I'm not even kidding. Everybody who gets on TV who's a CEO of one of
00:31:12.620
these companies has a place to escape to, or to try to escape to, if the stuff hits the fan.
00:31:19.740
Yeah. And every American should be so blessed. Nothing is more concerning to
00:31:28.300
me right this moment than to hear the Department of Homeland Security Secretary, Alejandro Mayorkas,
00:31:36.220
come back again with wanting to control freedom of expression, freedom of thought,
00:31:43.660
and to do so by creating, yet again, another task force. And the purpose of that task force, James:
00:31:50.060
how to use artificial intelligence to protect critical infrastructure, including screening cargo.
00:31:57.580
And it doesn't mention people, but it would be people as well,
00:32:02.060
to ferret out any threat to, quote unquote, the system. And suddenly DHS, which should,
00:32:10.700
in my opinion, be well down the list of agencies and departments that should be working with
00:32:16.380
AI, wants to have ChatGPT in several months so they can test how to use the technology to
00:32:26.140
protect the homeland, as they would put it. I always hear that with an accent when I think of
00:32:31.820
them describing it as such. That's deeply concerning to me, because they don't
00:32:37.900
have officials in the Department of Homeland Security who are capable of even the most rudimentary cyber
00:32:44.620
protection, as they've proved in election after election.
00:32:48.700
I totally agree with you. It's a giant mistake. You know, they released
00:32:53.820
ChatGPT onto the web, which is a big mistake as far as I'm concerned, because as ChatGPT develops,
00:33:00.460
they're also trying to imbue it with initiative. So it's not inert. It's not just
00:33:06.060
sitting there waiting for a prompt. It's actually taking action and getting things done. Well, you don't
00:33:11.420
want that. You don't want to release it onto the internet. And it was somebody who
00:33:15.900
wasn't thinking very hard about the risks who did that. You don't
00:33:23.740
want to put it in the hands of Homeland Security to guard anything, because right now it's so
00:33:29.580
full of holes. It's just a giant security breach waiting to happen. You know, it would take
00:33:36.860
a bureaucrat to come up with that idea. Exactly. Exactly. James, this has been a delightful
00:33:45.260
conversation with you. I appreciate it so much, your taking the time to educate us
00:33:51.020
and to illuminate AI. We always give our guests the last word here. And, if we may, your
00:33:58.780
concluding thoughts here on The Great America Show. And the pleasure has been mine. Thank you very much for
00:34:03.820
having me on. I would say to everybody: while it is hard to program this
00:34:11.420
kind of AI, and that's not up to everybody, it is up to everybody to get involved, to start to
00:34:17.420
understand the basics of what this era is about. We're going through a period that's as transformative
00:34:23.420
as electricity or the internet. And it's up to each individual in this Republic to get educated, so
00:34:31.180
that, by peaceful voting, they can force their politicians
00:34:37.180
to take action to regulate. I just want to say again what a pleasure it's been talking with
00:34:42.780
you, and thank you for being with us. And I hope you'll come back soon. And we want to recommend
00:34:48.300
James's book, Our Final Invention: Artificial Intelligence and the End of the Human Era.
00:34:55.020
We recommend it to you highly. It's available on amazon.com straight away, so fire up your
00:35:02.380
Kindle and let's go get them. Thank you. Thanks for being with us here. And please join us Monday
00:35:09.580
when our guests will be Ed Rollins, the savant of political strategy, and The New York Post's
00:35:16.380
Michael Goodwin. Join us for all that. Have a great weekend, and we'll be right back Monday.
00:35:22.220
Until then, thanks. God bless you. And may God bless America.