DOES A.I. SCARE YOU? WILL A.I. MAKE US SMARTER, BETTER, MORE PRODUCTIVE OR WILL A.I. BE THE END OF CIVILIZATION, THE END OF THE HUMAN ERA?
Episode Stats
Words per Minute
176.9
Summary
Biden announces his plans to run for re-election in 2024. Lou Dobbs explains why this is a bad idea. Plus, a look at the dangers of artificial intelligence, and why we should all be worried about it.
Transcript
00:00:00.000
Hello, everybody. I'm Lou Dobbs, and this is The Great America Show. We welcome you. Great to have
00:00:05.420
you with us. The big story of the week, that is right after the breaking installments from the
00:00:10.820
running drama at Fox News, is the 6 a.m. video announcement from the White House that the
00:00:17.160
world's greatest living puppet president, who is 80 years old, impaired and compromised,
00:00:23.060
has, after careful deliberation, deep prayer, earnest consultation with his family,
00:00:30.000
and humble counsel with President Xi Jinping and Volodymyr Zelensky, he has made up his mind,
00:00:37.360
and Mr. Biden has decided to run for re-election in 2024. And part of the excitement around the
00:00:45.560
president's announcement, and let's face it, this wasn't entirely unexpected. Part of the excitement
00:00:51.740
is Mr. Biden has already thrown a changeup. And this is my insight, my analysis of what may not
00:00:59.180
be obvious to casual observers. Biden's video message is the message, I believe. Biden announced
00:01:07.920
he's running on video. And that's a big change from 2020, when he was doing most of his talking
00:01:14.980
and campaigning from his basement in Wilmington, Delaware. Now it looks to me like he's made a
00:01:21.340
solid commitment to being on video throughout his re-election campaign. A lot to look for there
00:01:28.420
in the coming weeks and months as President Biden seeks a second term and a way to energize and to
00:01:35.240
excite a base that, for the most part, isn't excited or perhaps even excitable. The 30 percent of
00:01:43.380
Democrats who do approve of President Biden's madcap policies of the last two and a half years
00:01:49.320
are cheering. We haven't, however, checked in with the other 70 percent who are opposed to the idea.
00:01:55.820
Biden's $7 trillion fiscal 2024 budget is a killer. There's no other way to say it. That is a heavy
00:02:04.840
load for any, any incumbent to carry to the voters. And so is the almost $2 trillion deficit
00:02:11.880
that results. For all practical purposes, Biden is leading a regime that has little resemblance to
00:02:19.480
the constitutional republic that preceded it for 240 years. Biden has gone rogue and he's getting
00:02:26.980
away with it. He leads by fiat. He issues executive orders and does exactly what he wants. Wide open
00:02:34.960
borders. He's sending more than $100 billion to Ukraine. He refuses to arbitrate a peace deal between
00:02:41.760
Russia and Ukraine. He's doing whatever China wants. Ends U.S. energy independence. Draws down
00:02:48.560
our strategic petroleum reserves to the lowest level in 40 years and sells some of that oil to the
00:02:55.120
Chinese, for crying out loud. Then drives Saudi Arabia into the arms of Vladimir Putin and the
00:03:01.080
Ayatollahs. Inflation is at 6 percent. Markets are volatile, and the economy is nearing recession.
00:03:06.960
This is an incumbent looking for re-election? Really? And President Biden and his family are
00:03:14.360
crooked, corrupt and dangerous, I'm told reliably. The House investigating committees already have
00:03:20.440
evidence that a dozen family members are profiting from Biden's influence peddling. And we know now for
00:03:27.360
a fact that the Marxist Dems, the Deep State and Biden campaign in 2020 conspired to create a cover-up
00:03:35.540
of Hunter's laptop with the help of the Obama intelligence chiefs and a deputy State Department
00:03:42.000
official who is now Biden's Secretary of State. Biden may not be held to account for what he did in
00:03:49.160
2020. But I do believe he doesn't have a chance of re-election. Mark my words, not a chance. Biden may,
00:03:57.380
of course, do more damage, and almost certainly he will. But this rogue Biden-Marxist-Dim regime
00:04:03.280
is in its last throes. But as they say, there's more. Not about Biden or even presidential politics,
00:04:11.160
but about what some have styled humanity's last invention, or the end of civilization,
00:04:17.180
the end of the human era. Now, I would call that somewhat sensationalist exposition. But what do I
00:04:23.760
know? What do any of us know, really, when compared to artificial intelligence, which promises to be
00:04:29.680
humanity's next big thing? Revolutionary, explosive, amazing, transformational. And only a few thousand
00:04:38.600
people on this planet seem to know what this thing called artificial intelligence really is,
00:04:44.480
or what it's likely to become. But we do know we're all about to be profoundly affected by it.
00:04:51.140
And I thought it might be useful if you and I started to learn more about AI, or AGI,
00:04:56.740
artificial general intelligence, its benefits, its dangers. And our guest today is one of those
00:05:04.020
few on this planet who's been thinking about AI for some time. He co-founded, in fact, a machine
00:05:10.120
learning startup. The company was Geometric Intelligence, which he sold to Uber seven years
00:05:15.920
ago. He's a cognitive scientist, NYU professor emeritus of psychology and neural science,
00:05:22.040
and author of Rebooting AI: Building Artificial Intelligence We Can Trust. Available on Amazon,
00:05:29.640
and we recommend the book to you highly. And we welcome to The Great America Show, Gary Marcus.
00:05:35.780
Gary, good of you to be with us. We're almost all of us hearing a lot about AI. We know the big tech
00:05:41.840
companies are warning all of us about the dangers of AI, the dangers of AI just in the hands of the
00:05:48.020
corporations themselves. And we're told by the Biden Department of Homeland Security that they
00:05:53.980
want all the AI they can get to control people and to control information. And we don't like the sound
00:06:00.580
of that either, Gary. The truth is, most of us don't know whether to fear or cheer AI. So could we
00:06:07.620
please start, Gary, with just what is artificial intelligence?
00:06:11.520
It's actually hard to define. It's a little "I know it when I see it," but it's basically the idea of
00:06:17.100
machines doing smart things. You could argue and say a calculator counts as a little bit of artificial
00:06:22.200
intelligence. And you could say that the Star Trek computer that could talk about anything is maybe a lot
00:06:27.900
more artificial intelligence. Intelligence actually has many different aspects to it. But basically, we're talking
00:06:33.240
about machines that can do smart things, maybe replacing people, maybe augmenting people to do things they
00:06:38.300
couldn't do before. Augmentation. That's an interesting word, because of the concern right now being
00:06:45.680
expressed in that open letter, and now, in follow-up to that, there are, I'll put it this way, AI
00:06:54.280
leaders, including yourself, talking about the dangers that AI represents. And right now, that seems to be
00:07:03.900
more the focus than the benefits of AI. Do you agree? Well, I mean, they're both. I think a lot of
00:07:10.040
people are thinking every day about the benefits. So for example, these new systems save computer
00:07:13.940
programmers a lot of time. And increasing the productivity of programmers is a great thing. So
00:07:18.760
there's definitely some benefits here. They're also fun to play with and so forth. But you ultimately,
00:07:22.960
you want to ask a kind of cost benefit trade off. And there are certainly a lot of risks around these
00:07:28.460
systems. And things have moved so quickly that I would say, the first thing that a lot of us are
00:07:33.900
saying is maybe we need to slow down and understand where we are. So I don't think anybody can give an
00:07:39.160
honest answer to whether the benefits outweigh the risks, because nobody fully understands, with these
00:07:44.720
new technologies, what the limits are and how they can be used. So probably someone else might make
00:07:49.780
the case for the positive. I've been focusing on the negative and trying to understand that.
00:07:53.600
My biggest short term concern is about misinformation, the ability of say, foreign
00:07:59.680
countries to disrupt elections by making up as much misinformation as they want in incredibly
00:08:04.640
plausible ways. Nobody can tell the difference. Just much greater volume. So the cost of misinformation
00:08:10.000
has gone to zero. And I think that's threatening democracy. So that's one concern I have. Another
00:08:15.360
concern is that cyber criminals can use this stuff to trick people. And there are now new tools like
00:08:20.580
AutoGPT, where one AI controls another. And we might see situations where people do, like,
00:08:25.940
phishing expeditions to get people's credentials, or there's something called a pig-butchering scheme
00:08:32.260
where you pretend to be somebody's friend and eventually get them to send you money and you milk
00:08:36.760
them for money. And we may see those things automated in a way that we've never seen before.
00:08:41.080
So there are a lot of risks like that. I would call those nearer term risks. Another near term risk is that
00:08:45.880
people might trust these new search engines, which don't really know what they're talking about,
00:08:50.020
aren't really that reliable with, for example, medicine, or consult them almost like they would
00:08:55.880
consult a psychiatrist. And there may be problems there as well. Misinformation, bad advice. We've
00:09:02.380
already seen a situation where people are essentially in love with these chatbots. And then the chatbot
00:09:07.760
stopped basically having, what is the polite way to say this, relations with them, verbal relations
00:09:14.740
with them. And people were really upset. And so, like, we have attachment issues to it. Most people don't
00:09:19.540
understand that these systems can't really form attachments. So there are a lot of different
00:09:26.880
questions like that. And then there are longer term questions about what happens if you have a lot of
00:09:31.400
systems that aren't fully reliable, and you start hooking them up to more and more aspects of the
00:09:35.860
world. And there are really long-term questions that I don't take so, so seriously, but I don't think we
00:09:40.940
have a full answer to, like, a kind of Terminator scenario. Like, what if they turn on us? And I don't think
00:09:46.000
that's very plausible. But I don't think we have a formal proof that it can't happen. And so I think
00:09:49.880
we do need to take it into consideration. And the reality is, although these new AI systems are very
00:09:55.520
interesting, we don't fully control them. We don't fully understand what they're doing. We call them
00:09:59.800
black boxes. We put in a lot of input data. We don't know exactly what comes out. We don't know
00:10:04.740
exactly what they do. And that's enough for at least some of us to say, maybe we should slow down.
00:10:09.920
You know, in my TED talk yesterday, what I called for was global governance for AI,
00:10:14.180
some kind of coalition where we bring governments together with the companies and
00:10:19.240
broad representation from around the world to try to figure out what we should do
00:10:23.300
about many different individual questions in AI in some kind of coordinated way.
00:10:27.900
Nobody really wants to have, you know, 195 countries with 195 sets of rules.
00:10:33.740
That's not even the interest of the companies. And the companies have to spend a lot of money to
00:10:38.040
train their models. If they do that, you know, uniquely for every country, that's not really good
00:10:41.980
for them either. So I think we're in a rare moment in political history where kind of everybody
00:10:46.820
actually wants the same thing, which is to figure out a regulatory framework where the tech companies
00:10:51.700
can do what they want, but where the citizens are protected and where governments still, you know,
00:10:57.280
have some power. And so it's an interesting and complicated moment in history, but I think it's a
00:11:02.160
good moment to try to work this out. And we don't want to do this like five years from now. We really
00:11:07.080
want to do it now. It's fascinating to start pondering all of the possibilities here, even
00:11:14.540
in restricting, if you will, containing artificial intelligence, restraining it, keeping it under human
00:11:21.440
control, if you will. And when I say that, the word augmentation is interesting, because augmentation can
00:11:29.200
also suggest giving great power to the folks you just talked about, criminals, those who mean to
00:11:37.940
do harm in any fashion. And ultimately we are talking about policing, not AI, but ourselves
00:11:45.800
in that instance. And yet the long-term dangers, as you point out, and frankly, Gary, you're the only one
00:11:53.840
I know who's talked about this in both the near term and the long-term, talked about it in terms
00:11:59.220
of the immediate societal impact as well as the long-term. And I think that all of that has to be
00:12:07.560
discussed. But I really wonder when I see Time magazine saying, you know, we can't,
00:12:15.740
excuse me, we've got to pause, and we've got to hold fire.
00:12:24.400
The Chamber of Commerce comes out and says, we can't pause. We've got to beat the doggone Chinese
00:12:29.700
or the Russians or whomever. That divide is proximate, and it's critical, because what
00:12:38.840
we're talking about now is competitive AI. And the nation-states are the ones who are going to be
00:12:46.100
most competitive in terms of retaining power. The corporations are the most aggressive, given the fact
00:12:54.740
that they are competitive institutions and mean to win as well, but in a different realm, right?
00:13:00.780
I mean, there's a lot going on there. So there's definitely competition between companies,
00:13:04.860
there's competition between the countries. There's an argument that we shouldn't pause
00:13:08.960
because China will get way ahead of us. I'm not so worried about that particular thing
00:13:13.200
in the sense that the AI that we have now is still fairly limited. I think there are a lot
00:13:18.200
of fantasies that like if China gets to use GPT-5 three months before we do, that they'll invent,
00:13:24.180
I don't know, spaceships, or they'll invent some renewable energy that we don't have or something
00:13:28.720
like that. And these tools are not really good for that. Someday there will be AI like that, that's
00:13:33.100
sort of genuinely superintelligent, can do scientific reasoning, invent new technologies.
00:13:38.740
What we're talking about now is more like a productivity enhancer. I mean, people probably
00:13:42.920
play with ChatGPT. You can use it to write boilerplate text for you and things like that.
00:13:47.980
It's not going to completely change the world if one nation can write boilerplate text faster than
00:13:52.280
the other, or even if one can code faster than the other for a few months. It might make some
00:13:56.720
differences in productivity. I don't think we're at the level of the technology that some people are
00:14:02.000
fantasizing about. You know, China's not going to wake up and build interstellar travel that we don't
00:14:06.340
because they have this tool a few months earlier. But there are all these nearer term problems that
00:14:11.900
everybody faces, whether they have GPT-4 or GPT-5. I don't know that that's really the critical
00:14:16.140
variable. And nobody really is calling for the end of all AI research. The letter itself actually
00:14:22.580
called for a pause only on particular research on this one model, GPT-5, and actually encouraged
00:14:29.820
more research around safety, around making these systems trustworthy and reliable.
00:14:34.600
I think of the AI we have right now as sort of like a teenager, like it's powerful, but not very
00:14:39.400
well controlled yet. It doesn't have a prefrontal cortex to kind of tell it what's right and wrong.
00:14:44.140
And I think we should be mostly focused not on like who builds the biggest model fastest,
00:14:48.980
but who can figure out how to make this stuff tractable and reliable, have the stuff work in an
00:14:55.400
ethical way, in an honest way. They have a huge problem with hallucinations, making stuff up.
00:15:00.600
Nobody's calling for a ban on that kind of research. And I think we should focus more on
00:15:04.940
that, on how to make it so that these systems are things we can count on. I mean, like, you know,
00:15:09.320
it's a nightmare science fiction story when the computer goes out of control. What we really want
00:15:14.160
are computers that will do what we want them to do and that are aligned with our interests. And we don't
00:15:20.120
really have that so much yet. We're going to find out what we do have here next. We're talking with
00:15:25.660
Gary Marcus, a leading voice in artificial intelligence. Stay with us for this quick
00:15:30.740
message from our sponsors. We're coming right back. We're back now. We're talking with Gary Marcus
00:15:36.660
and Gary, you mentioned science fiction, and I go back to 1969 and, you know, 2001: A Space Odyssey, and how
00:15:45.920
we're talking about something that's 50 years old as a very good, I think, metaphor for all that is
00:15:55.880
going on. Avatar 2 is, for that matter. How close are we to HAL?
00:16:03.400
Well, HAL had very good language, very good comprehension. I would say it's ahead of what
00:16:07.500
we have right now. What we have now gives an illusion of understanding language, but it doesn't
00:16:12.000
really. Sometimes I think of Her, if you want to talk about science fiction, where the Scarlett
00:16:17.340
Johansson character was sort of all-purpose general assistant who understood a lot of things,
00:16:22.840
had a good theory of how human beings worked. I would say we're somewhat far from that. The latest
00:16:28.300
research shows these systems don't really have that. The thing that we have now that's probably
00:16:32.800
most impressive is these systems are general. They can work on many different things, but their level
00:16:37.260
of comprehension is still pretty poor. In that sense, I think we're still fairly far from HAL.
00:16:41.920
And of course, you know, I don't want to give away the plot for anybody who still hasn't seen 2001,
00:16:45.600
but let's just say that issues about control are important there. They weren't fully resolved in
00:16:50.680
the movie, and they're certainly not fully resolved in the real world right now.
00:16:54.600
And as we all heard, Sundar Pichai, the CEO of Google, said he doesn't believe this is a decision for any
00:17:04.180
corporation. And he is calling, just as you intimated, for a broader discussion with ethicists,
00:17:12.780
philosophers. And we're talking then, of course, about the possibility of intrusion. And I
00:17:20.440
know you're calling for government involvement, global governance. But right now, we have government
00:17:26.740
that, well, I'm a product of the 60s, Gary. I don't trust government. And I am not the first person to
00:17:33.700
get to the line and say, you know, what we need here is more government and government control.
00:17:39.160
Your thoughts on the problems that would be created with one world governance?
00:17:45.660
I mean, I think that, you know, a critical part of what makes the US government work as well as it
00:17:50.300
does, which is not perfectly, is checks and balances. And I think we need some checks and
00:17:55.400
balances in the global governance of AI. We need to have both the companies and the governments
00:18:01.200
at the table. And they both have interests that are not necessarily truly aligned with the citizens'
00:18:06.580
interest. In an ideal world, maybe they would be, but we don't live in that ideal world.
00:18:10.840
And so I think we need a lot of stakeholders to balance a lot of things here.
00:18:14.280
I think that, you know, we don't want the companies to have all the power either, right? So nobody
00:18:21.380
really thinks about it, for example, but ChatGPT is sucking down lots of private information. We saw,
00:18:26.200
I think it was Samsung, where, you know, people were typing in private company data, and then
00:18:30.820
suddenly OpenAI had its hands on it. And so, you know, there are multiple concerns here about who has
00:18:37.860
control of data, about who makes decisions about the politics, essentially, of these systems.
00:18:44.280
There are concerns about whether we want any regulation about what you can release. So,
00:18:49.580
for example, you know, in the pharmaceutical system, we have phase one, phase two, phase three
00:18:53.720
trials. You don't just try something out on 100 million people without testing it first.
00:18:58.360
Probably don't want the tech firms to be able to do that. That's what they did
00:19:01.300
with Sydney. And they didn't really quite know what they were doing, as far as I can tell.
00:19:07.020
You know, they kind of just threw it out there and wanted to see what happens. And so,
00:19:10.840
we may want some regulation around that, for example, that the tech companies might not do
00:19:15.460
on their own. So, there are many different trade-offs that have to be made. But I think
00:19:21.680
to leave them entirely to the government or entirely to industry, neither of those models
00:19:25.100
really works. And so, we need some way of balancing those interests. And you mentioned,
00:19:29.860
like, having philosophers at the table. I think we need a lot of people at the table. We need economists,
00:19:33.860
we need philosophers, anthropologists, pretty much all fields. I think we really do want
00:19:39.100
global representation here to try to come to something that works for everybody across the globe.
00:19:44.380
Working for everybody across the table, the folks I worry about are
00:19:51.580
the people who are underrepresented right now, for example, in the United States. It's a government
00:19:57.260
that is divided between two parties, neither of which is arguably, well, I'll put it this way. In my
00:20:03.600
view, government is too much about growing government. We are very suspicious of what these policies are
00:20:12.040
leading to. We're very suspicious of China and its intentions toward the United States, and indeed,
00:20:19.720
world civilization. The threat of AI, and I understand that this is not proximate, that it is some time off,
00:20:29.920
but the way in which it's progressing seems geometric to me. We have GPT-4 now, next up is five,
00:20:40.280
and how quickly do we get to 10 or 15? Is there a velocity multiplier here that we should also expect?
00:20:51.800
I'll talk about that in a second, but I'll go back first and say, I don't think that AI governance
00:20:56.540
actually should be a right-left issue. And I think it's interesting, for example, that Peggy Noonan,
00:21:01.380
who, as you well know, was one of Reagan's speechwriters,
00:21:05.720
came out in the Wall Street Journal saying, we need a longer pause. I think that everybody,
00:21:11.020
whatever their party is, should be concerned about tech companies having that much power to shape
00:21:15.200
our lives without any kind of government say over it at all, or any say for the people over it at all.
00:21:23.200
It's sort of like what we've seen with social media, but I think to an even greater extent in terms of
00:21:28.580
invasion of privacy and control about what information we see and so forth. So I think
00:21:33.080
that we may actually see a surprising amount of unity between the right and left,
00:21:37.880
which, as we all know, has largely been a dysfunctional divide for a long time. But I
00:21:41.700
think on this issue, there's reason for everybody to care. On the acceleration issue,
00:21:47.860
it's not clear, because of the enormous energy costs and the enormous expense of training bigger and bigger
00:21:56.120
models, how long we can push it. I like to think of Moore's law. We all thought
00:22:00.700
you could just double the amount of transistors you had indefinitely forever and keep cutting the
00:22:06.780
costs. And it actually, by most people's accounts, started to slow down around the year 2000.
00:22:12.100
So Moore's law is not a physical law of the universe like gravity. It's just something that's
00:22:16.840
a generalization that we saw over time for a while, and it lasted for a while, and then it stopped.
00:22:21.180
It's not clear there's enough, let's say, electricity in the United States to actually train
00:22:25.740
GPT-10 or GPT-11 or something like that. So at some point, these things are going to
00:22:31.900
stop accelerating at the speed that they are. But they will continue for a while, and we're not that
00:22:39.240
good at projecting out what they'll look like, say, even two years from now.
00:22:42.920
And as we think about artificial intelligence, and we think about the cloud, can AI be contained?
00:22:54.520
Or will we see an array of computers that just doubles in volume, you know, following
00:23:01.900
Moore's law? As you suggest, it perhaps won't last much longer than Moore's law did. But the fact is,
00:23:08.740
it wouldn't have to go too far in the way of geometric progression to become just an unthinkable
00:23:16.080
and extraordinary regenerative artificial intelligence that would be working at light speed.
00:23:32.160
Well, yes and no. So, like, I wrote a piece, an essay on my Substack, the Gary Marcus Substack,
00:23:38.600
called What to Expect When You're Expecting GPT-4. And I predicted that it would have a lot of the
00:23:43.400
same problems as GPT-3, like hallucinating, making stuff up, having trouble understanding
00:23:49.460
the physical world, the psychological world. And all of those predictions were actually true.
00:23:53.820
Like, there's some ways in which these systems are better, and there's some ways in which they
00:23:57.360
really haven't improved at all. I'll give you another example. GPT-4 was trained on a lot of chess
00:24:02.580
games and on the rules of chess, but it can't even always follow the rules. And it doesn't play any
00:24:07.500
better than a chess computer from 1978. You wouldn't want to put it in a car to drive your
00:24:12.480
car. There are lots of ways in which these AIs are actually still pretty limited. And it's not
00:24:18.080
clear that doubling and doubling the current technology is actually going to solve those
00:24:21.820
problems. So you will see kind of more of what we have now of this kind of being able to write
00:24:27.620
boilerplate text and be able to do some interesting things. But I wouldn't assume that it's going to
00:24:31.760
be what we sometimes call, excuse me, artificial general intelligence that can solve any problem.
00:24:37.340
We're talking with Gary Marcus, a leading voice in AI. He's author of the book, Rebooting AI,
00:24:45.180
among the very first to be cautionary in terms of AI. And we're going to continue our discussion.
00:24:54.060
And if I may say, Gary, a fascinating discussion. Stay with us, please, for this brief message from
00:24:58.900
our sponsors. We're coming right back. We're back now talking with Gary Marcus. And Gary,
00:25:05.680
I have to say, thank you so much for being here today, because you're instructive and you're
00:25:12.880
illuminating. And we appreciate very much your time and your thoughtfulness. Elon Musk's warning of a
00:25:20.200
threat to civilization. It's a threat that you don't see, apparently, as...
00:25:25.120
Not as imminent. I mean, I think it's possible, but I don't, I'm not as concerned about that
00:25:31.080
particular one. I think we should have some awareness of it.
00:25:34.900
And as we look at what's involved here, there is an effort to talk about transhumanism and AI as if
00:25:42.300
they are, well, they're going to meld into one form. Your thoughts about that?
00:25:50.800
I think we will see more and more kind of augmentation. I don't know about transhumanism,
00:25:56.360
but like already, like my cell phone augments my mental life, right? Like it remembers all my phone
00:26:01.320
numbers and appointments for me. And we will see more and more of that where we rely on machines to
00:26:06.340
do more and more for us. And I think there's been a lot of talk about people losing jobs. So far,
00:26:11.240
AI has not taken that many jobs, but it's made a lot of jobs more powerful and more effective. So like
00:26:16.260
we heard five years ago that all the taxi drivers were going to lose their jobs and they didn't.
00:26:20.820
We heard that radiologists were all going to lose their jobs and they didn't. What radiologists do
00:26:26.100
now is they can do more work faster by using the AI, but there's still some human judgment there.
00:26:31.260
You know, in a hundred years, maybe machines will just do most of our work for us, but at least
00:26:35.460
in the near term, they're just going to make our jobs easier. And they'll change some people's jobs,
00:26:41.260
change the dynamics of it. But we will mostly be working together with the machines for a while.
00:26:46.120
Well, working together, that sounds good. I have to say some people, as you well know,
00:26:54.160
consider it to be the next step in human evolution, which is a fascinating concept,
00:27:00.820
but you can't get perfection from artificial intelligence, at least not now. What is the future
00:27:07.980
as you see it? I guess it depends on the timescale. I don't think anybody can predict
00:27:12.840
what it's all going to be like a hundred years from now. I mean, think about all the things that
00:27:16.200
weren't here a hundred years ago. Like there weren't commercial airliners, there weren't cell
00:27:22.380
phones, there was no social media. I guess there wasn't television yet, or it was then maybe on the drawing
00:27:28.080
board. A hundred years is a long time. And I think in the AI world, it's particularly long. I'm just
00:27:34.340
looking at what happened in the last few months. I don't think we can really predict that. I think
00:27:38.740
we can predict that in the next decade, employment will still be pretty good, but maybe not as good
00:27:43.220
as it is now. I think we can predict that driverless cars are actually going to take a while yet,
00:27:48.520
that we're not really to the level of reliability to do that. And we can predict that AI is going to
00:27:53.780
be more and more of our daily life. And that's going to rapidly escalate over the next several years.
00:27:59.220
Beneficial and helpful and extraordinary. I'm sorry, go ahead. They are, the good and the bad.
00:28:07.640
We're going to see more good and we're going to see more bad. We're going to see more of it. It's going
00:28:11.080
to be more of a focus of our lives. Going back to that word you used initially, augmentation, the
00:28:17.420
choices, and you were talking about, it shouldn't be a red or blue thing, a partisan matter. But we
00:28:24.120
always seem to devolve to ideological and partisan differences around the world.
00:28:32.140
And we know those differences between China and the United States now are widening and more
00:28:37.820
intense than ever. Give us your judgment about what is a safe way to proceed to have a geostrategic
00:28:49.400
advantage against this country's enemies. I mean, I think, you know, every country has to
00:28:56.620
continue to do the kinds of things that it's done in its defense. And, you know, the U.S. needs
00:29:03.080
to think about how, for example, its defense department can use these technologies, how it
00:29:08.060
can deal with the limitations of these technologies. Like, I don't think anything really changes there.
00:29:13.220
We're always trying to, to figure out how to maximize our use of new technologies. And we certainly
00:29:18.660
should be doing that. Um, I think we have to do it with eyes open. I think a lot of people
00:29:23.000
treat these technologies as if they're magic and really they're just a set of tools that have
00:29:27.540
strengths and weaknesses. So we need to be informed and nuanced about how we do it. Um, but I don't
00:29:32.680
think any of that changes fundamentally, but I think it is going to keep a lot of people really busy
00:29:38.020
because suddenly there are all these opportunities and all these risks, and it's going to take a lot of
00:29:43.360
work to really understand, you know, how does this technology work in the real world? What are the use
00:29:47.640
cases where it's actually helping me? What are the use cases where I can't really trust it?
00:29:51.640
So there's plenty of work. And I mean, the Chinese have to do that just like the people
00:29:55.000
in the United States. I'm an American citizen. Um, you know, everybody has to look at these new
00:30:01.360
tools and say, what are the risks? What are the benefits? How are we going to use them?
00:30:05.420
And when we look at the tools, we're talking about government, and those in government
00:30:10.800
trying to make assessments. What strikes me, and one of the reasons we're having this discussion,
00:30:16.300
and this program will be discussing this issue a lot because of what you have been talking about
00:30:23.260
here today, is that the potentialities are tremendous. I can't think of a development
00:30:31.180
that seems any more powerful than this, and government is not
00:30:41.120
possessed of the minds that are necessary to comprehend and to shape that future because
00:30:47.980
of their technological... Yeah, there's a very serious problem that governments aren't
00:30:54.040
technologists and that they need technologists at the table and they need not only the technologists
00:30:58.620
at the big corporations who obviously have vested interests, they need, you know, smart
00:31:02.540
academics and researchers and so forth who have thought about these things too. Part of
00:31:06.800
the reason to have a global alliance, which is what I'm pushing for is to have a lot of
00:31:10.400
expertise on board so that, you know, individual governments that don't have that expertise to
00:31:15.420
have a place to consult, and to ask: what should I think of this
00:31:21.040
new technology? We have nothing like that. Most governments have no training in this, or they
00:31:25.540
have a few people that have a little bit of training. Um, and that means they're not really
00:31:29.620
up to speed. So, you know, governments need a place to turn. I think an international organization
00:31:35.360
that's neutral could be a place for governments to get informed. I think there's
00:31:39.440
also, we haven't talked about it, but a huge need for AI literacy around the entire globe,
00:31:43.240
um, for all citizens, for all governments. And that needs to be part of this too, is, is figuring
00:31:49.360
out how to get people up to speed on like, when do you trust these things? Why you shouldn't
00:31:53.760
treat them as humans, even though they seem like humans, um, you know, how fast are they
00:31:58.300
moving? What can they do? Where do they go wrong? Why do they hallucinate? Um, you know,
00:32:02.940
we need a lot of literacy around that. I'm going to start doing some animated
00:32:07.520
videos with one of the television networks to try to raise some AI literacy, but we need
00:32:12.620
to do this at every level from, you know, young kids all the way up to governments.
00:32:17.760
Well, and that's why, again, we're having this discussion here today. And one of the
00:32:22.760
reasons we're deeply appreciative is because this podcast is going to be dedicated
00:32:27.380
to raising that literacy and bringing information to our audience, because
00:32:33.320
I'm a populist, and you talked about people around
00:32:39.600
the table. I want the people around that table, and not intermediaries. I want them there
00:32:45.320
because the center of this country depends on them. That's where we live.
00:32:52.620
And our values shine brightest when this nation is at its best. Give us, if you will,
00:32:59.940
as we're wrapping up here, give us... Can I actually jump in for one second? Um, I don't
00:33:06.500
usually plug things so directly, but I think it's so relevant. I have a new podcast called coming out
00:33:10.900
called humans versus machines. And it's really designed to get people kind of a deep dive into
00:33:15.500
how all of these things work. So we'll, for example, talk about the rise of IBM Watson and
00:33:20.220
how it won in Jeopardy!, and then how they overpromised and said they'd solve cancer and
00:33:23.580
how that failed. So that's Humans vs. Machines, coming out next week. And it's very much
00:33:28.360
designed to go to the people and teach people how this all works.
00:33:32.300
And the title is Humans and Machines? Humans versus Machines. Humans versus Machines. I think you've
00:33:38.980
got a great title. It looks like that's exactly what we're going to be looking at and
00:33:46.040
talking about, in terms of whether appropriately or not. I think that's wonderful, and I wish
00:33:52.040
you all the best of luck. And once again, it's Humans vs. Machines, a podcast
00:33:59.480
starting, you said, next week. Gary Marcus will be hosting it. And we look
00:34:04.900
forward to that. And Gary, we also hope you will come back and join us on this podcast for more
00:34:11.960
discussions. It's fun. It's fun. It's really important. So anytime. And we always give our
00:34:17.260
guests the last word. So, your concluding thoughts here today. I think everybody needs
00:34:23.600
to come together around this left, right governments, corporations, citizens. We all need to make sure that
00:34:29.440
we get the value out of these things, but also that we have enough control over them that we can
00:34:33.700
trust them. Well said. And I want to say to you, thank you so much for being with us here today for
00:34:39.460
educating us, and, I will add, specifically myself. I appreciate it so much, Gary. It's just
00:34:47.600
been wonderful talking with you. I hope you'll come back soon. I'd love to. Real pleasure. Thank you very
00:34:52.560
much. Gary Marcus. Thanks for the tutorial. Thanks for being with us. I hope you found Gary as
00:34:57.320
interesting and instructive as I did. It's a tough subject, but one I think we all need to be
00:35:02.940
thinking about, and we're going to have a number of guests here to lead us through all of this.
00:35:08.820
Thanks everybody for being with us here tomorrow. Our guest will be former Trump presidential
00:35:12.840
assistant, Peter Navarro. Peter Navarro has been charged with contempt of Congress for honoring the
00:35:19.080
presidential executive privilege that the January 6th committee chose to utterly ignore. Peter's been in a
00:35:26.100
battle and he's fighting through. Please join us tomorrow. Till then, thank you and God bless you.