The Benefits and Dangers of Artificial Intelligence, with Nick Bostrom and Andrew Ng | Ep. 151
Episode Stats
Length
1 hour and 36 minutes
Words per Minute
184
Summary
In this episode of The Megyn Kelly Show, host Megyn Kelly sits down with Nick Bostrom and Andrew Ng, two of the world's most brilliant minds in the field of artificial intelligence, to talk about what it is, where it is going, and how it needs to be handled.
Transcript
00:00:00.580
Welcome to The Megyn Kelly Show, your home for open, honest, and provocative conversations.
00:00:12.380
Hey everyone, I'm Megyn Kelly. Welcome to The Megyn Kelly Show.
00:00:15.600
Oh, we have a fascinating show for you today. Fascinating.
00:00:21.140
I've been asking my team to line up a show on this and we have the two greatest guys,
00:00:25.020
the most brilliant, just greatest guys to talk about it with.
00:00:28.800
Like, don't you want to know where this is going, right?
00:00:31.880
Like, okay, there's Amazon Alexa and then there's something called super intelligent computers
00:00:37.560
that are going to take over the world and possibly eliminate humanity.
00:00:43.360
It can be wonderful and it can be life-changing in a great way and it could also potentially
00:00:47.320
be life-extinguishing if it gets into the wrong hands and so on.
00:00:55.540
We're going to kick it off with a guy named Nick Bostrom.
00:01:00.920
He's the director of something called the Future of Humanity Institute.
00:01:11.640
He's the founding director, as I say, of this Future of Humanity Institute.
00:01:16.120
Researches the far future of human civilization.
00:01:21.440
He has been included in Foreign Policy's Top 100 Global Thinkers list repeatedly.
00:01:28.600
He was listed by Prospect Magazine in their list of the world's top thinkers.
00:01:34.100
And he's probably best known for his incredibly bestselling book, Superintelligence: Paths, Dangers, Strategies.
00:01:42.600
It's been recommended by everyone from Elon Musk, who's a huge fan of our guest, Nick
00:01:49.140
And he is one of the leading thinkers on superintelligence: what it is, where it's going.
00:01:56.540
That's sort of where the machines become smarter than the humans.
00:02:01.260
Then we're going to be joined by a guy named Andrew Ng.
00:02:07.480
He's the founder of deeplearning.ai, co-founder of Coursera.
00:02:14.480
This is the world's leading massive open online courses platform.
00:02:23.460
He was the founding lead of the Google Brain team.
00:02:28.960
He was the chief scientist at Baidu, which is China's Google.
00:02:34.700
I mean, this guy's led a 1,300-person AI group for China's Google.
00:02:46.080
And I would describe him as more of a happy warrior when it comes to AI.
00:02:51.620
And talk about how it could change your life for the better.
00:02:54.080
And I think you're going to be delighted with the show.
00:02:56.740
And I predict you'll be sharing it with everyone you know.
00:02:58.960
OK, so we're going to start with our guests in one minute real quickly.
00:03:04.700
There is so much that I want to go over with you.
00:03:10.780
Just treat me like I am AI 101 because I know almost nothing about this field, but am dying to know more.
00:03:19.060
And just having read what I've read now of your work and having listened to your TED Talks and so on.
00:03:34.940
I just use it as a term for any form of general artificial intelligence that greatly surpasses humans in all cognitive abilities.
00:03:45.140
And so, in other words, when the machines get smarter than we are.
00:03:56.300
I think it is highly likely that it will eventually come into existence.
00:04:00.860
I think it's almost a certainty if we avoid destroying ourselves through some other means before then.
00:04:07.980
But if science and technology continue to advance on the wide front, then I think eventually we'll figure out how to produce high-level machine intelligence and super intelligence.
00:04:21.520
Well, I mean, in some sense, it has been in the works for a long time in that people have been trying to understand better how the brain works, how to use statistical methods to better extrapolate from past data, how to build faster computers.
00:04:38.880
And, of course, the field of artificial intelligence has really burgeoned in the last eight years or so with the deep learning revolution.
00:04:51.620
And so there's quite a lot of excitement now about what is becoming possible with machine learning.
00:04:58.320
But predicting how far we are from being able to match and then maybe surpass human-level intelligence is really hard.
00:05:05.960
And I think we just have to acknowledge that there's enormous uncertainty on the timeline of these kinds of things.
00:05:13.580
Now, we're going to be joined after you by another guest whose belief, the way he phrases it, is that there are two types of AI.
00:05:24.420
There's ANI, artificial narrow intelligence, and AGI, artificial general intelligence.
00:05:29.660
And he says artificial narrow intelligence is basically like the stuff we've seen already where you're typing on your computer and it recognizes the word you're typing and completes it.
00:05:41.200
You know, or you're, I don't know, maybe Amazon Alexa or the self-driving car, like those things that are improving our day-to-day living.
00:05:49.700
But general intelligence is what you're talking about, super intelligence, which is, that's a whole different realm.
00:05:56.060
And that's the thing, as I understand it, that you're sounding the alarm on.
00:06:00.240
Yeah, or at least trying to draw attention to something that would be very important.
00:06:06.480
I think it has an equally large upside if we get this transition to the machine intelligence era right.
00:06:13.540
But I do think also there are significant risks associated with this. But yeah, I think it is useful to make this distinction between kind of specialized AI systems that can only do one thing, maybe sometimes at a superhuman level.
00:06:27.380
So for a long time, we've had chess computers that can beat any human, but contrasting that to something that matches humans, say, in our general learning abilities and reasoning abilities that make it possible for a human to learn any of thousands of different occupations, or to solve novel problems that you've never seen before, and to use common sense.
00:06:51.280
So why would we be seeking superintelligence, you know, because we're going to get into the risks of it and, you know, the possibility that machines not only get smarter than humans, but actually take over the world and possibly eliminate humans.
00:07:07.600
Why wouldn't we have just foreseen that future and said, why would we create another being on earth that's smarter than we are, that could take over this planet?
00:07:14.860
For the most part, we are not seeking superintelligence, but greater intelligence. Like, you have some AIs today, and it'd be nice if they made fewer errors and were a little bit more capable.
00:07:27.520
But then of course, if we succeed in that, we would want them to be better still.
00:07:30.840
So it's not so much that there are a lot of people who are specifically trying to create superintelligence, but there is a huge striving to make progress and have better forms of machine intelligence.
00:07:43.000
I mean, in general, it's not as if human civilization has some kind of great master plan either, right?
00:07:51.120
I mean, we are not sort of having hundred-year plans for which technologies we're going to promote and which not.
00:07:57.100
So for the most part, things just happen and there are these local reasons why people do things.
00:08:04.580
And that I think is also true for the field of AI.
00:08:08.100
I know that you've said a possible scenario is we create a machine, a computer, that has general intelligence below human level but is superior mathematically.
00:08:21.860
And in this scenario, human beings understanding the risks of creating a superintelligent machine would take safety measures.
00:08:28.660
They would pre-program it, for example, so that it would always work from principles that are under human control.
00:08:39.420
How do you see the possibility of that machine that we've tried to take these precautions with, nonetheless, on its own, becoming a superintelligent being, for lack of a better word?
00:08:51.600
Well, so I think we will keep trying to make machines smarter.
00:08:55.820
And if we succeed in this, at some point, they will become smarter than us.
00:09:01.960
I think at that point, once you have maybe even weak superintelligence, development is likely to be very fast for various reasons.
00:09:12.400
For a start, at this point, the technology would be extremely economically valuable.
00:09:17.820
So massive investments would flow in to running these AIs on even larger data centers or applying even more human ingenuity to improve them still further.
00:09:28.260
At some point, also, you might get this feedback loop when the AI itself is able to contribute to its own further improvement.
00:09:34.780
So you might get a kind of intelligence explosion where you go from something maybe just slightly human level to something radically superintelligent within a relatively brief span of time.
00:09:49.340
And then the question becomes, would we be able to steer what such a superintelligent system would decide to do?
00:09:58.560
Like, it would be very powerful for basically the same reasons that we humans are very powerful on this planet today compared to other animals.
00:10:07.760
The gorillas are much stronger than us, and cheetahs are much faster.
00:10:12.940
And yet, the fate of the gorillas depends a lot more on what we humans decide to do than on what the gorillas do.
00:10:18.900
So if you have something that radically outstrips us in terms of its general intelligence, its ability to strategize, to develop new technologies,
00:10:28.560
then it might well be that the future will be shaped by its preferences and its decisions.
00:10:34.640
And it might be non-trivial for us to make sure that those are aligned with our human values,
00:10:40.400
especially if we need to get it right on the first try.
00:10:44.260
Right. You've been saying that for a while, saying if we're going to do this,
00:10:48.340
we have to make absolutely sure that they are aligned with our human values,
00:10:52.680
and there are all sorts of dangers in doing it anyway.
00:10:56.040
I mean, who's going to determine what the values are, and what if not everybody's on the same page,
00:11:00.960
and what if we do it, but it gets into the wrong hands, and people misuse it, and so on.
00:11:07.540
But let me just stick with the gorilla thing. That's interesting.
00:11:09.920
So, I mean, because I've heard you use the example of the tiger, too.
00:11:13.140
The reason the tiger gets in the cage and can be controlled by us is because we have superior intelligence to it.
00:11:19.420
So it may be more powerful, more lethal, but we're smarter, and so we can trick the tiger into the cage and keep it there.
00:11:28.520
And so in the scenario where we have a super intelligent machine, we're the gorilla.
00:11:33.020
Well, that would be one type of scenario or one type of risk that could arise from future advances in AI,
00:11:40.760
that the AI itself somehow takes over or runs amok or is poorly aligned.
00:11:47.880
I think there are also scenarios in which we maybe manage to tie it to our purposes,
00:11:53.540
but then we do with it as we have done with practically every other general-purpose technology in human history,
00:12:01.840
that we've also used it for a lot of bad ends to oppress each other, to wage war against each other.
00:12:09.400
And so that's another way in which advances in AI could turn out to be harmful
00:12:15.000
if they become a means of kind of amplifying human conflict,
00:12:18.520
or if they empower more people to develop other dangerous technologies,
00:12:24.820
like maybe you could use AI to more rapidly invent new biological warfare agents
00:12:29.760
or something like that, that might proliferate.
00:12:32.700
So I think there are several distinct classes of dangers that one would have to be aware of
00:12:42.780
Well, I know, I mean, you think that if you're the creator of it, you can control it, right?
00:12:47.800
You can program it such that it won't get smarter than you, and it won't...
00:12:51.580
How could you... I look at the computer on my desk, how could it ever control me?
00:12:54.600
How is that... It doesn't seem possible that...
00:12:57.460
Because you're not talking about robots running around, you know, threatening us with knives and guns.
00:13:01.340
You're talking about this thing, this thing sitting on the desk,
00:13:04.780
getting smart enough that somehow it's controlling humans.
00:13:07.500
And you think about that in the abstract, and you think, how could that ever...
00:13:11.420
How could this thing sitting on my desktop ever control me?
00:13:14.080
Yeah, I mean, presumably not the thing that actually sits on the desktop now.
00:13:19.520
It's easy enough for any one individual or group not to develop superintelligence.
00:13:24.800
But I think it's likely that we as a civilization will nevertheless do it.
00:13:29.940
And I think actually, probably we should be doing it.
00:13:33.780
I see it kind of as this portal, in a sense, that all plausible paths to a really great future lead through.
00:13:42.660
Now, it might be that it would be wise for us to go a little bit more slowly as we approach this gate,
00:13:49.760
so we don't kind of slam into the wall on the side.
00:13:52.140
We certainly should be very careful with this transition.
00:13:54.880
But I think it's kind of unrealistic to think that everybody, all the different countries,
00:14:00.760
all the different labs would decide to refrain from pushing forward with this,
00:14:05.660
when it has such enormous potential for positive applications in the economy,
00:14:10.720
for medicine, for security, for arts and entertainment,
00:14:14.680
for practically any area at all where human intelligence is useful,
00:14:20.360
So I think our focus should be not so much should we do it or not,
00:14:24.700
but like how can we position ourselves in the best possible ways?
00:14:31.520
Do the research in advance, say, on how to align it, to find scalable methods for AI alignment,
00:14:38.500
try as much as we can to build cooperative institutions and norms and practices
00:14:43.160
around the deployment of AI, and then proceed cautiously.
00:14:48.160
But can you walk us through that scenario for people who don't,
00:14:52.160
I mean, this is a big concept for folks who don't work in your field.
00:14:55.740
How could it ever be that the machines would take over?
00:14:58.540
I mean, I know you've spoken about, look, it could happen.
00:15:02.940
They could control all the other computers and things.
00:15:05.480
Humanity could cease to exist, and we need to be cognizant of this possibility.
00:15:14.800
humans have caused a lot of mischief over the course of history,
00:15:18.620
that it's for the most part not because they use their own personal bodily strength
00:15:24.960
to wield a sword and go around chopping people's heads off.
00:15:28.020
It's they've used maybe their pen or their voice to issue commands,
00:15:31.560
to persuade others, and then thereby to exert great influence.
00:15:35.160
So those modes of action would be available even just to a laptop, right, sitting on a desk.
00:15:43.720
If it could print text on a screen, I think that's already enough
00:15:46.560
for a sufficiently great intelligence to be very powerful.
00:15:51.320
But of course, there is no reason to think it would have to stop with these indirect methods.
00:15:57.400
You could maybe persuade humans to be your arms and legs to do your work in some lab
00:16:05.280
to develop different robotic systems that you could use or hack into
00:16:15.400
that would then give you more direct access to the world.
00:16:19.960
I think there are many ways with a sufficient level of intelligence
00:16:24.200
to kind of think above and around and through humans and achieve your ends.
00:16:31.960
It's also likely that if we develop this, we would want to give them access to a lot of stuff
00:16:38.860
If you could have an AI that drives your car, that's more useful than an AI
00:16:42.500
that just sits and tells you how to drive the car.
00:16:46.440
If it could run your factories, if it could pilot your airplanes.
00:16:50.020
Maybe we will have a lot of robots by the time this transition happens
00:16:54.860
so that there would be an even more ready-made infrastructure for it to tap into.
00:17:01.680
because I've heard you say that this super intelligent computer
00:17:07.480
create nanofactories covertly distributed at undetectable concentrations
00:17:14.840
that would produce a worldwide flood of human-killing devices on command
00:17:19.760
and that AI would then achieve world domination.
00:17:34.500
I mean, it's kind of almost by definition impossible for us
00:17:37.060
to know exactly what the best strategy would be
00:17:45.400
much more deeply in the strategic space than we can.
00:17:49.840
But I think what that particular scenario is meant to illustrate
00:17:59.780
that we can already see are physically possible
00:18:03.700
that we haven't yet, however, been able to actually manufacture and build
00:18:13.240
But if research were done on a digital timescale
00:18:16.100
rather than on a kind of slow biological human timescale,
00:18:30.160
would possibly be one way to leverage its power
00:18:35.820
It's not the only one, but I think that's one possible path.
00:18:40.080
Whether it would be specifically by developing nanorobots
00:19:04.620
that the United States was going to be engaged in
00:19:13.720
It started, it was going to launch the missiles anyway,
00:19:23.080
why don't you just unplug the damn thing, right?
00:19:45.880
it's not going to be so simple to just unplug it
00:19:55.000
the apes that we evolved from were still around
00:58:18.360
saying, hey, you know, all presidential candidates,
00:58:20.880
if you want to be featured in any of our
00:59:40.920
every single other Democrat who ran for president
00:59:56.760
then (a) voters really don't have the ability to
01:00:02.120
make an informed decision in a true democracy, and
01:00:06.680
(b) the reality is that if you want to
01:00:09.420
talk about issues, if you want to get information to
01:00:11.360
people so they can make this informed decision, then
01:00:13.340
clearly running for office is not the way to do
01:00:15.720
it. Gabbard has just struck a new deal with Rumble,
01:00:18.720
the video social network, the YouTube competitor. So I think
01:00:21.860
we're about to hear a lot more from her in the
01:00:24.420
weeks to come. And good. And we, in the meantime, will
01:00:27.580
keep bringing you more of our best episodes from the
01:00:35.000
Thank you for being here. I'm excited for this
01:00:42.420
conversation. We just wrapped up with Nick Bostrom, who
01:00:46.880
wasn't totally anti-AI, right? He's pro-AI, but has
01:00:51.900
some concerns about, I think, what you call artificial
01:00:54.900
general intelligence, AGI, the long-term game where
01:00:59.620
you develop a machine that develops superintelligence.
01:01:03.040
So let's just start there. What's your take on
01:01:05.820
the likelihood that we will develop superintelligent
01:01:09.700
machines in this century? Nick Bostrom is an interesting
01:01:13.900
character. AI is the new electricity; it's transforming tons
01:01:17.340
of industries, revolutionizing the way we do things in the
01:01:20.020
United States and around the world. As for artificial
01:01:22.840
general intelligence, I think we'll get there, but whether it
01:01:25.980
will take 50 or 500 or 2,000 years to make computers as
01:01:29.900
intelligent as, you know, you or me or other people, I think
01:01:33.080
that's a really long-term open research project.
01:01:35.720
It's exciting. Okay, I like 2,000. 2,000 makes me feel better than
01:01:39.840
by the end of this century, when my kids are still,
01:01:41.900
God willing, alive. You know, I think that one of the
01:01:48.540
is, it's confusing in this way: there's one type of
01:01:52.360
AI called AGI, artificial general intelligence,
01:01:57.120
and artificial narrow intelligence, which is the AI that does one thing.
01:02:02.740
Turns out, over the last, you know, 10, 20 years, we've had tons of progress in
01:02:06.600
artificial narrow intelligence, those AIs that do one thing really, really
01:02:12.980
of progress in AI. I agree with that, but just because there's tons of
01:02:16.240
progress in AI doesn't mean... From where I'm sitting, I'm candidly not
01:02:19.360
seeing that much progress toward artificial general intelligence.
01:02:22.900
So I think that's led to some of the unnecessary hype and fear-mongering,
01:02:27.600
candidly, about AI. That makes me feel better. I'm feeling better already.
01:02:31.820
Now, you know a thing or two about artificial narrow intelligence. Just so
01:02:36.180
the audience understands, you've led teams at Google and, is it,
01:02:39.180
is it pronounced Baidu? Forgive me for not knowing.
01:02:41.160
Oh yes, I started and led the Google Brain team, and also ran AI for Baidu,
01:02:46.060
which is a large web search engine company in China.
01:02:48.760
Because China doesn't use Google, this is China's Google.
01:02:51.680
China's leading web search engine, this is Baidu. And then I'm also
01:02:56.180
really proud of the work that I did leading the Google Brain team, which is
01:02:59.360
a team that helped a lot of Google embrace modern AI. So
01:03:02.300
if you use Google, you know, you're probably using technology that my
01:03:06.220
former team wrote, actually. Almost. That's amazing.
01:03:08.300
So now, what are some of the fun things that you and your team have
01:03:11.560
introduced into my life that I don't even know I should be thanking you for?
01:03:14.220
Don't thank me, thank the many millions, well, thousands of
01:03:18.880
people around the world building these technologies. I think that all of us use
01:03:23.380
AI dozens of times a day, maybe even more, perhaps without even knowing it. Thanks
01:03:28.460
to modern AI, when you do an internet search, you get much more relevant
01:03:31.780
results. Or every time you check your email, there's a spam filter in there
01:03:35.560
kind of saving us from massive amounts of spam; that's AI. Every time you use a
01:03:39.960
credit card, it's probably an AI trying to figure out if it is you,
01:03:43.120
or if, you know, someone stole the credit card and we should not let that transaction
01:03:47.180
through. So all of us probably use AI algorithms many, many dozens of times a day,
01:03:51.700
maybe without even knowing it. And what about the self-driving car?
01:03:56.080
You know, that makes the news every so often, and it's interesting to me.
01:04:00.060
It's scary to me, because you also hear some reports of crashes and you
01:04:02.900
understand that, okay, the technology is not exactly where they want it to be
01:04:05.880
yet. But what do you see when it comes to self-driving cars? I think that many
01:04:11.240
people, including me, collectively underestimated how difficult it would
01:04:17.220
be to get to, you know, true fully autonomous self-driving cars that could drive
01:04:21.700
the way that a person can. I think we will get there, but it's been a
01:04:26.720
longer road than any of us estimated. When I drive these cars, I'm happy for the
01:04:30.780
driver-assistance technology; I personally don't really fully trust them yet, so I
01:04:35.060
keep an eye on the road, you know, when I'm driving and one of these technologies
01:04:37.780
is supposedly doing something. Well, yeah. So here's a dumb question. I
01:04:41.960
understand why somebody, if we perfect the technology, somebody like my mom, who's 80
01:04:45.720
and really not all that well physically; mentally she's great, but
01:04:49.300
physically... I could see why a self-driving car would work well for her.
01:04:53.240
It's like a built-in chauffeur. But why do young, able-bodied people need
01:04:58.160
that? Why is it an improvement for people our age?
01:05:00.440
I think that it depends a lot on the individual.
01:05:04.460
I sometimes find it fun to drive, you know, if, I don't know, I take my daughter out on
01:05:09.620
the road, drive around, that's fun. But sometimes, if I'm driving to work in
01:05:13.160
traffic, it's like, boy, I wish someone else could do the driving for me. And if a
01:05:17.720
computer could do that, so I could maybe even sit in the back seat
01:05:20.960
and, you know, play with my daughter, I would rather do that than be stuck in
01:05:26.100
individual. It's funny, because I asked this having
01:05:31.680
New Jersey for the summer. I had to go to the city,
01:05:33.360
it's a couple of hours, and I had the choice of driving myself,
01:05:37.080
or sometimes we use a driver, and I said, you know, I'm going to use a driver,
01:05:41.180
because I had a bunch of interviews to do today, and I said, I want to read all my
01:05:44.920
stuff. And so it's a dumb question, right? It's basically:
01:05:47.680
you can read everything if you have a self-driving car. It's going to make your
01:05:50.640
life really convenient, if it doesn't kill you or all the
01:05:54.300
And again, I think... I know you have kids, right? My kids are
01:05:59.900
really young. Part of me worries, you know, when they grow up,
01:06:02.540
will they ever get in a car accident. So when my daughter grows up,
01:06:06.520
if, you know, there's a computer that can drive her more safely than if I were to
01:06:10.260
drive or she were to drive herself, I think it'll make all of us better off.
01:06:13.600
You know, how far away are we from that? You know, in the AI world, we've
01:06:19.400
made a lot of predictions, and sometimes we're not very good at
01:06:22.460
predicting the timeline on which this will happen.
01:06:26.760
I think that self-driving cars are kind of getting there in limited
01:06:29.860
environments, so I'm seeing exciting progress. For example, if you're driving
01:06:34.560
around the constrained environment of a port, you know, shipping stuff, or in a
01:06:38.980
mine, or sometimes on the farm, that's actually kind of getting there.
01:06:42.100
If we're willing to rejigger some of the cities, I think we'll be there
01:06:46.340
pretty soon. I don't know, I think it'll
01:06:49.200
still be quite a few years, it'll still be many, many years before,
01:06:52.520
you know, we can drive in downtown New York or downtown New Jersey.
01:06:56.280
Yeah, I understand that they're not as good at picking up
01:07:01.200
things like the hand signals that a construction worker might be issuing to
01:07:08.460
things yet, so they're not quite where they need to be.
01:07:12.060
Okay, so let's talk about other ways in which
01:07:15.860
AI is going to be helping our lives, and how you see it, because
01:07:19.800
one of the things that Nick said that concerned me was we're probably headed toward
01:07:23.660
total unemployment, total, eventually, in the distant future, once the
01:07:27.500
machines become as smart as we think they're likely to get, and that
01:07:31.400
concerns me. You know, I don't know what life looks like if nobody works for a
01:07:36.420
everything. So what does the journey from here to there look like
01:07:39.020
in terms of technological advances? You know, I think that total
01:07:47.660
ever happen, or if it does happen, it's maybe, I don't know how many, thousands of
01:07:51.460
years away. You know, it turns out... let's just demystify AI:
01:07:55.880
what can we make the AI do, and what can we not? It turns out, to get a little bit
01:07:59.600
geeky and technical, almost all of AI today is about
01:08:02.460
input-output mappings, such as: input an email,
01:08:05.860
output, is it spam or not; or input a picture of what's in front of
01:08:10.120
your car, and output the positions of the other cars.
01:08:16.560
output a diagnosis, does this person have pneumonia or not. It's this kind of
01:08:22.780
input-output mapping that is creating 99 percent of the economic
01:08:27.680
value of today's AI systems. Turns out this is a ton of economic value.
01:08:31.480
The large ad platforms have an AI that inputs
01:08:34.680
an ad and some information about the user, and outputs: are you going to click on
01:08:38.700
this ad or not. Because, you know, if you can get people to click on more ads,
01:08:42.300
this has a direct impact on the bottom line of the large ad platforms.
01:08:45.020
So it's creating tons of economic value. But frankly, this input-output thing, if we
01:08:50.700
think about how much more people do, there is so much more people can do.
01:08:55.720
I don't think anyone in the world has a realistic roadmap for getting the
01:09:03.320
has been overhyped and fear-mongered.
01:09:06.060
I do worry about unemployment. With every wave of technology, looking back,
01:09:10.400
you know, the Industrial Revolution, the invention of electricity,
01:09:13.180
I mean, all the people working on steam engines, they unfortunately, really
01:09:17.500
sadly, lost their jobs. Or, we used to have human-operated
01:09:21.660
elevators, right? You know, there was someone standing in the elevator
01:09:24.760
that would dial it up and down. When someone invented automatic elevators, those
01:09:28.820
jobs went away. So I worry about that for AI, that it
01:09:31.940
creates some amount of disruption and affects work.
01:09:36.300
But complete, total unemployment? This input-output mapping, I don't see that
01:09:40.700
piece of software replacing, you know, you or me anytime soon.
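Ng's "input-output mapping" framing can be made concrete with a small sketch. The toy example below (invented for illustration, not from the episode or any system Ng describes) learns a mapping from the input, an email's text, to the output, a spam/not-spam label, using a simple naive Bayes classifier with made-up training data:

```python
# A toy "input -> output" spam classifier: almost all of today's AI, in
# Ng's framing, learns a mapping from inputs (an email) to outputs
# (spam or not spam). Training data here is invented for illustration.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs; returns word counts and label counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing: input an email, output a label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, -math.inf
    for label in counts:
        # log prior for the label, plus a log likelihood per word
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes for tomorrow", "ham"),
    ("lunch with the team", "ham"),
]
counts, totals = train(examples)
print(classify("free money prize", counts, totals))       # -> spam
print(classify("notes from the meeting", counts, totals)) # -> ham
```

Real spam filters are far more sophisticated, but the shape is the same: a function from one input to one output, learned from labeled examples.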
01:09:45.520
Can you talk about the radiology thing? I read about the work being done,
01:09:49.420
is it Stanford, with the AI and radiology, but the conditions have to be just so. Can
01:09:54.860
you just talk about that? Sure. So I think that I'm excited about AI and its
01:09:59.300
potential to improve health care. Actually, some of my friends and I
01:10:04.560
worked on AI that can input a picture of an X-ray and output, you know,
01:10:09.080
what's the appropriate diagnosis. And it turns out
01:10:11.220
we were able to show in the lab that we could diagnose or recognize many
01:10:16.540
conditions as accurately as a board-certified, highly trained
01:10:20.460
radiologist. But it turns out that it worked great
01:10:23.560
if we were to train on data we collected from, you know, our
01:10:27.580
research at Stanford Hospital, and then see if the system did
01:10:31.500
work well on data from the same hospital, from the same set of X-ray machines.
01:10:35.140
It turns out, if you take that AI system and walk it down to a different hospital
01:10:40.340
down the street, with maybe an older X-ray machine, maybe the technician has a
01:10:46.100
the performance gets much worse, whereas any human doctor can walk down the street
01:10:49.980
and diagnose at this other hospital, you know, roughly equally well.
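The hospital-to-hospital failure Ng describes is what machine learning practitioners call distribution shift. Here is a minimal, self-contained illustration (all numbers invented; this is not Stanford's actual system): a classifier that learns a brightness threshold from one hospital's scans degrades when a second hospital's older machine shifts every intensity:

```python
# Toy illustration of distribution shift, in the spirit of Ng's radiology
# anecdote: a model fit on Hospital A's X-ray intensities degrades on
# Hospital B, whose older machine produces systematically darker images.
# All numbers are invented for illustration.

def fit_threshold(intensities, labels):
    """Learn a decision threshold: the midpoint between the class means."""
    pos = [x for x, y in zip(intensities, labels) if y == 1]
    neg = [x for x, y in zip(intensities, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(intensities, labels, threshold):
    preds = [1 if x >= threshold else 0 for x in intensities]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hospital A: diseased scans (label 1) are brighter than healthy (label 0).
xs_a = [0.20, 0.25, 0.30, 0.70, 0.75, 0.80]
ys_a = [0, 0, 0, 1, 1, 1]

# Hospital B: the same underlying cases, but the older machine shifts every
# intensity down by 0.4, so the learned threshold no longer separates them.
xs_b = [x - 0.4 for x in xs_a]
ys_b = ys_a

t = fit_threshold(xs_a, ys_a)   # 0.5
print(accuracy(xs_a, ys_a, t))  # 1.0 on the training hospital
print(accuracy(xs_b, ys_b, t))  # 0.5 on the shifted hospital
```

A human radiologist re-calibrates effortlessly to a darker image; this model, like the lab prototype Ng mentions, only knows the distribution it was trained on.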
01:10:55.120
So I think that one of the challenges of AI is we have a lot of
01:10:59.780
prototypes in the lab that you read about in the news, you know, you see, oh, AI does
01:11:04.520
as well at diagnosis as human radiologists, or something, in the news.
01:11:07.460
But it turns out that we, collectively in the AI field, still have a lot of work
01:11:11.260
to do to take those lab prototypes and put them into production in a
01:11:16.760
hospital setting. It will happen; it's just that there will be some additional
01:11:20.280
years of work before some of the things that, you know, have been promised
01:11:23.740
come come to fruition well the medical field is is so ripe for uh help from this
01:11:30.420
kind of technology. I can think of a million ways in which it could change lives and
01:11:33.860
save lives. But it's really every industry; I know you've been making the
01:11:36.800
point that it's every industry that's going to be touched by this
01:11:38.900
eventually. But before we move off the medical field, may I just ask you about a
01:11:42.500
report in The Wall Street Journal that got my attention?
01:11:44.480
Okay, among other things, they're talking about
01:11:50.480
toilets that screen for disease. It says researchers at Stanford
01:11:56.040
have developed a prototype toilet that uses an artificial-intelligence-trained
01:12:00.280
camera to track the form of feces and monitor the color and flow of urine. Why
01:12:04.740
is this necessary? Because it could potentially analyze micro stool samples
01:12:08.840
to detect viruses like COVID-19, and blood. It could potentially detect irritable
01:12:13.600
bowel syndrome or colorectal cancer. And here was the part, forgive me because I'm
01:12:17.800
really just a 12-year-old boy at heart, that I wanted to ask you about:
01:12:21.220
the toilet could identify individual users by scanning their anus's unique
01:12:25.980
characteristics, or "anal print." Now, no one wants an anal print going off to
01:12:32.440
some AI researcher, but this is happening. They're saying these
01:12:38.280
units could cost between $300 and a thousand bucks.
01:12:40.640
They could be rolled out in the next couple of years. Is this what life is going
01:12:44.500
to hold for us? Yeah, let's hope not. A lot of that description, I think, a lot
01:12:50.380
of, you know, the description you read sounds disturbing. Having said that, I
01:12:55.600
think there are, you know, doctors that have to do many disturbing things for the
01:13:00.320
good of their patients. But I think a lot of us will not want
01:13:04.740
this in our homes anytime soon. But we'll see. You know, okay, doctors have got to innovate; we'll
01:13:08.920
see what the FDA approves and what seems to be appropriate for patients
01:13:12.580
that may need it, even if it doesn't seem like the right thing for everyone.
01:13:16.060
Because, you know, that's going to turn into one of these things where you get false
01:13:19.220
alarms every other day and you're at the doctor's saying, oh, my anal print suggested I've got
01:13:24.020
colorectal cancer. I don't know, sounds like there's a ton of internet memes to be created
01:13:29.660
off what you just said. And listen, as somebody who's on camera for a living a lot of
01:13:35.840
my life, there are limits to how far I'm willing to go, and I think I speak for a lot
01:13:39.740
of people. So what about the other industries? How else could AI improve things, or affect them negatively?
01:13:48.180
One of the challenges I see is this: AI as of today has clearly transformed the computer
01:13:54.960
software, the consumer software internet industry, the website-operating,
01:14:00.780
app-operating companies. Almost all of them, I mean all of them, use AI to great effect.
01:14:06.060
One of the challenges that still faces us, that is ahead of us, is figuring out how to use AI
01:14:10.960
to improve, transform, create value for all of the other industries out there. So for example, one
01:14:18.240
thing I'm personally passionate about is manufacturing. I think that for American manufacturing to be more
01:14:24.400
competitive, the road forward is not, you know, to just try harder to do the jobs that were around
01:14:31.440
20 years ago. I think America, and frankly all nations around the world, should race
01:14:38.220
ahead to figure out how this technology can work for manufacturing and for all those other
01:14:43.000
industries. So for example, it turns out that in many factories around the world today, there are tons
01:14:48.640
of people standing around using their eyes to inspect, you know, manufactured things like an automotive
01:14:54.000
component or a pill bottle or food and beverage, you know, like a food component, to see if there's a defect on it.
01:15:00.740
I think AI is clearly going to be able to do a lot of that work in the near future in an automated way.
01:15:07.200
And if we in America want to embrace this technology, figuring out how to use AI for automated visual
01:15:14.080
inspection is coming now. I'm working on it, my friends are working on it. I think that's how many
01:15:19.280
industries become competitive. But it turns out getting AI to work for manufacturing, for health care,
01:15:24.180
for agriculture, these industries, there's actually a different recipe. It turns out the stuff that I was
01:15:28.480
doing, you know, at Google and other internet companies, it doesn't quite work. So there's
01:15:32.400
something a little bit more needed. But again, a bunch of us in the AI field are working on this; I hope we'll get there.
01:15:37.760
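Ng doesn't spell out a method here, but the simplest form of the automated visual inspection he's describing is comparing each part against a known-good reference image and flagging anything that deviates too much. A minimal sketch (the tiny "images" and the threshold are invented for illustration; production systems use learned models instead of a fixed reference):

```python
# Tiny grayscale "images" (0.0 = dark, 1.0 = bright) standing in for
# photos of a manufactured part on the inspection line.
GOOD_REFERENCE = [
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.2, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.9],
]

def defect_score(image, reference):
    """Mean absolute pixel difference from the known-good reference."""
    diff = sum(abs(a - b)
               for row_img, row_ref in zip(image, reference)
               for a, b in zip(row_img, row_ref))
    return diff / (len(reference) * len(reference[0]))

def inspect(image, threshold=0.05):
    """Flag the part if it deviates too much from the reference."""
    return "defect" if defect_score(image, GOOD_REFERENCE) > threshold else "ok"

clean_part = [row[:] for row in GOOD_REFERENCE]
scratched_part = [row[:] for row in GOOD_REFERENCE]
scratched_part[1][1] = 0.9  # a bright scratch across the dark center
scratched_part[2][2] = 0.9

print(inspect(clean_part))       # ok
print(inspect(scratched_part))   # defect
```

Real factory parts vary in lighting, position, and acceptable cosmetic differences, which is part of why, as Ng says next, each industry needs its own recipe rather than the one that worked for internet companies.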
Don't leave me now, we've got more coming up in 60 seconds.
01:15:42.340
Can we talk about Baidu for a minute, and just talk about China and its approach to data? Because
01:15:51.140
I know that they really want to be leaders in the AI field, and the United States is watching them
01:15:55.880
and they're watching us. Do you think that the Chinese are any better than the Googles of the
01:16:02.160
world, where you were also the top guy, at collecting information, synthesizing it, keeping an eye on
01:16:09.100
people's habits and so on? Yeah, I think that China is phenomenal at some types of
01:16:16.900
technology, and the U.S. is phenomenal at some types of technology. I think we do live in a, you know,
01:16:23.000
multipolar world, where I see innovations in the U.S. and Europe and China, really, frankly, all around
01:16:28.300
the world. And the AI community tends to be very global. There is a global network where researchers,
01:16:35.280
you know, in Singapore may publish a paper, and then like two weeks later it's running on
01:16:41.240
some, you know, site in the United States, and then someone in the UK will read it too and figure out
01:16:46.280
something to apply and deploy in Europe. So I think we live in a global world where different
01:16:52.260
teams sometimes collaborate and different teams sometimes compete. I think actually one
01:16:58.920
thing I will say is, a lot of people underestimate the importance of government support in the early
01:17:05.720
days of AI. So, not many people know this: when I was doing AI way back, before modern AI, deep
01:17:11.700
learning, became popular, a lot of the reason I was able to do my work was because DARPA, you know,
01:17:16.980
the defense agency in Washington, D.C., was willing to fund some of my work. So I think without DARPA
01:17:22.280
funding some of my research work, I don't know that I would ever have gone to Google
01:17:26.600
to propose starting the Google Brain project. So I think just ensuring American competitiveness
01:17:31.840
is something I would love to see. Where are we on the scale? Are we the world leaders? You
01:17:36.960
know, you look at sort of the military superpowers and we know where we are, but where is America when it
01:17:41.000
comes to AI? I think that the two leading countries in the world in AI are quite clearly the
01:17:48.300
U.S. and China. I think the U.S. is the world leader in a lot of basic research innovations, but this is
01:17:56.140
not a lead that we should take for granted, and we've just got to keep on working really hard. And what
01:18:01.340
about the creation of superintelligence? Because I read something about you creating something
01:18:08.300
where a computer can recognize a cat. I don't know, you can tell me what it was, but to me that sounded
01:18:14.900
like working toward developing superintelligence, you know, a computer that can learn on its own and,
01:18:21.020
you know, develop its own intelligence and improve its own intelligence. But can you talk about that,
01:18:25.440
about where we are on it, what you've done on it, and whether you think, well, you know, how far along we
01:18:30.280
are? Yeah, the cat result, that was the Google Brain team. One of the early results we
01:18:35.880
had was, we built an AI system called a neural network and had it watch tons of YouTube video;
01:18:41.360
basically had it, you know, sit in front of the computer and watch YouTube video for, like, a
01:18:45.500
week, and then we all said, hey, what did you learn? And to our surprise, one of the things it learned
01:18:50.720
was, it had figured out, or had learned, to detect this thing which turns out to be a cat. Because it
01:18:57.900
turns out, when you have an AI system watch YouTube videos a lot, it learns to detect things that
01:19:02.540
occur a lot in YouTube video. So, people's faces occur a lot on YouTube; it figured out, you know, how to detect
01:19:08.020
that. There are also a lot of cats, right, it's another internet meme on YouTube, so it also figured
01:19:12.160
out how to detect that. It wasn't a very good cat detector, but the remarkable thing about it
01:19:17.820
was that it had figured out that, you know, there's this thing. It didn't know it was called a cat,
01:19:22.220
but there was this thing; it just learned, "boy, I see a lot of this thing, whatever it is, I don't know
01:19:27.140
what it is." So it was pretty remarkable that the AI system, the neural network, had figured that out by
01:19:32.720
itself. But again, you know, between that and superintelligence or AGI, I think it's very far
01:19:38.980
away. I think that worrying about AI superintelligence today is a bit like
01:19:45.820
worrying about overpopulation on the planet Mars. I should hope that we will, you know, manage to
01:19:52.160
colonize Mars, and maybe someday we'll have so many people on Mars that we have children dying
01:19:57.960
because of pollution on Mars, and you may be saying, hey, Andrew, how can you be so heartless as to
01:20:03.280
not care about all the children dying on Mars? And my answer is, well, you know, we haven't even
01:20:08.140
landed people on the planet yet, so I don't know how to productively defend against overpopulation
01:20:13.320
there. So I feel a little bit like that about this. I think it's fine if academics study it, you know, publish some
01:20:18.380
theories on what to do when we have AGI, but it's so far away, I personally don't really know how
01:20:24.780
to productively work on that problem. Now, you are the co-founder of a group called Coursera, is that how
01:20:32.140
you pronounce it? Yes, Coursera. And I feel like this dovetails very nicely with one of the things
01:20:37.620
that Nick was recommending when I talked to him about the future, our children and so on. He was saying
01:20:42.320
the one thing the kids of the future are going to need to be able to do is understand that learning
01:20:46.240
is a lifetime process, right? Nothing is as static as it used to be; the world is changing so
01:20:51.940
rapidly, and our kids are going to need to be able to handle information at an even more rapid pace
01:20:57.100
than it now comes into their lives, which is already faster than ever. And I feel like this is one of the
01:21:02.040
missions of Coursera, to nurture lifelong learning. Can you talk about it? Because it sounds
01:21:06.880
really interesting, and it's been hugely successful. Yeah, so through Coursera, I hope we can give
01:21:13.140
anyone the power to transform their lives through learning. I was teaching at Stanford University
01:21:19.300
about a decade ago, actually over a decade ago, and put my class on machine learning, a type of AI, on the
01:21:25.580
internet, and kind of to my surprise, a hundred thousand people signed up for it. And I kind
01:21:31.820
of did the math: you know, I was teaching 400 people, 400 students, a year, but when I did the math, I realized
01:21:36.880
that for me to reach a similar audience, a hundred thousand people, teaching 400 people a year, I would have to
01:21:42.500
teach at Stanford University for, you know, like 200 years. And so, based on that early traction,
01:21:50.100
I got together with a friend to start Coursera, to create a platform that now
01:21:56.840
works with over 200 universities and other institutions and companies, in order to create
01:22:04.120
online learning courses that, you know, pretty much anyone in the world can access. That's so great. I mean,
01:22:10.880
so it's like, for those of us who didn't go to Stanford or Harvard or what have you, but want access
01:22:15.940
to that kind of education, though not full-time, we can go here. Yeah. In fact, you know,
01:22:23.420
I want to share two thoughts relevant to, you know, all of you watching this. If you want
01:22:29.440
to learn about AI and, you know, cut through the hype, one of the classes I'm most proud of is
01:22:34.900
AI for Everyone, on Coursera, and I think I tried to give a non-technical presentation of
01:22:42.100
AI. So if you want to know how AI will affect your life in the future, how AI will affect your job,
01:22:47.120
your industry, you know, there are several hours of video that I hope will give anyone that's interested
01:22:52.520
a non-technical introduction to AI, so you can think about this strategically and know how it will
01:22:57.820
affect you, but also learn to recognize and ignore some of the hype. There's one other trend I'm
01:23:04.760
excited about, which is, you know, with the rise of tech, I think we may, I hope we will, eventually
01:23:11.300
shift toward a world, and this is relevant to all of you, you know, with children, for example,
01:23:15.980
I hope we'll shift toward a world where almost everyone will know a little bit about coding. And I
01:23:22.500
say this because many, many hundreds of years ago, we lived in a society where, you know, some people
01:23:28.060
believed that maybe not everyone needs to read, right? Maybe there are just a few priests, you know, and
01:23:33.120
monks; they had to learn to read so they could read the holy book to the rest of us, or something, and
01:23:37.980
the rest of us, we didn't need to read; we'd just sit there and listen to them. Fortunately, society
01:23:42.500
wised up, and now, with widespread literacy, we've figured out that it makes human-to-human
01:23:48.260
communication much better. I think that with the rise of computers in today's society, you know, for
01:23:54.200
good and for ill, this is a very powerful force. I would love to see a lot of people able to just
01:23:59.840
learn to code. So, not all of us need to learn to be great authors, right? You know, I can
01:24:05.040
write, but I'm not a great author; I don't think everyone needs to be a great programmer. But for
01:24:09.860
many of us, there will come a time where, you know, if you could write a few
01:24:16.880
lines of code, you could get your computer to do what you want. Just like literacy has created much deeper
01:24:22.340
human-to-human communication, I think if everyone can learn, you know, a little bit of coding, or
01:24:28.000
computer literacy, then all of us can have much deeper interactions with our computers, and that'd
01:24:32.760
be a very powerful tool for all of you in the future. Well, it certainly had a massive impact on
01:24:36.940
your life, just reading your background. How did you get into it at such a young age? It was your dad,
01:24:41.860
I understand. Oh yes, so my dad's a doctor, and when I was a teenager, I was born in the UK but I was
01:24:50.840
living in Singapore at the time, my dad was interested in AI for healthcare. So, you know, he kind of
01:24:56.200
taught me about his attempts to use, you know, frankly, like 1980s AI, which is not that advanced,
01:25:03.420
to do medical diagnosis. So that sparked off a lifelong interest. And, you know, I do remember,
01:25:09.500
in high school, I once had an internship, I once had a job as an office admin, and I don't remember
01:25:16.120
much from that job; I just remember doing a lot of photocopying. And even though I was, like, whatever,
01:25:21.440
15, 16 years old, I remember thinking, boy, why am I doing so much photocopying? If only we could
01:25:28.360
write some software, have a robot or something do all this photocopying, maybe I could do something,
01:25:34.340
something even more interesting and more valuable. And I think that, for me, was part of my lifelong
01:25:40.120
inspiration to just write software that can help, you know, automate some of the more repetitive things,
01:25:46.000
so that all of us collectively can tackle more challenging and exciting things. Well, it's so great,
01:25:50.920
because, I tell you, I went out to Google and I spoke to a bunch of executives there a couple years ago,
01:25:56.040
and I know that they try to give the coders some stress relief, like a break, because it can be very
01:26:05.060
intense work. And one of the stations on campus was sword fighting. I'm like, this is so great,
01:26:11.220
you know, just because, you know, you spend all day doing that, it's very intense, and you do need
01:26:15.220
a mental break, a break for your eyes, a break for your body. So it's just a totally
01:26:21.700
different way of approaching the workplace. Yeah, I find that, I think coding is hard
01:26:29.060
work, but I find that, you know, when I look across our society, I think almost everything is
01:26:35.540
hard work, right? When I walk into a manufacturing plant, some of the work that, you know, my company,
01:26:39.820
Landing AI, does for manufacturing, I see the men and women on the manufacturing shop floor, and they're
01:26:44.540
really smart at, you know, what they do. And then I meet up with my friends from Google, and I think
01:26:51.720
they're really smart at what they do. I think that the world has lots of
01:26:59.180
challenging, intellectually stimulating or physically challenging work for us to do, and hopefully
01:27:04.360
AI tools can help make things a little bit better for everyone. Well, I like that you sort of decide where
01:27:09.000
you're going to put your energies. Because I understand, looking at you today and your blue
01:27:11.880
shirt, it is no accident you are wearing that blue shirt, and it is one of the areas of your life in
01:27:16.040
which you've chosen to simplify and streamline your decision-making. Yeah, a few friends
01:27:23.700
have asked me; there's actually a Quora post, I think someone actually asked publicly, why does
01:27:27.580
Andrew wear a blue shirt all the time? So I used to wear either blue or, like, a light purple,
01:27:34.060
but then I realized every morning it's like, oh, do I wear a blue shirt or a purple shirt?
01:27:38.100
I can't decide. It's like, forget it, I'm just buying a full stack of blue shirts and doing that.
01:27:43.740
I don't know. So you don't have to think about it in the morning. Vera Wang does the same thing. Vera
01:27:47.860
Wang, who dresses, you know, the most beautiful, successful, you know, prominent people in the world,
01:27:52.000
just wears sort of a black, a column of black, every day. That's her uniform. I did not know that.
01:27:57.320
Because she doesn't want to think about it, same as you.
01:27:59.700
Yeah, it turns out there is a downside to this. One day, one of my friends was working on an AI-for-
01:28:04.920
fashion thing, and I tried to express an opinion. I said, well, you want to do AI for fashion?
01:28:09.700
How about this, how about this? And she said, Andrew, you have no credibility whatsoever when it comes
01:28:14.380
to fashion. So, okay, I have to ask you one other personal question. Now, I understand you married
01:28:19.360
somebody who's in robotics, and I read that you used a 3D printer to make your wedding rings,
01:28:26.120
which brought up a lot of things for me. Which is, number one, I do not understand the 3D printer
01:28:31.340
at all. My kids are using it at school; it scares me; I don't get it. What is it, and how
01:28:36.580
does it print out a wedding ring? How does it produce a wedding ring? Yeah, so, Carol,
01:28:42.860
she's from Michigan, but we're now in Washington State. So, a 3D printer takes, you know,
01:28:50.140
one way that 3D printing works is, it takes little bits of metal and melts them and kind
01:28:56.060
of, you know, deposits little drops of metal until gradually you end up building a ring. I'm not
01:29:00.760
wearing the ring right now, but I have it. And then you end up with this, you know,
01:29:04.900
incredible shape, whatever, almost anything you can imagine and program into a
01:29:11.660
computer, it can, just by putting little drops of plastic or little drops of metal or some other
01:29:17.760
substance, create this, you know, incredible 3D shape that's maybe difficult to manufacture
01:29:23.880
via other ways. So, I don't know, actually, this is one fun thing about technology: 3D printing
01:29:28.420
was, until recently, really, really cutting-edge technology, but now we have high school students able to use it.
01:29:35.520
I hope it'll be like that for AI too, frankly. I find that today AI seems a little bit mysterious, maybe a
01:29:41.180
little bit overly so. But actually, last week I was chatting with a few high school students in
01:29:46.040
different parts of the country, talking about, you know, them taking online AI classes from
01:29:50.760
Coursera or from wherever. And now we have high school students able to do things that,
01:29:57.560
if done just five or six years ago, would have been a chapter in a PhD thesis at a place like Stanford.
01:30:04.380
Really? Like what? Actually, so, one thing happened to me: I was attending a
01:30:10.460
Maker Faire, where I met, you know, this student that was demoing his robot that was taking
01:30:15.980
pictures of plants, trying to figure out if they were diseased, if they had, you know, a disease on
01:30:20.940
the leaves or not. So I looked at his work and I thought, boy, if this had been done five or six years
01:30:26.940
ago, this would have been a chapter in someone's PhD thesis at Stanford University. And you know what, I
01:30:33.020
asked him, how old are you? And he said, oh, I'm 12 years old. So this is today's world.
01:30:39.980
This is today's world. I think anyone in the world, you know, can go and learn this stuff and then
01:30:45.260
implement it. And even though some technology seems so cutting-edge, I think that if someone out
01:30:52.220
there is, you know, watching this and wants to learn it, a lot of tools are now on the internet, right? Go,
01:30:57.500
go learn it online, from deeplearning.ai or Coursera, and then on your computer you could
01:31:01.980
actually start developing stuff that, while not the cutting-edge stuff, right, that's actually still
01:31:06.620
pretty difficult, you could actually do stuff that was kind of state-of-the-art just a few years ago.
01:31:10.940
All right, on this subject, I have a confession to make to you. We're moving into a
01:31:15.420
new home, we're moving towns, and I decided to not make my new home a smart home, because my old smart
01:31:21.980
home was annoying me. My dishwasher was yelling at me and my microwave was yelling at me, and I was
01:31:27.340
walking around my apartment all day saying, you are not the boss of me, I am the boss of you, shut up,
01:31:32.140
I will unload you when I'm good and ready. And the TV required 40,000 buttons to turn on, and it's
01:31:38.460
like, I just want a dumb home for me, because maybe it says I'm a dumb person, but it seemed easier to
01:31:45.420
me. And yet all of these appliances are getting smarter by the day, and they're saying there's going to
01:31:51.340
be a refrigerator that's going to tell you whether things are spoiled on the inside, and so on.
01:31:55.660
So, do you have a smart home? Do you recommend a smart home? And how concerned, if at all,
01:32:03.260
should we be about people spying on us, for lack of a better term? You know, I think people,
01:32:09.020
they distrust Google, they think Google's amassing information on them; they distrust the government,
01:32:12.780
they think the government could possibly hack into one of these appliances. You know, these are real
01:32:16.780
concerns you hear from people. Yeah, so, you know, I think that a lot of people are concerned about privacy.
01:32:22.780
In my line of work, I have friends at many of the large internet companies, and I know, they're my
01:32:29.020
friends, I trust them to tell me the truth, many of my friends are genuinely concerned about, but also very
01:32:35.340
respectful of, privacy. So a lot of the large internet companies, you know, some better than others, really do
01:32:40.700
have stringent privacy controls that make it incredibly difficult for anyone to just spy on you. Now, having
01:32:46.620
said that, I actually would be, I have no reason to think the U.S. government can hack
01:32:51.660
into these devices, but frankly, I'd be a little bit disappointed if they can't. But yeah, so,
01:32:58.460
you know, by the way, I used to work on speech recognition, right? So I worked on these voice-activated
01:33:03.020
devices. One thing I'm not proud of: for a long time, even while working on these devices, I had exactly one
01:33:08.780
light bulb in my home that was connected to my smart speaker, because the configuration process is so
01:33:13.740
annoying. So I got through, you know, configuring one light bulb so I could turn it on with a voice
01:33:18.300
command, but after that I couldn't be bothered. So I think we still have got to make these things
01:33:23.820
better. You know, we tend to inject technology into a lot of things; sometimes it's really great and I
01:33:28.940
love it, but sometimes, you know, you do wonder, right, if we're really helping solve people's problems.
01:33:35.340
Hey, if we have more people working on it, maybe we'll all collectively make all this tech much better.
01:33:39.580
Yeah. No, I've said, in this day and age it's not enough to pretend, you actually have to be a
01:33:44.460
good person, because someone's probably always listening, watching, amassing data. They're going to
01:33:49.100
know, one way or another. It's disconcerting, but I don't know, if you're not a criminal and you're not,
01:33:54.860
you know, dealing with terrorists and so on, how worried do you need to be? I don't know. I'll give you
01:33:59.180
the last word. Yeah, you know, I think AI is the new electricity. Much as the rise of electricity,
01:34:06.060
starting about 100 years ago, transformed every industry, I think AI is now on a path to do the
01:34:11.420
same. So I think, really, to anyone wondering if it's worth learning about it, jumping in, trying to help
01:34:16.540
all of us collectively navigate the future: I think every citizen, every government, all of us individuals,
01:34:23.180
should jump in and play a role in shaping a better future for everyone, in light of this amazing
01:34:29.020
technology. Wonderful talking to you. Thank you so much for your expertise and your insights.
01:34:34.140
You know, thank you, thank you. It was really, really fun to do this with you.
01:34:42.540
So, as I mentioned in our other episode this week, we're scaling back a little for this week and
01:34:47.580
next week on our episodes, just as we get ready to launch on Sirius; my team especially has a
01:34:52.540
lot they need to be doing. So we're going to launch five days a week starting on September 7th,
01:34:57.500
but in the meantime, we're on a little bit of a scaled-back schedule, for those of you wondering.
01:35:01.660
But our next guest, who's going to be coming up on Monday, is one we've really wanted to have on for a
01:35:07.340
while. Controversial guy, because he worked for Trump, and, you know, he's been completely excoriated by the
01:35:12.220
mainstream media, but fascinating and really smart dude: Stephen Miller is going to be here. I used to
01:35:17.420
have him on The Kelly File all the time; then you saw what the press did to him when he went
01:35:22.460
inside the Trump team. But, you know, I've spent years talking to him; there is no better person
01:35:29.820
to talk to if you want to understand what's happening in this country with our southern border,
01:35:34.860
our northern border, and our approach toward immigration in general. So I'm really looking
01:35:39.020
forward to the conversation. Stephen Miller, Monday. Don't miss it. In the meantime, go ahead and subscribe
01:35:42.940
so you don't miss it. Download, give me a five-star rating while you're there, and give me a review.
01:35:48.460
Let me know what you think. What do you think of AI? Are you in favor? And what would you like
01:35:53.420
me to ask Stephen Miller? Taking your thoughts right now in the Apple review section, or wherever you
01:35:58.700
download your podcasts. Thanks for listening to The Megyn Kelly Show: no BS, no agenda, and no fear.
01:36:09.180
The Megyn Kelly Show is a Devil May Care Media production in collaboration with Red Seat Ventures.