The Megyn Kelly Show - August 26, 2021


The Benefits and Dangers of Artificial Intelligence, with Nick Bostrom and Andrew Ng | Ep. 151


Episode Stats

Length

1 hour and 36 minutes

Words per Minute

183.86

Word Count

17,768

Sentence Count

335

Misogynist Sentences

9

Hate Speech Sentences

19
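The words-per-minute figure above is presumably just the word count divided by the runtime. A hypothetical sketch (not part of the episode page; the exact runtime in seconds is an assumption, since the listed "1 hour and 36 minutes" is rounded):

```python
# Hypothetical sketch: deriving the "Words per Minute" stat from the other two.
# duration_seconds is assumed (~96.64 min); the listed length is rounded to 96 min.

word_count = 17768
duration_seconds = 5798.4  # assumed exact runtime implied by the stats

words_per_minute = word_count / (duration_seconds / 60)
print(round(words_per_minute, 2))  # about 183.86
```

Dividing by the rounded 96 minutes instead would give about 185, which is why the page presumably computes from the exact runtime.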


Summary

In this episode of The Megyn Kelly Show, host Megyn Kelly sits down with Nick Bostrom and Andrew Ng, two of the world's most brilliant minds in the field of artificial intelligence, to talk about what it is, where it is going, and how it needs to be handled.


Transcript

00:00:00.580 Welcome to The Megyn Kelly Show, your home for open, honest, and provocative conversations.
00:00:12.380 Hey everyone, I'm Megyn Kelly. Welcome to The Megyn Kelly Show.
00:00:15.600 Oh, we have a fascinating show for you today. Fascinating.
00:00:18.920 It's about artificial intelligence.
00:00:21.140 I've been asking my team to line up a show on this and we have the two greatest guys,
00:00:25.020 the most brilliant, just greatest guys to talk about it with.
00:00:28.800 Like, don't you want to know where this is going, right?
00:00:31.880 Like, okay, there's Amazon Alexa and then there's something called super intelligent computers
00:00:37.560 that are going to take over the world and possibly eliminate humanity.
00:00:42.160 Opposite extremes.
00:00:43.360 It can be wonderful and it can be life-changing in a great way and it could also potentially
00:00:47.320 be life-extinguishing if it gets into the wrong hands and so on.
00:00:50.840 So we've got all these angles covered.
00:00:52.540 You are going to love, love, love this show.
00:00:55.540 We're going to kick it off with a guy named Nick Bostrom.
00:00:58.800 He's a professor at Oxford.
00:01:00.920 He's the director of something called the Future of Humanity Institute.
00:01:04.980 He's done so many things.
00:01:07.160 He's been a teacher at Yale.
00:01:09.000 He did his postdoctoral fellowship at Oxford.
00:01:11.640 He's the founding director, as I say, of this Future of Humanity Institute.
00:01:14.500 That's at Oxford as well.
00:01:16.120 Researches the far future of human civilization.
00:01:19.820 A professor of philosophy at Oxford.
00:01:21.440 He has been included in Foreign Policy's Top 100 Global Thinkers list repeatedly.
00:01:28.600 He was listed by Prospect Magazine in their list of the world's top thinkers.
00:01:32.000 You get it?
00:01:32.600 Sensing a theme.
00:01:34.100 And he's probably best known for his incredibly bestselling book, Superintelligence: Paths,
00:01:40.560 Dangers, Strategies.
00:01:42.600 It's been recommended by everyone from Elon Musk, who's a huge fan of our guest, Nick
00:01:47.000 Bostrom, to Bill Gates.
00:01:49.140 And he is one of the leading thinkers on superintelligence, what it is, where it's
00:01:54.120 going and how it needs to be handled.
00:01:56.540 That's sort of where the machines become smarter than the humans.
00:02:00.420 Now we're going to talk to him.
00:02:01.260 Then we're going to be joined by a guy named Andrew Ng.
00:02:04.420 He's also incredibly brilliant.
00:02:06.020 So excited to talk to these guys.
00:02:07.480 He's the founder of deeplearning.ai, co-founder of Coursera.
00:02:13.220 Coursera is huge.
00:02:14.480 This is the world's leading massive open online courses platform.
00:02:19.420 He's also an adjunct professor at Stanford.
00:02:22.060 He was the founding lead of the Google Brain team.
00:02:27.120 He coined the term Google Brain.
00:02:28.960 He was the chief scientist at Baidu, which is China's Google.
00:02:33.180 There's no Google in China.
00:02:34.260 This is China.
00:02:34.700 I mean, this guy's led a 1,300-person AI group for China's Google.
00:02:40.440 All right.
00:02:40.620 So he's done.
00:02:41.180 He's basically been in charge of everything.
00:02:43.340 He's cool.
00:02:43.720 He's a globally recognized leader in AI.
00:02:46.080 And I would describe him as more of a happy warrior when it comes to AI.
00:02:49.160 Very optimistic about it and what it can do.
00:02:51.620 And talk about how it could change your life for the better.
00:02:54.080 And I think you're going to be delighted with the show.
00:02:56.740 And I predict you'll be sharing it with everyone, you know.
00:02:58.960 OK, so we're going to start with our guests in one minute real quickly.
00:03:02.200 Here's this.
00:03:04.700 There is so much that I want to go over with you.
00:03:10.780 Just treat me like I am AI 101 because I know almost nothing about this field, but am dying to know more.
00:03:19.060 And just having read what I've read now of your work and having listened to your TED Talks and so on.
00:03:24.020 I'm terrified.
00:03:26.140 I'm terrified.
00:03:28.220 So let's start here.
00:03:31.520 What is super intelligence?
00:03:34.940 I just use it as a term for any form of a general artificial intelligence that greatly surpasses humans in all cognitive abilities.
00:03:45.140 And so, in other words, when the machines get smarter than we are.
00:03:47.640 Yeah.
00:03:49.480 OK.
00:03:50.240 And how likely is it to come into existence?
00:03:56.300 I think it is highly likely that it will eventually come into existence.
00:04:00.860 I think almost a certainty if we avoid destroying ourselves through some other means before.
00:04:07.980 But if science and technology continue to advance on the wide front, then I think eventually we'll figure out how to produce high-level machine intelligence and super intelligence.
00:04:18.440 Is it in the works right now?
00:04:21.520 Well, I mean, in some sense, it has been in the works for a long time in that people have been trying to understand better how the brain works, how to use statistical methods to better extrapolate from past data, how to build faster computers.
00:04:36.840 All these are potential ingredients.
00:04:38.880 And, of course, the field of artificial intelligence has really burgeoned in the last eight years or so with the deep learning revolution.
00:04:51.620 And so there's quite a lot of excitement now about what is becoming possible with machine learning.
00:04:58.320 But predicting how far we are from being able to match and then maybe surpass human-level intelligence is really hard.
00:05:05.960 And I think we just have to acknowledge that there's enormous uncertainty on the timeline of these kind of things.
00:05:13.580 Now, we're going to be joined after you by another guest whose belief, the way he phrases it, is that there are two types of AI.
00:05:24.420 There's ANI, artificial narrow intelligence, and AGI, artificial general intelligence.
00:05:29.660 And he says artificial narrow intelligence is basically like the stuff we've seen already where you're typing on your computer and it recognizes the word you're typing and completes it.
00:05:41.200 You know, or you're, I don't know, maybe Amazon Alexa or the self-driving car, like those things that are improving our day-to-day living.
00:05:49.700 But general intelligence is what you're talking about, super intelligence, which is, that's a whole different realm.
00:05:56.060 And that's the thing, as I understand it, that you're sounding the alarm on.
00:06:00.240 Yeah, or at least trying to draw attention to ask something that would be very important.
00:06:06.480 I think it has an equally large upside if we get this transition to the machine intelligence era right.
00:06:13.540 But I do think also there are significant risks associated with this. But yeah, I think it is useful to make this distinction between kind of specialized AI systems that can only do one thing, maybe sometimes at the superhuman level.
00:06:27.380 So for a long time, we've had chess computers that can beat any human, but contrasting that to something that matches humans, say, in our general learning abilities and reasoning abilities that make it possible for a human to learn any of thousands of different occupations or to solve novel problems that you've never seen before and to use common sense.
00:06:51.280 So why would we be seeking superintelligence, you know, because we're going to get into the risks of it and, you know, the possibility that machines not only get smarter than humans, but actually take over the world and possibly eliminate humans.
00:07:04.740 Why would we be even going down that route?
00:07:07.600 Why wouldn't we have just seen that future and said, why would we create another being on earth that's smarter than we are that could take over this planet?
00:07:14.860 For the most part, we are not seeking superintelligence, but greater intelligence, like to have, you have some AIs today and it'd be nice if they made fewer errors and were a little bit more capable.
00:07:27.520 But then of course, if we succeed in that, we would want them to be better still.
00:07:30.840 So it's not so much that there are a lot of people who are specifically trying to create superintelligence, but there are huge strivers for making progress in having better forms of machine intelligence.
00:07:43.000 I mean, in general, it's not as if human civilization has some kind of great master plan either, right?
00:07:51.120 I mean, we are not sort of having a hundred year plans for which technologies we're going to promote and which less.
00:07:57.100 So for the most part, things just happen and there are these local reasons why people do things.
00:08:04.580 And that I think is also true for the field of AI.
00:08:08.100 I know that you've said you, a possible scenario is we create a machine, a computer that has general intelligence below human level, but is superior mathematically.
00:08:21.860 And in this scenario, human beings understanding the risks of creating a superintelligent machine would take safety measures.
00:08:28.660 They would pre-program it, for example, so that it would always work from principles that are under human control.
00:08:34.340 We would try to box it in with limitations.
00:08:37.200 We would try to be careful.
00:08:39.420 How do you see the possibility of that machine that we've tried to take these precautions with, nonetheless, on its own, becoming a superintelligent being, for lack of a better word?
00:08:51.600 Well, so I think we will keep trying to make machines smarter.
00:08:55.820 And if we succeed in this, at some point, they will become smarter than us.
00:09:01.960 I think at that point, once you have maybe even weak superintelligence, development is likely to be very fast for various reasons.
00:09:12.400 For a start, at this point, the technology would be extremely economically valuable.
00:09:17.820 So massive investments would flow in to running these AIs on even larger data centers or applying even more human ingenuity to improve them still further.
00:09:28.260 At some point, also, you might get this feedback loop when the AI itself is able to contribute to its own further improvement.
00:09:34.780 So you might get a kind of intelligence explosion where you go from something maybe just slightly human level to something radically superintelligent within a relatively brief span of time.
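The feedback loop Bostrom sketches here can be illustrated with a toy model. This is purely an illustration of the compounding dynamic, not anything from the episode; the functions, rates, and step counts are made up:

```python
# Toy model (hypothetical): steady outside-driven progress vs. a feedback
# loop where the AI's current capability feeds its own further improvement.

def human_driven(capability, rate=0.05, steps=20):
    # Outside effort is roughly constant, so capability grows linearly.
    for _ in range(steps):
        capability += rate
    return capability

def self_improving(capability, k=0.05, steps=20):
    # Each step's improvement is proportional to current capability,
    # so growth compounds -- the "intelligence explosion" intuition.
    for _ in range(steps):
        capability += k * capability
    return capability

print(human_driven(1.0))    # linear: 2.0
print(self_improving(1.0))  # compounding: about 2.65, and the gap widens with more steps
```

Real dynamics would be nothing this clean; the point is only that improvement proportional to current capability compounds, while a fixed outside effort does not.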
00:09:49.340 And then the question becomes, would we be able to steer what such a superintelligent system would decide to do?
00:09:58.560 Like, it would be very powerful for basically the same reasons that we humans are very powerful on this planet today compared to other animals.
00:10:07.760 The gorillas are much stronger than us, and cheetahs are much faster.
00:10:12.940 And yet, the fate of the gorillas depends a lot more on what we humans decide to do than on what the gorillas do.
00:10:18.900 So if you have something that radically outstrips us in terms of its general intelligence, its ability to strategize, to develop new technologies,
00:10:28.560 then it might well be that the future will be shaped by its preferences and its decisions.
00:10:34.640 And it might be non-trivial for us to make sure that those are aligned with our human values,
00:10:40.400 especially if we need to get it right on the first try.
00:10:44.260 Right. You've been saying that for a while, saying if we're going to do this,
00:10:48.340 we have to make absolutely sure that they are aligned with our human values,
00:10:52.680 and there are all sorts of dangers in doing it anyway.
00:10:56.040 I mean, who's going to determine what the values are, and what if not everybody's on the same page,
00:11:00.960 and what if we do it, but it gets into the wrong hands, and people misuse it, and so on.
00:11:07.540 But let me just stick with the gorilla thing. That's interesting.
00:11:09.920 So, I mean, because I've heard you use the example of the tiger, too.
00:11:13.140 The reason the tiger gets in the cage and can be controlled by us is because we have superior intelligence to it.
00:11:19.420 So it may be more powerful, more lethal, but we're smarter, and so we can trick the tiger into the cage and keep it there.
00:11:26.660 And the same is true of the gorilla.
00:11:28.520 And so in the scenario where we have a super intelligent machine, we're the gorilla.
00:11:33.020 Well, that would be one type of scenario or one type of risk that could arise from future advances in AI,
00:11:40.760 that the AI itself somehow takes over or runs amok or is poorly aligned.
00:11:47.880 I think there are also scenarios in which we maybe manage to tie it to our purposes,
00:11:53.540 but then we do with it as we have done with practically every other general-purpose technology in human history,
00:12:01.840 that we've also used it for a lot of bad ends to oppress each other, to wage war against each other.
00:12:09.400 And so that's another way in which advances in AI could turn out to be harmful
00:12:15.000 if they become a means of kind of amplifying human conflict,
00:12:18.520 or if they empower more people to develop other dangerous technologies,
00:12:24.820 like maybe you could use AI to more rapidly invent new biological warfare agents
00:12:29.760 or something like that, that might proliferate.
00:12:32.700 So I think there are several distinct classes of dangers that one would have to be aware of
00:12:39.160 as we move into this future.
00:12:42.780 Well, I know, I mean, you think that if you're the creator of it, you can control it, right?
00:12:47.800 You can program it such that it won't get smarter than you, and it won't...
00:12:51.580 How could you... I look at the computer on my desk, how could it ever control me?
00:12:54.600 How is that... It doesn't seem possible that...
00:12:57.460 Because you're not talking about robots running around, you know, threatening us with knives and guns.
00:13:01.340 You're talking about this thing, this thing sitting on the desk,
00:13:04.780 getting smart enough that somehow it's controlling humans.
00:13:07.500 And you think about that in the abstract, and you think, how could that ever...
00:13:10.400 That doesn't make any sense.
00:13:11.420 How could this thing sitting on my desktop ever control me?
00:13:14.080 Yeah, I mean, presumably not the thing that actually sits on the desktop now.
00:13:18.120 But, I mean, you're right.
00:13:19.520 It's easy enough to not develop superintelligence for any one individual or group.
00:13:24.800 But I think it's likely that we as a civilization will nevertheless do it.
00:13:29.940 And I think actually, probably we should be doing it.
00:13:33.780 I see it kind of as this portal in a sense that all plausible paths to a really great future
00:13:40.720 will eventually go through.
00:13:42.660 Now, it might be that it would be wise for us to go a little bit more slowly as we approach this gate,
00:13:49.760 so we don't kind of slam into the wall on the side.
00:13:52.140 We certainly should be very careful with this transition.
00:13:54.880 But I think it's kind of unrealistic to think that everybody, all the different countries,
00:14:00.760 all the different labs would decide to refrain from pushing forward with this,
00:14:05.660 when it has such enormous potential for positive applications in the economy,
00:14:10.720 for medicine, for security, for arts and entertainment,
00:14:14.680 for practically any area at all where human intelligence is useful,
00:14:19.040 which is pretty much any area.
00:14:20.360 So I think our focus should be not so much should we do it or not,
00:14:24.700 but like how can we position ourselves in the best possible ways?
00:14:31.520 Do the research in advance, say on how to align, to find scalable methods for AI alignment,
00:14:38.500 try as much as we can to build cooperative institutions and norms and practices
00:14:43.160 around the deployment of AI, and then proceed cautiously.
00:14:48.160 But can you walk us through that scenario for people who don't,
00:14:52.160 I mean, this is a big concept for folks who don't work in your field.
00:14:55.740 How could it ever be that the machines would take over?
00:14:58.540 I mean, I know you've spoken about, look, it could happen.
00:15:01.580 They could be controlling us.
00:15:02.940 They could control all the other computers and things.
00:15:05.480 Humanity could cease to exist, and we need to be cognizant of this possibility.
00:15:10.240 But how?
00:15:12.340 Well, I mean, so if we look at, say,
00:15:14.800 humans have caused a lot of mischief over the course of history,
00:15:18.620 that it's for the most part not because they used their own personal bodily strength
00:15:24.960 to wield a sword and go around chopping people's heads off.
00:15:28.020 It's that they've used maybe their pen or their voice to issue commands,
00:15:31.560 to persuade others, and then thereby to exert great influence.
00:15:35.160 So those modes of action would be available even just to a laptop, right, sitting on a desk.
00:15:43.720 If it could print text on a screen, I think that's already enough
00:15:46.560 for a sufficiently great intelligence to be very powerful.
00:15:51.320 But of course, there is no reason to think it would have to stop with these indirect methods.
00:15:57.400 You could maybe persuade humans to be your arms and legs to do your work in some lab
00:16:05.280 to develop different robotic systems that you could use or hack into
00:16:12.480 or maybe develop some kind of nanotechnology
00:16:15.400 that would then give you more direct access to the world.
00:16:19.960 I think there are many ways with a sufficient level of intelligence
00:16:24.200 to kind of think above and around and through humans and achieve your ends.
00:16:31.960 It's also likely that if we develop this, we would want to give them access to a lot of stuff
00:16:36.620 because that would make it more useful, right?
00:16:38.860 If you could have an AI that drives your car, that's more useful than an AI
00:16:42.500 that just sits and tells you how to drive the car.
00:16:46.440 If it could run your factories, if it could pilot your airplanes.
00:16:50.020 Maybe we will have a lot of robots by the time this transition happens
00:16:54.860 so that there would be an even more ready-made infrastructure for it to tap into.
00:17:00.080 Well, so let's talk about the factories
00:17:01.680 because I've heard you say that this super intelligent computer
00:17:05.040 or these computers could, quote,
00:17:07.480 create nanofactories covertly distributed at undetectable concentrations
00:17:12.140 in every square meter of the globe
00:17:14.840 that would produce a worldwide flood of human-killing devices on command
00:17:19.760 and that AI would then achieve world domination.
00:17:23.580 What?
00:17:25.280 That doesn't sound good.
00:17:28.200 No.
00:17:29.000 And I think there is a kind of...
00:17:34.500 I mean, it's kind of almost by definition impossible for us
00:17:37.060 to know exactly what the best strategy would be
00:17:39.580 that would come into view
00:17:40.840 if you were a malicious super-intelligence
00:17:43.320 because kind of by definition it can think
00:17:45.400 much more deeply in the strategic space than we can.
00:17:49.840 But I think what that particular scenario is meant to illustrate
00:17:53.140 is the idea that one of the things
00:17:55.500 that a super-intelligence certainly could do
00:17:57.240 would be to invent new technologies
00:17:59.780 that we can already see are physically possible
00:18:03.700 that we haven't yet, however, been able to actually manufacture and build
00:18:08.380 because they involve a lot of detail work
00:18:10.800 to kind of figure out the specifics.
00:18:13.240 But if research were done on a digital timescale
00:18:16.100 rather than on a kind of slow biological human timescale,
00:18:18.740 then these futuristic technologies
00:18:21.340 might become available quite quickly
00:18:23.900 after you have super-intelligence.
00:18:27.100 And then using those futuristic technologies
00:18:30.160 would possibly be one way to leverage its power
00:18:33.020 and get the kind of advantage.
00:18:35.820 It's not the only one, but I think that's one possible path.
00:18:40.080 Whether it would be specifically by developing nanorobots
00:18:43.840 or whether there's some other technologies
00:18:45.520 that we haven't yet thought of,
00:18:47.080 I think we'll just have to be agnostic about that.
00:18:49.840 It makes me think of the movie War Games
00:18:51.780 to go back to the 1980s when I grew up.
00:18:54.940 And in War Games, they created this computer
00:18:59.160 that could help with nuclear war
00:19:02.960 and planning out the war games
00:19:04.620 that the United States was going to be engaged in
00:19:06.540 with presumably Russia.
00:19:08.760 And they couldn't stop it.
00:19:11.280 The computer sort of got a mind of its own.
00:19:13.720 It started, it was going to launch the missiles anyway,
00:19:16.840 even when they had figured out how to, like,
00:19:18.440 turn off the computer.
00:19:20.560 But when they were trying to deal with it,
00:19:22.200 first one guy says,
00:19:23.080 why don't you just unplug the damn thing, right?
00:19:25.380 And that wasn't going to work.
00:19:26.900 And then even when they found a way
00:19:29.280 of telling it to stand down,
00:19:31.160 it wouldn't stand down.
00:19:32.620 It had a mind of its own and it kept going.
00:19:34.680 Just to bring it to sort of an example
00:19:36.480 that a lot of people may have seen.
00:19:38.220 Is that basically what we're talking about?
00:19:39.900 That once, and you've said before,
00:19:41.980 it may not be possible
00:19:42.920 to put the genie back in the bottle.
00:19:44.460 Once we create the thing,
00:19:45.880 it's not going to be so simple to just unplug it
00:19:48.740 or tell it not to do the thing
00:19:50.500 that we find awful.
00:19:51.640 Yeah, I think like,
00:19:53.940 I mean, imagine if, like,
00:19:55.000 the apes that we evolved from were still around
00:19:59.280 and they thought,
00:19:59.980 well, maybe these humans were a bad idea.
00:20:02.000 Like it would kind of be hard for them
00:20:03.380 to unwind what had happened.
00:20:07.160 And similarly,
00:20:07.820 if you have some super intelligence,
00:20:09.660 once in existence,
00:20:10.480 it might have strategic incentives
00:20:12.120 to avoid us shutting it down.
00:20:16.880 And so if it's very skillful
00:20:18.440 at achieving its goals,
00:20:19.680 then in particular,
00:20:20.860 it would be skillful
00:20:22.200 at achieving this goal
00:20:23.640 of preventing its own shutdown.
00:20:25.920 Or maybe it will make surreptitious copies
00:20:27.760 in other computer systems
00:20:28.960 so that it doesn't matter
00:20:29.840 if sort of the original is terminated
00:20:32.600 or spawn other sub-agents
00:20:35.380 that can do,
00:20:37.140 you know,
00:20:37.680 execute on its preferences.
00:20:38.880 So I think we shouldn't rely on that
00:20:42.140 as the sole method
00:20:43.340 of ensuring a safe future
00:20:45.440 that we build systems
00:20:46.720 we don't bother to align,
00:20:48.000 just to see what they do,
00:20:49.360 and then planning,
00:20:50.500 well,
00:20:51.020 if things go wrong,
00:20:51.780 we just unplug them.
00:20:52.760 I think we need a better strategy than that.
00:20:55.540 You know,
00:20:55.760 when I was reading up
00:20:57.000 on some of the things
00:20:58.120 that you've written and said,
00:20:59.680 it made me feel like
00:21:01.240 here in America,
00:21:02.100 we've had,
00:21:02.860 you know,
00:21:03.120 a couple hundred years
00:21:03.880 of feeling pretty well protected
00:21:05.220 from the world
00:21:06.000 thanks to these oceans
00:21:07.040 that surround our country,
00:21:09.080 you know,
00:21:09.260 on the East and the West Coast.
00:21:10.760 And obviously with nuclear weapons,
00:21:12.720 that's less the case,
00:21:14.180 but we've reached a detente
00:21:16.160 with other nuclear powers
00:21:17.900 that we understand
00:21:18.800 it would be mutually assured destruction
00:21:20.200 and we don't launch those
00:21:21.280 for the most part.
00:21:24.240 This threat doesn't recognize
00:21:26.920 oceans,
00:21:27.960 boundaries.
00:21:28.720 This makes everyone accessible.
00:21:30.520 If as you've posited,
00:21:33.080 the supercomputer,
00:21:34.740 the super intelligence,
00:21:35.520 can create drones
00:21:38.620 that come right to your doorstep
00:21:40.140 and drop a bomb
00:21:40.940 or create an office robot
00:21:42.800 that may be cleaning
00:21:44.260 the carpets at night,
00:21:45.580 but then assassinate the CEO
00:21:47.320 when they turn around.
00:21:48.200 Like,
00:21:48.560 there's all sorts of nefarious ways
00:21:50.900 in which it could be unleashed
00:21:52.580 on people worlds away.
00:21:56.020 Yeah,
00:21:56.240 I think with respect
00:21:57.860 to super intelligence,
00:21:58.800 like,
00:21:59.080 yeah,
00:21:59.340 I think we're all kind of
00:22:00.740 eggs in the same basket.
00:22:02.020 At least with respect
00:22:04.980 to this class of dangers
00:22:07.400 that arise from the AI itself
00:22:09.100 doing something
00:22:10.660 that is out of line
00:22:12.480 with its creator's intention.
00:22:15.160 So,
00:22:16.120 yeah,
00:22:16.360 I think we have a common cause there
00:22:17.860 to try to figure out
00:22:19.420 how to align these systems.
00:22:21.700 So,
00:22:22.920 and I mean,
00:22:23.800 I'm reasonably hopeful
00:22:25.300 about that.
00:22:25.880 When I was writing
00:22:26.740 this book of mine,
00:22:28.380 I think it came out
00:22:29.220 in 2014,
00:22:30.500 this was an almost
00:22:32.200 entirely neglected field.
00:22:33.520 It looked like
00:22:34.020 we were moving
00:22:35.200 towards developing
00:22:37.000 the most important
00:22:37.860 technology ever
00:22:38.740 and hardly anybody
00:22:40.020 was thinking about
00:22:40.760 what would happen
00:22:41.420 if we were succeeding
00:22:42.600 in this goal of AI,
00:22:44.700 which has all along been
00:22:45.320 to not just do specific tasks,
00:22:47.340 but to make machines
00:22:48.800 generally smart like humans.
00:22:49.920 But it was like
00:22:51.180 that was such a radical goal
00:22:52.460 that the imagination
00:22:54.340 exhausted itself
00:22:55.260 in just conceiving
00:22:56.020 of this possibility
00:22:56.920 of matching humans
00:22:57.920 that it couldn't take
00:22:59.040 the obvious next step
00:23:00.040 that if we reach that
00:23:00.860 we will have super intelligence
00:23:01.980 and then thinking
00:23:02.680 about the consequences.
00:23:04.260 So,
00:23:04.760 yeah,
00:23:05.060 drawing attention to that
00:23:06.020 was a big part
00:23:07.420 of the reason
00:23:07.860 for writing this book.
00:23:10.360 But since then,
00:23:11.580 there has now sprung up
00:23:12.580 a kind of technical subfield
00:23:13.880 of people doing
00:23:14.920 serious research
00:23:16.060 and actually trying
00:23:16.720 to figure out
00:23:17.240 how to align
00:23:18.640 arbitrarily capable
00:23:22.060 AI systems
00:23:22.860 by harnessing
00:23:24.640 their ability
00:23:25.280 to learn
00:23:25.980 to make them
00:23:28.500 better able
00:23:28.940 to learn
00:23:29.660 and understand
00:23:30.360 what our intentions
00:23:32.700 are when we ask
00:23:33.560 them to do something
00:23:34.240 or what
00:23:34.720 to train them
00:23:35.640 on specific tasks.
00:23:37.540 That's crazy
00:23:37.880 that that was only
00:23:38.460 seven years ago
00:23:39.300 that this wasn't
00:23:40.360 even being discussed
00:23:42.180 that seriously
00:23:43.200 by those academics
00:23:44.960 and so on
00:23:45.440 who are now
00:23:45.960 taking such a hard
00:23:46.540 look at it.
00:23:46.880 Meanwhile,
00:23:47.680 this is probably
00:23:48.940 going to be an industry
00:23:49.600 that employs
00:23:50.240 many of our children,
00:23:51.900 grandchildren,
00:23:52.580 and so on.
00:23:53.620 Yeah,
00:23:53.860 I mean,
00:23:54.140 I think,
00:23:54.840 I mean,
00:23:56.560 certainly if there are
00:23:57.660 advances in AI,
00:23:58.440 it's going to have
00:23:58.860 a big economic impact.
00:23:59.920 I mean,
00:24:00.060 it might be that
00:24:00.700 if you get super intelligence,
00:24:01.920 then the effects
00:24:03.000 on employment
00:24:04.400 will be,
00:24:06.060 I mean,
00:24:06.340 at some point,
00:24:07.120 like if you have
00:24:07.740 sufficiently generally
00:24:08.980 capable AI,
00:24:10.800 basically all jobs
00:24:11.780 become automatable.
00:24:12.860 So I think
00:24:16.740 in a good scenario,
00:24:19.200 I mean,
00:24:19.480 in some sense,
00:24:20.060 the goal
00:24:20.440 is full unemployment,
00:24:21.820 right?
00:24:22.120 So the idea
00:24:23.080 is to try
00:24:23.480 to develop
00:24:24.320 technologies
00:24:24.980 so powerful
00:24:27.740 that we don't
00:24:29.640 have to do
00:24:30.380 stuff we don't
00:24:31.480 like to do.
00:24:33.440 And so if you
00:24:34.460 define work
00:24:35.100 as the kind
00:24:35.680 of things
00:24:35.980 people have
00:24:36.940 to pay you
00:24:37.400 to do,
00:24:37.680 then,
00:24:38.700 yeah,
00:24:40.400 almost all of that
00:24:41.320 could theoretically
00:24:42.400 be done
00:24:42.820 by a sufficiently
00:24:43.560 capable AI system.
00:24:45.460 It sounds totally
00:24:46.300 unfulfilling.
00:24:47.020 It sounds awful.
00:24:48.980 Well,
00:24:49.660 I think it would
00:24:50.780 be a situation
00:24:51.380 where we would
00:24:51.980 have to rethink
00:24:53.060 a lot of
00:24:53.660 our assumptions
00:24:55.100 about what it
00:24:55.960 means to be human
00:24:56.640 kind of from
00:24:57.160 the ground up.
00:25:00.100 I actually believe
00:25:01.300 that there would
00:25:01.800 be some extremely
00:25:02.440 wonderful possibilities
00:25:03.700 that would be
00:25:04.980 unlocked
00:25:05.380 by this.
00:25:07.160 but it
00:25:09.620 would require
00:25:10.360 a pretty,
00:25:11.140 yeah,
00:25:12.760 grounds-up
00:25:13.180 rethink.
00:25:15.000 We would,
00:25:15.840 for example,
00:25:16.300 have to find
00:25:18.380 our dignity
00:25:18.960 and meaning
00:25:19.920 in life,
00:25:20.640 not in what
00:25:22.560 we do for a
00:25:23.520 living or
00:25:23.980 like being a
00:25:24.660 breadwinner,
00:25:26.000 but in
00:25:28.440 other areas,
00:25:29.660 in relationships,
00:25:30.960 in hobbies,
00:25:31.740 in things we
00:25:33.340 do for their
00:25:33.780 own sake,
00:25:34.300 rather than as
00:25:34.880 a means to
00:25:35.440 some other
00:25:37.140 end.
00:25:39.360 But,
00:25:40.220 yeah,
00:25:40.560 I mean,
00:25:40.860 that I think
00:25:41.340 would be a
00:25:41.820 kind of high
00:25:42.540 quality problem
00:25:43.260 to have for us.
00:25:44.680 I think first
00:25:45.200 we need to
00:25:45.560 make sure
00:25:45.880 we don't
00:25:48.180 kind of
00:25:48.940 crash into
00:25:49.640 something on
00:25:50.180 the way there.
00:25:51.520 Well,
00:25:52.060 and before we
00:25:52.540 get to sort
00:25:52.940 of the
00:25:53.640 benefits,
00:25:54.840 let's talk
00:25:56.200 about the
00:25:56.620 possibility of
00:25:57.700 a terrorist
00:25:58.380 getting a
00:25:59.740 hold of
00:26:00.500 this technology
00:26:01.220 if we create
00:26:02.440 it or it
00:26:03.020 creates itself
00:26:03.880 from something
00:26:04.800 we've created
00:26:05.520 or even
00:26:06.900 an actor
00:26:08.080 like China,
00:26:08.880 which is very
00:26:09.460 advanced in
00:26:10.060 the AI field
00:26:10.860 and our
00:26:11.580 defense secretary
00:26:12.320 has made clear
00:26:13.000 that this is
00:26:14.100 an area in
00:26:14.620 which we're
00:26:15.700 equal.
00:26:16.320 We're not,
00:26:16.920 at best,
00:26:17.660 we're equal
00:26:18.060 with China.
00:26:19.300 It's not like
00:26:19.920 our military
00:26:20.500 is so much
00:26:21.340 more powerful
00:26:21.800 than theirs.
00:26:22.400 It is,
00:26:22.940 but I'm just
00:26:23.300 saying in this
00:26:23.720 department,
00:26:24.420 which is a
00:26:24.800 potential security
00:26:25.800 threat,
00:26:26.940 they're on par
00:26:27.860 with us and
00:26:29.020 they're working
00:26:29.580 it and they aim
00:26:30.160 to be the world
00:26:30.680 leader in AI.
00:26:31.740 And we don't
00:26:32.760 trust China
00:26:33.320 for good
00:26:34.000 reason.
00:26:34.380 So we do
00:26:35.480 need to be
00:26:35.880 worried about
00:26:36.300 what they're
00:26:36.700 going to
00:26:36.900 create,
00:26:37.980 not to
00:26:38.780 mention,
00:26:39.100 as I say,
00:26:39.520 somebody more
00:26:40.520 nefarious like
00:26:41.200 a terrorist
00:26:41.600 actor.
00:26:42.140 So what is
00:26:43.020 the likelihood
00:26:43.440 of that?
00:26:44.660 I think at
00:26:45.620 present,
00:26:47.460 the West
00:26:47.840 is ahead
00:26:48.940 in AI,
00:26:50.380 certainly in
00:26:51.040 this kind
00:26:51.540 of basic
00:26:52.120 research of
00:26:53.720 trying to
00:26:54.160 develop general
00:26:56.000 artificial
00:26:57.000 intelligence.
00:26:58.680 But it's
00:26:59.200 not a huge
00:27:00.080 lead.
00:27:00.300 It's not
00:27:00.580 20 years
00:27:01.760 ahead or
00:27:02.120 something like
00:27:02.560 that.
00:27:02.760 The field
00:27:03.680 is very
00:27:04.240 open.
00:27:05.780 Researchers
00:27:06.120 publish their
00:27:07.120 findings and
00:27:08.240 so other
00:27:08.780 teams can
00:27:09.360 catch up
00:27:09.880 within six
00:27:10.860 months or
00:27:11.320 a year or
00:27:12.320 so.
00:27:14.640 I'm not so
00:27:15.560 worried really
00:27:16.260 about terrorists
00:27:17.880 using AI
00:27:19.740 for particular
00:27:21.060 things.
00:27:21.560 I would be
00:27:22.060 more worried
00:27:22.520 about terrorists
00:27:23.080 using, say,
00:27:24.360 biological
00:27:24.900 weapons,
00:27:26.180 which at
00:27:28.280 the moment
00:27:28.660 would be a
00:27:29.900 lot more
00:27:30.260 destructive
00:27:30.660 and are
00:27:32.400 also becoming
00:27:33.100 much easier
00:27:34.180 to use
00:27:35.020 or obtain
00:27:35.500 through
00:27:36.200 advances in
00:27:36.880 synthetic
00:27:37.220 biology.
00:27:40.520 But it is
00:27:41.680 plausible that
00:27:42.620 AI will
00:27:43.140 become one
00:27:44.140 dimension of
00:27:45.260 great
00:27:46.520 power
00:27:46.940 competition
00:27:47.400 as it
00:27:50.220 becomes an
00:27:50.820 increasingly
00:27:51.260 important both
00:27:52.100 economic
00:27:52.700 factor and
00:27:54.460 also
00:27:54.880 factor in
00:27:57.680 national
00:27:58.120 security.
00:28:00.500 Because I
00:28:00.700 know you've
00:28:00.980 said that
00:28:01.960 the first
00:28:03.340 super
00:28:04.300 intelligence to
00:28:04.940 be created
00:28:05.460 will have a
00:28:06.260 decisive
00:28:06.760 first mover
00:28:08.220 advantage,
00:28:08.680 that there
00:28:09.200 will be a
00:28:09.600 lot of
00:28:09.940 power in
00:28:10.880 being the
00:28:11.180 first one
00:28:11.600 to come
00:28:11.920 up with
00:28:12.220 it.
00:28:12.680 And so,
00:28:13.540 I mean,
00:28:13.780 how worried
00:28:14.280 should we be
00:28:14.820 that somebody
00:28:15.360 not all that
00:28:16.140 friendly to
00:28:16.560 the United
00:28:16.840 States will be
00:28:17.500 the person
00:28:17.880 who has it?
00:28:18.780 Yeah,
00:28:19.780 well,
00:28:20.020 I mean,
00:28:20.280 it's possible
00:28:20.980 that it
00:28:21.340 would have
00:28:21.680 this
00:28:21.980 decisive
00:28:22.600 first mover
00:28:23.200 advantage.
00:28:23.860 I'm not at
00:28:25.300 all sure
00:28:25.700 about that.
00:28:26.240 You could
00:28:26.540 also imagine
00:28:27.180 scenarios in
00:28:27.880 which the
00:28:28.900 transition
00:28:29.460 happens a
00:28:30.100 bit more
00:28:30.460 gradually.
00:28:31.800 If it's not
00:28:32.820 like an
00:28:33.180 overnight or
00:28:33.840 overweek
00:28:34.280 thing where
00:28:34.720 you get
00:28:35.000 from human
00:28:35.460 to radical
00:28:35.980 super
00:28:36.380 intelligence,
00:28:36.940 but suppose
00:28:37.460 it takes
00:28:37.900 several years,
00:28:38.780 then you
00:28:39.260 could easily
00:28:39.680 have multiple
00:28:40.940 labs or
00:28:43.180 countries
00:28:44.040 being more
00:28:46.260 or less
00:28:46.620 going through
00:28:47.700 this
00:28:48.080 transition
00:28:49.840 in tandem
00:28:50.340 and you
00:28:51.480 might then
00:28:51.780 have a
00:28:52.040 multipolar
00:28:52.500 outcome.
00:28:54.620 But,
00:28:55.200 yeah,
00:28:55.400 I do think
00:28:55.860 there's potential
00:28:56.620 for
00:28:57.780 exacerbating
00:28:59.860 conflicts
00:29:00.460 of different
00:29:01.080 kinds
00:29:01.480 or empowering
00:29:03.020 say
00:29:03.560 despots
00:29:05.200 to,
00:29:05.740 you know,
00:29:06.800 make themselves
00:29:07.400 more immune
00:29:08.060 from overthrow
00:29:08.760 through intelligence
00:29:10.320 applications,
00:29:11.360 surveillance
00:29:11.600 applications,
00:29:12.440 and so forth.
00:29:12.980 That is certainly
00:29:13.660 one concern.
00:29:16.540 I think
00:29:17.580 it will
00:29:18.580 be important
00:29:20.340 to try
00:29:20.700 to manage
00:29:21.180 that both
00:29:21.820 to kind
00:29:22.680 of avoid
00:29:23.740 conflicts
00:29:24.640 on that,
00:29:25.020 but also
00:29:25.340 because I
00:29:25.840 think it
00:29:26.120 might make
00:29:26.600 the first
00:29:28.300 danger
00:29:29.000 harder to
00:29:30.600 avoid,
00:29:31.200 the danger
00:29:32.000 coming from
00:29:32.480 the AI
00:29:32.820 itself.
00:29:33.840 Like,
00:29:34.040 if you're
00:29:34.280 thinking about
00:29:34.840 this,
00:29:35.220 suppose you
00:29:36.820 are like
00:29:37.320 some researchers,
00:29:38.540 you've like
00:29:38.920 got to the
00:29:39.860 point where
00:29:40.440 you have
00:29:40.820 something almost
00:29:41.460 human level,
00:29:42.060 you think
00:29:42.360 with a bit
00:29:42.760 more work,
00:29:43.340 we can make
00:29:43.820 it super
00:29:44.200 intelligent.
00:29:44.560 And ideally
00:29:46.020 at this
00:29:46.480 point,
00:29:46.740 you would
00:29:47.060 really want
00:29:47.820 to go
00:29:48.080 slow,
00:29:48.540 right?
00:29:48.780 And really
00:29:49.440 check everything,
00:29:50.740 double check it,
00:29:51.720 make sure it's
00:29:52.340 all right,
00:29:53.560 increment it
00:29:54.280 step by step,
00:29:55.680 not just
00:29:56.140 turning on the
00:29:56.820 gas full throttle.
00:29:58.560 And maybe
00:29:59.160 over several
00:30:00.040 years,
00:30:01.220 like trying to
00:30:01.820 do this while
00:30:02.760 having a lot
00:30:03.300 of people
00:30:03.680 helping you
00:30:04.260 and looking
00:30:04.640 over what
00:30:05.620 you're doing
00:30:05.980 to make
00:30:06.320 sure it's
00:30:06.640 right.
00:30:07.000 But if you
00:30:07.460 are in
00:30:07.700 some kind
00:30:08.100 of arms
00:30:08.940 race,
00:30:09.560 then that
00:30:11.180 might be
00:30:11.500 very hard
00:30:11.900 to do.
00:30:12.160 It might
00:30:12.780 basically mean
00:30:13.820 that if
00:30:14.140 you go
00:30:14.400 slow,
00:30:14.780 it just
00:30:15.080 means you
00:30:15.440 lose the
00:30:15.800 race and
00:30:16.140 become
00:30:16.380 irrelevant.
00:30:16.960 So you
00:30:17.360 feel forced
00:30:18.400 to rush
00:30:18.840 ahead as
00:30:20.140 quickly as
00:30:20.540 you can,
00:30:20.940 and then
00:30:21.300 you throw
00:30:21.780 caution to
00:30:22.320 the wind,
00:30:22.780 and then
00:30:23.120 this risk
00:30:24.180 from the
00:30:24.600 AI itself
00:30:25.200 creating
00:30:26.760 destruction
00:30:27.920 will increase.
00:30:29.520 So the
00:30:29.880 two problems
00:30:30.400 are connected.
00:30:31.840 Yeah,
00:30:31.940 it's kind
00:30:32.300 of like a
00:30:32.580 Dr.
00:30:32.860 Frankenstein
00:30:33.420 situation.
00:30:34.580 It's a
00:30:34.720 Frankenstein
00:30:35.120 situation,
00:30:35.680 right,
00:30:35.820 where the
00:30:36.180 entity you
00:30:36.720 create becomes
00:30:37.900 super dangerous
00:30:38.800 and turns on
00:30:39.460 you,
00:30:40.220 even though
00:30:40.820 we're so
00:30:41.580 full of
00:30:41.920 hubris,
00:30:42.440 I think
00:30:42.700 most humans
00:30:44.080 would believe
00:30:44.740 that they
00:30:45.060 could continue
00:30:45.600 controlling a
00:30:46.880 machine.
00:30:48.140 Again,
00:30:48.720 it's hard,
00:30:49.260 I think,
00:30:50.200 to conceptualize
00:30:51.300 that the
00:30:51.640 machine I
00:30:52.280 control now
00:30:53.040 will someday
00:30:53.700 have the
00:30:54.200 capability of
00:30:55.080 controlling me,
00:30:56.220 but I know
00:30:57.020 you've said
00:30:57.400 right now
00:30:58.000 the potential
00:30:58.720 for this
00:30:59.140 super intelligence,
00:31:00.300 right now
00:31:00.840 it's lying
00:31:01.320 dormant,
00:31:02.260 but it's
00:31:02.900 akin to the
00:31:03.620 power of
00:31:04.160 the atom
00:31:04.640 and how
00:31:05.440 it laid
00:31:06.380 dormant
00:31:06.880 through much
00:31:07.500 of our
00:31:07.820 human history
00:31:08.620 until 1945,
00:31:10.120 in which
00:31:11.200 case it
00:31:11.580 was very
00:31:12.320 much not
00:31:12.880 dormant and
00:31:14.280 we saw
00:31:14.780 its power
00:31:15.580 in really
00:31:16.740 raw and
00:31:17.420 disturbing
00:31:17.820 ways.
00:31:19.740 Yeah,
00:31:20.040 I think
00:31:20.420 in general
00:31:21.100 we are
00:31:24.260 kind of
00:31:24.660 reaching into
00:31:25.960 this giant
00:31:26.480 urn of
00:31:26.960 invention.
00:31:27.780 This is
00:31:28.200 almost like
00:31:28.960 the picture
00:31:29.360 of human
00:31:29.740 history.
00:31:30.300 We reached
00:31:31.340 in,
00:31:31.640 we pulled
00:31:31.960 out one
00:31:32.360 ball after
00:31:32.880 another,
00:31:33.240 one idea,
00:31:33.980 one technology,
00:31:34.620 and I
00:31:35.820 think we've
00:31:36.160 kind of
00:31:36.380 been lucky
00:31:36.860 so far
00:31:37.360 in that
00:31:37.700 for the
00:31:38.080 most part
00:31:38.640 the net
00:31:39.640 effect of
00:31:40.220 all this
00:31:40.640 technological
00:31:41.580 progress has
00:31:42.300 been hugely
00:31:42.760 positive.
00:31:44.760 But if
00:31:46.120 there is a
00:31:46.520 black ball
00:31:47.080 in this
00:31:47.440 urn,
00:31:47.860 some
00:31:48.360 technology,
00:31:49.300 there could
00:31:49.600 be some
00:31:49.920 technology
00:31:50.360 that is
00:31:50.860 just such
00:31:51.700 that it
00:31:52.420 invariably
00:31:53.440 destroys
00:31:54.040 the
00:31:54.400 civilization
00:31:55.000 that
00:31:55.480 invents
00:31:56.920 it.
00:31:57.840 It looks
00:31:58.540 like we're
00:31:58.920 just going
00:31:59.220 to keep
00:31:59.660 reaching
00:32:00.080 into this
00:32:00.540 urn
00:32:00.860 until we
00:32:02.380 get the
00:32:02.640 black ball
00:32:03.040 if it's
00:32:03.360 in there.
00:32:03.860 And while
00:32:04.620 we have
00:32:04.940 developed a
00:32:05.800 great ability
00:32:06.360 to pull
00:32:07.440 balls out
00:32:08.240 of the urn,
00:32:08.680 we don't
00:32:09.100 have an
00:32:09.460 ability to
00:32:09.940 put them
00:32:10.240 back in.
00:32:10.720 We can't
00:32:11.220 uninvent
00:32:11.680 our
00:32:11.920 inventions.
00:32:13.300 So it
00:32:14.780 looks like
00:32:15.260 our strategy
00:32:15.880 such as
00:32:16.360 it is, is
00:32:16.900 basically
00:32:17.480 just to
00:32:17.880 hope that
00:32:18.420 there is
00:32:18.940 no black
00:32:19.360 ball in
00:32:19.720 the urn.
00:32:21.720 And I
00:32:22.160 think not
00:32:22.520 just AI
00:32:23.000 but some
00:32:24.120 other
00:32:24.560 technologies
00:32:25.340 as well
00:32:25.760 could be
00:32:26.440 potential
00:32:27.480 black
00:32:28.800 balls.
00:32:29.360 I alluded
00:32:30.140 to
00:32:30.520 synthetic
00:32:31.140 biology
00:32:31.580 before,
00:32:33.360 which is
00:32:33.980 one area
00:32:34.460 where we
00:32:35.500 might discover
00:32:36.300 means that
00:32:36.940 would make
00:32:37.280 it a lot
00:32:37.600 easier to
00:32:38.220 create
00:32:38.560 highly
00:32:39.940 enhanced
00:32:40.440 pathogens.
00:32:42.020 In some
00:32:42.440 sense we
00:32:42.740 were lucky
00:32:43.180 with nuclear
00:32:43.660 weapons.
00:32:44.120 They are
00:32:44.260 enormously
00:32:44.640 destructive
00:32:45.200 but at
00:32:45.760 least they
00:32:46.220 are hard
00:32:46.500 to make.
00:32:47.680 You need
00:32:48.220 highly enriched
00:32:49.060 uranium or
00:32:49.600 plutonium to
00:32:50.340 be able to
00:32:50.800 make a
00:32:51.120 nuclear bomb.
00:32:52.480 And that
00:32:52.860 requires large
00:32:53.900 facilities, huge
00:32:54.740 amounts of
00:32:55.180 energy, really
00:32:55.940 only states
00:32:56.620 can do
00:32:57.040 this.
00:32:57.960 But suppose
00:32:59.360 it had
00:32:59.720 turned out
00:33:00.200 that there
00:33:00.560 were an
00:33:00.800 easier way
00:33:01.280 to do
00:33:01.580 this, to
00:33:02.060 unleash
00:33:02.400 the
00:33:03.140 energy
00:33:03.500 of the
00:33:03.760 atom.
00:33:04.680 Before we
00:33:05.300 actually did
00:33:05.780 the relevant
00:33:06.200 nuclear
00:33:06.580 physics, how
00:33:07.260 could we
00:33:07.580 have known
00:33:07.880 how it
00:33:08.200 would turn
00:33:08.460 out?
00:33:08.800 But if
00:33:09.520 there had
00:33:09.720 been some
00:33:10.040 easy way,
00:33:10.660 like
00:33:11.600 baking
00:33:12.280 sand in the
00:33:12.920 microwave
00:33:13.260 oven between
00:33:13.860 two nickel
00:33:15.460 plates or
00:33:15.960 something like
00:33:16.440 that, then
00:33:17.100 that might
00:33:17.920 have been the
00:33:18.340 end of
00:33:18.620 human
00:33:18.820 civilization
00:33:19.260 once we
00:33:20.420 discovered how
00:33:20.980 to do
00:33:21.220 that.
00:33:21.500 Because then
00:33:21.880 anybody
00:33:22.360 would be able
00:33:24.300 to destroy
00:33:24.860 a city.
00:33:26.480 And in a
00:33:27.300 sufficiently
00:33:27.660 large
00:33:28.080 population,
00:33:28.800 there's
00:33:28.940 always going
00:33:29.320 to be a
00:33:29.680 few
00:33:29.960 individuals who
00:33:31.300 would choose
00:33:31.920 to do
00:33:32.160 that.
00:33:32.400 Whether
00:33:33.040 because they
00:33:33.540 are mad or
00:33:33.980 they have
00:33:34.240 some grudge
00:33:34.780 or they
00:33:35.100 have some
00:33:35.440 extortion
00:33:35.840 scheme or
00:33:36.520 some
00:33:37.080 ideology.
00:33:39.040 So we
00:33:39.820 can't really
00:33:40.240 afford a
00:33:41.200 kind of
00:33:41.620 democratization
00:33:42.940 of the
00:33:44.140 ability to
00:33:45.020 cause mass
00:33:46.380 destruction.
00:33:47.640 But if we
00:33:48.420 discover some
00:33:48.940 easily, you
00:33:49.820 know,
00:33:50.680 implementable
00:33:51.280 recipe for
00:33:51.900 this, then
00:33:53.260 it looks like
00:33:54.080 we are in a
00:33:54.580 pretty dire
00:33:55.480 situation.
00:33:56.060 Up next,
00:33:58.400 how important
00:33:59.020 is it going
00:33:59.400 to be to
00:33:59.820 have
00:34:00.120 ethicists
00:34:01.060 involved in
00:34:02.460 creating these
00:34:03.180 super intelligent
00:34:04.060 beings, right?
00:34:05.040 It's not just
00:34:05.540 the people who
00:34:06.000 can make the
00:34:06.420 machines function,
00:34:07.480 it's those who
00:34:08.020 can lay in
00:34:08.500 some sort of
00:34:08.920 an ethical
00:34:09.340 code, and
00:34:09.860 is that even
00:34:10.280 possible?
00:34:11.120 And then we're
00:34:11.500 going to get
00:34:11.740 into the
00:34:13.040 future of
00:34:13.880 humanity and
00:34:15.060 how it can
00:34:15.520 be helped
00:34:15.960 by technology.
00:34:17.320 This is
00:34:17.700 something that
00:34:18.140 Nick has
00:34:18.400 studied for a
00:34:18.960 long time.
00:34:19.760 Could there
00:34:20.080 be something
00:34:20.540 like an
00:34:20.940 anti-aging
00:34:21.520 pill coming
00:34:22.660 our way?
00:34:23.060 And how
00:34:24.000 far away
00:34:24.960 is that?
00:34:26.000 And also
00:34:26.740 cryogenics.
00:34:27.760 Is he going
00:34:28.260 to freeze
00:34:28.520 himself?
00:34:29.020 And why?
00:34:29.720 And should
00:34:30.040 you?
00:34:30.640 Stay tuned.
00:34:34.640 I know
00:34:35.300 you've said
00:34:36.040 you could
00:34:36.760 use this
00:34:37.460 technology or
00:34:38.440 some bad
00:34:40.040 actor could or
00:34:40.660 the super
00:34:41.100 intelligent
00:34:41.420 computer could
00:34:42.080 for ethnic
00:34:42.620 cleansing.
00:34:43.160 I mean,
00:34:43.280 imagine if
00:34:43.740 Hitler had
00:34:44.160 control over
00:34:45.260 this type of
00:34:45.820 technology where
00:34:46.780 he could target
00:34:47.860 some particular
00:34:49.400 groups this
00:34:50.980 way.
00:34:51.300 It would be
00:34:51.820 efficient.
00:34:52.460 It could be
00:34:52.780 a killing
00:34:53.080 machine and
00:34:53.940 this is why
00:34:55.040 we need
00:34:55.380 ethical people
00:34:56.040 creating the
00:34:56.780 technology if
00:34:57.540 it gets
00:34:57.780 created at
00:34:58.300 all.
00:34:59.200 And that
00:34:59.520 leads me to
00:35:00.380 something I
00:35:00.740 heard recently
00:35:01.220 and I know
00:35:01.520 you've been
00:35:01.880 saying all
00:35:02.620 along,
00:35:03.020 which is not
00:35:04.000 only are we
00:35:04.500 going to need
00:35:04.900 if we're
00:35:05.400 going down
00:35:05.860 this route
00:35:06.280 and we're
00:35:06.540 going to
00:35:06.700 create
00:35:06.980 super
00:35:07.240 intelligence
00:35:07.600 or something
00:35:08.280 less than
00:35:09.140 super
00:35:09.400 intelligent
00:35:09.760 computers
00:35:10.220 for the
00:35:11.500 good that
00:35:11.780 they can
00:35:12.020 do,
00:35:12.780 one of the
00:35:13.280 most important
00:35:13.780 roles we're
00:35:14.280 going to have
00:35:14.740 in this
00:35:14.960 process will
00:35:15.560 be ethicists,
00:35:17.460 philosophers.
00:35:18.600 It's not all
00:35:18.980 about kids
00:35:20.000 now who are
00:35:20.520 in robotics or
00:35:21.540 kids who are
00:35:21.980 somehow trying to
00:35:22.780 study AI.
00:35:23.800 We're going to
00:35:24.580 need people who
00:35:25.240 consider and
00:35:26.700 can even
00:35:27.200 program for
00:35:28.600 the ethics
00:35:29.300 of a well
00:35:32.040 meaning life,
00:35:32.980 of a well
00:35:33.520 meaning existence.
00:35:35.280 Yes, I mean
00:35:36.020 not necessarily
00:35:36.840 people whose
00:35:38.180 jump title are
00:35:39.060 ethicists at
00:35:39.800 some university,
00:35:40.660 but yeah,
00:35:41.200 certainly ethics
00:35:42.580 and other
00:35:45.140 sources of
00:35:46.320 wisdom about
00:35:47.620 what we want
00:35:48.680 and what we
00:35:49.840 should be
00:35:50.740 wanting I
00:35:51.300 think will be
00:35:51.820 important.
00:35:52.100 It's not
00:35:52.540 just a
00:35:52.920 purely
00:35:53.100 technical
00:35:53.540 problem,
00:35:54.020 it's a
00:35:54.240 kind of
00:35:54.600 all of
00:35:55.340 society
00:35:55.820 problem,
00:35:56.360 how to
00:35:56.740 figure out
00:35:57.480 how to
00:35:57.900 create a
00:35:58.540 happy world
00:35:59.100 with this
00:36:00.160 new technology.
00:36:00.900 I mean it
00:36:01.140 will have
00:36:01.540 economic
00:36:02.040 implications,
00:36:02.880 right?
00:36:03.060 We've
00:36:03.320 alluded to
00:36:03.880 the security
00:36:04.340 implications
00:36:04.900 before.
00:36:06.680 And then
00:36:07.440 more cultural
00:36:09.020 dimensions are
00:36:10.220 like what do
00:36:12.280 we want the
00:36:12.740 role of humans
00:36:13.320 to be versus
00:36:13.980 our technology
00:36:15.020 and automation
00:36:16.280 in the future?
00:36:17.520 I think it's
00:36:19.640 like, yeah,
00:36:20.240 it needs to
00:36:20.740 draw on all
00:36:21.220 the different
00:36:21.780 aspects of
00:36:23.480 human wisdom
00:36:24.400 such as it
00:36:24.980 is, it's
00:36:26.360 not much to
00:36:26.980 boast about,
00:36:27.640 but we'll
00:36:29.280 have to do
00:36:29.640 our best at
00:36:30.300 least.
00:36:30.560 Yeah, I think
00:36:31.400 it will require
00:36:32.020 a much wider
00:36:32.740 purview than
00:36:33.440 just a narrow
00:36:34.260 technical focus,
00:36:35.260 although the
00:36:35.660 technical focus
00:36:36.280 also is really
00:36:37.000 important for
00:36:37.640 AI alignment.
00:36:38.180 I've read
00:36:39.180 that many
00:36:40.540 leading
00:36:41.180 researchers in
00:36:42.060 this field
00:36:42.520 say it's
00:36:43.620 extremely likely
00:36:44.460 this will
00:36:44.820 happen
00:36:46.040 as early
00:36:46.840 as 2075
00:36:48.140 to 2090.
00:36:49.680 That's in
00:36:50.360 the lifetime
00:36:50.720 of our
00:36:51.260 children right
00:36:52.020 now.
00:36:52.880 Do you agree
00:36:53.540 with that?
00:36:53.940 Could it
00:36:54.140 happen that
00:36:54.520 soon?
00:36:56.340 Yeah, I
00:36:57.080 think that's
00:36:57.740 a real
00:36:58.160 possibility.
00:36:59.340 That's hard
00:37:00.140 to get your
00:37:00.480 arms around,
00:37:00.960 right?
00:37:01.120 Like my own
00:37:02.060 children alive
00:37:02.960 right now could
00:37:03.540 be dealing
00:37:04.000 with computers
00:37:06.440 in the corners
00:37:07.140 of the world
00:37:07.760 that are
00:37:08.080 trying to
00:37:08.620 erase
00:37:09.480 humanity?
00:37:10.620 Or save
00:37:12.400 humanity,
00:37:13.060 right?
00:37:13.380 I mean,
00:37:13.840 that's the
00:37:15.500 other possibility
00:37:16.280 or help us.
00:37:16.980 Well, I'm
00:37:17.120 not worried
00:37:17.460 about that
00:37:17.860 one.
00:37:18.540 That one
00:37:19.140 sounds good.
00:37:20.520 I mean,
00:37:20.980 that would
00:37:24.340 be what
00:37:24.660 people are
00:37:25.100 aiming for
00:37:25.740 almost
00:37:27.140 universally.
00:37:28.020 Like AI
00:37:29.340 researchers,
00:37:30.140 I suppose,
00:37:31.120 they're
00:37:31.940 usually quite
00:37:34.180 well-meaning
00:37:34.640 and idealistic
00:37:35.300 people.
00:37:35.940 Some are
00:37:36.220 just curious
00:37:36.800 about what
00:37:37.360 they're doing
00:37:37.800 and think
00:37:38.320 it's fun.
00:37:38.920 Some have
00:37:39.320 a kind
00:37:39.640 of general
00:37:40.280 sense of
00:37:41.000 wanting to
00:37:41.900 do something
00:37:42.300 good for
00:37:42.660 the world.
00:37:43.080 So I
00:37:43.300 think the
00:37:44.480 intention
00:37:44.860 is positive
00:37:47.300 and it's
00:37:49.320 just the
00:37:49.660 outcome that
00:37:50.340 there is
00:37:50.740 more uncertainty
00:37:51.260 around.
00:37:52.420 Right,
00:37:53.040 because the
00:37:53.280 dark side
00:37:53.740 is this
00:37:54.920 could be
00:37:55.220 more dangerous
00:37:55.820 than any
00:37:57.240 pandemic,
00:37:58.040 than nukes,
00:37:59.560 than
00:37:59.780 catastrophic
00:38:00.920 climate change.
00:38:02.180 This could be
00:38:02.940 more than an
00:38:04.140 asteroid.
00:38:04.600 This could be
00:38:05.940 the thing
00:38:06.200 we're not
00:38:06.440 paying that
00:38:06.900 much attention
00:38:07.300 to,
00:38:07.580 to your
00:38:07.860 example
00:38:08.300 about what
00:38:08.620 was going
00:38:08.900 on in
00:38:09.140 2014,
00:38:10.400 that really
00:38:11.220 is an
00:38:11.640 existential
00:38:11.960 threat to
00:38:12.700 humanity
00:38:13.480 on the
00:38:14.640 earth.
00:38:16.080 And on
00:38:16.400 that front,
00:38:16.780 one of the
00:38:17.040 things I
00:38:17.340 wondered in
00:38:18.000 reading about
00:38:18.800 you,
00:38:19.040 Nick,
00:38:19.180 is whether
00:38:19.600 do you
00:38:21.140 worry about
00:38:21.600 anything
00:38:22.140 other than
00:38:22.620 this?
00:38:23.040 I mean,
00:38:23.200 if this
00:38:23.680 were my
00:38:24.040 world,
00:38:24.740 that I
00:38:24.960 were immersed
00:38:25.800 in full-time
00:38:26.440 thinking about
00:38:27.000 it,
00:38:27.340 I don't know
00:38:27.760 that I'd
00:38:28.460 worry about
00:38:28.860 anything else.
00:38:30.040 Would I
00:38:30.460 worry about the
00:38:30.980 crime rate?
00:38:31.560 Would I
00:38:31.780 worry about
00:38:32.360 the erosions
00:38:33.540 of free
00:38:33.900 speech?
00:38:34.280 We're seeing
00:38:34.640 government power
00:38:35.580 growing too
00:38:36.100 large.
00:38:36.760 Some of it
00:38:37.520 relates,
00:38:38.020 but I just,
00:38:39.940 do you walk
00:38:40.900 through the
00:38:41.140 day worried
00:38:41.580 about nothing
00:38:42.000 other than
00:38:42.360 this?
00:38:44.900 No,
00:38:45.520 I mean,
00:38:45.880 I think
00:38:46.360 it's useful
00:38:48.580 maybe to have
00:38:49.080 some division
00:38:49.600 of labor
00:38:50.100 here.
00:38:51.160 Also,
00:38:51.780 I think
00:38:52.140 it might,
00:38:53.480 for somebody
00:38:53.820 like me,
00:38:54.680 be worth
00:38:55.240 trying to
00:38:56.040 focus more
00:38:57.140 efforts on
00:38:57.880 relatively
00:38:58.500 neglected
00:38:59.020 areas where
00:39:00.020 one extra
00:39:00.980 person like
00:39:01.620 me might
00:39:01.940 make a
00:39:02.240 bigger
00:39:02.540 difference
00:39:03.800 than,
00:39:04.260 I mean,
00:39:04.440 global warming
00:39:05.040 people,
00:39:06.260 so many
00:39:07.740 thousands of
00:39:08.260 people have
00:39:08.660 been worrying
00:39:09.200 about this,
00:39:09.700 and I think
00:39:10.060 it's a
00:39:10.660 smaller concern.
00:39:11.980 But it's not
00:39:12.420 the only one.
00:39:12.980 I mean,
00:39:13.160 I think
00:39:13.480 certain advances
00:39:14.760 in biotechnology
00:39:15.640 are quite
00:39:16.500 concerning as
00:39:17.160 well.
00:39:19.820 That might
00:39:20.660 just make it
00:39:21.200 too easy
00:39:21.820 to create
00:39:22.760 really dangerous
00:39:23.560 stuff.
00:39:25.280 I mean,
00:39:25.840 to make it
00:39:26.320 concrete,
00:39:26.760 so we have
00:39:27.280 DNA synthesis
00:39:28.400 machines that
00:39:29.200 can print
00:39:29.680 out DNA
00:39:30.980 strings,
00:39:31.760 if you
00:39:34.820 have a
00:39:35.140 digital
00:39:35.400 blueprint.
00:39:36.020 We also
00:39:36.500 have in
00:39:36.840 the public
00:39:37.220 domain
00:39:38.020 the DNA
00:39:40.200 sequence of
00:39:41.520 a lot of
00:39:42.260 really dangerous
00:39:43.520 viruses,
00:39:44.440 and ideas
00:39:45.040 how to make
00:39:45.840 them even
00:39:46.160 more dangerous.
00:39:47.220 So as
00:39:48.580 this synthesis
00:39:49.320 technology becomes
00:39:50.260 good enough to
00:39:51.160 print out
00:39:51.920 whole viral
00:39:53.100 genomes,
00:39:53.780 then,
00:39:54.600 I mean,
00:39:55.420 you just
00:39:57.020 connect the
00:39:57.460 dots,
00:39:57.700 that would
00:39:58.400 give anybody
00:39:58.860 with access
00:39:59.360 to this
00:39:59.820 DNA synthesis
00:40:01.020 technology the
00:40:02.340 ability to
00:40:02.980 create things
00:40:03.580 far worse
00:40:04.400 than COVID.
00:40:06.300 And at the
00:40:06.840 moment,
00:40:07.160 anybody can
00:40:07.820 buy one of
00:40:08.300 these DNA
00:40:08.740 synthesis
00:40:09.120 machines,
00:40:09.680 and if that
00:40:10.060 continues to
00:40:10.640 be the
00:40:10.900 model as
00:40:11.500 they improve
00:40:12.080 in capacity,
00:40:12.920 then we
00:40:14.380 soon get to
00:40:16.240 an unsustainable
00:40:17.280 situation.
00:40:17.920 So there needs
00:40:18.320 to be some
00:40:18.700 kind of
00:40:19.080 global
00:40:19.460 regulatory
00:40:20.520 framework,
00:40:22.420 I think,
00:40:23.140 imposed on the
00:40:23.960 DNA synthesis
00:40:24.660 market where
00:40:26.240 DNA synthesis
00:40:27.180 is provided
00:40:27.840 as a service.
00:40:28.640 Like if you're
00:40:29.020 a researcher,
00:40:29.600 if you need
00:40:30.000 a particular
00:40:30.800 string,
00:40:31.280 there's maybe
00:40:31.740 five or six
00:40:33.380 companies in
00:40:33.980 the world,
00:40:34.500 you'd send
00:40:34.860 a request,
00:40:35.440 you'd get
00:40:35.760 your product
00:40:36.360 back in a
00:40:36.860 vial,
00:40:37.120 you don't
00:40:37.360 need to
00:40:37.660 have the
00:40:37.940 machine
00:40:38.200 yourself
00:40:38.600 in a
00:40:39.260 lab.
00:40:39.500 Or if
00:40:39.780 you do,
00:40:40.140 there would
00:40:40.380 have to
00:40:40.660 be some
00:40:40.960 sort of
00:40:41.240 controls.
00:40:42.560 That's just
00:40:43.040 one example,
00:40:43.780 but it's a
00:40:45.120 real cauldron.
00:40:45.760 People are
00:40:46.100 inventing so
00:41:46.680 much cool
00:40:47.160 new stuff
00:40:47.680 in biotech
00:40:49.320 all the
00:40:49.580 time that
00:40:50.600 new ways of
00:40:52.400 creating mostly
00:40:53.340 good stuff will
00:40:54.040 come into
00:40:54.380 view,
00:40:54.660 but there
00:40:55.080 could also
00:40:55.480 be some
00:40:55.900 bad stuff
00:40:56.800 in there.
00:40:57.560 It's a
00:40:58.060 kind of
00:40:58.320 wild west
00:40:58.800 at the
00:40:59.120 moment.
00:40:59.580 The ethos
00:41:00.340 is very
00:41:00.720 much open
00:41:01.420 science.
00:41:02.280 Let's encourage
00:41:03.140 biohackers,
00:41:04.140 let's just
00:41:04.600 make it
00:41:05.300 available to
00:41:05.920 all,
00:41:06.320 because that's
00:41:06.900 nice.
00:41:07.540 Everybody has
00:41:08.260 equal access
00:41:08.760 to it.
00:41:09.360 With some
00:41:10.040 technologies,
00:41:10.780 like with
00:41:11.260 nuclear weapons,
00:41:12.160 we don't
00:41:12.640 think that's
00:41:13.080 the right
00:41:13.400 way.
00:41:13.780 I think
00:41:14.080 biotechnology
00:41:14.780 will have
00:41:15.220 to similarly
00:41:15.760 change to
00:41:18.080 something.
00:41:18.460 I think
00:41:23.700 I can
00:41:23.900 speak on
00:41:24.220 behalf of
00:41:24.480 my audience
00:41:24.800 and say
00:41:25.040 that we're
00:41:25.520 glad that
00:41:25.860 people like
00:41:26.200 you, I
00:41:26.440 know Stephen
00:41:26.780 Hawking has
00:41:27.240 joined you,
00:41:27.680 many others
00:41:28.280 have sketched
00:41:29.900 out some
00:41:30.700 priorities for
00:41:31.520 making this
00:41:32.700 field safer
00:41:33.660 and imposing
00:41:34.740 some guidelines
00:41:35.660 so that we
00:41:36.280 don't just
00:41:36.680 jump into
00:41:37.180 this willy-nilly.
00:41:38.800 Glad you're
00:41:39.440 out there.
00:41:39.880 It reminds me
00:41:40.660 of how my husband
00:41:40.660 used to run
00:41:41.380 an internet
00:41:41.800 security firm
00:41:42.800 and he used
00:41:43.680 to speak of
00:41:44.060 the white
00:41:44.440 hat hackers
00:41:45.500 and the
00:41:46.360 black hat
00:41:47.000 hackers.
00:41:48.200 And, you
00:41:48.700 know, the
00:41:48.940 Russians have
00:41:49.460 black hat
00:41:50.120 hackers who
00:41:50.720 will try to
00:41:51.140 get into
00:41:51.460 your bank
00:41:51.780 accounts and
00:41:52.380 your private
00:41:53.000 information and
00:41:53.640 so on.
00:41:54.600 And he
00:41:55.680 was in a
00:41:56.220 firm where,
00:41:57.320 he would
00:41:58.720 say, he
00:41:59.080 employed the
00:41:59.580 white hat
00:42:00.140 hackers who
00:42:01.000 understood how
00:42:01.520 to do all
00:42:01.860 that stuff but
00:42:02.460 would try to
00:42:02.840 stay one step
00:42:03.380 ahead of the
00:42:03.740 bad guys to
00:42:04.860 protect you.
00:42:05.420 And it seems
00:42:05.680 like this is a
00:42:06.180 field where
00:42:06.720 everyone needs
00:42:07.640 to have the
00:42:08.060 same devious
00:42:09.520 skills but we
00:42:10.720 have to make
00:42:11.240 sure we have
00:42:11.720 more people
00:42:12.200 employing them
00:42:12.740 for good than
00:42:13.260 for bad.
00:42:13.820 Let me switch it to you for one second, because I do know that you... is it transhumanism that you've described yourself as being into?
00:42:23.820 I think your proselytization about the medical field, and what could happen for humans during our lifetime to make life better, to improve life when it comes to these scientific advances, made me feel hopeful.
00:42:37.320 You know, a potential anti-aging pill, the potential to bring one back, thanks to cryogenics, after one dies. Can you just talk about that a bit?
00:42:46.000 Yeah, so I... I mean, sometimes I think I'm mistaken for some kind of technophobe a lot, because I spend a lot of time describing some specific concerns or dangers.
00:42:58.300 But, I mean, broadly speaking, I'm very excited about the potential to improve the human condition through advances in technology, including by enhancing the human organism itself in different ways.
00:43:13.900 Like, I mean, I'd love to see some kind of anti-aging pill that really worked, or, you know, something that could make us smarter or improve our quality of life in other ways.
00:43:24.560 And so sometimes that's referred to as transhumanism. I don't tend to use the word very much, because a lot of people use it for a lot of different ideas.
00:43:35.940 Yeah, now it's confusing.
00:43:36.660 Which are kind of kooky.
00:43:37.640 Yeah, and it attracts some kind of miscellaneous folk that I wouldn't necessarily agree with on all issues. But yeah, I think there's a lot of room for improvement.
00:43:51.700 Do you think academics are with you on that desire? Because I've heard you say something like, there are a lot of people in the field who are like, why? What about overpopulation? And why would we want to extend our time here on earth significantly?
00:44:07.440 Right, like the boredom of living longer is reason enough not to do that. So do you think we have an academic field that is devoted to developing things like that?
00:44:15.660 Well, I think this has been shifting, slowly but steadily, over the last 20 years or so.
00:44:24.620 I think there certainly used to be an extremely strong double standard. Like, doing something about aging was a big no-go, because people couldn't see why you would possibly want to do that.
00:44:38.800 But then, at the same time, there were billions in funding to try to fix cancer, to fix heart disease, to fix diabetes, to fix wrinkles, all these things, right?
00:44:50.020 But then the sum total of that is aging. Aging is what makes you more vulnerable to cancer, to heart disease, to diabetes, to wrinkles.
00:44:57.420 And so for exactly the same reasons that you might not like these diseases and symptoms, for the same reasons you might not like the underlying thing that is creating a huge fraction of all of that.
00:45:08.200 So it's not as if there was some weird special reason you would have to have for favoring life extension or anti-aging. It's just exactly the same reasons that are in play in all these other cases.
00:45:18.360 But for some reason there was this mental block that a lot of people had. I think actually it was because the anti-aging thing seemed unrealistic and fanciful.
00:45:27.740 So it was not evaluated by the same standards that, like, a pill that is likely better for treating some cancer would be evaluated by.
00:45:35.840 It was more evaluated by reference to some traditions, or some kind of spiritual wisdom, where you're supposed to accept the fact that you're gonna die, and that's a sign of wisdom, and you should come to terms with it rather than fight against it.
00:45:51.400 So the issue was kind of placed in that mental bucket, rather than: this is something we can actually work on.
00:45:59.800 Yeah, but you raise a good point. If that's your mindset, then why get chemotherapy? Why not eat as many trans fats as you want? Right?
00:46:08.920 Like, we all take steps to prolong our lives, even when we get bad diagnoses, even though we may be people of faith and understand how this is going to end eventually.
00:46:17.280 Yeah, I mean, to take care of your body seems good. I mean, it's not to say that in any and all circumstances extending life further is good.
00:46:25.880 I mean, if you kind of have a life that's not worth living, you're on some respirator and you're just kind of kept alive under horrible conditions like that, I'm not sure it's a great boon if that can be, like, two years of that rather than one year.
00:46:39.600 But if you have a good quality of life, if you are healthy, you enjoy life, maybe you can contribute to society in some way, then that seems something that is definitely worth preserving.
00:46:49.180 Or just if we can improve our senior years, you know. I mean, we've all watched our parents or grandparents deteriorate mentally. Some would argue we're watching it right now with our president. Okay, that was an aside.
00:46:58.160 But we've all watched that and thought to ourselves, you know, I don't want that for them.
00:47:03.480 And if you could create a situation where we could live to our 80s or 90s (forget beyond; beyond would be delightful), but even that old, but sharp, with mental acuity, why wouldn't we want that?
00:47:15.440 It'd be great. But can I ask you... um...
00:47:18.360 Yeah, yeah, go ahead.
00:47:18.800 Yeah, and I think if, whether it actually is possible or not, if there is actually a pill on the market at some point that does this, then I think a lot of the people who were previously expressing skepticism would kind of quickly come around.
00:47:32.080 How far away from that are we?
00:47:34.920 Well, I mean, from curing aging altogether? I mean, it seems quite hard to do that. Maybe superintelligence would expedite that. Like, I think actually a lot of these things would happen soon after superintelligence.
00:47:53.460 But other than that, I mean, we've been kind of on the verge of curing cancer now for the last 50 years, right? So we've made small, incremental progress.
00:48:02.080 So the superintelligence is going to make us live forever, and then kill us all off. Something to look forward to. One or the other.
00:48:07.360 Yeah, one of those.
00:48:08.700 Right, they may have buyer's regret. Can we talk about cryogenics for a minute? Because this is something, you know, that I think most of us grew up thinking (as I'm told now, wrongly) that Walt Disney had himself frozen so he could be brought back to life.
00:48:21.500 But cryogenics is a real thing, and I know you're in favor of it. Can you talk about it for a minute?
00:48:27.240 Well, so it's the idea that we know that at sufficiently cold temperatures, basically all physiological processes stop.
00:48:38.640 So, I mean, you know, things last longer if you put them in your freezer, right? But if you put them in even colder temperatures, like in liquid nitrogen, then they can last for hundreds of years with basically no change.
00:48:53.780 So the idea is, if somebody dies today, if you freeze them, then you preserve whatever is there.
00:49:02.160 And of course, if you thaw them, they are still dead, because, A, there was the original thing that killed them, and B, the actual freezing process creates additional damage.
00:49:10.660 But if you think that technology will continue to improve, then maybe at some point in the future the technology will exist to reverse whatever originally caused their death, and to cure the kind of damage that happens during freezing.
00:49:24.960 So if you just preserve somebody long enough, then there is some hope that the technology will one day exist to bring them back to life.
00:49:33.540 Unless you're sure that that technology will never be developed, it seems like the conservative thing to do would be to put them in liquid nitrogen. And, you know, if it doesn't work, well, they'd be dead anyway. So, uh, the downside is...
00:49:48.200 You're hedging your bets. Yeah.
00:49:51.060 Well, would you want to come back? I mean, one of the fears I would have is, what if they wake me up in the year 4000 and it's terrifying? It's like a caveman walking into 2021, and nothing is familiar, no one he loves is around anymore. It seems like a nightmare in some ways.
00:50:11.540 That's obviously very hard to know, what the future that you would be brought back into would be like. And so I think, yeah, that probably mostly comes down to whether somebody's more of an optimistic person or a pessimistic person, I guess.
00:50:29.380 That's true. And what about... what is the likelihood that the people in the future, who may have these prolonged lifespans, are going to want to come back and get people like you and me? Well, forget me, but you. I mean, they should want you.
00:50:42.200 But seriously, given overpopulation concerns and the limited size of the earth, why would they want to bring people back?
00:50:48.600 Well, I mean, it would be very cheap for them, given the resources in the future. This would be like a drop in the bucket. So if they have sentimental reasons or ethical reasons for doing this...
00:51:02.980 I mean, I think certainly if there were somebody who lived a thousand years ago, and I could, you know, with the snap of a finger, you know, bring them back, I think that would be a very nice thing to do.
00:51:15.200 Yeah, like, what if it were... you know, what if it were this historical figure? Who would it be, right?
00:51:19.300 Right, I mean, or even if it were just some bumpkin, right? I mean, like, you could save a life, even if it were a life that started a long time ago and had been sort of in suspension. That would be nice.
00:51:32.480 Do you think... do you have reason to believe that, in that scenario, the brain would still have the information, you know, upon revival, that it had on the way out? That, you know, the brain retains information?
00:51:43.400 I think so. I mean, it depends, obviously, somewhat on how you die. So if you die in a fire, or you're lost at sea, right, then it's kind of gone. But in a reasonably good scenario, I think it's likely that the information is basically preserved.
00:52:00.780 It's quite hard to destroy information. So if you have a book, okay, and you tear up all the pages into small pieces, now you can't read the book anymore, but the information is still there. Like, in theory, you can put the pieces together again if you have enough patience.
00:52:19.180 And I think it's similar with the brain. Like, the freezing damage... there are ice crystals forming; actually, they are trying to use various anti-freeze agents, and they are not literally freezing it, they are vitrifying it.
00:52:33.260 Setting aside these technical details, I think there is some kind of shoving around of different pieces that happens in this process, but the information is still there, most likely, and with sufficiently advanced technology it should be possible to put it together, I think.
00:52:48.300 But my guess is, if it happens at all, it will happen after superintelligence, as a consequence of superintelligence.
00:52:55.420 Wow. Um, the last sort of question I have for you is a practical one, which is: given that we may be looking at total unemployment if there is superintelligence, what do you see as an important area for kids today, for young people today, kids in college, to be looking at? Or even younger. You know, I have an almost-12, a 10, and an 8 year old.
00:53:24.140 It used to be it would definitely be robotics, and we talk about that a lot in schools today. But, like, where do we steer our kids if they want to stay on the cutting edge of technology and future jobs?
00:53:36.820 And, you know, I know that one of the fields that artificial intelligence may take over, and is actually doing a pretty good job of right now, is radiology. They can read x-rays in certain settings. So I don't know, maybe you don't want your kid to be a radiologist. But what is your thinking about sort of the wave of the future, and the likely good industries to be in?
00:53:54.840 Yeah, I mean, so for kids, I mean, I think it depends a lot on the kid. Like, you want to build on their unique strengths, and not everybody has to be a computer programmer, right? That's a small part of the economy.
00:54:11.540 I think that, I mean, already today we are at an unprecedented time of wealth and prosperity relative to any other time in all of human history.
00:54:25.260 And so in addition to having a focus on, you know, finding a profitable career for your kid, I think also equipping them to actually enjoy life, and to find and do something meaningful in their life, would be worthwhile. Because if not now, then, like, when? I mean, I guess after the singularity, maybe. So I think that would be important.
00:54:53.000 In terms of areas, I mean, I think computer stuff will continue to be important, but so will many, many other areas in the economy.
00:55:03.080 I think maybe inculcating certain habits, like a habit of continual learning. Like a flexibility, to be able and willing sometimes, and feel empowered: here's a field you don't know anything about, some skill you don't yet have? Well, I could try to learn it. Like, there's so much information online, or training courses; it's easier than ever to get access to new areas.
00:55:26.940 So giving them that sense of kind of personal agency, that I can take responsibility for what I want to do, and I can figure out how to learn it, or if I can't figure that out, I can figure out who to ask. I think that would be useful across a very wide range of different scenarios.
00:55:41.940 They're gonna have to stay nimble in the world that's coming their way, that's for sure. Thank you so much for your expertise. It's been an absolute pleasure.
00:55:49.880 Oh no, I enjoyed it.
00:55:53.300 All right, up next: Andrew Ng. This guy was super high up both at Google and at China's Google; it's called Baidu. He was their chief scientist, and has led huge teams when it comes to developing the AI of both groups.
00:56:06.340 What does he think about all of this? And is your computer already spying on you? Is your government spying on you?
00:56:14.800 This is a guy who's been at some very well-known, big, leading corporations when it comes to artificial intelligence and data amassing. What does he have to say?
00:56:24.100 You're gonna love this guy. He's next.
00:56:25.580 Before we get to him, however, I want to bring you a feature we have here on The MK Show called From the Archives. This is where we bring you a bit of audio from our growing library of content, now nearing 150 episodes. Hard to believe.
00:56:38.780 Today we're going to go back to our 69th episode, and one of our most popular, with Tulsi Gabbard, a veteran and former congresswoman, who shared with us some stories of her time in the military, in Washington, D.C., and in the media wringer.
00:56:50.880 Here's just a bit on the way she was covered during her 2020 presidential run against now-President Biden and VP Kamala Harris. Take a listen.
00:57:00.560 The ones who are writing about you, of course the mainstream press is left-wing, were writing bad things, and the ones who control the airwaves weren't giving you any airtime.
00:57:09.780 That's exactly right. And that's where, you know, the evidence of this kind of facade of a democracy comes to the forefront.
00:57:21.620 Because you really have these corporate media interests who most care about ratings and entertainment, and how they can create conflict, um, you know, on a debate stage, or push a narrative that they think will get more eyeballs to their screens. And I put social media in this category as well.
00:57:46.220 Combined with a party that was pre-selecting who they wanted voters to hear from.
00:57:56.240 And so that's where you saw a lot of, hey, you know, they're changing the standards for the debates as they go along.
00:58:00.840 Um, you know, just as, you know, hey, okay, we're ticking up a little bit in the polls, where we think we're going to qualify for another debate.
00:58:07.300 Oh, sorry, rules changed, you know, the day before, or right when, uh, you know, those new polls were coming out.
00:58:16.020 And just other things. You know, the Democratic... the DNC saying, hey, you know, all presidential candidates, if you want to be featured in any of our, um, publicity that we're putting out, then you've got to fork up (I think it was something like $175,000) to the DNC, just to be included in their, you know, social media videos or whatever.
00:58:38.040 And I'm just like, no, I'm not gonna do that. You know, I've got, you know, people across the country who are giving, you know, five bucks, ten bucks, contributing to my campaign because they believe in the kind of leadership that I'll bring, and the message and the truth that I'm sharing with voters.
00:58:50.820 And they're certainly not giving me a whole bunch of money to go and then pass it on to the DNC.
00:58:57.200 And so ultimately, that's where we saw, time and time again, even small things (it's not that small), but things that went unnoticed.
00:59:09.420 For example, you know, CNN had a bunch of town halls where they featured different candidates. They only gave me one; most of the other candidates had more than one.
00:59:17.880 And someone called me one day and said, hey, you know, I'm going through my, um, CNN... it's not DVR, but if you go to CNN's, I guess, digital library, they had... you could replay the town halls of all the different candidates. They're like, you're not on here. Like, it just doesn't exist. There's no option to find your town hall, but I can find every single other Democrat who ran for president.
00:59:43.540 And so there were things like that, and more blatant things, that made it very clear that if the media makes a decision not to allow voters to hear from you, then, A, voters really don't have the ability to make an informed decision in a true democracy.
01:00:06.680 And then, B, the reality is that if you want to talk about issues, if you want to get information to people so they can make this informed decision, then clearly running for office is not the way to do it.
01:00:15.720 Gabbard has just struck a new deal with Rumble, the video social network and YouTube competitor, so I think we're about to hear a lot more from her in the weeks to come. And good. And we, in the meantime, will keep bringing you more of our best episodes from the archives.
01:00:31.020 Up next: Andrew Ng. You'll love him.
01:00:35.000 Thank you for being here. I'm excited for this conversation. We just wrapped up with Nick Bostrom, who wasn't totally anti-AI, right? He's pro-AI, but has some concerns about, I think, what you call artificial general intelligence, AGI, the long-term game where you develop a machine that develops superintelligence.
01:01:03.040 So let's just start there. What's your take on the likelihood that we will develop superintelligent machines in this century?
01:01:09.700 Nick Bostrom is an interesting character. Um, AI is the new electricity. It's transforming tons of industries, revolutionizing the way we do things in the United States and around the world.
01:01:20.020 As for artificial general intelligence, I think we'll get there, but whether it will take 50 or 500 or 2,000 years to make computers as intelligent as, you know, you or me or other people, I think that's a really long-term, open research project.
01:01:35.720 It's exciting. Okay, I like 2,000. 2,000 makes me feel better than by the end of this century, when my kids are still, God willing, alive.
01:01:41.900 You know, I think that one of the problems with the whole field of AI is, um, it's confusing in this way. There's one type of AI called AGI, artificial general intelligence: AI that could do anything a human could do, maybe someday. And artificial narrow intelligence, which is the AI that does one thing really, really well, and it's really valuable.
01:02:02.740 It turns out, over the last, you know, 10, 20 years, we've had tons of progress in artificial narrow intelligence, those AIs that do one thing really, really well. So people say, accurately, there's tons of progress in AI. I agree with that.
01:02:16.240 But just because there's tons of progress in AI doesn't mean there's progress toward AGI; from where I'm sitting, I'm candidly not seeing that much progress toward artificial general intelligence. So I think that's led to some of the unnecessary hype and fear-mongering, candidly, about AI.
01:02:27.600 That makes me feel better. I'm feeling better already.
01:02:31.820 now you know a thing or two about narrow artificial narrow intelligence just so
01:02:36.180 the audience understands you've led teams at google and is it
01:02:39.180 is it pronounced baidu forgive me for not knowing
01:02:41.160 oh yes i i started led the google brain team also ran ai for baidu which
01:02:46.060 which is a large web search engine company in china
01:02:48.760 because china doesn't use google so this is china's google
01:02:51.680 uh china's leading web search engine was this is baidu and then i'm also
01:02:56.180 really proud of the work that i did leading the google brain team which is
01:02:59.360 a team that hopes a lot of google embrace modern ai so
01:03:02.300 if you use google you know you're probably using technology that that that my
01:03:06.220 former team wrote actually almost that's amazing
01:03:08.300 so now what are what are some of the fun things that you and your team have
01:03:11.560 introduced into my life that i don't even know i should be thanking you for
01:03:14.220 uh don't don't don't thank me thank the many millions well thousands of
01:03:18.880 people around the world building these technologies um i think that all of us use
01:03:23.380 ai dozens of times a day maybe even more perhaps without even knowing it uh thanks
01:03:28.460 to modern ai when you do an internet search you get much more relevant
01:03:31.780 results or every time you check your email there's a spam filter in there
01:03:35.560 kind of saving us from massive amounts of spam uh that's ai every time you use a
01:03:39.960 credit card it's probably an ai trying to figure out if it is you
01:03:43.120 or if you know someone stole the credit card and we should not let that transaction
01:03:47.180 through so all of us probably use ai many dozens of times a day
01:03:51.700 maybe without even knowing it and what about the self-driving car
01:03:56.080 you know it makes the news every so often and it's interesting to me
01:04:00.060 it's scary to me because you also hear some reports of crashes and you
01:04:02.900 understand that okay that the technology is not exactly where they want it to be
01:04:05.880 yet but what do you see when it comes to self-driving cars i think that many
01:04:11.240 people including me collectively underestimated how difficult it will
01:04:17.220 be to get to you know true fully autonomous self-driving cars that could drive
01:04:21.700 the way that a person can um i think we will get there but it's been a
01:04:26.720 longer road than any of us estimated when i drive these cars i'm happy for the
01:04:30.780 driver assistance technology i personally don't really fully trust them yet so i
01:04:35.060 keep an eye on the road you know when i'm driving and one of these technologies
01:04:37.780 is supposedly doing something well yeah so here's a dumb question i
01:04:41.960 understand why somebody if we perfect the technology somebody like my mom who's 80
01:04:45.720 and really not all that well physically mentally she's great but
01:04:49.300 physically um i could see why a self-driving car would work well for her
01:04:53.240 it's like you're a built-in chauffeur but why do young able-bodied people need
01:04:58.160 that why is it an improvement for people our age
01:05:00.440 um i think that it depends a lot on the individual
01:05:04.460 i sometimes find it fun to drive you know if i don't know take my daughter out on
01:05:09.620 the road drive around that's fun but sometimes if i'm driving to work in
01:05:13.160 traffic it's like boy i wish someone else could do the driving for me and if a
01:05:17.720 computer could do that so i could maybe even sit in the back seat
01:05:20.960 and you know play with my daughter i would rather do that than be stuck in
01:05:24.160 traffic so i think it depends a lot on the
01:05:26.100 individual it's funny because i asked this having
01:05:29.180 just yesterday i had to go to the city i'm in
01:05:31.680 new jersey for the summer i had to go to the city
01:05:33.360 it's a couple of hours and i had the choice of driving myself
01:05:37.080 or sometimes we use a driver and i said you know i'm going to use a driver
01:05:41.180 because i had a bunch of interviews to do today and i said i want to read all my
01:05:44.920 stuff and so it's a dumb question right it's basically
01:05:47.680 you can read everything if you have a self-driving car it's going to make your
01:05:50.640 life really convenient if it doesn't kill you or all the
01:05:53.580 people around you
01:05:54.300 and again i think i know you have kids right my kids are
01:05:59.900 really young part of me worries you know when they grow up
01:06:02.540 will they ever get in a car accident so when my daughter grows up
01:06:06.520 if there's a computer that can drive her more safely than if i were to
01:06:10.260 drive or she were to drive herself i think it'll make all of us better off
01:06:13.600 you know how far away are we from that you know in the ai world we've
01:06:19.400 made a lot of predictions and sometimes we're not very good at
01:06:22.460 predicting the timeline on which this will happen
01:06:26.760 i think that self-driving cars are kind of getting there in limited
01:06:29.860 environments uh so i'm seeing exciting progress for example if you're driving
01:06:34.560 around the constrained environment of a port you know shipping stuff or in a
01:06:38.980 mine or sometimes on the farm that's actually kind of getting there
01:06:42.100 um if we're willing to rejigger some of the cities uh i think we'll be there
01:06:46.340 pretty soon um i don't know i think it'll
01:06:49.200 still be quite a few years maybe many many years before
01:06:52.520 we can drive in downtown new york or downtown new jersey
01:06:56.280 yeah i understand that they're not as good at picking up
01:07:01.200 things like the hand signals that a construction worker might be issuing to
01:07:06.120 you that they don't totally understand those
01:07:08.460 things yet so they're not quite where they need to be
01:07:12.060 um okay so let's talk about other ways in which
01:07:15.860 ai is going to be helping our lives and how you see it because
01:07:19.800 one of the things that nick said that concerned me was we're probably headed toward
01:07:23.660 total unemployment eventually in the distant future once the
01:07:27.500 machines become as smart as we think they're likely to get and that
01:07:31.400 concerns me you know i don't know what life looks like if nobody works for a
01:07:34.380 living and if the machines are in control of
01:07:36.420 everything so what is the journey from here to there look like
01:07:39.020 in terms of technological advances you know i think that um total
01:07:45.120 unemployment i'm actually skeptical it'll
01:07:47.660 ever happen or if it does happen it's maybe i don't know how many thousands of
01:07:51.460 years away um you know let's just demystify ai
01:07:55.880 what can it do and what can it not it turns out um to get a little bit
01:07:59.600 geeky and technical almost all of ai today is about
01:08:02.460 input output mappings such as uh input an email
01:08:05.860 you know output is it spam or not or input a picture of what's in front of
01:08:10.120 your car and output the position of the other cars
01:08:12.700 um or input you know an x-ray image and
01:08:16.560 output a diagnosis does this person have pneumonia or not or some other
01:08:19.920 condition so that's sort of the one idea
01:08:22.780 input output uh that is creating 99 percent of the economic
01:08:27.680 value of today's ai system turns out this is a ton of economic value
01:08:31.480 uh the large ad platforms have an ai that inputs
01:08:34.680 an ad and some information about the user and outputs whether you are going to click on
01:08:38.700 this ad or not uh because you know if you can get people to click on more ads
01:08:42.300 this has a direct impact on the bottom line of the large ad platforms
01:08:45.020 so it's creating tons of economic value but frankly with this input output thing if we
01:08:50.700 think about how much more people do people could do so much more
01:08:55.720 i don't think anyone in the world has a realistic roadmap for getting to
01:09:00.380 agi so i think sometimes that concept
01:09:03.320 has been overhyped and fear-mongered
01:09:06.060 um i do worry about unemployment with every wave of technology looking back
01:09:10.400 you know industrial revolution uh invention of electricity
01:09:13.180 i mean all the people working on steam engines they unfortunately really
01:09:17.500 sadly lost their jobs or we used to have human-operated
01:09:21.660 elevators right you know there was someone standing in the elevator
01:09:24.760 that would dial it up and down when someone invented automatic elevators those
01:09:28.820 jobs went away so i worry about that for ai that it
01:09:31.940 creates some amount of um disruption and affects work
01:09:36.300 but complete total unemployment this this input output mapping i don't see that
01:09:40.700 piece of software replacing you know you anytime or me anytime soon
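[The input-output mapping Ng describes — an email in, spam-or-not out — can be sketched as a toy learned classifier. This is an illustrative sketch only, not any system from Google or Baidu; the training examples, the "spam"/"ham" labels, and the add-one-smoothed naive Bayes word model are all assumptions chosen for demonstration.]

```python
# A minimal sketch of a learned input -> output mapping:
# input is an email's text, output is "spam" or "ham" (not spam).
# Uses only the Python standard library; data and model are illustrative.
import math
from collections import Counter

train = [
    ("win money now claim prize", "spam"),
    ("free prize click now", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# count how often each word appears under each label
counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    # vocabulary size across both labels, for add-one smoothing
    vocab = len({w for ctr in counts.values() for w in ctr})
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values())
        # sum of smoothed log-probabilities of the message's words
        scores[label] = sum(
            math.log((ctr[w] + 1) / (total + vocab)) for w in text.split()
        )
    return max(scores, key=scores.get)

print(classify("claim your free prize now"))  # spam
print(classify("team meeting tomorrow"))      # ham
```

[The same input-output pattern covers the other examples in the conversation: swap the email text for an image and the spam label for a diagnosis or a car position, and the shape of the problem is unchanged.]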
01:09:45.520 can you talk about the radiology things i read about the work being done
01:09:49.420 is it stanford with the ai and radiology but the conditions have to be just so can
01:09:54.860 you just talk about that sure so um i think that i'm excited about ai and its
01:09:59.300 potential to improve health care um actually some of my friends and i
01:10:04.560 worked on ai that can take as input a picture of an x-ray and output you know
01:10:09.080 the appropriate diagnosis and it turns out
01:10:11.220 we were able to show in the lab that we could diagnose or recognize many
01:10:16.540 conditions as accurately as a board certified highly trained
01:10:20.460 radiologist but it turns out that it worked great
01:10:23.560 if we were to train on data we collected from you know our
01:10:27.580 research at stanford hospital and then test whether the system did
01:10:31.500 work well on data from the same hospital from the same set of x-ray machines
01:10:35.140 it turns out if you take that ai system and walk it down to a different hospital
01:10:40.340 down the street with maybe an older x-ray machine maybe the technician has a
01:10:44.400 slightly different way of imaging the patient
01:10:46.100 the performance gets much worse whereas any human doctor can walk down the street
01:10:49.980 and diagnose at this other hospital roughly equally well
01:10:55.120 so i think that um one of the challenges of ai is we have a lot of
01:10:59.780 prototypes in the lab that you do read about in the news you know you see oh ai can
01:11:04.520 diagnose as well as human radiologists or something like that in the news
01:11:07.460 but it turns out that we collectively in the ai field still have a lot of work
01:11:11.260 to do um to take those lab prototypes and put them into production in a
01:11:16.760 hospital setting it will happen it's just that there will be some additional
01:11:20.280 years of work before some of the things that you know have been promised
01:11:23.740 come to fruition well the medical field is so ripe for help from this
01:11:30.420 kind of technology i can think of a million ways in which it could change lives and
01:11:33.860 save lives but it's really every industry i know you've been making the
01:11:36.800 point that it's every industry that's going to be touched by this
01:11:38.900 eventually but before we move off the medical field may i just ask you about a
01:11:42.500 report in the wall street journal that got my attention
01:11:44.480 um okay among other things they're talking about
01:11:48.040 what we should expect in the next few years
01:11:50.480 toilets that screen for disease uh it says researchers at stanford
01:11:56.040 have developed a prototype toilet that uses an artificial intelligence trained
01:12:00.280 camera to track the form of feces and monitor the color and flow of urine why
01:12:04.740 is this necessary because it could potentially analyze micro stool samples
01:12:08.840 to detect viruses like covid-19 and blood it could potentially detect irritable
01:12:13.600 bowel syndrome or colorectal cancer and here was the part forgive me because i'm
01:12:17.800 really just a 12 year old boy at heart um that i wanted to ask you about so that
01:12:21.220 the toilet could identify individual users by scanning their anuses unique
01:12:25.980 characteristics or anal print now no one wants an anal print going off to
01:12:32.440 some ai researcher but this is happening they're saying these
01:12:38.280 units could cost between 300 and a thousand bucks
01:12:40.640 they could be rolled out in the next couple of years is this what life is going
01:12:44.500 to hold for us yeah let's hope not i think a lot
01:12:50.380 of the description you read sounds disturbing uh having said that i
01:12:55.600 think there are you know doctors that have to do many disturbing things for the
01:13:00.320 good of the patients but i think a lot of us will not want
01:13:04.740 this in our homes anytime soon but we'll see you know doctors have got to innovate
01:13:08.920 we'll see what the fda approves and what seems to be appropriate for patients
01:13:12.580 that may need it even if it doesn't seem like the right thing for everyone
01:13:16.060 because you know that's going to turn into one of these things where you get false
01:13:19.220 alarms every other day and you're at the doctor saying oh my anal print suggested i've got
01:13:24.020 colorectal cancer i don't know sounds like there's a ton of internet memes to be created
01:13:29.660 off what you just said and listen as somebody who's been on camera for a living a lot of
01:13:35.840 my life there are limits to how far i'm willing to go and i think i speak for a lot
01:13:39.740 of people so what about the other industries like how else could ai improve or negatively
01:13:45.560 impact our lives over the next 10 25 years
01:13:48.180 one of the things i see is ai as of today has clearly transformed the
01:13:54.960 consumer software internet industry where the website
01:14:00.780 operating and app operating companies almost all of them use ai to great effect
01:14:06.060 um one of the challenges that is still ahead of us is figuring out how to use ai
01:14:10.960 to improve transform create value for all of the other industries out there um so for example one
01:14:18.240 thing i'm personally passionate about is manufacturing i think that for um american manufacturing to be more
01:14:24.400 competitive the road forward is not to just try harder to do the jobs that were around
01:14:31.440 20 years ago i think america and frankly all nations around the world should look
01:14:38.220 ahead to figure out how this technology can work for manufacturing and for all those other
01:14:43.000 industries so for example uh it turns out that in many factories around the world today there are tons
01:14:48.640 of people standing around using their eyes to inspect you know manufactured things like automotive
01:14:54.000 components or pill bottles or food and beverage to see if there's a defect or not
01:15:00.740 i think ai is clearly going to be able to do a lot of that work in the near future in an automated way
01:15:07.200 and if we in america want to embrace this technology figuring out how to use ai for automated visual
01:15:14.080 inspection is coming now i'm working on it my friends are working on it i think that's how many
01:15:19.280 industries become competitive but it turns out getting ai to work for manufacturing for health care
01:15:24.180 for agriculture these industries there's actually a different recipe it turns out the stuff that i was
01:15:28.480 doing you know at google and other internet companies doesn't quite work so there's
01:15:32.400 something a little bit more needed but again a bunch of us in the ai field are working on this i hope we'll get there
01:15:37.760 don't leave me now we got more coming up in 60 seconds
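[The automated visual inspection Ng describes — replacing a person eyeballing parts for defects — can be sketched in its simplest form: compare each part's image against a known-good reference and flag large deviations. Real systems like the ones his company builds use trained deep networks; the 4x4 "images", the golden-reference comparison, and the threshold here are illustrative assumptions only.]

```python
# A minimal sketch of visual defect inspection: flag a manufactured part
# whose image deviates too much from a known-good ("golden") reference.
import numpy as np

reference = np.zeros((4, 4))   # image of a known-good part
good_part = np.zeros((4, 4))   # a new part that matches the reference
bad_part = np.zeros((4, 4))
bad_part[1, 2] = 1.0           # one bright pixel standing in for a scratch

def inspect(image, threshold=0.01):
    # mean absolute pixel difference against the golden reference;
    # above the threshold we call it a defect
    score = np.abs(image - reference).mean()
    return "defect" if score > threshold else "ok"

print(inspect(good_part))  # ok
print(inspect(bad_part))   # defect
```

[This golden-template approach breaks down under lighting and camera variation, which is exactly the gap between lab prototypes and the factory floor that Ng points to; production systems learn what "normal" looks like from data instead.]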
01:15:42.340 can we talk about baidu for a minute and just talk about china and its approach to data because
01:15:51.140 i know that they they really want to be leaders in the ai field and the united states is watching them
01:15:55.880 and they're watching us do you think that the chinese are any better than the googles of the
01:16:02.160 world where you were also the top guy at collecting information um synthesizing it keeping an eye on
01:16:09.100 people's habits and so on yeah i think that china is phenomenal at some types of
01:16:16.900 technology and the u.s is phenomenal at some types of technology um i think we do live in a you know
01:16:23.000 multi-polar world where i see innovations in the u.s and europe and china really frankly all around
01:16:28.300 the world and the ai community tends to be very global uh there is a global network where researchers
01:16:35.280 you know in singapore may publish a paper and then like two weeks later it's running on
01:16:41.240 some you know site in the united states and then someone in the uk will read it too and figure out
01:16:46.280 something to to apply and deploy in europe so i think we live in a global world where different
01:16:52.260 teams sometimes collaborate and different teams sometimes compete um i think actually one
01:16:58.920 thing i will say is a lot of people underestimate the importance of government support in the early
01:17:05.720 days of ai so not many people know this but when i was working on ai way back before modern ai deep
01:17:11.700 learning became popular a lot of the reason i was able to do my work was because darpa
01:17:16.980 the defense agency in washington dc was willing to fund some of my work so i think without darpa
01:17:22.280 funding some of my research work i don't know that i would ever have gone to google
01:17:26.600 to propose starting the google brain project so i think just ensuring american competitiveness
01:17:31.840 is something i would love to see where are we on the scale are we the world leaders you
01:17:36.960 know you look at sort of the military superpowers and we know where we are but where is america when it
01:17:41.000 comes to ai i think that the two leading countries in the world in ai are quite clearly the
01:17:48.300 u.s and china i think the u.s is the world leader in um a lot of basic research innovations but this is
01:17:56.140 not a lead that we should take for granted and we just got to keep on working really hard and what
01:18:01.340 about the creation of super intelligence because i read something about you creating something
01:18:08.300 where a computer can recognize a cat i don't know you can tell me what it was but to me that sounded
01:18:14.900 like working toward developing super intelligence you know a computer that can learn on its own and
01:18:21.020 you know develop its own intelligence and improve its own intelligence but can you talk about that
01:18:25.440 about where we are on it what you've done on it and whether you think well you know how far along we
01:18:30.280 are um yeah the cat result uh that was the google brain team one of the early results we
01:18:35.880 had was we built an ai system called a neural network and had it watch tons of youtube video
01:18:41.360 basically had it you know sit in front of the computer and watch youtube video for like a
01:18:45.500 week and then we said hey what did you learn and to our surprise one of the things it learned
01:18:50.720 was it had figured out how to detect this thing which turns out to be a cat because it
01:18:57.900 turns out when you have an ai system watch youtube videos for a lot it learns to detect things that
01:19:02.540 occur a lot in youtube video so people's faces occur a lot on youtube it figured out how to detect
01:19:08.020 that there are also a lot of cats right that's another internet meme on youtube so it also figured
01:19:12.160 out how to detect that um it wasn't a very good cat detector but the remarkable thing about that
01:19:17.820 was that it had figured out that you know there's this thing it didn't know it was called a cat
01:19:22.220 but there was this thing it just learned boy i see a lot of this thing whatever it is i don't know
01:19:27.140 what it is so it's pretty remarkable the ai system the neural network had figured that out by
01:19:32.720 itself now but again between that and super intelligence or agi i think it's very far
01:19:38.980 away i think that worrying about ai super intelligence today is a bit like
01:19:45.820 worrying about overpopulation on the planet mars um i should hope that we will you know manage to
01:19:52.160 colonize mars and and maybe someday we'll have so many people on mars that we have children dying
01:19:57.960 because of pollution on mars and you may be saying hey andrew how can you be so heartless to
01:20:03.280 not care about all the children dying on mars and my answer is well you know we haven't even
01:20:08.140 landed people on the planet yet so i don't know how to productively defend against overpopulation
01:20:13.320 there so i feel a little bit like that about agi i think it's fine if academics study it you know publish some
01:20:18.380 theories on what to do when we have agi but it's so far away uh i personally don't really know how
01:20:24.780 to productively work on that problem now you are the co-founder of a group called coursera is that how
01:20:32.140 you pronounce it yes coursera and i feel like this dovetails very nicely with one of the things
01:20:37.620 that nick was recommending when i talked to him about the future our children and so on and he was saying
01:20:42.320 the one thing the kids of the future are going to need to be able to do is understand that learning
01:20:46.240 is a lifetime process right that nothing is as static as it used to be the world is changing so
01:20:51.940 rapidly and our kids are going to need to be able to handle information at an even more rapid pace
01:20:57.100 than it now comes into their life which is already faster than ever and i feel like this is one of the
01:21:02.040 missions of coursera is to nurture lifelong learning can you talk about it because it sounds
01:21:06.880 really interesting and it's been hugely successful yeah so through coursera um we hope we can give
01:21:13.140 anyone the power to transform their lives through learning um i was teaching at stanford university
01:21:19.300 about a decade ago actually over a decade ago and uh put my class on machine learning type of ai on the
01:21:25.580 internet and kind of to my surprise a hundred thousand people signed up for it and i
01:21:31.820 did the math you know i was teaching 400 students a year but when i did the math i realized
01:21:36.880 that for me to reach a similar audience of a hundred thousand people teaching 400 people a year i would have to
01:21:42.500 teach at stanford university for you know like 200 years um and so based on that early traction
01:21:50.100 i got together with a friend to start coursera to create a platform that now
01:21:56.840 works with over 200 universities and other institutions and companies in order to create
01:22:04.120 online learning courses that you know pretty much anyone in the world can access that's so great i mean
01:22:10.880 so it's like for those of us who didn't go to stanford or harvard or what have you uh but want access
01:22:15.940 to that kind of education though not full-time we can go here yeah in fact you know
01:22:23.420 i want to share two thoughts relevant to all of you watching this uh if you want
01:22:29.440 to learn about ai and cut through the hype one of the classes i'm most proud of is
01:22:34.900 ai for everyone on coursera where i think i tried to give a non-technical presentation of
01:22:42.100 ai so if you want to know how will ai affect your life in the future how will ai affect your job
01:22:47.120 your industry you know there are several hours of video that i hope will give anyone that's interested
01:22:52.520 uh a non-technical introduction to ai so you can think about this strategically and know how it will
01:22:57.820 affect you but also learn to recognize and ignore some of the hype um there's one other trend i'm
01:23:04.760 excited about which is you know with the rise of tech i think we may i hope will eventually
01:23:11.300 shift toward a world and this is relevant to all of you you know with children for example
01:23:15.980 i hope we'll shift toward a world where almost everyone will know a little bit about coding and i
01:23:22.500 say this because many many hundreds of years ago we lived in a society where you know some people
01:23:28.060 believe that maybe not everyone needs to read right maybe there are just a few priests you know and
01:23:33.120 monks who had to learn to read so they could read the holy book to the rest of us or something and
01:23:37.980 the rest of us didn't need to read we would just sit there and listen to them fortunately society
01:23:42.500 wised up and now with widespread literacy uh we've figured out that it makes human to human
01:23:48.260 communications much better i think that with the rise of computers in today's society you know for
01:23:54.200 good and for ill this is very powerful force i would love to see a lot of people able to just
01:23:59.840 learn to code so not all of us need to learn to be great authors right you know i can
01:24:05.040 write but i'm not a great author and i don't think everyone needs to be a great programmer but for
01:24:09.860 many of us there will come a time where um if you can write a few
01:24:16.880 lines of code and get your computer to do what you want um just like literacy has created much deeper
01:24:22.340 human to human communications i think if everyone can kind of learn you know a little bit of coding or
01:24:28.000 computer literacy then all of us can have much deeper interactions with our computers and that'd
01:24:32.760 be a very powerful tool for all of you in the future well it certainly had a massive impact on
01:24:36.940 your life just reading your background um how did you get into it at such a young age it was your dad
01:24:41.860 i understand oh yes so my dad's a doctor and when i was a teenager i was born in the uk but i was
01:24:50.840 living in singapore at the time and my dad was interested in ai for healthcare so you know he kind of
01:24:56.200 taught me about his attempts to use you know frankly like 1980s ai which was not that advanced
01:25:03.420 to do medical diagnosis so that sparked off a lifelong interest you know i do remember when
01:25:09.500 in high school i once had an internship a job as an office admin and i don't remember
01:25:16.120 much from that job i just remember doing a lot of photocopying and even though i was like whatever
01:25:21.440 15 16 years old i remember thinking boy why am i doing so much photocopying if only we could
01:25:28.360 write some software or have a robot or something do all this photocopying maybe i could do
01:25:34.340 something even more interesting and more valuable and i think that for me was part of my lifelong
01:25:40.120 inspiration to just write software that can help you know automate some of the more repetitive things
01:25:46.000 so that all of us collectively can tackle more challenging and exciting things well it's so great
01:25:50.920 because i tell you i went out to google and i spoke to a bunch of executives there a couple years ago
01:25:56.040 and i know that they try to give the coders some stress relief like a break because it can be very
01:26:05.060 intense work and one of the stations on campus was sword fighting i'm like this is so great
01:26:11.220 you know because you spend all day doing that it's very intense and you do need
01:26:15.220 a mental break a break for your eyes a break for your for your body so it's it's just a totally
01:26:21.700 different way of approaching the workplace yeah i find that coding is hard
01:26:29.060 work but when i look across our society i think almost everything is
01:26:35.540 hard work right when i walk into a manufacturing plant some of the work that you know my company
01:26:39.820 landing ai does is for manufacturing i see the men and women on the manufacturing shop floor and they're
01:26:44.540 really smart at you know what they do and then i meet up with my friends from google and i think
01:26:51.720 they're really smart at what they do i think that the world has lots of
01:26:59.180 intellectually stimulating or physically challenging work for us to do and hopefully
01:27:04.360 ai tools can help make things a little bit better for everyone well i like that you sort of decide where
01:27:09.000 you're going to put your energies because i understand looking at you today and your blue
01:27:11.880 shirt it is no accident you are wearing that blue shirt and it is one of the areas of your life in
01:27:16.040 which you've chosen to simplify and streamline your decision making yeah a few friends
01:27:23.700 have asked me and i think someone actually asked publicly why does
01:27:27.580 andrew wear a blue shirt all the time so i used to wear either blue or like a light purple
01:27:34.060 but then i realized every morning it's like oh do i wear a blue shirt or a purple shirt
01:27:38.100 i can't decide so it's like forget it i'm just buying a full stack of blue shirts and doing that
01:27:43.740 i don't know so you don't have to think about it in the morning vera wang does the same thing vera
01:27:47.860 wang who dresses you know the most beautiful successful you know prominent people in the world
01:27:52.000 just wears sort of a column of black every day that's her uniform i did not know that
01:27:57.320 because she doesn't want to think about it same as you
01:27:59.700 yeah it turns out there is a downside to this uh one day one of my friends was working on an ai for
01:28:04.920 fashion thing and i tried to express an opinion i said well you want to do ai for fashion
01:28:09.700 how about this how about this and she said andrew you have no credibility whatsoever when it comes
01:28:14.380 to fashion so okay i have to ask you one other personal question now i understand you married
01:28:19.360 somebody who's in robotics and i read that you you used a 3d printer to make your wedding rings
01:28:26.120 which brought up a lot of things for me which is number one i do not understand the 3d printer
01:28:31.340 at all my kids are using it at school it scares me i don't get it what is it what is it and how
01:28:36.580 does it print out a wedding ring how does it produce a wedding ring yeah so carol
01:28:42.860 um she's from michigan but we're now in washington state um so
01:28:50.140 one way that 3d printing works is it takes little bits of metal and melts them and kind
01:28:56.060 of you know deposits little drops of metal until gradually you end up building a ring i'm not
01:29:00.760 wearing the ring now but i have it and then you end up with this you know
01:29:04.900 incredible shape almost anything you can imagine and program into a
01:29:11.660 computer it can just by putting little drops of plastic or little drops of metal or some other
01:29:17.760 substance create this you know incredible 3d shape that's maybe difficult to manufacture
01:29:23.880 via other ways so i don't know actually this is one fun thing about technology 3d printing
01:29:28.420 was once really really cutting edge technology but now we have high school students able to use it
01:29:35.520 um i hope it'll be like that for ai too frankly i find that today ai seems a little bit mysterious maybe a
01:29:41.180 little bit overly so but actually last week i was chatting with a few high school students in
01:29:46.040 different parts of the country talking about how they're taking online ai classes from
01:29:50.760 coursera or from wherever and now we have high school students able to do things that
01:29:57.560 if done just five or six years ago would have been a chapter in a phd thesis at a place like stanford
01:30:04.380 right so really like what uh actually one thing happened to me i was attending a
01:30:10.460 maker faire where i met this student that was demoing his robot that was taking
01:30:15.980 pictures of plants trying to figure out if they were diseased if they had a you know disease on
01:30:20.940 the leaves or not so i looked at his work and i thought boy if this had been done five or six years
01:30:26.940 ago this would have been a chapter in someone's phd thesis at stanford university and you know what i
01:30:33.020 asked him how old are you and he said oh i'm 12 years old so this is today's world no no so
01:30:39.980 this is today's world i think anyone in the world you know can go and learn this stuff and then
01:30:45.260 implement this uh and and even though some technology seems so cutting edge i think that if someone out
01:30:52.220 there is you know watching this and wants to learn it a lot of tools are now on the internet right go go
01:30:57.500 go go learn it online from deep learning the article sarah and then on your computer you could
01:31:01.980 actually start developing stuff that while not the cutting edge stuff right that's actually still
01:31:06.620 pretty difficult you could actually do stuff that was kind of state of the art just a few years
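[Editor's sketch] The leaf-disease classifier Ng describes would, in practice, be a trained deep network over raw images. As a self-contained toy version, here is a nearest-centroid rule over two made-up features per leaf (average green intensity, fraction of brown spots); the feature values and labels are invented for illustration, not from the student's actual system.

```python
# Toy leaf-disease classifier: label a leaf by whichever class centroid
# its feature vector is closer to (nearest-centroid classification).

def centroid(points):
    """Component-wise mean of a list of (green, brown) feature pairs."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(leaf, healthy_centroid, diseased_centroid):
    """Return 'healthy' or 'diseased' for one leaf's feature pair."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    if dist2(leaf, healthy_centroid) <= dist2(leaf, diseased_centroid):
        return "healthy"
    return "diseased"

# Synthetic training data: healthy leaves are greener with fewer brown spots.
healthy = [(0.80, 0.05), (0.85, 0.02), (0.78, 0.08)]
diseased = [(0.40, 0.45), (0.35, 0.50), (0.45, 0.40)]

hc, dc = centroid(healthy), centroid(diseased)
print(classify((0.82, 0.04), hc, dc))  # a green, unspotted leaf -> healthy
print(classify((0.38, 0.48), hc, dc))  # a spotted leaf -> diseased
```

A real system replaces the hand-picked features with features learned by a convolutional network, but the decision step, "which class is this closest to?", is the same in spirit.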
01:31:10.940 All right, on this subject, I have a confession to make to you. We're moving into a
01:31:15.420 new home, we're moving towns, and I decided not to make my new home a smart home, because my old smart
01:31:21.980 home was annoying me. My dishwasher was yelling at me, and my microwave was yelling at me, and I was
01:31:27.340 walking around my apartment all day saying, you are not the boss of me, I am the boss of you, shut up,
01:31:32.140 I will unload you when I'm good and ready. And the TV required 40,000 buttons to turn on, and it's
01:31:38.460 like, I just want a dumb home. Maybe it says I'm a dumb person, but it seemed easier to
01:31:45.420 me. And yet all of these appliances are getting smarter by the day, and they're saying there's going to
01:31:51.340 be a refrigerator that's going to tell you whether things are spoiled on the inside, and so on.
01:31:55.660 So do you have a smart home? Do you recommend a smart home? And how concerned, if at all,
01:32:03.260 should we be about people spying on us, for lack of a better term? You know, I think people
01:32:09.020 distrust Google, they think Google's amassing information on them. They distrust the government,
01:32:12.780 they think the government could possibly hack into one of these appliances. These are real
01:32:16.780 concerns you hear from people. Yeah, so, I think that a lot of people are concerned about privacy.
01:32:22.780 In my voice work, I have friends at many of the large internet companies, and they're my
01:32:29.020 friends, I trust them to tell me the truth. Many of my friends are genuinely concerned about, and also very
01:32:35.340 respectful of, privacy. So a lot of the large internet companies, you know, some better than others, really do
01:32:40.700 have stringent privacy controls that make it incredibly difficult for anyone to just spy on you. Now, having
01:32:46.620 said that, I have no reason to think the U.S. government can hack
01:32:51.660 into these devices, but frankly, I'd be a little bit disappointed if they can't.
01:32:58.460 You know, by the way, I used to work on speech recognition, right? So I worked on these voice-activated
01:33:03.020 devices. One thing I'm not proud of: for a long time, even while working on these devices, I had exactly one
01:33:08.780 light bulb in my home that was connected to my smart speaker, because the configuration process is so
01:33:13.740 annoying. So I got through, you know, configuring one light bulb so I could turn it on with a voice
01:33:18.300 command, but after that I couldn't be bothered. So I think we still gotta make these things
01:33:23.820 better. You know, we technologists build a lot of things. Sometimes it's really great and I
01:33:28.940 love it, but sometimes, you know, you do wonder, right, if we're really helping solve people's problems.
01:33:35.340 Hey, if we have more people working on it, maybe we'll all collectively make all this tech much better.
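[Editor's sketch] Once a smart speaker has transcribed speech to text, the remaining step in Ng's one-light-bulb setup is matching the utterance to a device command. The device name and replies below are hypothetical, and real assistants use trained language-understanding models rather than keyword rules, but a minimal rule-based version looks like this.

```python
# Toy smart-speaker intent matcher: map a transcribed utterance to an
# on/off command for a configured device.

DEVICES = {"light": "off"}  # one configured bulb, as in the anecdote

def handle_utterance(text):
    """Match a transcribed utterance to a turn-on/turn-off intent."""
    words = text.lower().split()
    for name in DEVICES:
        if name in words:
            if "on" in words:
                DEVICES[name] = "on"
                return f"{name} is now on"
            if "off" in words:
                DEVICES[name] = "off"
                return f"{name} is now off"
    return "sorry, I didn't catch that"

print(handle_utterance("turn the light on"))   # light is now on
print(handle_utterance("turn the light off"))  # light is now off
print(handle_utterance("dim the lamp"))        # unrecognized -> fallback reply
```

The brittleness is the point: every new bulb means another name, another set of phrasings to handle, which is exactly the configuration pain Ng describes.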
01:33:39.580 Yeah. No, I've said, in this day and age it's not enough to pretend, you actually have to be a
01:33:44.460 good person, because someone's probably always listening, watching, amassing data. They're gonna
01:33:49.100 know, one way or another. It's disconcerting. But I don't know, if you're not a criminal, and you're not,
01:33:54.860 you know, dealing with terrorists and so on, how worried do you need to be? I don't know. I'll give you
01:33:59.180 the last word. Yeah, you know, I think AI is the new electricity. Much as the rise of electricity
01:34:06.060 starting about 100 years ago transformed every industry, I think AI is now on a path to do the
01:34:11.420 same. So, really, to anyone wondering if it's worth learning about it, jumping in, trying to help
01:34:16.540 all of us collectively navigate the future: I think every citizen, every government, all of us individuals
01:34:23.180 should jump in and play a role in shaping a better future for everyone in light of this amazing
01:34:29.020 technology. Wonderful talking to you. Thank you so much for your expertise and your insights.
01:34:34.140 You know, thank you, thank you. It was really, really fun to do this with you.
01:34:42.540 So, as I mentioned in our other episode this week, we're scaling back a little for this week and
01:34:47.580 next week on our episodes, just as we get ready to launch on SiriusXM. My team especially has a
01:34:52.540 lot they need to be doing. So we're going to launch five days a week starting on September 7th,
01:34:57.500 but in the meantime we're on a bit of a scaled-back schedule, for those of you wondering.
01:35:01.660 But our next guest, who's going to be coming up on Monday, is one we've really wanted to have on for a
01:35:07.340 while. Controversial guy, because he worked for Trump, and, you know, he's been completely excoriated by the
01:35:12.220 mainstream media, but fascinating and a really smart dude. Stephen Miller is going to be here. I used to
01:35:17.420 have him on The Kelly File all the time, then you saw what the press did to him when he went
01:35:22.460 inside the Trump team. But, you know, I've spent years talking to him. There is no better person
01:35:29.820 to talk to if you want to understand what's happening in this country with our southern border,
01:35:34.860 our northern border, and our approach toward immigration in general. So I'm really looking
01:35:39.020 forward to the conversation. Stephen Miller, Monday. Don't miss it. In the meantime, go ahead and subscribe
01:35:42.940 so you don't miss it. Download, give me a five-star rating while you're there, and give me a review.
01:35:48.460 Let me know what you think. What do you think of AI? Are you in favor? And what would you like
01:35:53.420 me to ask Stephen Miller? Taking your thoughts right now in the Apple review section, or wherever you
01:35:58.700 download your podcasts. Thanks for listening to The Megyn Kelly Show. No BS, no agenda, and no fear.
01:36:09.180 The Megyn Kelly Show is a Devil May Care Media production in collaboration with Red Seat Ventures.