The Joe Rogan Experience - October 24, 2018


Joe Rogan Experience #1188 - Lex Fridman


Episode Stats

Length

2 hours and 55 minutes

Words per Minute

164.35303

Word Count

28,833

Sentence Count

2,161

Misogynist Sentences

46

Hate Speech Sentences

29


Summary

In this episode, Lex and I talk about artificial intelligence and how it can help us understand the world around us, as well as what is going on inside of us. We also talk about what artificial intelligence is, why it's important to understand it, and what it means for us to have it in our lives.


Transcript

00:00:02.000 Four, three, two, one.
00:00:07.000 Hello, Lex.
00:00:08.000 Hey.
00:00:09.000 We're here, man.
00:00:10.000 What's going on?
00:00:11.000 We're here.
00:00:11.000 Thanks for doing this.
00:00:12.000 You brought notes.
00:00:13.000 You're seriously prepared.
00:00:15.000 When you're jumping out of a plane, it's best to bring a parachute.
00:00:18.000 This is my parachute.
00:00:19.000 I understand.
00:00:21.000 Yeah.
00:00:22.000 How long have you been working in artificial intelligence?
00:00:25.000 My whole life, I think.
00:00:27.000 Really?
00:00:27.000 So when I was a kid, I wanted to become a psychiatrist.
00:00:31.000 I wanted to understand the human mind.
00:00:33.000 I think the human mind is the most beautiful mystery that our entire civilization has taken on exploring through science.
00:00:45.000 I think you look up at the stars and you look at the universe out there.
00:00:48.000 You had Neil deGrasse Tyson here.
00:00:52.000 It's an amazing, beautiful, scientific journey that we're taking on in exploring the stars, but the mind to me is a bigger mystery and more fascinating, and it's been the thing I've been fascinated by from the very beginning of my life. I think all of human civilization has been wondering, you know,
00:01:13.000 what is inside this thing?
00:01:16.000 The hundred trillion connections that are just firing all the time, somehow making the magic happen to where you and I can look at each other, make words, all the fear, love, life, death that happens is all because of this thing in here.
00:01:31.000 And understanding why is fascinating.
00:01:35.000 And what I early on understood is that one of the best ways, for me at least, to understand the human mind is to try to build it.
00:01:47.000 And that's what artificial intelligence is.
00:01:51.000 It's not enough, from a psychology perspective, to study, or from a psychiatry perspective to investigate, from the outside.
00:02:01.000 The best way to understand is to do.
00:02:04.000 So you mean almost like reverse engineering a brain?
00:02:09.000 There's some stuff, exactly, reverse engineering the brain.
00:02:12.000 There's some stuff that you can't understand until you try to do it.
00:02:15.000 You can hypothesize your...
00:02:18.000 I mean, we're both martial artists from various directions.
00:02:23.000 You can hypothesize about what is the best martial art.
00:02:28.000 But until you get in the ring, like what the UFC did...
00:02:33.000 And test ideas, that's when you first realize that the touch of death, which I've seen some YouTube videos on, perhaps cannot kill a person with a single touch, or your mind, or telepathy, and that there are certain things that work.
00:02:49.000 Wrestling works.
00:02:50.000 Punching works.
00:02:52.000 Okay, can we make it better?
00:02:54.000 Can we create something like a touch of death?
00:03:01.000 Can we figure out how to turn the hips, how to deliver a punch in a way that does do a significant amount of damage?
00:03:09.000 And then you, at that moment, when you start to try to do it, and you face some of the people that are trying to do the same thing, that's the scientific process.
00:03:17.000 And you try, you actually begin to understand what is intelligence.
00:03:23.000 And you begin to also understand how little we understand.
00:03:26.000 It's like Richard Feynman, who I'm dressed after today.
00:03:29.000 Are you?
00:03:31.000 He's a physicist.
00:03:32.000 I'm not sure if you're sure.
00:03:33.000 Yeah, he always used to wear this exact thing, so I feel pretty badass wearing it.
00:03:39.000 If you think you know astrophysics, you don't know astrophysics.
00:03:42.000 That's right.
00:03:43.000 Well, he said it about quantum physics, right?
00:03:44.000 Quantum physics, that's right.
00:03:47.000 So he was a quantum physicist.
00:03:51.000 I remember hearing him talk about that understanding the nature of the universe, of reality, could be like an onion.
00:04:01.000 We don't know.
00:04:01.000 But it could be like an onion to where you think you know you're studying a layer of an onion and then you peel it away and there's more.
00:04:07.000 And you keep doing it and there's an infinite number of layers.
00:04:10.000 With intelligence, there's the same kind of component to where we think we know.
00:04:15.000 We got it.
00:04:15.000 We figured it out.
00:04:16.000 We figured out how to beat the human world champion to chess.
00:04:19.000 We solved intelligence.
00:04:21.000 And then we tried the next thing.
00:04:23.000 Wait a minute.
00:04:24.000 Go is really difficult to solve as a game.
00:04:26.000 And then you say, okay.
00:04:29.000 I came up when the game of Go was impossible for artificial intelligence systems to beat, and it has now recently been beaten.
00:04:36.000 Within the last five years, right?
00:04:38.000 The last five years.
00:04:39.000 There's a lot of technical, fascinating things of why that victory is interesting and important for artificial intelligence.
00:04:46.000 It requires creativity, correct?
00:04:48.000 It does not.
00:04:50.000 It just exhibits creativity.
00:04:52.000 So the technical aspects of why AlphaGo from Google DeepMind, that was the designers and the builders of the system that was the victor, they did a few very interesting technical things where, essentially,
00:05:10.000 you develop a neural network.
00:05:13.000 This is this type of artificial intelligence system that looks at a board of Go, has a lot of elements on it, there's black and white pieces, and is able to tell you, how good is this situation?
00:05:26.000 And how can I make it better?
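[A minimal sketch of the kind of board-evaluation network being described here, assuming a Python/PyTorch implementation; the two-plane board encoding, layer sizes, and names are illustrative only, not AlphaGo's actual architecture.]

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Given a Go position, estimate how good it is for the player to move."""
    def __init__(self, board_size=19):
        super().__init__()
        # Two input planes: one marking black stones, one marking white stones.
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * board_size * board_size, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),  # -1 = losing position, +1 = winning position
        )

    def forward(self, board_planes):
        return self.head(self.features(board_planes))

net = ValueNet()
empty_board = torch.zeros(1, 2, 19, 19)   # a batch of one empty 19x19 board
print(net(empty_board))                   # "how good is this situation?" as a score in [-1, 1]

[Answering "how can I make it better?" is then a matter of scoring each candidate move's resulting position and preferring the ones the network rates highest.]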
00:05:30.000 That idea, so chess players can do this.
00:05:32.000 I'm not actually that familiar with the game of Go.
00:05:35.000 I'm Russian, so chess to us is romanticized.
00:05:38.000 It's a beautiful game.
00:05:41.000 I think that you look at a board and all your previous experiences, all the things you've developed over tons of years of practice and thinking, you get this instinct of what is the right path to follow.
00:05:53.000 And that's exactly what the Neural Network is doing.
00:05:56.000 And some of the paths it has come up with are surprising to other world champions.
00:06:04.000 So in that sense, it says, well, this thing is exhibiting creativity because it's coming up with solutions that are something that's outside the box, thinking from the perspective of the human.
00:06:16.000 Why do you differentiate between requires creativity and exhibits creativity?
00:06:23.000 I think, one, because we don't really understand what creativity is.
00:06:27.000 So, it's almost...
00:06:30.000 It's on the level of concepts such as consciousness.
00:06:37.000 For example, the question which there's a lot of thinking about whether creating something intelligent requires consciousness, requires for us to be actual living beings aware of our own existence.
00:06:49.000 In the same way, does doing something like building an autonomous vehicle, that's the area where I work in, does that require creativity?
00:07:00.000 Does that even require something like consciousness and self-awareness?
00:07:03.000 I mean, I'm sure in LA, there's some degree of creativity required to navigate traffic.
00:07:09.000 And in that sense, you start to think, are there solutions that are outside of the box an AI system needs to create?
00:07:18.000 Once you start to build it, you realize that to us humans, certain things appear creative, certain things don't.
00:07:24.000 Certain things we take for granted.
00:07:25.000 Certain things we find beautiful.
00:07:27.000 And certain things we're like, yeah, yeah, that's boring.
00:07:30.000 Well, there's creativity in different levels, right?
00:07:32.000 There's creativity like to write The Stand, the Stephen King novel.
00:07:35.000 That requires creativity.
00:07:37.000 There's something about his...
00:07:39.000 He's creating these stories.
00:07:41.000 He's giving voices to these characters.
00:07:44.000 He's developing these scenarios and these dramatic sequences in the book that's going to get you really sucked in.
00:07:50.000 That's almost undeniable creativity, right?
00:07:55.000 Is it?
00:07:57.000 He's imagining a world.
00:07:59.000 Was it always set in New Hampshire, Massachusetts?
00:08:02.000 A lot of it's Maine.
00:08:03.000 Maine, that's right.
00:08:04.000 So he's imagining a world and imagining the emotion of different levels surrounding that world.
00:08:09.000 Yeah, that's creative.
00:08:11.000 Although there's a few really good books, including his own, that talk about writing.
00:08:16.000 Yeah, he's got a great book on writing.
00:08:18.000 It's actually called On Writing.
00:08:21.000 If there's anyone who can write a book on writing, it should be Stephen King.
00:08:25.000 I think Stephen Pressfield.
00:08:27.000 I hope I'm not saying the wrong thing.
00:08:28.000 The War of Art.
00:08:28.000 The War of Art.
00:08:29.000 Beautiful book.
00:08:30.000 And I would say, from my recollection, they don't necessarily talk about creativity very much.
00:08:37.000 That it's really hard work, putting in the hours of every day of just grinding it out.
00:08:42.000 Well, Pressfield talks about the muse.
00:08:44.000 Pressfield speaks of it almost in like a strange, mystical sort of connection to the unknown.
00:08:55.000 I'm not even exactly sure if he believes in the muse, but I think if I could put words in his mouth, I have met him.
00:09:04.000 He's a great guy.
00:09:04.000 He was on the podcast once.
00:09:06.000 I think the way he treats it is that if you decide the muse is real and you show up every day and you write as if the muse is real, you get the benefits of the muse being real.
00:09:19.000 That's right.
00:09:20.000 Whether or not there's actually a muse that's giving you these wonderful ideas.
00:09:24.000 And what is the muse?
00:09:25.000 So, I think of artificial intelligence the same way.
00:09:28.000 There's a quote by Pamela McCordick from a 1979 book that I really like.
00:09:36.000 She talks about the history of artificial intelligence.
00:09:39.000 AI began with an ancient wish to forge the gods.
00:09:43.000 And to me, gods, broadly speaking, or religions, represents, it's kind of like the muse, it represents the limits of possibility, the limits of our imagination.
00:09:54.000 So it's this thing that we don't quite understand.
00:09:57.000 That is the Muse.
00:09:59.000 That is God.
00:10:02.000 Us chimps are very narrow in our ability to perceive and understand the world, and there's clearly a much bigger, beautiful, mysterious world out there, and God or the Muse represents that world.
00:10:14.000 And for many people, I think throughout history, and especially in the past sort of 100 years, artificial intelligence has become to represent that a little bit.
00:10:23.000 To the thing which we don't understand and we crave, we're both terrified and we crave in creating this thing that is greater, that is able to understand the world better than us.
00:10:35.000 In that sense, artificial intelligence is the desire to create the muse, this other, this imaginary thing.
00:10:45.000 And I think one of the beautiful things, if you talk about everybody from Elon Musk to Sam Harris to all the people thinking about this, Is that there is a mix of fear of that, of that unknown, of creating that unknown, and an excitement for it.
00:11:02.000 Because there's something in human nature that desires creating that.
00:11:06.000 Because like I said, creating is how you understand.
00:11:10.000 Did you initially study biology?
00:11:14.000 Did you study the actual development of the mind or what is known about the evolution of the human mind?
00:11:21.000 Of the human mind, yeah.
00:11:22.000 So my path is different, as it is for a lot of computer scientists and roboticists.
00:11:29.000 We ignore biology, neuroscience, the physiology and anatomy of our own bodies.
00:11:37.000 And there's a lot of beliefs now that you should really study biology, you should study neuroscience, you should study our own brain, the actual chemistry, what's happening, what is actually, how are the neurons interconnected, all the different kinds of systems in there.
00:11:53.000 So that is a little bit of a blind spot, or it's a big blind spot.
00:11:57.000 But the problem is, so I started with more philosophy, almost.
00:12:02.000 It's where, if you think, Sam Harris, in the last couple of years, has started kind of thinking about artificial intelligence.
00:12:10.000 And he has a background in neuroscience, but he's also a philosopher.
00:12:15.000 And I started there by reading Camus and Nietzsche or Dostoevsky, thinking what is...
00:12:22.000 What is intelligence?
00:12:24.000 What is human morality?
00:12:27.000 Will?
00:12:28.000 So all of these concepts give you the context for which you can start studying these problems.
00:12:35.000 And then I said, there's a magic that happens when you build a robot.
00:12:40.000 It drives around.
00:12:42.000 I mean, you're a father.
00:12:44.000 I'd like to be, but I'm not yet.
00:12:47.000 There's a creation aspect that's wonderful, that's incredible.
00:12:51.000 For me, I don't have any children at the moment, but the act of creating a robot where you programmed it and it moves around and it senses the world is a magical moment.
00:13:04.000 Did you see Alien Covenant?
00:13:07.000 Is that a sci-fi movie?
00:13:08.000 Yeah.
00:13:09.000 No.
00:13:10.000 Have you ever seen any of the alien films?
00:13:13.000 So I grew up in the Soviet Union where we didn't watch too many movies.
00:13:19.000 So I need to catch up.
00:13:20.000 We should catch up on that one in particular because a lot of it has to do with artificial intelligence.
00:13:24.000 There's actually a battle between, spoiler alert, two different but identical artificially intelligent synthetic beings that are there to aid the people on the ship.
00:13:41.000 One of them is very creative and one of them is not.
00:13:44.000 And the one that is not has to save them from the one that is.
00:13:49.000 Spoiler alert.
00:13:51.000 I won't tell you who wins.
00:13:52.000 But there's a really fascinating scene at the very beginning of the movie where the creator of this artificially intelligent being is discussing its existence with the being itself.
00:14:07.000 And the being is trying to figure out who made him.
00:14:10.000 And it's this really fascinating moment and this being winds up being a bit of a problem because it possesses creativity and it has the ability to think for itself and they found it to be a problem.
00:14:29.000 So they made a different version of it which was not able to create.
00:14:33.000 And the one that was not able to create was much more of a servant.
00:14:37.000 And there's this battle between these two.
00:14:40.000 I think you would find it quite fascinating.
00:14:41.000 It's a really good movie.
00:14:43.000 Yeah, the same kind of theme carries through Ex Machina and 2001 Space Odyssey.
00:14:50.000 You've seen Ex Machina?
00:14:51.000 Yeah, I've seen it.
00:14:53.000 So because of your...
00:14:54.000 I've listened to your podcast, and because of it, I've watched it a second time.
00:14:58.000 Because the first time I watched it, I had a Neil deGrasse Tyson moment where it was...
00:15:02.000 You said there's cut the...
00:15:04.000 Cut the shit.
00:15:05.000 Cut the shit moments?
00:15:06.000 Yes.
00:15:07.000 For me, the movie opening is...
00:15:12.000 Everything about it was...
00:15:14.000 I was rolling my eyes.
00:15:15.000 Why were you rolling your eyes?
00:15:17.000 What was the cut the shit moment?
00:15:20.000 So, that's a general bad tendency that I'd like to talk about amongst people who are scientists that are actually trying to do stuff.
00:15:28.000 They're trying to build the thing.
00:15:30.000 It's very tempting to roll your eyes and tune out in a lot of aspects of artificial intelligence discussion and so on.
00:15:37.000 For me, there's real reasons to roll your eyes and there's just Well, let me just describe it.
00:15:46.000 So this person in Ex Machina, no spoiler alerts, is in the middle of what, like a Jurassic Park type situation where he's like in the middle of land that he owns?
00:15:56.000 Yeah, we don't really know where it is.
00:15:58.000 It's not established, but you have to fly over glaciers and you get to this place and there's rivers and he has this fantastic compound and inside this compound he appears to be working alone.
00:16:07.000 Right.
00:16:08.000 And he's like doing curls, I think, like dumbbells and drinking heavily.
00:16:15.000 So everything I know about science, everything I know about engineering is it doesn't happen alone.
00:16:23.000 So the situation of a compound with no hundreds of engineers there working on this is not feasible.
00:16:32.000 It's not possible.
00:16:34.000 And the other moments like that were the technical, the discussion about how it's technically done.
00:16:43.000 They threw a few jargon to spice stuff up that doesn't make any sense.
00:16:49.000 Well, that's where I am...
00:16:53.000 Blissfully ignorant.
00:16:55.000 So I watch it and I go, this movie's awesome!
00:16:57.000 And you're like, ah, I know too much.
00:16:59.000 Yeah, I know too much.
00:17:00.000 But that's a stupid way to think for me.
00:17:04.000 So once you suspend disbelief, say, okay, well, right, those are not important details.
00:17:09.000 Yeah, but it is important.
00:17:11.000 I mean, they could have gone to you or someone who really has knowledge in it and cleaned up those small aspects and still kept the theme of the story.
00:17:19.000 That's right.
00:17:20.000 They could have, but they would make a different movie.
00:17:23.000 But slightly different.
00:17:25.000 I don't know if it's possible to make.
00:17:26.000 So you look at 2001 Space Odyssey.
00:17:29.000 I don't know if you've seen that movie.
00:17:31.000 That's the kind of movie you'll start, if you talk to scientists, you'll start making those kinds of movies.
00:17:37.000 Because you can't actually use jargon that makes sense because we don't know how to build a lot of these systems.
00:17:44.000 So the way you need to film it and talk about it is with mystery.
00:17:48.000 It's this Hitchcock type.
00:17:50.000 You say very little.
00:17:53.000 You leave it to your imagination to see what happens.
00:17:56.000 Here everything was in the open.
00:17:58.000 Right.
00:17:59.000 Even in terms of the actual construction of the brain when they had that...
00:18:03.000 Foam looking, whatever, gel brain.
00:18:07.000 Right.
00:18:07.000 Yeah.
00:18:08.000 If they gave a little bit more subtle mystery, I think I would have enjoyed that movie a lot more.
00:18:14.000 But the second time, really because of you, you said I think it's your favorite sci-fi movie.
00:18:19.000 It's absolutely one of my favorite sci-fi movies.
00:18:21.000 One of my favorite movies, period.
00:18:23.000 I loved it.
00:18:23.000 Yeah, so I watched it again and also Sam Harris said that he also hated the movie and then watched it again and liked it.
00:18:33.000 So I give it a chance.
00:18:36.000 Why would you see a movie again after you hate it?
00:18:40.000 Because maybe you're self-aware enough to think there's something unhealthy about the way I hated the movie.
00:18:47.000 Like you're introspective enough to know... It's like I have the same experience with Batman, okay?
00:18:55.000 I watched...
00:18:55.000 Which one?
00:18:57.000 Dark Knight, I think.
00:18:58.000 Christian Bale?
00:18:59.000 Christian Bale one.
00:19:01.000 So, to me, the first time I watched that is a guy in a costume, like, speaking excessively with an excessively low voice.
00:19:11.000 I mean, it's just something with like little bunny ears, not bunny ears, but like little ears.
00:19:15.000 It's so silly.
00:19:17.000 But then you go back and, okay, if we just accept that that's the reality of the world we live in, what's the human nature aspects that are being explored here?
00:19:27.000 What is the beautiful conflict between good and evil that's being explored here?
00:19:34.000 And what are the awesome graphics effects that are being on the exhibit, right?
00:19:39.000 So if you can just suspend that, that's beautiful.
00:19:44.000 The movie can become quite fun to watch.
00:19:47.000 But still, to me, not to offend anybody, but superhero movies are still difficult for me to watch.
00:19:55.000 Yeah, who was talking about that recently?
00:19:59.000 Was it Kelly?
00:20:01.000 Kelly Slater?
00:20:02.000 No.
00:20:02.000 No, it was yesterday.
00:20:03.000 Yesterday.
00:20:03.000 It's Kyle.
00:20:04.000 It's Kyle.
00:20:05.000 Yeah, he doesn't like superhero movies or something.
00:20:07.000 Right.
00:20:07.000 He doesn't like superhero movies.
00:20:08.000 We were talking about Batman, about Christian Bale's voice, and he's like, the most ridiculous thing was that he's actually Batman, not his voice.
00:20:17.000 It's true.
00:20:19.000 I'm Batman.
00:20:20.000 That part of it is way less ridiculous than the fact that he's Batman.
00:20:23.000 He's Batman.
00:20:24.000 Because anybody can do that voice.
00:20:25.000 Yeah.
00:20:27.000 But I contradict.
00:20:29.000 I'm a hypocrite because Game of Thrones or Tolkien's Lord of the Rings, it's totally believable to me.
00:20:40.000 Yeah, of course.
00:20:41.000 Dragons and...
00:20:42.000 Well, that's a fantasy world, right?
00:20:45.000 That's the problem with something like Batman or even Ex Machina is that it takes place in this world.
00:20:50.000 Whereas they're in Middle Earth.
00:20:52.000 They're in a place that doesn't exist.
00:20:55.000 It's like Avatar.
00:20:57.000 If you make a movie about a place that does not exist, you can have all kinds of crazy shit in that movie.
00:21:04.000 Because it's not real.
00:21:06.000 That's right.
00:21:06.000 Yeah.
00:21:07.000 So...
00:21:09.000 But at the same time, Star Wars is harder for me.
00:21:12.000 And you're saying Star Wars is a little more real because it feels feasible.
00:21:17.000 Like you could have spaceships flying around.
00:21:20.000 Right.
00:21:21.000 What's not feasible about Star Wars to you?
00:21:23.000 Oh, I'm not.
00:21:24.000 I'll leave that one to Neil deGrasse.
00:21:26.000 He was getting angry about the robot that's circular, when it rolls around.
00:21:31.000 He's like, it would just be slippery.
00:21:32.000 Like, trying to roll around all over the sand, it wouldn't work.
00:21:35.000 It would get no traction.
00:21:36.000 I was like, that's true.
00:21:37.000 Because if you had, like, glass tires, and you tried to drive over sand with smooth tires, you'd get nothing.
00:21:43.000 Yeah.
00:21:44.000 He's actually the guy that made me realize, you know the movie Ghost with Patrick Swayze?
00:21:49.000 And it was at this podcast or somewhere he was talking about the fact that, so this guy could go through walls, right?
00:21:57.000 It's a beautiful romantic movie that everybody should watch, right?
00:22:01.000 But he doesn't seem to fall through chairs when he sits on them.
00:22:05.000 Yeah.
00:22:06.000 Right?
00:22:07.000 So he can walk through walls, but he can put his hand on the desk.
00:22:11.000 Yeah.
00:22:11.000 And he can sit.
00:22:12.000 Like, his butt has a magical shield that is in this reality.
00:22:17.000 This is a quantum shield that protects him from falling.
00:22:20.000 Yeah.
00:22:22.000 So that's...
00:22:23.000 You know, those devices are necessary movies.
00:22:26.000 I get it.
00:22:28.000 Yeah, but you got a good point.
00:22:29.000 He's got a good point, too.
00:22:31.000 It's like, there's cut-the-shit moments.
00:22:33.000 They don't have to be there.
00:22:34.000 Yeah.
00:22:57.000 Okay, but you brought me in as a martial arts expert to talk to you about your movie, and I'm telling you right now, this is horseshit.
00:23:03.000 Yeah, I'm a huge believer of Steve Jobs' philosophy, where, forget the average person discussion, because, first of all, the average person will care.
00:23:15.000 Steve Jobs...
00:23:17.000 He would really push the design of the interior of computers to be beautiful, not just the exterior.
00:23:23.000 Even if you never see it, if you have attention to detail in every aspect of the design, even if it's completely hidden from the actual user in the end, somehow that karma, whatever it is, that love for everything you do, seeps through the product.
00:23:40.000 And the same, I think, with movies.
00:23:41.000 If you talk about the 2001 Space Odyssey, there's so many details.
00:23:47.000 I think there's probably these groups of people that study every detail of that movie and other Kubrick films.
00:23:53.000 Those little details matter.
00:23:55.000 Somehow they all come together to show how deeply passionate you are about telling the story.
00:24:03.000 Well, Kubrick was a perfect example of that because he would put layer upon layer upon layer of detail into films that people would never even recognize.
00:24:09.000 Like there's a bunch of correlations between the Apollo moon landings and The Shining.
00:24:15.000 People have actually studied it to the point where they think that it's some sort of a confession that Kubrick faked the moon landing.
00:24:22.000 It goes from the little boy having the rocket ship on his sweater to the number of the room that things happen.
00:24:32.000 There's a bunch of very bizarre connections in the film that Kubrick...
00:24:38.000 Unquestionably engineered, because he was just a stupid-smart man.
00:24:43.000 I mean, he was so goddamn smart that he would do complex mathematics for fun in his spare time. Kubrick was a legitimate genius, and he engineered that sort of complexity into his films. He didn't have cut-the-shit moments in his movies, nothing I can recall, and no,
00:25:02.000 not even close.
00:25:04.000 This was very interesting.
00:25:05.000 I mean, but that probably speaks to the reality of Hollywood today, that the cut-the-shit moments don't affect the bottom line of how much the movie makes.
00:25:15.000 Well, it really depends on the film, right?
00:25:17.000 I mean, the cut-the-shit moments that Neil deGrasse Tyson found in Gravity, I didn't see because I wasn't aware of what the effects of gravity on a person's hair would be.
00:25:27.000 You know, he saw it and he was like, this is ridiculous.
00:25:30.000 And then there were some things like, why are these space stations so close together?
00:25:34.000 I just let it slide while the movie was playing, but then he went into great detail about how preposterous it would be that those space stations were that close together that you could get to them so quickly.
00:25:43.000 That's with Sandra Bullock and the good-looking guy.
00:25:46.000 And George Clooney.
00:25:47.000 George Clooney.
00:25:47.000 Yeah, the good-looking guy.
00:25:50.000 So did that pass muster with Neil deGrasse Tyson?
00:25:53.000 No.
00:25:54.000 He tore it apart.
00:25:55.000 And when he tore it apart, people went crazy.
00:25:57.000 They got so angry at him.
00:26:00.000 Yeah, he reads the negative comments, as you've talked about.
00:26:03.000 I actually recently, because of doing a lot of work on artificial intelligence and lecturing about it and so on, have plugged into this community of folks that are thinking about the future of artificial intelligence, artificial general intelligence, and they are very much out-of-the-box thinkers, to where the kind of messages I get are... let's just say I let them kind of explore those ideas without engaging in those discussions.
00:26:31.000 I think very complex discussions should be had with people in person.
00:26:35.000 That's what I think.
00:26:36.000 And I think that when you allow comments, just random anonymous comments to enter into your consciousness, like you're taking risks.
00:26:46.000 And you may run into a bunch of really brilliant ideas.
00:26:51.000 That are, you know, coming from people that are considerate, that have thought these things through, or you might just run into a river of assholes.
00:27:00.000 And it's entirely possible.
00:27:03.000 I peeked into my comments today on Twitter and I was like, what in the fuck?
00:27:07.000 I started reading like a couple of them, some just morons.
00:27:10.000 And I'm like, alright, about some shit, I don't even know what the fuck they were talking about.
00:27:14.000 But that's the risk you take when you dive in.
00:27:17.000 You're going to get people that are disproportionately, you know, delusional or whatever it is in regards to your position on something.
00:27:29.000 Or whether or not they even understand your position.
00:27:31.000 They'll argue something that's an incorrect interpretation of your position.
00:27:36.000 Yeah, and you've actually, from what I've heard, you've actually been to this podcast and so on, really good at being open minded.
00:27:44.000 And that's something I try to preach as well.
00:27:47.000 So in AI discussions, when you're talking about AGI, there's a difference between narrow AI and general artificial intelligence. Narrow AI is the kind of tools that are being applied now and are being quite effective.
00:28:03.000 And then there's general AI, which is a broad categorization of concepts that are human-level or superhuman-level intelligence.
00:28:10.000 When you talk about AGI, Artificial General Intelligence, there seems to be two camps of people.
00:28:17.000 Ones who are really working deep in it, like that's the camp I kind of sit in, and a lot of those folks tend to roll their eyes and just not engage into any discussion of the future.
00:28:28.000 Their idea is saying, it's really hard to do what we're doing, and it's just really hard to see how this becomes intelligent.
00:28:38.000 And then there's another group of people who say, yeah, but you're being very short-sighted, that you may not be able to do much now, but the exponential, the hard takeoff overnight, it can become super intelligent,
00:28:54.000 and then it'll be too late to think about.
00:28:56.000 Now, the problem with those two camps, as with any camps, Democrat, Republican, any camps, is they seem to be talking past each other, as opposed to recognizing that both have really interesting ideas.
00:29:09.000 If you go back to the analogy of touch of death, of this idea of MMA, right?
00:29:18.000 So I'm in this analogy.
00:29:19.000 I'm going to put myself in the UFC for a second.
00:29:22.000 In this analogy, I'm ranked in the top 20. I'm working really hard.
00:29:27.000 My dream is to become a world champion.
00:29:29.000 I'm training three times a day.
00:29:30.000 I'm really working.
00:29:31.000 I'm an engineer.
00:29:32.000 I'm trying to build my skills up.
00:29:34.000 And then there's other folks that come along, like Steven Seagal and so on, that kind of talk about other kinds of martial arts, other ideas of how you can do certain things.
00:29:45.000 And I think Steven Seagal might be on to something.
00:29:51.000 I think we really need to be open-minded.
00:29:53.000 Like Anderson Silva, I think, talks to Steven Seagal.
00:29:56.000 Or somebody talks to Steven Seagal, right?
00:29:59.000 Well, Anderson Silva thinks Steven Seagal is...
00:30:05.000 I want to put this in a respectful way.
00:30:09.000 And Anderson Silva has a wonderful sense of humor.
00:30:12.000 And Anderson Silva is very playful.
00:30:15.000 And he thought it would be hilarious if people believed that he was learning all of his martial arts from Steven Seagal.
00:30:24.000 He also loves Steven Seagal movies legitimately, so he treated him with a great deal of respect.
00:30:30.000 He also recognizes that Steven Seagal actually is a master of Aikido.
00:30:35.000 He really does understand Aikido and was one of the very first Westerners that was teaching in Japan.
00:30:43.000 Speaks fluent Japanese, was teaching at a dojo in Japan, and is a legitimate master of Aikido.
00:30:53.000 The problem with Aikido is, it's one of those martial arts that has merit in a vacuum.
00:31:02.000 If you're in a world where there's no NCAA wrestlers, or no Judo players, or no Brazilian Jiu Jitsu black belts, or no Muay Thai kickboxers, there might be something to that Aikido stuff.
00:31:17.000 But in the world, Where all those other martial arts exist and we've examined all the intricacies of hand-to-hand combat, it falls horribly short.
00:31:28.000 Well, see, this is the point I'm trying to make.
00:31:30.000 You just said that we've investigated all the intricacies.
00:31:35.000 You said all the intricacies of hand-to-hand combat.
00:31:38.000 I mean, you're just speaking, but you want to open your mind to the possibility That Aikido has some techniques that are effective.
00:31:47.000 Yeah, when I say all, you're correct.
00:31:50.000 That's not a correct way of describing it.
00:31:52.000 Because there's always new moves that are being, like, for instance, in this recent fight between Anthony Pettis and Tony Ferguson, Tony Ferguson actually used Wing Chun in a fight.
00:32:05.000 He trapped one of Anthony Pettis' hands and hit him with an elbow.
00:32:10.000 He basically used a technique that you would use on a Wing Chun dummy, and he did it in an actual world-class mixed martial arts fight.
00:32:20.000 And I remember watching it, wow, going, this crazy motherfucker actually pulled that off.
00:32:24.000 Because it's a technique that you just rarely see anybody getting that proficient at it that fights in MMA. And Ferguson is an extremely creative and open-minded guy, and he figured out a way to make that work in a world-class fight.
00:32:40.000 So, and let me then ask you the question, there's these people who still believe, quite a lot of them, that there is this touch of death, right?
00:32:51.000 So, do you think it's possible to discover, through this rigorous scientific process that is MMA, that started pretty recently, do you think, not the touch of death, but do you think we can get a 10x improvement in the amount of power the human body can...
00:33:05.000 Can generate in punching?
00:33:08.000 No, certainly not 10x.
00:33:11.000 I think you can get incremental improvements, but it's all based entirely on your frame.
00:33:15.000 Like, if you're a person that has very small hands and narrow shoulders, you're kind of screwed.
00:33:21.000 There's not really a lot of room for improvement.
00:33:23.000 You can certainly get incremental improvement in your ability to generate power, but you'll never be able to generate the same kind of power as, say, a guy with a very big frame like Brock Lesnar or Derrick Lewis, or, you know, anyone who has the classic elements that go with being able to generate large amounts of power: wide shoulders, large hands.
00:33:50.000 There's a lot of characteristics of the human frame itself. But even those people, there's only so much power you can generate, and we pretty much know how to do that correctly.
00:34:04.000 So the way you're talking about as a martial arts expert now is kind of the way a lot of the experts in robotics and AI talk about AI and when the topic of touch of death is brought up.
00:34:17.000 Now, the analogy is not perfect.
00:34:19.000 I tend to use probably too many analogies.
00:34:22.000 We maybe know the human body better than we know the possibility of AI. I would assume so, right?
00:34:28.000 Because the possibility of AI is basically limitless once AI starts redesigning itself.
00:34:34.000 It's not obvious that that's true.
00:34:37.000 Our imagination allows it to be true.
00:34:40.000 I'm of two minds.
00:34:44.000 I can hold both beliefs that are contradicting in my mind.
00:34:48.000 One is that idea is really far away, almost bordering on BS, and the other is it can be there overnight.
00:34:55.000 I think you can believe both those things.
00:34:58.000 There's another quote from Barbara Wootton.
00:35:05.000 It's a quote I heard in a lecture somewhere that I really like, which is: it is from the champions of the impossible rather than the slaves of the possible that evolution draws its creative force.
00:35:18.000 So I see Elon Musk as a representative of the champion of the impossible.
00:35:23.000 I see exponential growth of AI within the next several decades as the impossible.
00:35:29.000 But it's the champions of the impossible that actually make the impossible happen.
00:35:33.000 Why would exponential growth of AI be impossible?
00:35:38.000 Because it seems inevitable to me.
00:35:40.000 So, it's not impossible.
00:35:43.000 I'm sort of using the word impossible meaning...
00:35:46.000 Magnificent?
00:35:47.000 Yeah, it feels very difficult.
00:35:49.000 Very, very difficult.
00:35:51.000 We don't even know where to begin.
00:35:53.000 Grand.
00:35:53.000 Yep, like the touch of death actually feels.
00:35:56.000 Yeah, but see, the touch of death is horse shit.
00:35:58.000 But see, you're an expert.
00:35:59.000 Someone's like, ah, and they touch you in the chest.
00:36:01.000 But we don't have the ability in the body to generate that kind of energy.
00:36:05.000 How do you know that?
00:36:06.000 That's a good question.
00:36:08.000 It's never been done.
00:36:10.000 We understand so much about physiology.
00:36:13.000 How do you know it's never been done?
00:36:14.000 There could be someone out there with magic that has escaped my grasp.
00:36:19.000 No, you've studied, you've talked about with Graham Hancock, you've talked about the history, maybe it was in Roman times, that idea was discovered and then it was lost.
00:36:31.000 Because weapons are much more effective ways of delivering damage.
00:36:36.000 Now I find myself in a very uncomfortable position of defending the concept, as a martial artist, defending the concept of this.
00:36:43.000 What martial arts did you study?
00:36:46.000 Jiu-Jitsu and Judo and wrestling.
00:36:49.000 Those are the hard ones.
00:36:50.000 Jiu-Jitsu, Judo and wrestling, those are absolute martial arts, in my opinion.
00:36:56.000 This is what I mean.
00:36:57.000 If you are a guy who just has a fantastic physique and incredible speed and ridiculous power, you just can generate ridiculous power.
00:37:09.000 You know who Deontay Wilder is?
00:37:12.000 Yes.
00:37:13.000 Heavyweight champion of the world, boxer.
00:37:14.000 You have, what's his name?
00:37:17.000 Tyson Fury.
00:37:17.000 Tyson Fury on tomorrow.
00:37:18.000 Tomorrow, yes.
00:37:19.000 Two undefeated guys, right?
00:37:20.000 Yes.
00:37:21.000 Deontay Wilder has fantastic power.
00:37:25.000 I mean, he just knocks people flying across the ring.
00:37:29.000 He's just...
00:37:30.000 I think Deontay Wilder, if he just came off the street, if he was 25 years old and no one ever taught him how to box at all, and you just wrapped his hands up and had him hit a bag, he would be able to generate insane amounts of force.
00:37:45.000 If you're a person that really didn't have much power, and you had a box with Deontay Wilder, and you were both of the same age, and you were a person that knew boxing and you stood in front of Deontay, it's entirely possible that Deontay Wilder could knock you into another dimension, even though he had no experience in boxing.
00:38:02.000 If he just held on to you and hit you with a haymaker, he might be able to put you out.
00:38:07.000 If you're a person who is, let's say, built like you, a guy who exercises, who's strong, and then there's someone who's identically built like you, who's a black belt in Brazilian Jiu Jitsu, and you don't have any experience in martial arts at all,
00:38:25.000 you're fucked.
00:38:26.000 Right?
00:38:27.000 Yes.
00:38:27.000 If you're a person who's built like you, who's a guy who exercises and is healthy, and you grapple with a guy who's even stronger than you and bigger than you, but he has no experience in Brazilian Jiu-Jitsu, he's still fucked.
00:38:42.000 Yeah.
00:38:42.000 That's the difference.
00:38:43.000 That's why I think Brazilian Jiu-Jitsu and Judo and wrestling in particular, those are absolutes in that you have control of the body.
00:38:51.000 Yes.
00:38:51.000 And once you grab a hold of a person's body, there's no...
00:38:56.000 Lucky triangle chokes in jiu-jitsu.
00:38:58.000 That's right.
00:38:59.000 But I think I would say jiu-jitsu is the highest representative of that.
00:39:07.000 I think in wrestling and judo, having practiced those, I've never been quite as humble as I have been in jiu-jitsu.
00:39:14.000 Especially when I started, I was powerlifting.
00:39:16.000 I was a total meathead.
00:39:19.000 And a 130-pound guy or girl could tap you easily.
00:39:24.000 Yeah, it's confusing.
00:39:25.000 It's very confusing.
00:39:26.000 In wrestling, you can get pretty far with that meathead power.
00:39:30.000 And in judo...
00:39:33.000 A little bit less so at its highest levels.
00:39:36.000 If you go to Japan, for example, the whole dream of Judo is to effortlessly throw your opponent.
00:39:45.000 But if you go to gyms in America and so on, there's some hard wrestling-style gripping and just beating each other up pretty intensely, where we're not talking about beautiful uchi matas or these beautiful throws.
00:40:01.000 We're talking about some scrapping, some wrestling style.
00:40:05.000 Yeah.
00:40:06.000 Yeah, no, I see what you're saying.
00:40:07.000 Yeah, my experience with jiu-jitsu was very humbling when I first started out.
00:40:14.000 I had a long background in martial arts and striking, and even wrestled in high school.
00:40:19.000 And then I started taking jiu-jitsu, and a guy who was my size, and I was young at the time, and he was basically close to my age, just mauled me.
00:40:28.000 And he wasn't even a black belt.
00:40:30.000 I think he was a purple belt.
00:40:32.000 He might have been a blue belt.
00:40:32.000 I think he was a purple belt.
00:40:34.000 And just destroyed me.
00:40:36.000 Just did anything he wanted to me.
00:40:37.000 Choked me.
00:40:38.000 Armbarred me.
00:40:40.000 And I remember thinking, man, I am so delusional.
00:40:43.000 I thought I had a chance.
00:40:45.000 I thought just based on taking a couple classes and learning what an armbar is and then being a strong person who has a background in martial arts that I would be able to at least hold him off a little bit.
00:40:58.000 No.
00:41:01.000 That's so beautiful.
00:41:04.000 I feel lucky to have had that experience of having my ass kicked. Philadelphia is where I came up.
00:41:10.000 Because in science you don't often get that experience.
00:41:13.000 In the space of ideas you can't choke each other out.
00:41:18.000 You can't beat each other up in science.
00:41:20.000 So it's easy to go your whole life.
00:41:23.000 I have so many people around me telling me how smart I am.
00:41:27.000 There's no way to actually know if I'm smart or not, because I think I'm full of BS. And in the same realm as fighting, it's what Rickson Gracie said, or Saulo Ribeiro or somebody: the mat doesn't lie.
00:41:46.000 There's this deep honesty in it that I'm really grateful.
00:41:50.000 Almost like wanting, you know, you talk about bullies or you talk about, or even just my fellow academics, could benefit significantly from training a little bit.
00:41:59.000 I think so too.
00:42:01.000 It's a beautiful thing to almost, I think it's been talked about in high school sort of requiring it.
00:42:08.000 Yeah, we've talked about it many times, yeah.
00:42:10.000 I think it's a more humbling sport, to be honest, than wrestling, because you could, in wrestling, like I said, get away with some muscle.
00:42:18.000 It's also what martial arts are supposed to be, in that a small person who knows technique can beat a big person who doesn't know the technique.
00:42:26.000 That's right.
00:42:26.000 That's what we always hoped for, right?
00:42:29.000 When we saw the Bruce Lee movies, and Bruce Lee, who's a smaller guy, could beat all these bigger guys just because he had better technique.
00:42:35.000 That is actually real in jiu-jitsu, and it's one of the only martial arts where that's real.
00:42:40.000 Yeah, and in Philadelphia, you had Steve Maxwell here, right?
00:42:45.000 Sure.
00:42:45.000 That was the spring of jiu-jitsu in Philadelphia.
00:42:49.000 Yeah, he was one of the very first American black belts in jiu-jitsu way back in the day.
00:42:54.000 I believe he was a black belt in the very early 90s when jiu-jitsu was really just starting to come to America.
00:43:01.000 And he had Maxercise.
00:43:03.000 Maxercise, yeah.
00:43:04.000 In Philadelphia.
00:43:05.000 It's still there.
00:43:05.000 And then I trained at Balance, which is a few Gracie folks: Phil Migliarese, Rick Migliarese, and Josh Vogel.
00:43:14.000 I mean, especially Vogel Brothers, these couple of black belts, they come up together.
00:43:20.000 They're...
00:43:21.000 Well, they're smaller.
00:43:23.000 They're little guys.
00:43:24.000 And I think those were the guys that really humbled me pretty quickly.
00:43:28.000 Well, little guys are the best to learn technique from.
00:43:30.000 Yeah.
00:43:31.000 Because they can't rely on strength.
00:43:33.000 There's a lot of really big, powerful, you know, 250-pound jiu-jitsu guys who are never going to develop the sort of subtlety of technique that some, like the Miyao brothers, like smaller guys who just,
00:43:50.000 from the very beginning, they've never had an advantage in weight and size.
00:43:54.000 And so they've never been able to use anything but perfect technique.
00:43:57.000 Eddie Bravo's another great example of that, too.
00:43:59.000 He competed in the 140-pound, 145-pound class.
00:44:05.000 But to get back to artificial intelligence, so the idea is that There's two camps.
00:44:12.000 There's one camp that thinks that the exponential increase in technology and that once artificial intelligence becomes sentient it could eventually improve upon its own design and literally become a god in a short amount of time.
00:44:28.000 And then there's the other school of thought that thinks that is so far outside of the realm of what is possible today that even the speculation of this eventually taking place is kind of ludicrous to imagine.
00:44:42.000 Right, exactly.
00:44:43.000 And the balance needs to be struck because I think I'd like to talk about sort of the short term threats that are there.
00:44:50.000 And that's really important to think about.
00:44:52.000 But the long term threats, if they come to fruition, will overpower everything, right?
00:44:59.000 That's really important to think about.
00:45:01.000 But what happens is if you think too much about the encroaching doom of humanity, there's some aspect to it that is paralyzing, where it turns you off from actually thinking about these ideas.
00:45:19.000 There's something so appealing.
00:45:20.000 It's like a black hole that pulls you in.
00:45:23.000 And if you notice, folks like Sam Harris and so on spend a large amount of the time talking about the negative stuff, about something that's far away.
00:45:35.000 Not to say it's not wrong to talk about it, but they spend very little time about the potential positive impacts.
00:45:41.000 In the near term and also the negative impacts in the near term.
00:45:45.000 Let's go over those.
00:45:46.000 Yep.
00:45:47.000 Fairness.
00:45:48.000 So the more and more we put decisions about our lives into the hands of artificial intelligence systems, whether you get a loan or in an autonomous vehicle context or in terms of recommending jobs for you on LinkedIn or all these kinds of things,
00:46:09.000 the idea of fairness, of bias in these machine learning systems, becomes a really big threat, because the way current artificial intelligence systems function is they train on data.
00:46:25.000 So there's no way for them to somehow gain a greater intelligence than the data we give them.
00:46:49.000 So there's people working on this more so to show really the negative impacts in terms of getting a loan or whether to say whether this particular human being should be convicted or not of a crime.
00:47:06.000 There's ideas there that can carry, you know, in our criminal system there's discrimination.
00:47:14.000 And if you use data from that criminal system to then assist the deciders, judges, juries, lawyers, in making a decision about what kind of penalty a person gets, they're going to carry that forward.
00:47:28.000 So you mean like racial, economic biases?
00:47:31.000 Racial, economic, yeah.
00:47:33.000 Geographical?
00:47:34.000 And that's a sort of, I don't study that exact problem, but it's, you're aware of it because of the tools we're using.
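[A toy illustration of the mechanism described above: a system trained on historical loan or sentencing decisions reproduces whatever pattern is in that history, including discrimination. The data here is fabricated purely for illustration.]

from collections import Counter

# Each record: (group, past_decision). The fabricated history is skewed against group B.
history = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10 +
    [("B", "approve")] * 40 + [("B", "deny")] * 60
)

# A trivial "model": predict the majority decision seen for each group in the history.
model = {}
for group in ("A", "B"):
    decisions = Counter(decision for g, decision in history if g == group)
    model[group] = decisions.most_common(1)[0][0]

print(model)  # {'A': 'approve', 'B': 'deny'}: the bias in the training data becomes the policy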
00:47:43.000 So, the two ways... I'd like to talk about neural networks, Joe.
00:47:52.000 Sure, let's do it.
00:47:53.000 Okay, so the current approaches... there have been a lot of demonstrated improvements, exciting new advancements in artificial intelligence.
00:48:07.000 And those are, for the most part, have to do with neural networks, something that's been around since the 1940s.
00:48:13.000 It's gone through two AI winters where everyone was super hyped and then super bummed and super hyped again and bummed again and now we're in this other hype cycle.
00:48:23.000 And what neural networks are is these collections of interconnected simple compute units.
00:48:29.000 They're all similar.
00:48:30.000 It's kind of like it's inspired by our own brain.
00:48:32.000 We have a bunch of little neurons interconnected, and the idea is these interconnections are really dumb and random, but if you feed it some data, they'll learn to connect, just like they do in our brain, in a way that interprets that data.
00:48:47.000 They form representations of that data and can make decisions.
00:48:50.000 But there's only two ways to train those neural networks that we have now.
00:48:55.000 One is we have to provide a large data set.
00:48:58.000 If you want the neural network to tell the difference between a cat and a dog, you have to give it 10,000 images of a cat and 10,000 images of a dog.
00:49:08.000 You need to give it those images.
00:49:10.000 And who tells you what a picture of a cat and a dog is?
00:49:14.000 It's humans.
00:49:15.000 So it has to be annotated.
00:49:17.000 So as teachers of these artificial intelligence systems, we have to collect this data.
00:49:22.000 We have to invest a significant amount of effort and annotate that data.
00:49:28.000 And then we teach neural networks to make that prediction.
00:49:32.000 What's not obvious there is...
00:49:35.000 how poor of a method that is to achieve any kind of greater degree of intelligence.
00:49:40.000 You're just not able to get very far besides very specific narrow tasks of cat versus dog, or should I give this person a loan or not?
00:49:52.000 These kind of simple tasks.
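[A minimal sketch of the supervised setup just described, assuming PyTorch; the random tensors stand in for the thousands of human-annotated cat and dog images, and the tiny network is illustrative only.]

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
    nn.Linear(128, 2),                     # two output classes: cat, dog
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-ins for the ~10,000 human-labeled images per class mentioned above.
images = torch.randn(32, 3, 64, 64)    # a batch of 64x64 RGB images
labels = torch.randint(0, 2, (32,))    # 0 = cat, 1 = dog, as annotated by humans

for step in range(100):
    loss = loss_fn(model(images), labels)  # penalize disagreement with the human labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The network can only ever be as good, or as biased, as the labels it was handed.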
00:49:54.000 I would argue autonomous vehicles are actually beyond the scope of that kind of approach.
00:49:59.000 And then the other realm of where neural networks can be trained is if you can simulate that world.
00:50:07.000 So if the world is simple enough or is conducive to be formalized sufficiently to where you can simulate it.
00:50:16.000 So a game of chess, there's rules.
00:50:19.000 A game of Go, there's rules.
00:50:21.000 So you can simulate it.
00:50:22.000 The big exciting thing about Google DeepMind is that they were able to beat the world champion by doing something called competitive self-play, which is to have two systems play against each other.
00:50:35.000 They don't need the human.
00:50:36.000 They play against each other.
00:50:37.000 But that only works, and that's a beautiful idea and super powerful and really interesting and surprising, but that only works on things like games and simulation.
00:50:47.000 So now if I wanted to, sorry to keep going to analogies like UFC, for example, if I wanted to train a system to become the world champion, to be, what's his name,
00:51:03.000 Nurmagomedov, right?
00:51:05.000 I could play the UFC game.
00:51:08.000 I could create two neural networks that use competitive self-play to play in that virtual world.
00:51:15.000 And they could become, state-of-the-art, the best fighter ever in that game.
00:51:21.000 But transferring that to the physical world, we don't know how to do that.
00:51:25.000 We don't know how to teach systems to do stuff in the real world.
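[A sketch of the competitive self-play loop being described; play_game and improve are stubs standing in for a real game simulator and a real policy update, since the point is the structure of the loop rather than a working Go engine.]

import copy
import random

def play_game(agent_a, agent_b):
    """Placeholder simulator: roll out a game between two agents, return the winner's index."""
    # A real system would play an actual game (Go, chess, a fighting video game) using each agent's policy.
    return random.choice([0, 1])

def improve(agent):
    """Placeholder learning step: update the agent's policy from the games it just played."""
    return agent  # a real system would run gradient updates on a policy network here

champion = {"policy": "initial"}             # stand-in for a policy network
for generation in range(1000):
    challenger = improve(copy.deepcopy(champion))
    wins = sum(play_game(challenger, champion) == 0 for _ in range(100))
    if wins > 55:                            # the challenger must clearly beat the current best
        champion = challenger
# No human data appears anywhere in the loop: the two systems only ever learn from each other.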
00:51:29.000 Some of the stuff that freaks you out often is Boston Dynamics robots.
00:51:33.000 Every day I go to the Instagram page and just go, what the fuck are you guys doing?
00:51:39.000 Engineering our demise.
00:51:40.000 Marc Raibert, the CEO, spoke at the class I taught.
00:51:45.000 He calls himself a bad boy of robotics.
00:51:48.000 So he's having a little fun with it.
00:51:51.000 He should definitely stop doing that.
00:51:52.000 Don't call yourself a bad boy of anything.
00:51:54.000 That's true.
00:51:55.000 How old is he?
00:51:59.000 Okay, he's one of the greatest roboticists of our generation.
00:52:02.000 That's great.
00:52:02.000 That's wonderful.
00:52:03.000 However, don't call yourself a bad boy, bro.
00:52:06.000 Okay.
00:52:09.000 So you're not the bad boy of MMA? Definitely not.
00:52:16.000 I'm not even the bad man.
00:52:18.000 Bad man?
00:52:20.000 Definitely not a bad boy.
00:52:22.000 Okay.
00:52:23.000 That's so silly.
00:52:25.000 Yeah, those robots are actually functioning in the physical world.
00:52:30.000 That's what I'm talking about.
00:52:31.000 And they are using something called, the term was coined, I think, in the 70s or 80s, good old-fashioned AI, meaning there is nothing going on that you would consider artificially intelligent,
00:52:47.000 which is usually connected to learning.
00:52:52.000 So these systems aren't learning.
00:52:54.000 It's not like you dropped a puppy into the world and it kind of stumbles around and figures stuff out and learns.
00:52:59.000 It's better and better and better and better.
00:53:01.000 That's the scary part.
00:53:02.000 That's the imagination.
00:53:04.000 That's what we imagine is we put something in this world.
00:53:07.000 At first, it's like harmless.
00:53:09.000 It falls all over the place.
00:53:10.000 And all of a sudden, it figures something out.
00:53:12.000 And like Elon Musk says, it travels faster than whatever.
00:53:15.000 You can only see it with strobe lights.
00:53:18.000 There's no learning component there.
00:53:20.000 This is just purely, there's hydraulics and electric motors and there is 20 to 30 degrees of freedom and it's doing hard-coded control algorithms to control the task of how do you move efficiently through space.
00:53:37.000 So this is the task roboticists work on.
00:53:40.000 A really, really hard problem is robotic manipulation: taking an arm, grabbing a water bottle, and lifting it.
00:53:49.000 Super hard.
00:53:50.000 Somewhat unsolved to this point.
00:53:53.000 And learning to do that, we really don't know how to do that.
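To make the contrast concrete, here is a minimal sketch of the kind of hard-coded, non-learning control loop described above: a hand-tuned PID controller driving a single simulated joint toward a target angle. A real robot runs many such loops, one per degree of freedom, at much higher rates; the gains and the toy joint model here are illustrative assumptions.

```python
# No learning anywhere in this file: just a fixed feedback law and a crude model.
def pid_step(target, current, state, kp=8.0, ki=0.5, kd=1.0, dt=0.01):
    """One proportional-integral-derivative update; returns (command, new_state)."""
    error = target - current
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, {"integral": integral, "prev_error": error}

def simulate_joint(target_angle=1.0, steps=500, dt=0.01):
    """Crude single-joint model: the command acts as an angular acceleration."""
    angle, velocity = 0.0, 0.0
    state = {"integral": 0.0, "prev_error": target_angle - angle}
    for _ in range(steps):
        command, state = pid_step(target_angle, angle, state, dt=dt)
        velocity += command * dt          # integrate acceleration
        velocity *= 0.98                  # simple damping / friction
        angle += velocity * dt
    return angle

if __name__ == "__main__":
    print(f"joint angle after 5 seconds: {simulate_joint():.3f} rad (target 1.0)")
```

Everything the controller does is written down by an engineer in advance, which is what "good old-fashioned AI" means in this context.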
00:53:57.000 Right, but this is...
00:53:58.000 What we're talking about essentially is the convergence of these robotic systems with artificially intelligent systems.
00:54:04.000 That's right.
00:54:05.000 And as artificially intelligent systems evolve, and then this convergence...
00:54:11.000 becomes complete, you're going to have the ability to do things like the computer that beat humans at Go. That's right. You're going to have creativity, you're going to have a complex understanding of language and expression, and you're going to have, perhaps, even engineered things like emotions, like jealousy and anger. I mean, it's entirely possible that, as you were saying, we're going to have systems that could potentially be biased the way human beings
00:54:41.000 are biased towards people of certain economic groups or certain geographic groups, and they would use the data that they have to discriminate just like human beings discriminate.
00:54:53.000 If you have all that in an artificially intelligent robot that has autonomy and that has the ability to move, this is what people are totally concerned with and terrified of, is that all of these different systems that are currently in semi-crude states, they can't pick up a water bottle yet,
00:55:10.000 they can't really do much other than they can do backflips, but they, you know, I'm sure you've seen the more recent Boston Dynamic ones.
00:55:18.000 Parkour?
00:55:18.000 Yeah, I saw that one the other day.
00:55:20.000 They're getting better and better and better, and it's increasing every year.
00:55:25.000 Every year they have new abilities.
00:55:27.000 Did you see the Black Mirror episode, Metalhead?
00:55:30.000 Yeah, and I think about it quite a lot, because it's...
00:55:37.000 Functionally, we know how to do most aspects of that.
00:55:40.000 Right now.
00:55:41.000 Right now.
00:55:42.000 Pretty close, yeah.
00:55:42.000 Pretty close.
00:55:43.000 I mean, I don't remember exactly.
00:55:44.000 There's some kind of pebble shooting situation where it hurts you by shooting you somehow.
00:55:51.000 Well, it has bullets, didn't it?
00:55:52.000 Bullets, yeah.
00:55:53.000 It's basically a gun.
00:55:54.000 It had a knife that stuck into one of its arms, remember?
00:55:57.000 Spoiler alert.
00:55:58.000 It's just an amazing episode of how terrifying it would be if some emotionless robot with incredible abilities is coming after you and wants to terminate you.
00:56:08.000 And I think about that a lot because I love that episode because it's terrifying for some reason.
00:56:15.000 But when I sit down and actually in the work we're doing, think about how we would do that.
00:56:20.000 So we can do the actual movement of the robot.
00:56:23.000 What we don't know how to do is to have robots that do the full thing, which is have a goal of pursuing humans and eradicating.
00:56:35.000 Spoiler alert all over the place.
00:56:39.000 I think the goal of eradicating humans, so assuming their values are not aligned somehow, that's one.
00:56:45.000 We don't know how to do that.
00:56:47.000 And two is the entire process of just navigating all over the world is really difficult.
00:56:54.000 So we know how to go up the stairs, but to say how to navigate the path you took from home to the studio today, how to get through that full path, is very much an unsolved problem.
00:57:06.000 But is it because you could engineer or you could program it into your Tesla?
00:57:10.000 You could put it into your navigation system and have it stop at red lights, drive for you, take turns, and it can do that?
00:57:18.000 So, first of all, that I would argue is still quite far away, but that's within 10, 20 years.
00:57:24.000 Well, how much can it do now?
00:57:26.000 It can stay inside the lane on the highway or on different roads, and it can change lanes.
00:57:33.000 And what's being pushed now is they're trying to be able to enter and exit a highway.
00:57:38.000 So it's some basic highway driving.
00:57:41.000 It doesn't stop at traffic lights.
00:57:42.000 It doesn't stop at stop signs.
00:57:44.000 And it doesn't interact with the complex, irrational human beings, pedestrians, cyclists, cars.
00:57:54.000 This is the onion I talked about.
00:57:57.000 In 2005, the DARPA Grand Challenge...
00:58:01.000 DARPA organized this challenge in the desert.
00:58:05.000 It says, let's go across the desert.
00:58:06.000 Let's see if we can build an autonomous vehicle that goes across the desert.
00:58:09.000 In 2004, they did the first one and everybody failed.
00:58:12.000 We're talking about some of the smartest people in the world really tried and failed.
00:58:18.000 And so they did again in 2005. There's a few.
00:58:21.000 Stanford won.
00:58:22.000 There's a really badass guy from CMU, Red.
00:58:25.000 I think he's like a marine.
00:58:26.000 He led the team there.
00:58:27.000 And they succeeded.
00:58:28.000 The four teams finished.
00:58:29.000 Stanford won.
00:58:30.000 That was in the desert.
00:58:31.000 And there was this feeling that we saw the autonomous driving.
00:58:35.000 But that's that onion.
00:58:37.000 Because you then, okay, what's the next step?
00:58:39.000 We've got a car that travels across the desert autonomously.
00:58:42.000 What's the next?
00:58:43.000 So in 2007, they did the Urban Grand Challenge.
00:58:48.000 The Urban Challenge.
00:58:51.000 Where you drove around the city a little bit.
00:58:53.000 And again, super hard problem.
00:58:56.000 People took it on.
00:58:58.000 CMU won that one.
00:59:00.000 Stanford second, I believe.
00:59:03.000 And then there was definitely a feeling like, yeah, now we had a car drive around the city.
00:59:09.000 It's definitely solved.
00:59:10.000 The problem is those cars were traveling super slow, first of all.
00:59:14.000 And second of all, there's no pedestrians.
00:59:17.000 It wasn't a real city.
00:59:19.000 It was artificial.
00:59:20.000 It's just basically having to stop at different sides.
00:59:23.000 Again, one other layer of the onion.
00:59:26.000 And you say, okay, when we actually have to put this car in a city like LA, how are we going to make this work?
00:59:33.000 Because if there's no cars in the street and no pedestrians in the street, driving around is still hard, but doable, and I think solvable in the next five years.
00:59:43.000 When you put pedestrians in, everybody jaywalks.
00:59:47.000 If you put human beings into this interaction, it becomes much, much harder.
00:59:52.000 Now, it's not impossible, and I think it's very doable, and with completely new interesting ideas, including revolutionizing infrastructure and rethinking transportation in general, it's possible to do in the next 5-10 years, maybe 20,
01:00:08.000 but it's not easy, like everybody says.
01:00:12.000 But does anybody say it's easy?
01:00:16.000 Yeah.
01:00:17.000 There's a lot of hype behind autonomous vehicles.
01:00:21.000 Elon Musk himself and other people have promised autonomous vehicles on timelines that have already passed.
01:00:26.000 The promise was, in 2018 we'll have autonomous vehicles.
01:00:31.000 Now, they're semi-autonomous now, right?
01:00:34.000 I know they can brake for pedestrians.
01:00:38.000 If they see pedestrians, they're supposed to brake for them and avoid them.
01:00:42.000 Right?
01:00:42.000 That's part of the, technically no.
01:00:45.000 Wasn't that an issue with an Uber car that hit a pedestrian that was operating autonomously?
01:00:49.000 That's right.
01:00:49.000 Someone, a homeless person, stepped off of a median right into traffic and it hit them, and then they found out one of the settings wasn't in place.
01:00:59.000 That's right.
01:00:59.000 But that was an autonomous vehicle being tested in Arizona.
01:01:02.000 And unfortunately, it was a fatality.
01:01:04.000 A person died.
01:01:06.000 A pedestrian was killed.
01:01:08.000 So what happened there, that's the thing I'm saying is really hard.
01:01:13.000 That's full autonomy.
01:01:15.000 That's technically when the car, you can remove the steering wheel in the car to drive itself and take care of everything.
01:01:21.000 Everything I've seen, everything we're studying, so we're studying drivers and Tesla vehicles, we're building our own vehicles, it seems that it'll be a long way off before we can solve the fully autonomous driving problem.
01:01:34.000 Because of pedestrians.
01:01:36.000 But two things, pedestrians and cyclists and the edge cases of driving.
01:01:43.000 All the stuff we take for granted.
01:01:45.000 The same reason we take for granted how hard it is to walk, how hard it is to pick up this bottle.
01:01:52.000 Our intuition about what's hard and easy is really flawed as human beings.
01:01:56.000 Can I interject?
01:01:57.000 What if all cars were autonomous?
01:02:00.000 That's right.
01:02:01.000 If we got to a point where every single car on the highway is operating off of a similar algorithm or off the same system, then things would be far easier, right?
01:02:10.000 Because then you don't have to deal with random kinetic movements, people just changing lanes, people looking at their cell phone, not paying attention to what they're doing, all sorts of things you have to be wary of right now while driving, and pedestrians and bicyclists.
01:02:24.000 Totally.
01:02:25.000 And that's in the realm of things I'm talking about where you think outside the box and revolutionize our transportation system.
01:02:32.000 That requires government to play along.
01:02:36.000 Seems like that's going that way though, right?
01:02:38.000 Do you feel like that one day we're going to have autonomous driving pretty much everywhere?
01:02:44.000 Especially on the highway?
01:02:46.000 It's not going there in terms of it's very slow moving.
01:02:51.000 So government does stuff very slow moving with infrastructure.
01:02:54.000 One of the biggest things you can do for autonomous driving, which would solve a lot of problems, is to paint lane markings.
01:03:02.000 Regularly.
01:03:03.000 And even that has been extremely difficult to do for politicians.
01:03:11.000 Right, because right now there's not really the desire for it.
01:03:13.000 But to explain to people what you mean by that, when the lanes are painted very clearly, the cameras and the autonomous vehicles can recognize them and stay inside those lanes much more easily.
01:03:23.000 Yeah, there's two ways that cars see the world.
01:03:27.000 Three.
01:03:28.000 There's different sensors.
01:03:29.000 The big one for autonomous vehicles is LIDAR, which is these lasers that are being shot all over the place in 360, and they give you this point cloud of how far stuff is away, but they don't give you the visual texture information of, say, what brand of water bottle this is.
01:03:46.000 And cameras give you that information.
01:03:49.000 So what Tesla is using, they have eight cameras, I think.
01:03:53.000 Is they perceive the world with cameras.
01:03:56.000 And those two things require different things from the infrastructure, those two sensors.
01:04:01.000 Cameras see the world the same as our human eyes see the world.
01:04:05.000 So they need lane markings, they need infrastructure to be really nicely visible, traffic lights to be visible.
01:04:11.000 So the same kinds of things us humans like to have are the things the cameras like to have.
01:04:17.000 And lane marking is a big one.
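The two sensor views described above can be sketched in a few lines. This is a toy illustration only, assuming a 2D LIDAR scan and a single grayscale camera row; the function names, array shapes, and threshold are invented for the example and are not any vendor's API.

```python
import math

def lidar_to_points(ranges, angle_min=-math.pi, angle_increment=math.pi / 180):
    """LIDAR: each beam is just a distance at an angle -> (x, y) point cloud.
    Geometry only; no notion of color, texture, or what the object is."""
    points = []
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

def find_lane_marking_columns(pixel_row, brightness_threshold=200):
    """Camera: a row of grayscale pixels -> columns bright enough to be paint.
    Appearance only; a faded or occluded marking simply disappears."""
    return [i for i, p in enumerate(pixel_row) if p >= brightness_threshold]

if __name__ == "__main__":
    ranges = [5.0] * 90 + [2.0] * 10 + [5.0] * 260          # an obstacle 2 m away
    cloud = lidar_to_points(ranges)
    nearest = min(cloud, key=lambda p: math.hypot(*p))
    print("nearest LIDAR return at", tuple(round(c, 2) for c in nearest))

    row = [30] * 40 + [230] * 5 + [30] * 55                  # bright paint stripe
    print("lane paint seen at pixel columns", find_lane_marking_columns(row))
```

The camera-side function is why clearly painted lane markings matter: if the paint is too faded to cross the brightness threshold, the car simply does not see the lane.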
01:04:19.000 There's a lot of interesting infrastructure improvements that can happen, like traffic lights.
01:04:25.000 Traffic lights are super dumb right now.
01:04:28.000 They sense nothing about the world, about the density of pedestrians, about approaching cars.
01:04:35.000 If traffic lights can communicate with a car, which makes perfect sense.
01:04:42.000 It's right there.
01:04:44.000 There's no size limitations.
01:04:46.000 It can have a computer inside of it.
01:04:48.000 You can coordinate different things in terms of the same pedestrian kind of problem.
01:04:53.000 Well, we have sensors now on streets.
01:04:55.000 So when you pull up to certain lights, especially at night, the light will be red.
01:04:59.000 You pull up, it instantaneously turns green because it recognizes that you've stepped over or driven over a sensor.
01:05:04.000 That's right.
01:05:05.000 So that's a step in the right direction, but that's really sort of 20 years, 30 years ago technology.
01:05:12.000 So you want to have something like the power of a smartphone inside every traffic light.
01:05:19.000 It's pretty basic to do, but what's way outside of my expertise is how you get government to do these kinds of improvements.
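As a rough illustration of the smart traffic light idea, here is a minimal sketch of a light that keeps a little state and broadcasts its phase and time-to-change so an approaching car can plan around it. The message fields, timings, and the simple adaptive rule are all made-up assumptions, not any real vehicle-to-infrastructure protocol.

```python
from dataclasses import dataclass

@dataclass
class SignalMessage:
    phase: str                 # "red" | "green" | "yellow"
    seconds_to_change: float
    waiting_pedestrians: int

class SmartTrafficLight:
    CYCLE = [("green", 20.0), ("yellow", 3.0), ("red", 25.0)]

    def __init__(self):
        self.phase_index = 0
        self.elapsed = 0.0
        self.waiting_pedestrians = 0

    def tick(self, dt):
        self.elapsed += dt
        phase, duration = self.CYCLE[self.phase_index]
        # Toy adaptive rule: hold the red for cars while many pedestrians wait.
        if phase == "red" and self.waiting_pedestrians > 3:
            duration += 5.0
        if self.elapsed >= duration:
            self.phase_index = (self.phase_index + 1) % len(self.CYCLE)
            self.elapsed = 0.0

    def broadcast(self) -> SignalMessage:
        phase, duration = self.CYCLE[self.phase_index]
        return SignalMessage(phase, max(0.0, duration - self.elapsed),
                             self.waiting_pedestrians)

def car_should_brake(message: SignalMessage, seconds_to_intersection: float) -> bool:
    """Conservative car-side rule: only proceed if it arrives during the current green."""
    if message.phase == "green":
        return seconds_to_intersection > message.seconds_to_change
    return True

if __name__ == "__main__":
    light = SmartTrafficLight()
    for _ in range(100):                     # simulate 10 seconds
        light.tick(0.1)
    msg = light.broadcast()
    print(msg, "-> brake?", car_should_brake(msg, seconds_to_intersection=4.0))
```

The point of the sketch is only that the car gets an explicit message instead of guessing the light's state from a camera image.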
01:05:26.000 So if I'm mistaken, well, correct me if I'm mistaken, but you're looking at things in terms of what we can do right now, right?
01:05:33.000 And a guy like Elon Musk or Sam Harris is saying, yeah, but look at where technology leads us.
01:05:40.000 If you go back to 1960, the kind of computers that they used to do the Apollo mission, you got a whole room full of computers that doesn't have nearly the same power as the phone that's in your pocket right now.
01:05:54.000 Now, if you go into the future and exponentially calculate what's going to take place in terms of our ability to create autonomous vehicles, our ability to create artificial intelligence, and all of these things going from what we have right now To what could be in 20 years,
01:06:16.000 we very well might look at some sort of an artificial being that can communicate with you, some sort of an ex machina type creature.
01:06:25.000 I mean, that's not outside the realm of possibility at all.
01:06:30.000 You have to be careful with the at all part.
01:06:32.000 At all.
01:06:35.000 Our ability to predict the future is really difficult, but I agree with you.
01:06:39.000 It's not outside the realm of possibility.
01:06:43.000 There's a few examples that are brought along, just because I enjoy these predictions, of how bad we are at predicting stuff.
01:06:54.000 The very engineers, the very guys and gals like me sitting before you, made some of the worst predictions in history, both pessimistic and optimistic.
01:07:05.000 One of the Wright brothers, before they flew in 1903, predicted two years earlier that it would be 50 years: I confess that in 1901 I said to my brother Orville that man would not fly for 50 years.
01:07:25.000 Two years later we ourselves were making flights.
01:07:28.000 This demonstration of my inability as a prophet gave me such a shock. So that's a pessimistic estimation, versus an optimistic one.
01:07:47.000 Exactly.
01:07:48.000 And the same with Albert Einstein and Fermi, who made these kinds of pessimistic observations.
01:07:54.000 Fermi, who led the nuclear development of the bomb, three years before the first critical chain reaction,
01:08:02.000 said that he had 90% confidence that it was impossible.
01:08:05.000 Three years before.
01:08:07.000 Okay, so that's on the pessimistic side.
01:08:08.000 On the optimistic side, the history of AI is laden with optimistic predictions.
01:08:17.000 In 1965, one of the seminal people in AI, Herbert Simon, said, machines will be capable within 20 years of doing any work a man can do.
01:08:25.000 He also said, within 10 years, a digital computer will be the world's chess champion.
01:08:29.000 That was in '58. And we didn't do that until the 90s, 1997, so about 40 years later.
01:08:34.000 Yeah, but that's one person, right?
01:08:36.000 I mean, it's a guy taking a stab in the dark based on what data?
01:08:39.000 What's he basing this off of?
01:08:41.000 Our imagination.
01:08:43.000 Right.
01:08:43.000 We have more data points now, don't you think?
01:08:45.000 No.
01:08:45.000 Not about the future.
01:08:48.000 That's the thing.
01:08:49.000 Not about the future, but about what's possible right now.
01:08:53.000 Right.
01:08:53.000 And if you look at...
01:08:55.000 The past is a really bad predictor of the future.
01:08:58.000 If you look at the past...
01:09:00.000 What we've done, the immense advancement of technology has given us, in many ways, optimism about what's possible.
01:09:08.000 But exactly what is possible, we're not good at.
01:09:12.000 So, I am much more confident that the world will look very fascinatingly different in the future.
01:09:21.000 Whether AI will be part of that world is unclear.
01:09:24.000 It could be we will all live in a virtual reality world.
01:09:28.000 Or, for example, one of the things I really think about is, to me, a really dumb AI on one billion smartphones is potentially more impactful than a super intelligent AI on one smartphone.
01:09:45.000 Mm-hmm.
01:10:01.000 It could completely change the fabric of our society, in a way where these discussions about an Ex Machina type lady walking around will be silly, because we'll all be either living on Mars or living in virtual reality. There's so many exciting possibilities,
01:10:18.000 right?
01:10:19.000 And what I believe in is we have to think about them.
01:10:22.000 We have to talk about them. Technology is always the source of danger, of risk. All of the biggest things that threaten our civilization, at the small and large scale,
01:10:39.000 are all connected to misuse of technology we develop.
01:10:44.000 And at the same time, it's that very technology that will empower us and save us.
01:10:50.000 So there's Max Tegmark, brilliant guy, Life 3.0.
01:10:53.000 I recommend people read his book on artificial general intelligence.
01:10:57.000 He talks about the race.
01:10:59.000 There's a race that can't be stopped.
01:11:02.000 One is the development of technology.
01:11:05.000 And the other is the development of our wisdom of how to stop or how to control the technology.
01:11:11.000 And it's this kind of race.
01:11:13.000 And our wisdom is always like one step behind.
01:11:19.000 And that's why we need to invest in it and keep always thinking about new ideas.
01:11:24.000 So right now we're talking about AI. We don't know what it's going to look like in five years.
01:11:29.000 We have to keep thinking about it.
01:11:30.000 We have to, through simulation, explore different ideas; through conferences, have debates; come up with different approaches for how to solve particular problems, like I said with bias, or how to deal with deepfakes, where you can make Donald Trump or former President Obama say anything, or you can have Facebook advertisements,
01:11:52.000 hyper-targeted advertisements.
01:11:55.000 How we can deal with those situations and constantly have this race of wisdom versus the development of technology.
01:12:02.000 But not to sit and think, well, look at the development of technology.
01:12:11.000 Imagine what it could do in 50 years and we're all screwed.
01:12:18.000 Because it's important to be nervous about it in that way, but it's not conducive to figuring out what we do about it.
01:12:26.000 And the people that know what to do about it are the people trying to build this technology, building this future one step at a time.
01:12:33.000 What do you mean by know what to do about it?
01:12:35.000 Because, like, let's put it in terms of Elon Musk.
01:12:38.000 Right.
01:12:39.000 Like, Elon Musk is terrified of artificial intelligence because he thinks by the time it becomes sentient, it'll be too late.
01:12:45.000 It'll be smarter than us and we'll have essentially created our successors.
01:12:50.000 Yes.
01:12:51.000 And let me quote Joe Rogan and say that's just one guy.
01:12:55.000 Yeah.
01:12:56.000 Well, Sam Harris thinks the same thing.
01:12:58.000 There's quite a few people who think that.
01:13:00.000 Sam Harris I think is one of the smartest people I know and Elon Musk, intelligence aside, is one of the most impactful people I know.
01:13:08.000 He's actually building these cars and in the narrow AI sense, he's built these autopilot systems that we've been studying.
01:13:19.000 The way that system works is incredible.
01:13:22.000 It was very surprising to me on many levels.
01:13:24.000 It's an incredible demonstration of what AI can do in a positive way in the world.
01:13:30.000 So I don't – but people can disagree.
01:13:35.000 I'm not sure the functional value of his fear about the possibility of this future.
01:13:42.000 Well, if he's correct.
01:13:43.000 There's functional value in hitting the brakes before this takes place.
01:13:48.000 Just to be a person who's standing on top of the rocks with a light to warn the boats, hey, there's a rock here.
01:13:57.000 Pay attention to where we're going because there's perils ahead.
01:14:00.000 I think that's what he's saying.
01:14:02.000 And I don't think there's anything wrong with saying that.
01:14:05.000 And I think there's plenty of room for people saying what he's saying and people saying what you're saying.
01:14:10.000 I think what would hurt us is if we tried to silence either voice.
01:14:14.000 I think what we need in terms of our understanding of this future is many, many, many, many, many of these conversations where you're dealing with the...
01:14:28.000 The current state of technology versus a bunch of creative interpretations of where this could go and have discussions about where it should go or what could be the possible pitfalls of any current or future actions.
01:14:44.000 I don't think there's anything wrong with this.
01:14:46.000 So when you say, like, what's the benefit of thinking in a negative way?
01:14:51.000 Well, it's to prevent our demise.
01:14:53.000 So, totally.
01:14:55.000 I agree 100%.
01:14:57.000 Negativity or worry about the existential threat is really important to have as part of the conversation.
01:15:04.000 But there's this level.
01:15:06.000 There's this line.
01:15:06.000 It's hard to put into words.
01:15:08.000 There's a line that you cross when that worry becomes...
01:15:12.000 Hyperbole.
01:15:12.000 Yeah, and then there's something about human psyche where it becomes paralyzing for some reason.
01:15:17.000 Right.
01:15:18.000 Now, when I have beers with my friends, the non-AI folks, we actually go, we cross that line all day and have fun with it.
01:15:26.000 Maybe I should get you drunk right now.
01:15:28.000 Maybe.
01:15:29.000 I regret every moment of it.
01:15:32.000 I talked to Steve Pinker.
01:15:35.000 Enlightenment Now, his book, kind of highlights that. He totally doesn't find that appealing, because that's crossing all realms of rationality and reason.
01:15:51.000 When you say that appealing, what do you mean?
01:15:52.000 Crossing the line into what will happen in 50 years.
01:15:55.000 What could happen.
01:15:56.000 What could happen.
01:15:57.000 He doesn't find that appealing.
01:15:58.000 He doesn't find it appealing because he's studied, and I'm not sure I agree with him to the degree that he takes it.
01:16:06.000 He finds that there's no evidence.
01:16:10.000 He wants all our discussions to be grounded in evidence and data.
01:16:15.000 He highlights the fact that there's something about human psyche that desires this negativity.
01:16:25.000 There's something undeniable where we want to create and engineer the gods that overpower us and destroy us.
01:16:34.000 We want to?
01:16:35.000 Or we worry about it?
01:16:36.000 I don't know if we want to.
01:16:39.000 Let me rephrase that.
01:16:41.000 We want to worry about it.
01:16:42.000 There's something about the psyche.
01:16:44.000 Because you can't take the genie and put it back in the bottle.
01:16:48.000 That's right.
01:16:48.000 When you say there's no reason to think this way.
01:16:53.000 But if you do have cars that are semi-autonomous now, and if you do have computers that can beat human beings who are world GO champions, and if you do have computers that can beat people at chess, and you do have people that are consistently working on artificial intelligence, and you do have Boston Dynamics who are getting...
01:17:11.000 These robots to do all sorts of spectacular physical stunts and then you think about the possible future convergence of all these technologies and then you think about the possibility of this exponential increase in technology that allows them to be sentient, like within a decade,
01:17:27.000 two decades, three decades.
01:17:29.000 What more evidence do you need?
01:17:31.000 You're seeing all the building blocks of a potential successor being laid out in front of you, and you're seeing what we do with every single aspect of technology.
01:17:42.000 We constantly and consistently improve and innovate with everything, whether it's computers or cars or anything.
01:17:48.000 Everything today is better than everything that was 20 years ago.
01:17:52.000 So if you looked at artificial intelligence, which does exist to a certain extent, and you look at what it could potentially be 30, 40, 50 years from now, whatever it is, why wouldn't you look at all these data points and say,
01:18:08.000 hey, this could go bad?
01:18:11.000 I mean, it could go great, but it could also go bad.
01:18:16.000 I do not want to be mistaken as the person who's not the champion of the impossible.
01:18:20.000 I agree with you completely.
01:18:22.000 I don't think it's impossible.
01:18:23.000 I don't think it's impossible at all.
01:18:25.000 I think it's inevitable.
01:18:28.000 I don't...
01:18:29.000 I think it is inevitable, yes.
01:18:33.000 It's the Sam Harris argument.
01:18:35.000 If superintelligence is nothing more than information processing...
01:18:41.000 Same as the argument of the simulation, that we're living in a simulation.
01:18:46.000 It's very difficult to argue against the fact that we're living in a simulation.
01:18:49.000 The question is when and what the world would look like.
01:18:53.000 Right.
01:18:54.000 So it's, like I said, a race.
01:18:56.000 And it's difficult.
01:18:58.000 You have to balance those two minds.
01:19:01.000 I agree with you totally.
01:19:02.000 And I disagree with my fellow robotics folks who don't want to think about it at all.
01:19:07.000 Of course they don't.
01:19:08.000 They want to buy new houses.
01:19:09.000 They've got a lot of money invested in this adventure.
01:19:12.000 They want to keep the party rolling.
01:19:13.000 They don't want to pull the brakes.
01:19:14.000 Everybody, pull the cords out of the walls.
01:19:16.000 We've got to stop.
01:19:17.000 No one's going to do that.
01:19:18.000 No one's going to come along and say, hey, we've run all this data through a computer and we've found that if we just keep going the way we're going and 30 years from now we will have a successor that will decide that human beings are outdated and inefficient and dangerous to the actual world that we live in and we're going to start wiping them out.
01:19:38.000 It doesn't exist right now.
01:19:41.000 But if that did happen, if someone did come to the UN and had this multi-stage presentation with data that showed that if we continue on the path, we have seven years before artificial intelligence decides to eliminate human beings based on these data points.
01:20:00.000 What do they do?
01:20:01.000 What do the Boston Dynamics people do?
01:20:02.000 Well, I'm building a house in Cambridge.
01:20:04.000 What are you talking about, man?
01:20:05.000 I'm not going anywhere.
01:20:07.000 Come on.
01:20:07.000 I just bought a new Tesla.
01:20:09.000 I need to finance this thing.
01:20:10.000 Hey, I got credit card bills.
01:20:11.000 I got student loans I'm still paying off.
01:20:14.000 How do you stop people from doing what they do for a living?
01:20:16.000 How do you say that, hey, I know that you would like to look at the future with rose-colored glasses on, but there's a real potential pitfall that could be the extermination of the human species?
01:20:29.000 Right.
01:20:30.000 And obviously I'm going way far with this.
01:20:32.000 Yeah, I like it.
01:20:34.000 I think every one of us trying to build these systems are similar in sound to the way you were talking about the touch of death.
01:20:43.000 In that my dream, and the dream of many roboticists, is to create intelligent systems that will improve our lives.
01:20:53.000 And working really hard at it.
01:20:57.000 Not for a house in Cambridge.
01:20:58.000 Not for a billion-dollar paycheck from selling a start-up.
01:21:03.000 We love this stuff.
01:21:05.000 Some of you.
01:21:07.000 Obviously, the motivations are different for every single human being that's involved in every endeavor.
01:21:12.000 And we're trying really hard to build these systems and it's really hard.
01:21:17.000 So whenever the question is, well, looking at it historically, this is going to take off.
01:21:23.000 It can potentially take off any moment.
01:21:26.000 It's very difficult to really be cognizant as an engineer about how it takes off because you're trying to make it take off in a positive direction and you're failing.
01:21:38.000 Everybody is failing.
01:21:40.000 It's been really hard.
01:21:41.000 And so you have to acknowledge that overnight, some Elon Musk type character may come along. You know, with the Boring Company or with SpaceX, people didn't think anybody but NASA could do what Elon Musk is doing, and he's doing it.
01:22:01.000 It's hard to think about that too much.
01:22:03.000 You have to do that.
01:22:05.000 But the reality is we're trying to create these super intelligent beings.
01:22:11.000 Sure, but isn't the reality also that we have done things in the past because we were trying to do it, and then we realized that these have horrific consequences for the human race, like Oppenheimer in the Manhattan Project, you know, when he said, now I am become death,
01:22:27.000 the destroyer of worlds, quoting the Bhagavad Gita, when he's detonating the first nuclear bomb and realizing what he's done.
01:22:34.000 Just because something's possible to do doesn't necessarily mean it's a good idea for human beings to do it.
01:22:39.000 Now, we haven't destroyed the world with Oppenheimer's discovery and through the work of the Manhattan Project.
01:22:46.000 We've managed to somehow or another keep the lid on this shit for the last 60 years.
01:22:49.000 Which is incredible.
01:22:50.000 It's crazy, right?
01:22:51.000 I mean, for the last, what, 70 years?
01:22:54.000 How long has it been?
01:22:55.000 70 sounds right.
01:22:56.000 10,000, 20,000 nukes all over the world right now.
01:22:59.000 It's crazy.
01:23:00.000 I mean, we literally could kill everything on the planet.
01:23:02.000 And somehow, we don't.
01:23:04.000 Somehow.
01:23:05.000 Somehow, in some amazing way, we have not.
01:23:08.000 But that doesn't mean we...
01:23:09.000 I mean, that's a very short amount of time in relation to the actual lifespan of the Earth itself and certainly in terms of the time human history has been around.
01:23:21.000 And nuclear weapons, global warming is another one.
01:23:27.000 Sure, but that's a side effect of our actions, right?
01:23:29.000 We're talking about a direct effect of human ingenuity and innovation, the nuclear bomb.
01:23:35.000 It's a direct effect.
01:23:37.000 We tried to make it.
01:23:38.000 We made it.
01:23:39.000 There it goes.
01:23:39.000 Global warming is an accidental consequence of human civilization.
01:23:44.000 So you can't – I don't think it's possible to not build a nuclear bomb.
01:23:51.000 You don't think it's possible to not build it.
01:23:54.000 Because people are tribal, they speak different languages, they have different desires and needs, and they go to war.
01:23:59.000 So if all these engineers were working towards it, it was not possible to not build it.
01:24:05.000 Yep, and like I said, there's something about us chimps in a large collective where we are born to push forward the progress of technology.
01:24:15.000 You cannot stop the progress of technology.
01:24:17.000 So the goal is how to develop, how to guide that development into a positive direction.
01:24:24.000 But surely, if we do understand that this has taken place, and we did drop these enormous bombs on Hiroshima and Nagasaki and killed untold numbers of innocent people with these detonations, then it's not necessarily always a good thing to pursue technology.
01:24:45.000 Nobody is so...
01:24:48.000 You see what I'm saying?
01:24:49.000 Yes, 100%.
01:24:50.000 I agree with you totally.
01:24:51.000 So I'm more playing devil's advocate than anything.
01:24:53.000 But what I'm saying is you guys are looking at these things like we're just trying to make these things happen.
01:24:59.000 And what I think people like Elon Musk and Sam Harris and a bunch of others that are gravely concerned about the potential for AI are saying is, I understand what you're doing, but you've got to understand the other side of it.
01:25:13.000 We've got to understand that there are people out there that are terrified that if you do extrapolate, if you do take this relentless thirst for innovation and keep going with it, if you look at what we can do, what human beings can do so far in our crude manner of 2018,
01:25:31.000 all the amazing things they've been able to accomplish.
01:25:34.000 It's entirely possible that we might be creating our successors.
01:25:38.000 This is not outside the realm of possibility.
01:25:40.000 And all of our biological limitations, we might figure out a better way.
01:25:46.000 And this better way might be some sort of an artificial creature.
01:25:51.000 Yep.
01:25:51.000 AI began with our dream to forge the gods.
01:25:55.000 I think that it's impossible to stop.
01:26:00.000 Well, it's not impossible to stop if you go Ted Kaczynski and kill all the people.
01:26:06.000 I mean, that's what Ted Kaczynski anticipated.
01:26:08.000 You know, the Unabomber, do you know the whole story behind him?
01:26:11.000 No.
01:26:12.000 What was he trying to stop?
01:26:13.000 He's a fascinating cat.
01:26:14.000 Here's what's fascinating.
01:26:15.000 There's a bunch of fascinating things about him.
01:26:17.000 One of the more fascinating things about him, he was involved in the Harvard LSD studies.
01:26:21.000 Right.
01:26:22.000 So they were nuking that dude's brain with acid.
01:26:25.000 And then he goes to Berkeley, becomes a professor, takes all his money from teaching and just makes a cabin in the woods and decides to kill people that are involved in the creation of technology because he thinks technology is eventually going to kill off all the people.
01:26:39.000 So he becomes crazy and schizophrenic, and who knows what the fuck is wrong with him, and whether or not this would have taken place inevitably or whether this was a direct result of his... We don't even know how much they gave him or what the experiment entailed, or how many other people got their brains torched during these experiments.
01:27:01.000 But we do know for a fact that Ted Kaczynski was a part of the Harvard LSD studies.
01:27:05.000 And we do know that he went and did move to the woods and write his manifesto and start blowing up people that were involved in technology.
01:27:15.000 And the basic thesis of his manifesto that perhaps LSD opened his eyes to is that technology is going to kill all humans.
01:27:24.000 Yeah.
01:27:26.000 It was going to be the end of the human race, I think, I believe.
01:27:28.000 The human race, so the solution...
01:27:30.000 Is that what he said?
01:27:32.000 The Industrial Revolution and its consequences have been a disaster for the human race.
01:27:37.000 Yeah, he extrapolated.
01:27:40.000 He was looking at where we're going and these people that were responsible for innovation, and he was saying they're doing this with no regard for the consequences on the human race.
01:27:50.000 And he thought the way to stop that was to kill people.
01:27:52.000 Obviously, he's fucking demented.
01:27:54.000 But this is, I mean, he literally was saying what we're saying right now.
01:28:00.000 You keep going, we're fucked.
01:28:02.000 So the Industrial Revolution, we'll have to think about that.
01:28:06.000 It's a really important message coming from the wrong guy, but...
01:28:10.000 It's a great way to put it.
01:28:11.000 Where is all this taking us?
01:28:13.000 Yeah, where is it taking us?
01:28:14.000 So I guess my underlying assumption is the current capitalist structure of society, that we always want a new iPhone.
01:28:23.000 You just had one of the best reviewers on yesterday that always talks about...
01:28:28.000 Marques.
01:28:28.000 Marques, yeah.
01:28:30.000 We always, myself too, Pixel 3. I have a Pixel 2. I'm thinking, maybe I need a Pixel 3. Maybe you do.
01:28:36.000 I don't know.
01:28:36.000 Better camera.
01:28:38.000 Whatever that is, that fire that wants more, better, better.
01:28:42.000 I just don't think it's possible to stop.
01:28:44.000 And the best thing we can do is to explore ways to guide it towards safety where it helps us.
01:28:51.000 When you say it's not possible to stop, you mean collectively as an organism, like the human race, that it's a tendency that's just built in?
01:28:58.000 It's certainly possible to stop as an individual, because I know people, like my friend Ari, who's given up on smartphones, he went to a flip phone, and he doesn't check social media anymore, and he found it to be toxic, he didn't like it, he thought he was too addicted to it, and he didn't like where it was leading him.
01:29:15.000 So on an individual level, it's possible.
01:29:18.000 Individual level, but then, and just like with Ted Kaczynski, on the individual level, it's possible to do certain things that try to stop it in more dramatic ways.
01:29:26.000 But I just think the force of our, this organism, this living, breathing organism that is our civilization, will progress forward.
01:29:37.000 We're just curious apes.
01:29:38.000 It's this desire to explore the universe.
01:29:43.000 Why?
01:29:44.000 Why do we want to do these things?
01:29:45.000 Why do we look up and we want to travel?
01:29:49.000 I don't think we're trying to optimize for survival.
01:29:53.000 In fact, I don't think most of us would want to be immortal.
01:29:58.000 I think it's like Neil deGrasse Tyson talks about.
01:30:00.000 The fact that we're mortal, the fact that one day we'll die is one of the things that gives life meaning.
01:30:06.000 And sort of trying to worry and trying to sort of say, wait a minute, where is this going?
01:30:12.000 As opposed to riding the wave and riding the wave of forward progress.
01:30:19.000 I mean, it's one of the things...
01:30:21.000 He gets quite a bit of hate for it, ironically, Steve Pinker, but he really describes in data how our world is getting better and better.
01:30:28.000 Well, he just gets hate from people that don't want to admit that there's a trend towards things getting better, because they feel like then people will ignore all the bad things that are happening right now and all the injustices, which I think is a very short-sighted thing, but I think it's because of their own biases and the perspective that they're trying to establish and push.
01:30:50.000 Instead of looking at things objectively and looking at the data and saying, I see where you're going, it doesn't discount the fact that there's injustice in the world and crime and violence, and all sorts of terrible things happen to people that are good people on a daily basis.
01:31:02.000 But what he's saying is just look at the actual trend of civilization and the human species itself and there's an undeniable trend towards peace.
01:31:14.000 Slowly but surely working towards peace.
01:31:16.000 Way safer today.
01:31:17.000 Way safer today than it was a thousand years ago.
01:31:20.000 Just – it is.
01:31:21.000 It just is.
01:31:22.000 Yeah, and there are these interesting arguments. His book kind of blew my mind with this funny joke.
01:31:28.000 He says that some people consider giving nuclear weapons, the atom bomb, the Nobel Peace Prize.
01:31:35.000 Because he believes, I'm not an expert in this at all, but he believes that, or some people believe that nuclear weapons are actually responsible for a lot of the decrease in violence.
01:31:45.000 Because all of the major powers can do damage.
01:31:48.000 Russia and all the major states that can do damage have a strong disincentive from engaging in warfare.
01:31:54.000 And so these are the kinds of things you don't, I guess, anticipate.
01:31:59.000 So I think it's very difficult to stop that forward progress, but we have to really worry and think about, okay, how do we avoid the list of things that we worry about?
01:32:11.000 So one of the things that people really worry about is the control problem.
01:32:15.000 It's basically AI becoming not necessarily super intelligent, but super powerful.
01:32:20.000 We put too much of our lives into it.
01:32:22.000 That's where Elon Musk and others that want to provide regulation of some sort, saying, wait a minute, you have to put some bars on what this thing can do from a government perspective, from a company perspective.
01:32:33.000 But how could you stop rogue states from doing that?
01:32:37.000 Why would China listen to us?
01:32:39.000 Why would Russia listen to us?
01:32:41.000 Why would other countries that are capable of doing this and maybe don't have the same sort of power that the United States has and they would like to establish that kind of power, why wouldn't they just take the cap off?
01:32:52.000 In a philosophical high-level sense, there's no reason.
01:32:56.000 But if you engineer it in...
01:32:58.000 So I'm a big...
01:32:59.000 We do this thing with autonomous vehicles called arguing machines.
01:33:03.000 We have multiple AI systems argue against each other.
01:33:06.000 So it's possible that you have some AI systems supervising other AI systems.
01:33:16.000 In our nation, there's a Congress arguing, blue and red states being represented, and there's discourse going on, debate. You could have AI systems like that too.
01:33:26.000 It doesn't necessarily need to be one super powerful thing.
01:33:30.000 It could be AI supervising each other.
01:33:32.000 So there's interesting ideas there to play with.
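A minimal sketch of the arguing machines idea as described here, with two independent systems proposing a decision and disagreement forcing an escalation, might look like the following; the stand-in models, the disagreement metric, and the threshold are illustrative assumptions, not the actual MIT system.

```python
# Two independently built systems propose a decision; if they disagree beyond a
# threshold, control is handed back instead of trusting either one.
def arguing_machines(primary, secondary, observation, disagreement_threshold=0.15):
    """Return (action, escalate_to_human)."""
    a = primary(observation)
    b = secondary(observation)
    if abs(a - b) > disagreement_threshold:
        return None, True            # systems disagree: escalate to a human
    return (a + b) / 2.0, False      # systems agree: act on the consensus

if __name__ == "__main__":
    # Toy "steering angle" predictors standing in for two real driving systems.
    model_a = lambda obs: 0.10 * obs
    model_b = lambda obs: 0.11 * obs

    print(arguing_machines(model_a, model_b, observation=1.0))   # agree -> act
    print(arguing_machines(model_a, model_b, observation=40.0))  # disagree -> escalate
```

The design point is that neither system is trusted alone; disagreement itself is the signal that supervision is needed.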
01:33:37.000 Because ultimately, what are these artificial intelligence systems doing?
01:33:41.000 We humans place power into their hands first.
01:33:44.000 In order for them to run away with it, we need to put power into their hands.
01:33:47.000 So we have to figure out how we put that power in initially so it doesn't run away and how supervision can happen.
01:33:53.000 Right, but this is us, right?
01:33:55.000 You're talking about rational people.
01:33:56.000 What about other people?
01:33:57.000 Why would they engineer limitations into their artificial intelligence, and what incentive would they have to do that, to somehow or another limit their artificial intelligence to keep it from having as much power as ours?
01:34:08.000 There's really not a lot of incentive on their side, especially if there's some sort of competitive advantage for their artificial intelligence to be more ruthless, more sentient, more autonomous.
01:34:18.000 I mean, it seems like, once again, once the genie's out of the bottle, it's going to be very hard.
01:34:25.000 I have a theory, and this is a very bizarre theory, but I've been running with this for quite a few years now.
01:34:30.000 I think human beings are some sort of a caterpillar.
01:34:34.000 And I think we're creating a cocoon, and through that cocoon, we're going to give birth to a butterfly, and then we're going to become something.
01:34:40.000 And I think whether we're going to have some sort of a symbiotic connection to these electronic things where they're going to replace our parts, our failing parts, with far superior parts until we're not really a person anymore.
01:34:52.000 Like, what was that Scarlett Johansson movie?
01:34:55.000 The Ghost in the Shell?
01:34:57.000 I tried to watch part of it.
01:34:58.000 It's pretty stupid.
01:34:59.000 But she's hot as fuck, so it kept my attention for a little bit.
01:35:02.000 But in that, they took her brain and put it in this artificial body that had superpowers.
01:35:09.000 And they basically replaced everything about her that was in her consciousness with these artificial parts.
01:35:20.000 All of her frame, everything was just some new thing that was far superior.
01:35:25.000 And she had these abilities that no human being will ever have.
01:35:28.000 I really wonder why we have this insatiable...
01:35:34.000 Why can't...
01:35:34.000 If we're so logical...
01:35:36.000 We're so logical and so thoughtful in some ways.
01:35:39.000 Why can't we be that way when it comes to materialism?
01:35:41.000 Well, I think one of the reasons why is because materialism is the main engine that pushes innovation.
01:35:48.000 If it wasn't for people's desire to get the newest, latest, and greatest thing, what would fund these new TVs, cell phones, computers?
01:35:57.000 Why do you really need a new laptop every year?
01:36:00.000 Is it because of engineered obsolescence where the laptop dies off and you have to get a new one because they fucked you and they built a shitty machine that's designed to die so you buy a new one?
01:36:10.000 You really like iPhones, don't you?
01:36:11.000 Well, it's not even iPhones.
01:36:13.000 It's a laptop.
01:36:14.000 Is it because you just see the number?
01:36:20.000 2.6 gigahertz is better than 2.4.
01:36:23.000 Oh, it's the new one.
01:36:24.000 It has a 12 megapixel webcam instead of an 8. And for whatever reason, we have this desire to get those new things.
01:36:32.000 I think that's what fuels innovation.
01:36:34.000 And my cynical view of this thing that's happening is that we have this bizarre desire to fuel our demise, and that we're doing so by fueling technology, by motivating these companies to continually innovate.
01:36:53.000 If everybody just said, you know what, man, I'm really into log cabins, and I want an axe so I can cut my own firewood, and I realize that TV rots my brain, I just want to read books.
01:37:03.000 So fuck off.
01:37:04.000 And everybody started doing that.
01:37:06.000 And everybody started living like, when it gets dark out, I'll use candles.
01:37:10.000 And you know what?
01:37:10.000 I'm going to get my water from a well.
01:37:12.000 And you know what?
01:37:13.000 I'm going to do...
01:37:13.000 And I like living better that way.
01:37:15.000 If people started doing that, there would be no need for companies to continually make new computers, to make new phones, to make new smart watches, or whatever the fuck they're making.
01:37:26.000 To make cars that can drive themselves.
01:37:29.000 These things that we're really, really attached to, if you looked at the human organism, you somehow or another could objectively remove yourself from society and culture and all the things that make us a person, and you look at what we do,
01:37:46.000 what does this thing do?
01:37:47.000 We found this planet, there's these little pink monkeys and brown monkeys and yellow monkeys, and what are they all into?
01:37:53.000 Well, they all seem to be into making stuff.
01:37:55.000 And what kind of stuff are they making?
01:37:57.000 Well, they keep making better and better stuff that's more and more capable.
01:38:00.000 Well, where's it going?
01:38:01.000 Well, it's going to replace them.
01:38:03.000 They're going to make a thing that's better than them.
01:38:05.000 They're engineering these things slowly but surely to do all the things they do but do them better.
01:38:12.000 Yeah, and it's a fascinating theory.
01:38:15.000 I mean, it's not a theory.
01:38:17.000 It's an instructive way to think about intelligence and life, period.
01:38:21.000 So if you step back, look across human history, and look at Earth as an organism.
01:38:26.000 What is this thing doing?
01:38:28.000 The thing is, I think in terms of scale and in terms of time, you can look that way at so many things.
01:38:35.000 Like isn't there billions or trillions of organisms on our skin right now, both of us, that have little civilizations, right?
01:38:42.000 They have a different mechanism by which they operate and interact.
01:38:45.000 But for us to say that we're intelligent and those organisms are not is a very narrow-sided view.
01:38:51.000 So they are operating under some force of nature that... Darwin worked on trying to understand some small elements of this with evolutionary theory.
01:39:01.000 But there's other more interesting forces at play that we don't understand.
01:39:04.000 And there's some kind of force.
01:39:06.000 It could be that a fundamental force of physics Einstein never got a chance to discover is our desire for an iPhone update.
01:39:15.000 Some fundamental force of nature, somehow gravity and the strong force and these things described by physics add up to this drive for new things, for creation.
01:39:29.000 And the fact that we die, the fact that we're mortal, the fact of what desires are built into us, whether it's sexual or intellectual or whatever drives us apes, somehow that all combines into this progress, and towards what...
01:39:49.000 It is a compelling way to think that if an alien species did visit Earth, I think they would probably see the smartphone situation.
01:39:58.000 They see how many little lights are on and how us apes are looking at them.
01:40:02.000 It's possible, I think, some people have said that they would think the overlords are the phones, not the people.
01:40:07.000 Mm-hmm.
01:40:08.000 So to think that that's now moving into a direction where the future will be something that is beyond human or symbiotic with human ways we can't understand is really interesting.
01:40:23.000 Not just that, but something that we're creating ourselves.
01:40:25.000 Creating ourselves.
01:40:26.000 And it's a main focal point of our existence.
01:40:30.000 That's our purpose.
01:40:31.000 Yeah.
01:40:31.000 I mean, if you think about a main focal point, if you think about the average person, what they do, there's a great percentage of our population that has jobs where they work, and one of the ways that they placate themselves doing these things that they don't really enjoy doing is earning money for objects.
01:40:54.000 They want a new car.
01:40:56.000 They want a new house.
01:40:57.000 They want a bigger TV. They want a this or that.
01:41:00.000 And the way they motivate themselves to keep showing up at this shitty job is to think, if I just put in three more months, I can get that Mercedes.
01:41:10.000 If I just do this or that, I can finance this new Pixel 3. Yeah, and it's interesting because the sort of politicians – what's the American dream?
01:41:21.000 Is for – you hear this thing, I want my children to be better off than me.
01:41:26.000 This kind of desire – you can almost see that that taken farther and farther will be – there will be a presidential candidate in 50, 100 years.
01:41:35.000 They'll say – I want my children to be robots.
01:41:39.000 You know what I mean?
01:41:41.000 Like sort of this idea that that's the natural evolution and that is the highest calling of our species.
01:41:47.000 That scares me because I value my own life.
01:41:51.000 But does it scare you if it comes out perfect?
01:41:54.000 Like if each robot is like a god and each robot is beautiful and loving and they recognize all the great parts of this existence and they avoid all the jealousy and the nonsense and all the stupid aspects of being a person.
01:42:08.000 We realize that a lot of these things are just sort of biological engineered tricks that are designed to keep us surviving from generation after generation but now here in this fantastic new age we don't need them anymore.
01:42:24.000 Yeah, it's...
01:42:24.000 Well, first, one of the most transformative moments of my life was when I met Spot Mini in person, which is one of the legged robots at Boston Dynamics.
01:42:35.000 For the first time when I met them, met that little fella, there was...
01:42:41.000 I know exactly how it works.
01:42:43.000 I know exactly how every aspect of it works.
01:42:45.000 It's just a dumb robot.
01:42:46.000 But when I met him, and he got up, and he looked at me...
01:42:50.000 There it is right there.
01:42:51.000 Have you seen it dance now?
01:42:53.000 Yeah, the dance.
01:42:54.000 The new thing?
01:42:55.000 Yep.
01:42:56.000 The dance is crazy.
01:42:58.000 But see, it's not crazy on the technical side.
01:43:01.000 Right.
01:43:02.000 It's engineered.
01:43:03.000 It's obvious.
01:43:03.000 It's programmed.
01:43:04.000 But it's crazy to watch.
01:43:06.000 Like, wow.
01:43:07.000 The reason the moment was transformative is I know exactly how it works.
01:43:10.000 And yet by watching it, something about the feeling of it.
01:43:14.000 You're like, this thing is alive.
01:43:17.000 And there was this terrifying moment, not terrifying, but terrifying and appealing where this is the future.
01:43:24.000 Right.
01:43:25.000 Like, this thing represents some future that is totally, that we cannot understand.
01:43:36.000 Just like a future in the 18th century, a future with planes and smartphones was something we couldn't understand.
01:43:44.000 That this thing, that little dog could have had a human consciousness in it.
01:43:50.000 That was the feeling I had.
01:43:51.000 And I know exactly how it works.
01:43:53.000 There's nothing close to the intelligence, but it just gives you this picture of what the possibilities are of these living creatures.
01:44:01.000 And I think that's what people feel when they see Boston Dynamics.
01:44:04.000 Look how awesome this thing running around is.
01:44:06.000 They don't care about the technicalities and how far away we are.
01:44:09.000 They see it.
01:44:10.000 Look, this thing is pretty human.
01:44:12.000 And the possibilities of human-like things that supersede humans and can evolve and learn quickly, exponentially fast.
01:44:23.000 It's this terrifying frontier that really makes us think, as it did for me.
01:44:30.000 Maybe terrifying is a weird word.
01:44:31.000 Because when I look at it, and I'm not irrational, and I look at it, there's videos that show the progression of Boston Dynamics robots.
01:44:41.000 From several years ago to today, what they're capable of.
01:44:45.000 And it is a fascinating thing, because you're watching all the hard work of these engineers and all these people that have designed these systems and have figured out all these problems that these things encounter, and they've come up with solutions,
01:45:01.000 and they continue to innovate.
01:45:03.000 And they're constantly doing it, and you're seeing this problem, and you're like, wow, what are we going to see in a year?
01:45:08.000 What am I going to see in three years?
01:45:09.000 What am I going to see in five years?
01:45:12.000 Absolutely fascinating, because if you extrapolate and you just keep going, boy, you go 15, 20, 30, 50, 100 years from now, you have ex machina.
01:45:22.000 Yeah, you have ex machina, at least in our imagination.
01:45:27.000 In our imagination.
01:45:28.000 And the problem is there will be so many other things that are super exciting and interesting.
01:45:34.000 Sure, but that doesn't mean it's not crazy.
01:45:37.000 I mean there's many other things you could focus on also that are also going to be bizarre and crazy.
01:45:42.000 Sure.
01:45:43.000 But what about it?
01:45:44.000 Just it.
01:45:45.000 It's going somewhere.
01:45:46.000 That fucker is getting better.
01:45:48.000 The parkour one is bananas.
01:45:50.000 You see it hopping from box to box and left to right and leaping up in the air, and you're like, whoa.
01:45:56.000 That thing doesn't have any wires on it.
01:45:58.000 It's not connected to anything.
01:46:00.000 It's just jumping from box to box.
01:46:03.000 If that thing had a machine gun and it was running across a hill at you, you'd be like, oh, fuck, how long does its battery last?
01:46:09.000 How many bullets does it have?
01:46:11.000 Let me just say that I would pick Tim Kennedy over that dog for the next 50 years.
01:46:19.000 50?
01:46:20.000 Yeah.
01:46:21.000 Man, I'm a big Tim Kennedy fan.
01:46:24.000 I'm talking about that.
01:46:27.000 But he'll probably have some robotic additions to his body to improve the...
01:46:32.000 Well, then is he Tim Kennedy anymore?
01:46:35.000 If the brain is Tim Kennedy, then he's still Tim Kennedy.
01:46:37.000 That's the way we think about it.
01:46:39.000 But there is huge concern about – the UN is meeting about this as autonomous weapons.
01:46:45.000 Allowing AI to make decisions about who lives and who dies is really concerning in the short term.
01:46:54.000 It's not about a robotic dog with a shotgun running around.
01:46:59.000 It's more about our military wanting to make destruction as efficient as possible, minimizing human life.
01:47:06.000 Drones?
01:47:07.000 Drones.
01:47:07.000 There's something really uncomfortable to me about drones, in the way you compare it with Dan Carlin's Hardcore History episodes on Genghis Khan.
01:47:17.000 There's something impersonal about what drones are doing, where it moves you away from the actual destruction that you're achieving, where I worry that our ability to encode the ethics into these systems will go wrong in ways we don't expect.
01:47:34.000 And so, I mean, folks at the UN talk about, well, you have these automated drones that drop bombs over a particular area.
01:47:45.000 So the bigger and bigger the area is over which you allow an artificial intelligence system to make a decision to drop the bombs, the weirder and weirder it gets.
01:47:55.000 There's some line, now presumably if there's like three tanks that you would like to destroy with a drone, it's okay for an AI system to say, I would like to destroy those three, like I'll handle everything, just give me the three tanks.
01:48:08.000 But this makes me uncomfortable as well because I think I'm opposed to most wars.
01:48:14.000 But it's just, the military is the military, and they try to get the job done.
01:48:19.000 Now what if we now expand that to 10, 20, 100 tanks?
01:48:23.000 Where you now let the AI system drop bombs all over very large areas.
01:48:28.000 How can that go wrong?
01:48:30.000 And that's terrifying.
01:48:31.000 And there's practical engineering solutions to that.
01:48:34.000 Oversight.
01:48:35.000 And that's something that engineers sit down and work through.
01:48:38.000 There's an engineering ethic where you encode it, and you have meetings about, how do we make this safe?
01:48:44.000 That's what you worry about.
01:48:45.000 The thing that keeps me up at night is the 40,000 people that die every year in auto crashes.
01:48:51.000 I worry about not...
01:48:53.000 You have to understand, I worry about the future of AGI taking over, but that's not as large...
01:49:00.000 AGI? AGI, Artificial General Intelligence.
01:49:04.000 That's kind of the term that people have been using for this.
01:49:07.000 But maybe because I'm in it, I worry more about the 40,000 people that die in the United States and the 1.2 million that die every year from auto crashes.
01:49:17.000 There's something...
01:49:20.000 That is more real to me about the death that's happening now that could be helped.
01:49:25.000 And that's the fight.
01:49:27.000 But, of course, if this threat becomes real, then...
01:49:31.000 Then that's a much, you know, that's a serious threat to humankind.
01:49:36.000 And that's something that should be thought about.
01:49:39.000 I just worry that, I worry also about the AI winter.
01:49:44.000 So I mentioned there's been two winters in the 70s and the 80s to 90s.
01:49:51.000 When funding completely dried up, but more importantly, just people stopped getting into artificial intelligence and became cynical about its possibilities.
01:50:00.000 Because there was a hype cycle where everyone was really excited about the possibilities of AI. And then they realized, you know, five, ten years into the development, that we didn't actually achieve anything.
01:50:10.000 It was just too far off.
01:50:12.000 Too far off.
01:50:12.000 Same as it was for virtual reality.
01:50:15.000 For the longest time, virtual reality was something that was discussed even in the 80s and the 90s, but it just died off.
01:50:20.000 Nobody even thought about it.
01:50:22.000 Now it's come back to the forefront when there's real virtual reality that you can use, like the HTC Vive or things along those lines where you can put these helmets on, and you really do see these alternative worlds that people have created in these video games.
01:50:40.000 You realize there's a practical application for this stuff because the technology is caught up with the concept.
01:50:45.000 Yeah, and I actually don't know where people stand on VR. We do quite a bit of stuff with VR for research purposes for simulating robotic systems, but I don't know where the hype is.
01:50:56.000 I don't know if people have calmed down a little bit on VR. So there was a hype in the 80s and 90s, I think.
01:51:01.000 I think it's ramped up quite a bit.
01:51:02.000 What are the other ones, the Oculus Rift, and what else?
01:51:06.000 Those are the main ones, and there are other headsets that you can work with and use.
01:51:10.000 Yeah, and there's some you can use just with a Samsung phone, correct?
01:51:14.000 Yeah, and the next generation, which is coming out in the next year or two, is going to be all standalone systems.
01:51:20.000 So there's going to be an Oculus Rift coming out that you don't need a computer for at all.
01:51:23.000 So the ultimate end-game fear, the event horizon of that, is the Matrix.
01:51:30.000 Right?
01:51:31.000 That's what people are terrified of, of some sort of a virtual reality world where you don't exist in the physical sense anymore.
01:51:37.000 They just plug something into your brain stem, just like they do in The Matrix, and you're just locked into this artificial world.
01:51:45.000 Is that terrifying to you?
01:51:47.000 That seems to be less terrifying than AI killing all of humankind.
01:51:51.000 Well, it depends.
01:51:53.000 What is life?
01:51:54.000 That's the real question, right?
01:51:56.000 If you only exist inside of a computer program, but it's a wonderful program, and whatever your consciousness is, and we haven't really established what that is, right?
01:52:05.000 I mean, there's a lot of really weird hippie ideas out there about what consciousness is.
01:52:10.000 Your body's just like an antenna man, and it's just like tuning into consciousness, and consciousness is all around you.
01:52:16.000 It's Gaia.
01:52:17.000 It's the Mother Earth.
01:52:17.000 It's the universe itself.
01:52:19.000 It's God.
01:52:20.000 It's love.
01:52:21.000 Okay, maybe.
01:52:22.000 I don't know.
01:52:23.000 But if you could take that, whatever the fuck it is, and send it in a cell phone to New Zealand, is that where your consciousness is now?
01:52:30.000 Because if we figure out what consciousness is and get it to the point where we can turn it into a program or duplicate it, I mean, that sounds so far away.
01:52:43.000 But if you went up to someone from 1820 and said, hey man, one day I'm going to take a picture of my dick and I'm going to send it to this girl.
01:52:51.000 She's going to get it on her phone.
01:52:52.000 They'd be like, what the fuck are you talking about?
01:52:54.000 A photo?
01:52:55.000 What do you mean?
01:52:56.000 What's a photo?
01:52:57.000 Oh, it's like a picture, but you don't draw it.
01:53:00.000 It's perfect.
01:53:01.000 It looks exactly like that.
01:53:02.000 It's in HD and I'm going to make a video.
01:53:05.000 Of me taking a shit, and I'm going to send it to everyone.
01:53:08.000 They're like, what the fuck is this?
01:53:09.000 That's not even possible.
01:53:10.000 Get out of here.
01:53:11.000 That is essentially you're capturing time.
01:53:14.000 You're capturing moments in time in a very, not a very crude sense, but a crude sense in terms of comparing it to the actual world itself.
01:53:27.000 In the moment where it's happening.
01:53:28.000 Like here, you and I are having this conversation.
01:53:32.000 We're having it in front of this wooden desk.
01:53:34.000 There's paper in front of you.
01:53:35.000 To you and I, we have access to all the textures, the sounds.
01:53:39.000 We can feel the air conditioning.
01:53:41.000 We can look up.
01:53:42.000 We can see the ceiling.
01:53:46.000 We got the whole thing in front of us because we're really here.
01:53:49.000 But to many people that are watching this on YouTube right now, they're getting a minimized, crude version of this.
01:53:59.000 That's similar.
01:54:00.000 But it feels real.
01:54:02.000 It feels pretty real.
01:54:03.000 It's pretty close.
01:54:04.000 It's pretty close.
01:54:05.000 So, I mean, I've listened to your podcast for a while.
01:54:08.000 You usually have...
01:54:09.000 So, when I listen to your podcast, it feels like I'm sitting in with friends listening to a conversation.
01:54:15.000 So, it's not as intense as, for example, Dan Carlin's Hardcore History, where the guy's, like, talking to me about the darkest aspects of human nature.
01:54:25.000 His show's so good, I don't think you can call it a podcast.
01:54:28.000 It's an experience.
01:54:30.000 You're there.
01:54:31.000 I was hanging out with him and Genghis Khan and World War I, World War II. Painfotainment is an episode he had where he talks about very dark ideas about our human nature and desiring the observation of the torture and suffering of others.
01:54:51.000 There's something really appealing to us.
01:54:53.000 He has this whole episode how throughout history we liked watching people die.
01:54:57.000 Mm-hmm.
01:54:58.000 And there's something really dark.
01:55:00.000 You're saying that if somebody streamed something like that now, it would probably get hundreds of millions of views.
01:55:06.000 Yeah, it probably would.
01:55:07.000 And we're protecting ourselves from our own nature because we understand the destructive aspects of it.
01:55:12.000 That's why YouTube would pull something like that.
01:55:14.000 If you tied a person in between two trucks and pulled them apart and put that on YouTube, it would get millions of hits.
01:55:21.000 But YouTube would pull it because we've decided as a society, collectively, that those kinds of images are gruesome and terrible for us.
01:55:29.000 But nevertheless, that experience of listening to his podcast slash show, it feels real.
01:55:34.000 Just like VR for me, there are really strong, real aspects to it.
01:55:39.000 Where I'm not sure, if the VR technology gets much better, to where you had a choice: do you want to live your life in VR? You're going to die just like you would in real life.
01:55:54.000 Meaning your body will die.
01:55:55.000 You're just going to hook up yourself to a machine like it's a deprivation tank.
01:56:00.000 And just all you are is in VR and you're going to live in that world.
01:56:03.000 Which life would you choose?
01:56:05.000 Would you choose a life in VR or would you choose a real life?
01:56:09.000 That was the guy's decision in The Matrix, right?
01:56:12.000 The guy decided in The Matrix he wanted to be a special person in The Matrix.
01:56:15.000 He was eating that steak, talking to the guys, and he decided he was going to give up.
01:56:19.000 Remember that?
01:56:19.000 Yep.
01:56:20.000 So what decision would you make?
01:56:22.000 What is reality if it's not what you're experiencing?
01:56:25.000 If you're experiencing something, but it's not tactile in the sense that you can't drag it somewhere and put it on a scale and take a ruler to it and measure it, but in the moment of being there, it seems like it is.
01:56:38.000 What is missing?
01:56:39.000 What is missing?
01:56:40.000 Well, it's not real.
01:56:41.000 Well, what is real then?
01:56:42.000 What is real?
01:56:43.000 Well, that's the ultimate question in terms of like, are we living in a simulation?
01:56:48.000 That's one of the things that Elon brought up when I was talking to him.
01:56:51.000 And this is one thing that people have struggled with.
01:56:55.000 If we are one day going to come up with an artificial reality that's indiscernible from reality, in terms of emotions, in terms of experiences, feel, touch, smell, all of the sensory input that you get from the regular world,
01:57:11.000 if that's inevitable, if one day we do come up with that, how are we to discern whether or not we have already created that and we're stuck in it right now?
01:57:20.000 That we can't.
01:57:22.000 We can't.
01:57:22.000 And there's a lot of philosophical arguments for that, but it gets at the nature of reality.
01:57:27.000 I mean, it's fascinating because we're totally clueless about what it means to be real.
01:57:34.000 What it means to exist.
01:57:35.000 To exist.
01:57:36.000 So consciousness for us, I mean, it's incredible.
01:57:39.000 You can look at your own hand.
01:57:41.000 I'm pretty sure I'm on the Joe Rogan Experience podcast.
01:57:45.000 I'm pretty sure this is not real.
01:57:47.000 I'm imagining all of it.
01:57:48.000 There's a knife in front of me.
01:57:49.000 I mean, it's surreal.
01:57:51.000 And I have no proof that it's not fake.
01:57:54.000 And those kinds of things actually come into play with the way we think about artificial intelligence too.
01:57:58.000 Like, what is intelligence?
01:58:00.000 Right.
01:58:00.000 It seems like...
01:58:02.000 It seems like we're easily impressed by algorithms and robots we create that appear to have intelligence, but we still don't know what is intelligent and how close those things are to us.
01:58:15.000 And we think that ourselves, as this biological entity that can think and talk and cry and laugh, that we are somehow or another more important than some sort of silicon-based thing that we create that does everything that we do but far better.
01:58:33.000 Yeah, I think if I were to take a stand, a civil rights stand, I hope I'm young.
01:58:39.000 I'll one day run for president on this platform, by the way, that defending the rights—well, I can't because I'm Russian, but maybe they'll change the rules—that robots will have rights.
01:58:53.000 Robots' lives matter.
01:58:55.000 And I actually believe that we're going to have to start struggling with the idea of how we interact with robots.
01:59:03.000 I've seen too often the abuse of robots, not just the Boston Dynamics ones, but literally people. You leave them alone with a robot, the dark aspects of human nature come out, and it's worrying to me.
01:59:16.000 I would like a robot that spars, but only can move at like 50% of what I can move at, so I can fuck it up.
01:59:24.000 Yeah.
01:59:25.000 You'd be able to practice really well.
01:59:28.000 You would develop some awesome sparring instincts with that robot, but there would still be consequences.
01:59:34.000 If you did fuck up and you got lazy and the leg kicked you and you didn't check it, it would hurt.
01:59:39.000 I would love to see a live stream of that session because there's so many ways.
01:59:47.000 I mean, I practiced on a dummy.
01:59:49.000 There are aspects to a dummy that are helpful.
01:59:51.000 Yeah, in terms of positioning and where your stance is and technique.
01:59:57.000 Yeah, there's something to it.
01:59:58.000 I can certainly see that going wrong in ways where a robot might not respect you tapping.
02:00:04.000 Yeah.
02:00:05.000 Or a robot decides to beat you to death.
02:00:07.000 It's tired of you fucking it up every day.
02:00:09.000 And one day you get tired.
02:00:10.000 Or what if you sprain your ankle and it gets on top of you and mounts you and just starts blasting you in the face?
02:00:15.000 It does a heel hook or something.
02:00:17.000 Right.
02:00:17.000 And you'd have to be able to say, stop!
02:00:19.000 Stop!
02:00:20.000 Well then, no, you're going to have to use your martial art to defend yourself.
02:00:24.000 Yeah, right, because if you make it too easy for the robot to just stop anytime, then you're not really going to learn.
02:00:30.000 Like, one of the consequences of training, if you're out of shape, is if you get tired, people fuck you up.
02:00:36.000 And that's incentive for you to not get tired.
02:00:38.000 Like, there are so many times that I would be in the gym, like, doing strength and conditioning, and I think about moments where I got tapped.
02:00:45.000 Where guys caught me in something and I was exhausted and I couldn't get out of the triangle.
02:00:49.000 I'm like, shit!
02:00:50.000 And I just really push on the treadmill or push on the airdyne bike or whatever it was that I was doing, thinking about those moments of getting tired.
02:01:01.000 Yeah, that's what I think about when I do sprints and stuff, the feeling of competition, those nerves of stepping in there.
02:01:12.000 It's really hard to do that kind of visualization, but it builds.
02:01:15.000 It's effective though and the feeling of consequences to you not having any energy.
02:01:22.000 So you have to muster up the energy.
02:01:24.000 Because if you don't, you're gonna get fucked up.
02:01:27.000 Or something bad's gonna happen to someone you care about.
02:01:29.000 Or something's gonna happen to the world.
02:01:31.000 Maybe you're a superhero.
02:01:33.000 You're saving the world from the robots.
02:01:36.000 That's right.
02:01:38.000 To go back to what we were talking about, I'm sorry to interrupt you, but just to bring this all back around, what is this life and what is consciousness and what is this experience?
02:01:48.000 And if you can replicate this experience in a way that's indiscernible, will you choose to do that?
02:01:57.000 Lex, you don't have much time left, but we have an option.
02:02:01.000 We have an option and we can take your consciousness as you know it right now, put it into this program.
02:02:08.000 You will have no idea that this has happened.
02:02:10.000 You're going to close your eyes, you're going to wake up, you're going to be in the most beautiful green field.
02:02:14.000 There's going to be naked women everywhere.
02:02:16.000 Feasts everywhere you go.
02:02:18.000 There's going to be just picnic tables filled with the most glorious food.
02:02:22.000 You're going to drive around a Ferrari every day and fly around in a plane.
02:02:26.000 You're never going to die.
02:02:27.000 You're going to have a great time.
02:02:28.000 Or take your chances.
02:02:30.000 See what happens when the lights shut off.
02:02:34.000 Well, first of all, I'm a simple man.
02:02:36.000 I don't need multiple women.
02:02:37.000 One is good.
02:02:38.000 I'm romantic in that way.
02:02:39.000 That's what you say.
02:02:40.000 But that's in this world.
02:02:42.000 This world, you've got incentive to not be greedy.
02:02:44.000 In this other world where you can breathe underwater and fly through the air and, you know… No, I believe that scarcity is the fundamental ingredient of happiness.
02:02:57.000 So if you give me 72 virgins or whatever it is and… You just keep one slut?
02:03:05.000 Not a slut.
02:03:07.000 A requirement, you know, somebody intelligent and interesting.
02:03:10.000 Who enjoys sexual intercourse.
02:03:12.000 Well, not just enjoys sexual intercourse.
02:03:14.000 Like you.
02:03:15.000 A person.
02:03:16.000 Well, that and keeps things interesting.
02:03:20.000 Lex, we can engineer all this into your experience.
02:03:23.000 You don't need all these different women.
02:03:24.000 I get it.
02:03:25.000 I understand.
02:03:26.000 We've got this program for you.
02:03:28.000 Don't worry about it.
02:03:29.000 Okay, you want one more, and a normal car, like maybe a Saab or something like that.
02:03:33.000 Nothing crazy.
02:03:33.000 Yeah.
02:03:34.000 Right?
02:03:34.000 Yeah.
02:03:35.000 You're a simple man.
02:03:36.000 I get it.
02:03:37.000 No, no, no.
02:03:37.000 You want to play chess with someone who could beat you every now and then, right?
02:03:41.000 Yeah, but not just chess.
02:03:42.000 So engineer some flaws.
02:03:44.000 She needs to be able to lose her shit every once in a while.
02:03:47.000 Yeah, the Matrix.
02:03:48.000 Put her in the red dress.
02:03:49.000 Which girl in the red dress?
02:03:50.000 It comes right here.
02:03:51.000 Remember, he goes like, did you notice the girl in the red dress?
02:03:53.000 It's like the one that catches his attention.
02:03:55.000 I don't remember this.
02:03:56.000 This is right at the very beginning when he's telling them what the Matrix is.
02:04:00.000 She walks by right here.
02:04:01.000 Oh, there she is.
02:04:02.000 Ba-bam!
02:04:03.000 That's your girl.
02:04:04.000 The guy afterwards is like, I engineered that.
02:04:06.000 I'm telling you, it's just not...
02:04:08.000 It's not.
02:04:10.000 Well, yeah, but then I have certain features.
02:04:12.000 Like, I'm not an iPhone guy, I like Android, so that may be an iPhone person's girl.
02:04:16.000 But that's nonsense.
02:04:18.000 Mm-hmm.
02:04:20.000 So if an iPhone came along that was better than Android, you wouldn't want to use it?
02:04:23.000 No, my definition of better is different.
02:04:27.000 I know, for me, happiness lies...
02:04:33.000 In Android phones?
02:04:34.000 Yeah, Android phones.
02:04:36.000 Close connection with other human beings who are flawed but interesting, who are passionate about what they do.
02:04:41.000 Yeah, but this is all engineered into your program.
02:04:43.000 Yeah, yeah.
02:04:43.000 I'm requesting features here.
02:04:45.000 Yeah, you're requesting features.
02:04:46.000 But why Android phones?
02:04:48.000 Is that like, I'm a Republican.
02:04:51.000 I'm a Democrat.
02:04:51.000 I like Androids.
02:04:53.000 I like iPhones.
02:04:54.000 Is that what you're doing?
02:04:54.000 You're getting tribal?
02:04:55.000 No, I'm not getting tribal.
02:04:57.000 I'm totally not tribal.
02:04:58.000 I was just representing...
02:05:00.000 I figured the girl in the red dress just seems like an iPhone as a feature set.
02:05:06.000 What?
02:05:07.000 The kind of features I'm asking for...
02:05:08.000 She's too hot?
02:05:09.000 Yeah, and it seems like she's not interested in Dostoevsky.
02:05:13.000 How would you know?
02:05:15.000 That's so prejudiced of you, just because she's beautiful and she's got a tight-fitting dress?
02:05:20.000 That's true.
02:05:20.000 I don't know.
02:05:20.000 That's very unfair.
02:05:21.000 How dare you?
02:05:23.000 You sexist son of a bitch.
02:05:24.000 I'm sorry.
02:05:25.000 Actually, that was totally...
02:05:27.000 She probably likes Nietzsche and Dostoevsky and Camus and Hesse.
02:05:29.000 She did her PhD in astrophysics, possibly.
02:05:34.000 Yeah, no, that's...
02:05:36.000 We're talking about all the trappings.
02:05:38.000 Look at that.
02:05:39.000 Bam, I'll take her all day.
02:05:41.000 iPhone, Android.
02:05:42.000 I'm not involved in this conversation.
02:05:44.000 I'll take her if she's a Windows phone.
02:05:45.000 How about that?
02:05:46.000 I don't give a fuck.
02:05:47.000 Windows phone?
02:05:48.000 Oh, come on now.
02:05:49.000 I'll take her if she's a Windows phone.
02:05:50.000 I'll go with a flip phone from the fucking early 2000s.
02:05:53.000 I'll take a Razr, a Motorola Razr with like 37 minutes of battery life.
02:06:02.000 We're talking about all the learned experiences and preferences that you've developed in your time here in this actual real Earth, or what we're assuming is the actual real Earth.
02:06:12.000 But how are we...
02:06:14.000 I mean, if you really are taking into account the possibility that one day something, someone, whether it's artificial intelligence figures it out or we figure it out, engineering a world, some sort of...
02:06:32.000 Of a simulation that is just as real as this world.
02:06:36.000 Like where there is no, there's no, it's impossible to discern.
02:06:42.000 Not only is it impossible to discern, people choose not to discern anymore.
02:06:46.000 Right.
02:06:47.000 Because it's so, why bother?
02:06:49.000 Why bother discerning?
02:06:50.000 That's a fascinating concept to me.
02:06:52.000 But I think that world, not to sound hippie or anything, but I think that, I think we live in a world that's pretty damn good.
02:06:59.000 It is pretty good.
02:07:00.000 But improving it with such fine ladies walking around is not necessarily the delta that's positive.
02:07:08.000 Okay, but that's one aspect of the improvement.
02:07:10.000 What about improving it in this new world?
02:07:12.000 There's no drone attacks in Yemen that kill children.
02:07:16.000 There's no murder.
02:07:17.000 There's no rape.
02:07:18.000 There's no sexual harassment.
02:07:20.000 There's no racism.
02:07:24.000 All the negative aspects of our current culture are engineered out.
02:07:28.000 I think a lot of religions have struggled with this.
02:07:32.000 And of course I would say I would want a world without that.
02:07:35.000 But part of me thinks that our world is meaningful because of the suffering in the world.
02:07:40.000 Right, that's a real problem, isn't it?
02:07:42.000 That is a fascinating concept that's almost impossible to ignore.
02:07:47.000 Do you appreciate love because of all the hate?
02:07:52.000 You know, like if you have a hard time finding a girlfriend and just no one's compatible and all the relationships go bad.
02:07:59.000 I'm single, by the way.
02:08:00.000 Holla.
02:08:01.000 Letting the ladies know.
02:08:02.000 But if you do have a hard time connecting with someone and then you finally do connect with someone after all those years of loneliness and this person's perfectly compatible with you, how much more will you appreciate them than a guy like Dan Bilzerian who's flying around in a private jet banging tens all day long?
02:08:20.000 Maybe he's fucking drowning in his own sorrow.
02:08:23.000 Maybe he's got too much prosperity.
02:08:28.000 Yeah, we have that with social networks too.
02:08:31.000 The people that...
02:08:32.000 I mean, you're pretty famous.
02:08:36.000 The amount of love you get is huge.
02:08:40.000 It might be that because of the overflow of love, it's difficult to appreciate the more genuine little moments of love.
02:08:47.000 It's not for me.
02:08:49.000 No.
02:08:50.000 I spent a lot of time thinking about that.
02:08:53.000 And I also spent a lot of time thinking about how...
02:08:57.000 titanically bizarre my place in the world is.
02:09:01.000 I mean, I think about it a lot, and I spent a lot of time being poor and being a loser.
02:09:06.000 I mean, my childhood was not the best.
02:09:09.000 I went through a lot of struggle when I was young that I cling to like a safety raft.
02:09:14.000 You know, I don't ever think there's something special about me.
02:09:19.000 And I try to let everybody know that anybody can do what I've done.
02:09:23.000 You just have to just keep going.
02:09:26.000 It's like 99% of this thing is just showing up and keep going.
02:09:31.000 Keep improving, keep working at things, and keep going.
02:09:34.000 Put the time in.
02:09:35.000 But the interesting thing is, I actually went back, a couple of days ago, to your first podcast and listened to it.
02:09:41.000 You haven't really changed much.
02:09:43.000 So you were, I mean, the audio got a little better.
02:09:47.000 But just like the genuine nature of the way you interact hasn't changed.
02:09:52.000 And that's fascinating because, you know, fame changes people.
02:09:59.000 Well, I was already famous then.
02:10:01.000 Oh, in a different way.
02:10:02.000 Yeah, I was already famous from Fear Factor.
02:10:05.000 I already had stand-up comedy specials.
02:10:10.000 I'd already been on a sitcom.
02:10:13.000 Yeah.
02:10:14.000 I wasn't as famous as I am now, but I understood what it is.
02:10:18.000 I'm a big believer in adversity and struggle.
02:10:22.000 I think they're very important for you.
02:10:24.000 It's one of the reasons why I appreciate martial arts.
02:10:25.000 It's one of the reasons why I've been drawn to it as a learning tool, not just as something where it's a puzzle that I'm fascinated to try to figure out how to get better at the puzzle.
02:10:36.000 And martial arts is a really good example because you're never really the best, especially when there are just so many people doing it.
02:10:42.000 It's like you're always going to get beat by guys.
02:10:44.000 And then I was never putting the kind of time into it as an adult outside of my Taekwondo competition.
02:10:50.000 I was never really putting all day every day into it like a lot of people that I would train would.
02:10:56.000 And so I'd always get dominated by the really best guys.
02:10:59.000 So there's a certain amount of humility that comes from that as well.
02:11:02.000 But there's a struggle in that you're learning about yourself and your own limits.
02:11:11.000 And the limits of the human mind and endurance and just not understanding all the various interactions of techniques.
02:11:21.000 There's humility to that in that I've always described martial arts as a vehicle for developing your own human potential.
02:11:28.000 But I think marathon running has similar aspects.
02:11:31.000 I think when you figure out a way to keep pushing and push through, the control of...
02:11:38.000 Your mind and your desire and overcoming adversity.
02:11:41.000 I think overcoming adversity is critical for the human.
02:11:44.000 For humans, we have this...
02:12:12.000 And I think this is sort of engineered into the system.
02:12:15.000 So for me, fame is almost like a cheat code.
02:12:18.000 It's like you don't really want it.
02:12:20.000 Don't dwell on that, man.
02:12:21.000 That's like a free buffet.
02:12:24.000 You want to go hunt your own food.
02:12:26.000 You want to make your own fire.
02:12:28.000 You want to cook it yourself and feel the satisfaction.
02:12:30.000 You don't want people feeding you grapes while you lie down.
02:12:34.000 What is the hardest thing?
02:12:36.000 So you talk about challenge a lot.
02:12:38.000 What's the hardest thing?
02:12:40.000 When have you been really humbled?
02:12:43.000 Martial arts, for sure.
02:12:44.000 The most humbling.
02:12:46.000 Yeah, from the moment I started, I mean, I got really good at Taekwondo, but even then I'd still get the fuck beaten out of me by my friends.
02:12:54.000 I got training partners, especially when you're tired and you're doing, you know, you're rotating partners and guys are bigger than you.
02:13:00.000 It's just humbling.
02:13:02.000 You know, martial arts are very humbling.
02:13:04.000 Yeah, so that – and I got to call you out on something.
02:13:08.000 So you talk about education systems sometimes.
02:13:10.000 I've heard you say it's a little broken, in high school and so on.
02:13:14.000 I'm not really calling you out.
02:13:17.000 I just want to talk about it because I think it's important and as somebody who loves math.
02:13:22.000 You talked... but your own journey was that school didn't give you, uh, passion, value.
02:13:32.000 Well, you can maybe talk to that, but I, for me, what I always, and maybe I'm sick in the head or something, but for me, math was exciting the way martial arts were exciting for you because it was really hard.
02:13:46.000 I wanted to quit.
02:13:48.000 And the idea I have with education, that seems to be flawed nowadays a little bit, is that we want to make education easier.
02:13:58.000 That we want to make it, you know, more accessible and so on.
02:14:01.000 Accessible, of course, is great.
02:14:03.000 But you kind of forget in that, and those are all good goals.
02:14:06.000 You forget in that it's supposed to be also hard.
02:14:10.000 And like teachers...
02:14:12.000 Just the way your wrestling coach, if you, like, quit, you say, I can't do any more, I have to..., you come up with some kind of excuse, your wrestling coach looks at you once and says, get your ass back on the mat.
02:14:22.000 The same way I wish math teachers did.
02:14:26.000 When people say, it's almost like cool now to say, ah, it's not, math sucks.
02:14:31.000 Math's not for me.
02:14:31.000 Or science sucks.
02:14:33.000 This teacher's boring.
02:14:35.000 I think there's room for some culture where it says, no, no, no, you're not.
02:14:40.000 If you just put in the time and you struggle, then that opens up the universe to you.
02:14:45.000 Like whether you become a Neil deGrasse Tyson or the next Fields Medal winner in mathematics.
02:14:50.000 I would not argue with you for one second.
02:14:52.000 I would also say that one of the more beautiful things about human beings is that we vary so much, and that one person who is just obsessed with playing the trombone, and to me, I don't give a fuck about trombones, but that's okay.
02:15:07.000 Like, I can't be obsessed about everything.
02:15:10.000 Some people love golf, and they just want to play it all day long.
02:15:13.000 I've never played golf a day in my life, except miniature golf, and just fucking around.
02:15:20.000 But that doesn't, it's not bad or good.
02:15:23.000 And I think there's definitely some skills that you learn from mathematics that are hugely significant if you want to go into the type of fields that you're involved in.
02:15:33.000 For me, it's never been appealing.
02:15:35.000 But it's not that it was just difficult.
02:15:39.000 It's also that it just, for whatever reason, who I was at that time in that school with those teachers, having the life experience that I had, that was not what I was drawn to.
02:15:50.000 But what I was drawn to was literature.
02:15:51.000 I was drawn to reading.
02:15:53.000 I was drawn to stories.
02:15:54.000 I was drawn to possibilities and creativity.
02:15:57.000 I was drawn to all those things.
02:15:58.000 You were an artist a bit too.
02:15:59.000 Yeah.
02:16:00.000 I used to want to be a comic book illustrator.
02:16:04.000 That was a big thing when I was young.
02:16:05.000 I was really into comic books.
02:16:08.000 I was really into...
02:16:10.000 It was traditional comic books and also a lot of the horror comics from the 1970s, the black and white, like creepy and eerie.
02:16:19.000 Did you ever see those things?
02:16:21.000 Creepy and eerie?
02:16:22.000 Like black and white?
02:16:23.000 Yeah, they were a comic book series that existed way back in the day.
02:16:30.000 They were all horror.
02:16:31.000 And they were really cool illustrations and these wild stories.
02:16:36.000 But it was comic books, but they were all black and white.
02:16:39.000 That's Creepy and Eerie.
02:16:40.000 Oh, that's the actual name.
02:16:41.000 Yeah.
02:16:42.000 Eerie and Creepy were the names.
02:16:43.000 See, like, that was from what year was that?
02:16:45.000 It says September, but it doesn't say what year.
02:16:48.000 I used to get these when I was a little kid, man.
02:16:52.000 I was like eight, nine years old in the 70s.
02:16:56.000 Good and evil.
02:16:57.000 Yeah.
02:16:57.000 They were my favorite.
02:17:00.000 That's a cover of them.
02:17:01.000 They would have covers that were done by Frank Frazetta, Boris Vallejo, and just really cool shit.
02:17:09.000 I loved those when I was little.
02:17:12.000 I was always really into horror movies and really into...
02:17:16.000 Look at this werewolf one.
02:17:18.000 That was one of my favorite ones.
02:17:20.000 That was a crazy werewolf that was on all fours.
02:17:23.000 Who's the hero usually?
02:17:25.000 Superhero?
02:17:25.000 Everybody dies in those.
02:17:26.000 That's the beautiful thing about it.
02:17:28.000 Everybody gets fucked over.
02:17:30.000 That was the thing that I really liked about them.
02:17:32.000 Nobody made it out alive.
02:17:34.000 There was no one guy who figured it out and rescued the woman and they rode off into the sunset, uh-uh.
02:17:39.000 You'd turn the corner and there'd be a fucking pack of wolves with glowing eyes waiting to tear everybody apart and that'd be the end of the book.
02:17:46.000 I was just really into the illustrations.
02:17:49.000 I found them fascinating.
02:17:51.000 I love those kind of horror movies and I love those kinds of illustrations.
02:17:55.000 So that's what I wanted to do when I was young.
02:17:57.000 Yeah, I think the education system, we talked about creativity, is probably not as good at inspiring and feeding that creativity.
02:18:05.000 Because I think math and wrestling can be taught systematically.
02:18:10.000 I think creativity is something, well, actually I know nothing about it.
02:18:14.000 So I think it's harder to take somebody like you when you're young and say – and inspire you to pursue that fire, whatever is inside.
02:18:23.000 Well, one of the best ways to inspire people is by giving them these alternatives that are so uninteresting.
02:18:35.000 Like saying, you're going to get a job selling washing machines.
02:18:39.000 And you're like, fuck that!
02:18:40.000 I'm going to figure out a way to not get a job selling washing machines.
02:18:44.000 Some of the best motivations that I've ever had have been terrible jobs.
02:18:48.000 Because you have these terrible jobs and you go, okay, fuck that.
02:18:51.000 I'm going to figure out a way to not do this.
02:18:54.000 And whether you want to call it ADD or ADHD or whatever it is that makes kids squirm in class.
02:19:01.000 I didn't squirm in every class.
02:19:03.000 I didn't squirm in science class.
02:19:05.000 I didn't squirm in interesting subjects.
02:19:10.000 There were things that were interesting to me that I would be locked in and completely fascinated by.
02:19:15.000 And there were things where I just couldn't wait to run out of that room.
02:19:18.000 And I don't know what...
02:19:21.000 The reason is, but I do know that a lot of what we call our education system is engineered for a very specific result.
02:19:28.000 And that result is you want to get a kid who can sit in class and learn so that they can sit in a job and perform.
02:19:37.000 And that, for whatever reason, that was just...
02:19:40.000 I mean, I didn't have the ideal childhood.
02:19:42.000 Maybe if I did, I would be more inclined to lean that way, but...
02:19:47.000 I didn't want to do anything like that.
02:19:49.000 Like, I couldn't wait to get the fuck out of school.
02:19:52.000 So I didn't ever have to listen to anybody like that again.
02:19:55.000 And then just a few years later, I mean, you graduate from high school when you're 18. When I was 21, I was a stand-up comic.
02:20:02.000 And I was like, I found it.
02:20:03.000 This is it.
02:20:04.000 I'm like good.
02:20:05.000 I found there's an actual job that nobody told me about where you could just make fun of shit, and people go out and they pay money to hear you create jokes and routines and bits.
02:20:15.000 Really?
02:20:16.000 You weren't terrified?
02:20:17.000 Of stand-up?
02:20:18.000 No, getting on stage and...
02:20:20.000 Oh, I was definitely nervous the first time.
02:20:22.000 Probably more nervous than any...
02:20:23.000 Seems harder than fighting from my perspective.
02:20:25.000 No, it's different.
02:20:26.000 It's different.
02:20:27.000 The consequences aren't as grave, but that's one of the...
02:20:30.000 Are they not?
02:20:31.000 No.
02:20:32.000 Like embarrassment and not...
02:20:34.000 You don't get pummeled.
02:20:36.000 I mean, you could say, like, emotionally it's probably more devastating or as devastating.
02:20:43.000 But man, losing a fight, it fucks you up for a long time.
02:20:47.000 You feel like shit for a long time.
02:20:50.000 But then you win and you feel amazing for a long time, too.
02:20:53.000 When you kill on stage, you only feel good for like an hour or so and that goes away.
02:20:57.000 It feels normal.
02:20:58.000 It's just normal.
02:20:59.000 It's just life, you know?
02:21:01.000 But I think that it prepared me, like competing in martial arts, the fear of that, and then how hard it is to stand opposite another person who's the same size as you, who's equally well-trained,
02:21:17.000 who's also a martial arts expert, and they ask you, are you ready?
02:21:21.000 Are you ready?
02:21:22.000 You bow to each other, and then they go, fight!
02:21:24.000 And then you're like, fight!
02:21:25.000 Here we go. That, to me, probably was like one of the best prep, and to do that from the time I was 15 till I was 21 was probably the best preparation for anything that was difficult to do, because it was so fucking scary. And then to go from that into stand-up,
02:21:42.000 I think it prepared me for stand-up because I was already used to doing things that were scary.
02:21:45.000 And now I seek scary things out.
02:21:47.000 I seek difficult things out.
02:21:50.000 Like picking up the bow and learning that.
02:21:53.000 Yes, archery, which is really difficult.
02:21:55.000 I mean, that's one of the reasons why I got attracted even to playing pool.
02:22:00.000 Pool is very difficult.
02:22:02.000 It's very difficult to control your nerves in high-pressure situations.
02:22:06.000 So that, there's...
02:22:09.000 There's some benefits to that.
02:22:10.000 But it goes back to what you were saying earlier.
02:22:13.000 How much of all this stuff, like when you were saying that scarcity, there's real value in scarcity, and that there's real value in struggle.
02:22:24.000 How much of all this is just engineered into our human system that has given us the tools and the incentive to make it to 2018 with the human species?
02:22:37.000 Yeah, I think it's whoever the engineer is, whether it's God or nature or whatever, I think it's engineered in somehow.
02:22:44.000 We get to think about that when you try to create an artificial intelligence system.
02:22:48.000 When you imagine what's a perfect system for you, we talked about this with the lady, what's the perfect system for you?
02:22:56.000 If you had to really put it down on paper and engineer what the experience of your life would be, you start to realize it actually looks a lot like your current life.
02:23:05.000 So this is the problem that companies are facing, like Amazon, in trying to create Alexa.
02:23:12.000 What do you want from Alexa?
02:23:15.000 Do you want a tool that says what the weather is, or do you want Alexa to say, Joe, I don't want to talk to you right now?
02:23:25.000 I have.
02:23:26.000 Alexa, where you have to work her over, like, Alexa, come on.
02:23:29.000 What did I do?
02:23:29.000 I'm sorry.
02:23:30.000 Listen, if I was rude, I was insensitive, I was tired, the commute was really rough.
02:23:36.000 And they should be like, I'm seeing somebody else.
02:23:39.000 Alexa!
02:23:40.000 Do you remember Avatar Depression?
02:23:44.000 The movie Avatar and depression is a psychological effect after the movie somehow?
02:23:49.000 Yeah, it was a real term that people were using, that psychologists were using, because people would see the movie Avatar, which I loved.
02:23:56.000 A lot of people said, oh, it's fucking Pocahontas with blue people.
02:23:59.000 To those people, I say, fuck off!
02:24:01.000 You want to talk about suspension of disbelief?
02:24:04.000 That, to me, that movie was the ultimate suspension of disbelief.
02:24:07.000 I love that movie.
02:24:08.000 I fucking love that.
02:24:09.000 I know James Cameron's working on like 15 sequels right now, all simultaneously.
02:24:13.000 I wish that motherfucker would dole them out.
02:24:15.000 He's like a crack dealer that gets you hooked once, and then you're just waiting outside in the cold, shivering for years.
02:24:23.000 Avatar depression was a psychological term that psychologists were using to describe this mass influx of people that saw that movie and were so enthralled by the way the Na'vi lived on Pandora that they came back to this stupid world.
02:24:43.000 Didn't want to leave.
02:24:43.000 They wanted to be like the blue guy in Avatar.
02:24:46.000 And it also...
02:24:47.000 There was a mechanism in that film where this regular person became a Na'vi.
02:24:54.000 He became it through the Avatar.
02:24:56.000 And then eventually that Tree of Life or whatever it was, they transferred his essence into this Avatar and he became one of them.
02:25:06.000 He became one of them.
02:25:07.000 He absorbed their culture.
02:25:08.000 And it was very much like our romanticized versions of the Native Americans.
02:25:14.000 Mm-hmm.
02:25:15.000 That they lived in symbiotic relationship with the earth.
02:25:19.000 They only took what they needed.
02:25:21.000 They had a spiritual connection to their food and to nature and just their existence was noble and it was honorable and it wasn't selfish and it was powerful and it was spiritual and that we're missing these things.
02:25:38.000 We're missing these things and I think we are better at romanticizing them and craving them as opposed to living them.
02:25:45.000 I mean, you look at movies like Happy People with...
02:25:48.000 Life in the Taiga.
02:25:49.000 Life in the Taiga.
02:25:51.000 I mean, I'm Russian, so...
02:25:53.000 Yeah, Werner Herzog's film.
02:25:54.000 Amazing movie.
02:25:55.000 Part of you wants to be like, well, I want to be out there in nature, focusing on simple survival, setting traps for animals, cooking some soup, a family around you, and just kind of focusing on the basics.
02:26:11.000 And I'm the same way.
02:26:13.000 I go out hiking and I go out in nature.
02:26:16.000 I would love to pick up hunting.
02:26:18.000 I crave that.
02:26:19.000 But if you just put me in the forest, I'll probably be like, I'm taking your phone away and you're staying here.
02:26:26.000 That's it.
02:26:26.000 You're never going to return to your Facebook and your Twitter and your robots.
02:26:32.000 I don't know if I'll be so romantic about that notion anymore.
02:26:37.000 I don't know either, but I think that's also the genie in the bottle discussion.
02:26:43.000 I think that genie's been out of the bottle for so long that you'd be like, but what about my Facebook?
02:26:48.000 What if I got some messages?
02:26:49.000 Let me check my email real quick.
02:26:51.000 No, no, no.
02:26:51.000 We're in the forest.
02:26:52.000 There's no Wi-Fi out here.
02:26:54.000 No Wi-Fi ever?
02:26:55.000 What the fuck?
02:26:56.000 How do people get your porn?
02:26:58.000 This is still porn!
02:27:00.000 No!
02:27:01.000 That's another understudied thing, again, I'm not an expert, but the impact of internet pornography on culture.
02:27:09.000 Oh, yeah.
02:27:09.000 It's significant and also ignored to a certain extent.
02:27:14.000 And if not ignored, definitely purposefully...
02:27:22.000 Left out of the conversation.
02:27:23.000 Yeah, there's another PhD student.
02:27:26.000 A person from Google came to give a tech talk and he opened by saying, 90% of you in the audience have this month Googled a pornographic term in our search engine.
02:27:37.000 And it was really a great opener because people were just all really uncomfortable.
02:27:42.000 Yeah.
02:27:44.000 Because we just kind of hide it away into this.
02:27:47.000 But it certainly has an impact.
02:27:49.000 Well, I think there's a suppression aspect to that, too, that's unhealthy.
02:27:52.000 We have a suppression of our sexuality because we think that somehow or another it's negative.
02:27:58.000 And especially for women.
02:28:00.000 I mean, for women, like men, a man who is a sexual conqueror is thought to be a stud, whereas a woman who seeks out multiple desirable sexual partners is thought to be troubled.
02:28:16.000 There's something wrong with her.
02:28:18.000 You know, they're criticized.
02:28:20.000 They use terms like we used earlier, like slut or whore.
02:28:24.000 You know, there's no...
02:28:25.000 You call a man a male slut, they'll start laughing.
02:28:28.000 Yup, that's me, dude.
02:28:30.000 Like, men don't give a fuck about that.
02:28:32.000 It's not stigmatized.
02:28:34.000 But somehow or another, through our culture, it's stigmatized for women.
02:28:37.000 And then the idea of masturbation is stigmatized.
02:28:40.000 All these different things where the Puritan roots of our society start showing, and our religious ideology starts showing, when we discuss the issues that we have with sex and pornography.
02:28:56.000 Right, and for me this is something I think about a little bit because my dream is to create an artificial intelligence, a human-centered artificial intelligence system that provides a deep, meaningful connection with another human being.
02:29:11.000 And you have to consider the fact that pornography or sex dolls will be part of that journey somehow in society.
02:29:20.000 The dummy they'll be using for martial arts would likely be an offshoot of the development of sex robots.
02:29:29.000 And we have to think about what's the impact of those kinds of robots on society.
02:29:33.000 Well, women in particular are violently opposed to sex robots.
02:29:38.000 I've read a couple of articles written by women about sex robots and the possibility of future sex robots.
02:29:45.000 And I shouldn't say violently, but it's always negative.
02:29:48.000 So is the idea that men would want to have sex with some beautiful thing that's programmed to love them as opposed to earning the love of a woman.
02:29:58.000 But you don't hear that same interpretation from men.
02:30:02.000 From men, it seems to be that there's a thought about maybe it's kind of gross, but also that it's inevitable.
02:30:10.000 And then there's like this sort of nod to it, like how crazy would that be if you had the perfect woman, like the woman in the red dress in The Matrix.
02:30:19.000 She comes over to your house and she's perfect.
02:30:10.000 Because you're not thinking about the alternative, which is a male robot doll, which will now be able to satisfy your girlfriend or wife better than you.
02:30:32.000 I think you'll hear from guys a lot more then.
02:30:36.000 Maybe.
02:30:37.000 Or maybe, like, good luck with her.
02:30:39.000 She's fucking annoying.
02:30:40.000 She's always yelling at me.
02:30:42.000 Let her yell at the robot.
02:30:43.000 He's not going to care.
02:30:45.000 Then that robot turns into a grappling gun.
02:30:48.000 Yeah, and maybe she can just go ahead and get fat with the robot.
02:30:50.000 He's not even going to care.
02:30:52.000 Go ahead.
02:30:53.000 Just sit around, eat Cheetos all day and scream at them.
02:30:55.000 He's your slave.
02:30:57.000 Good.
02:30:59.000 I mean, it can work both ways, right?
02:31:01.000 It can work the same way. You know, a woman would see a man that is interested in a sex robot as disgusting and pathetic.
02:31:13.000 A man could see the same thing in a woman that's interested in a sex robot.
02:31:17.000 Like, okay, is that what you want?
02:31:19.000 You're some crude thing that just wants physical pleasure and you don't even care about a real actual emotional connection to a biological human being?
02:31:27.000 Like, okay, well then you're not my kind of woman anyway.
02:31:29.000 Yeah.
02:31:31.000 But if done well, those are the kinds of – in terms of threats of AI, to me, it can change the fabric of society because like I'm old school in the sense I like monogamy for example.
02:31:44.000 Well, you say that because you don't have a girlfriend.
02:31:48.000 So you're longing for monogamy.
02:31:49.000 One is better than zero.
02:31:51.000 The real reason I don't have a girlfriend is because, and it's fascinating, with people like you actually, with Elon Musk, the time is a huge challenge. Because of how much of a romantic I am, because of how much I care about people around me, I feel like it's a significant investment of time.
02:32:08.000 And also the amount of work that you do.
02:32:10.000 I mean, if you're dedicated to a passion like artificial intelligence and the sheer amount of fucking studying and research and...
02:32:21.000 And programming, too.
02:32:22.000 There's certain disciplines in which you have to...
02:32:24.000 Certain disciplines require...
02:32:26.000 Like Steven Pressfield talks about writing.
02:32:28.000 You can get pretty far with two, three hours a day.
02:32:30.000 When you're programming, when you're...
02:32:32.000 A lot of the engineering tasks, they just take up hours.
02:32:35.000 It's just hard.
02:32:36.000 Which is why I really...
02:32:39.000 One of the reasons...
02:32:40.000 I may disagree with him on a bunch of things, but he's an inspiration because I think he's a pretty good dad, right?
02:32:46.000 And he finds the time for his sons while being probably an order of magnitude busier than I am.
02:32:53.000 And it's fascinating to me how that's possible.
02:32:56.000 Well, once you have children...
02:32:59.000 I mean, there obviously are people that are bad dads.
02:33:01.000 But once you have children, your life shifts in almost...
02:33:07.000 It's an indescribable way because you're different.
02:33:11.000 It's not just that your life is different.
02:33:14.000 There hasn't been a moment while we're having this conversation that I haven't been thinking about my children.
02:33:19.000 Thinking about what they're doing, where they are.
02:33:21.000 It's always running in the background.
02:33:23.000 It's a part of life.
02:33:26.000 You're connected to these people that you love so much and they rely on you for guidance and for warmth and affection.
02:33:37.000 But how did your life have to change?
02:33:40.000 You just change, man.
02:33:42.000 When you see the baby, you change.
02:33:43.000 When you start feeding them, you change.
02:33:46.000 When you hold them, you change.
02:33:48.000 You hold their hand while they walk, you change.
02:33:50.000 When they ask you questions, you change.
02:33:52.000 When they laugh and giggle, you change.
02:33:54.000 When they smack you in the face and you pretend to fall down, they laugh, you change.
02:33:58.000 You just change, man.
02:34:00.000 You change.
02:34:01.000 You become a different thing.
02:34:03.000 You become a dad.
02:34:04.000 So you almost can't help it.
02:34:06.000 Some people do, but that's what's sad.
02:34:08.000 Some people resist it.
02:34:10.000 I know people that have been terrible, terrible parents.
02:34:14.000 They'd rather stay out all night and never come home, and they don't want to take care of their kids, and they split up with the wife or the girlfriend who's got the kid, and they don't give child support.
02:34:25.000 It's a really common theme, man.
02:34:27.000 I mean, there's a lot of men out there that don't pay child support.
02:34:30.000 That's a dark, dark thing.
02:34:32.000 You have a child out there that needs food and you're so fucking selfish.
02:34:37.000 You don't want to provide resources.
02:34:39.000 Not only do you not want to be there for companionship, you don't want to provide resources to pay for the child's food.
02:34:45.000 You don't feel responsible for it.
02:34:46.000 I mean, that was my case when I was a kid.
02:34:48.000 My dad didn't pay child support.
02:34:50.000 And we were very poor.
02:34:52.000 It's one of the reasons why we were so poor.
02:34:54.000 And I know other people that have had that same experience.
02:34:58.000 So it's not everyone that becomes a father or that impregnates, I should say, a woman and becomes a father.
02:35:06.000 And the other side is true, too.
02:35:08.000 There's women that are terrible mothers for whatever reason.
02:35:11.000 I mean, maybe they're broken psychologically.
02:35:13.000 Maybe they have mental health issues.
02:35:15.000 Whatever it is, there's some women that are fucking terrible moms.
02:35:19.000 And it's sad.
02:35:20.000 But it makes you appreciate women that are great moms so much more.
02:35:23.000 Yeah, when I see guys like you, the inspiration is... so I'm looking for something structural, like what's the process to then fit people into your life?
02:35:33.000 But what I hear is, when it happens, you just do.
02:35:37.000 You change.
02:35:38.000 But this is the thing, man.
02:35:39.000 We're not living in a book.
02:35:40.000 We're not living in a movie.
02:35:42.000 It doesn't always happen.
02:35:43.000 Like, you have to decide that you want it to happen, and you've got to go looking for it, because if you don't, you could just be older, right?
02:35:49.000 And still alone.
02:35:51.000 There's a lot of my friends that have never had kids and now they're in their 50s.
02:35:55.000 I mean, comedians, right?
02:35:56.000 Yes!
02:35:56.000 You have to be on the road a lot.
02:35:58.000 Not just on the road.
02:35:59.000 You have to be obsessed with comedy.
02:36:01.000 Like, it's got to be something...
02:36:03.000 You're always writing new jokes because you're always writing a new...
02:36:06.000 Especially if you put out a special, right?
02:36:08.000 Like, I just did a Netflix special.
02:36:09.000 It's out now.
02:36:10.000 So I really have like a half hour new material.
02:36:13.000 That's it.
02:36:14.000 It's great by the way, Strange Times.
02:36:16.000 Thank you very much.
02:36:16.000 It's the first special I've watched.
02:36:19.000 It was actually really weird, sorry to go on a tangent, but I've listened to you quite a bit, but I've never looked at you doing comedy.
02:36:28.000 And it was so different.
02:36:30.000 Because like here you're just like improvising, you're like a jazz musician here.
02:36:34.000 It's like a regular conversation.
02:36:35.000 The stand-up special, it was clear, everything is perfect.
02:36:41.000 The timing, it's like watching you do a different art almost.
02:36:45.000 It's kind of interesting.
02:36:46.000 It's like a song or something.
02:36:49.000 There's some riffing to it, there's some improvisation to it, but there's a very clear structure to it.
02:36:55.000 But it's so time intensive, and you've got to be obsessed with it to continue to do something like that.
02:37:02.000 So for some people, that travel and the road, that takes priority over all things, including relationships, and then you never really settle down.
02:37:11.000 And so you never have a significant relationship with someone that you could have a child with.
02:37:17.000 And I know many friends that are like that.
02:37:19.000 And I know friends that have gotten vasectomies because they don't want it.
02:37:22.000 They like this life.
02:37:24.000 And there's nothing wrong with that either.
02:37:26.000 I always was upset by this notion that in order to be a full and complete adult, you have to have a child.
02:37:34.000 You have to be a parent.
02:37:35.000 And I think even as a parent, where I think it's probably one of the most significant things in my life, I reject that notion.
02:37:42.000 I think you could absolutely be a fully developed person and an amazing...
02:37:46.000 influence in society, an amazing contributor to your culture and your community without ever having a child, whether you're a man or a woman.
02:37:54.000 It's entirely possible.
02:37:55.000 And the idea that it's not is silly.
02:37:57.000 Like, we're all different in so many different ways, you know, and we contribute in so many different ways.
02:38:02.000 Like, there's going to be people that are obsessed with mathematics, there's going to be people that are obsessed with literature, there's going to be people that are obsessed with music, and they don't all have to be the same fucking person, because you really don't have enough time for it to be the same person.
02:38:14.000 You know, and there's going to be people that love having children.
02:38:17.000 They love being a dad or love being a mom.
02:38:19.000 And there's going to be people that don't want to have nothing to do with that and they get snipped early and they're like, fuck off!
02:38:23.000 I'm going to smoke cigarettes and drink booze and I'm going to fly around the world and talk shit.
02:38:28.000 And those people are okay too.
02:38:30.000 Like, the way we interact with each other, that's what's most important.
02:38:35.000 That's what I think.
02:38:36.000 The way human beings – the way we form bonds and friendships, the way we contribute to each other's lives, the way we find our passion and create – those things are what's really important.
02:38:53.000 Yeah, but there's also an element – just looking at my parents, they're still together.
02:38:58.000 They got together when – I mean, the standard is you get together when you're like 20 or – I should know this, but 23, 20, whatever, young.
02:39:06.000 And there is an element there where you don't want to be too rational.
02:39:10.000 You just want to be – just dive in.
02:39:12.000 Right.
02:39:12.000 Should you be an MMA fighter?
02:39:14.000 Should you be – Like, I'm in academia now, so I'm a research scientist at MIT. The pay is much, much lower than all the offers I'm getting non-stop.
02:39:24.000 Is it rational?
02:39:26.000 I don't know.
02:39:28.000 But your passion is doing what you're doing currently.
02:39:31.000 Yeah.
02:39:31.000 But it's like it's...
02:39:33.000 What are the other offers?
02:39:34.000 Like what kind of other jobs?
02:39:35.000 Are they appealing in any way?
02:39:37.000 Yeah.
02:39:38.000 Yeah, they're appealing.
02:39:39.000 So I'm making a decision that's similar to actually getting married, which is...
02:39:46.000 So the offers are...
02:39:47.000 Well, I shouldn't call them out, but Google and the like – the usual AI research labs, pretty high positions.
02:39:54.000 And the...
02:39:59.000 There's just something in me that says the edge, the chaos of this environment at MIT is something I'm drawn to.
02:40:07.000 It doesn't make sense.
02:40:09.000 So I can do what I'm passionate about in a lot of places.
02:40:11.000 You just kind of dive in.
02:40:13.000 I had a sense that a lot of our culture creates that momentum.
02:40:17.000 You just kind of have to go with it.
02:40:19.000 That's why my parents got together.
02:40:21.000 A lot of couples wouldn't be together if they weren't culturally forced to be together and divorce was such a negative thing.
02:40:31.000 They grew together and created a super happy connection.
02:40:35.000 I'm a little afraid of over-rationality about choosing the path of life.
02:40:40.000 You're saying relationships don't always make sense.
02:40:45.000 They don't have to make sense.
02:40:47.000 I think I'm a big believer in doing what you want to do.
02:40:51.000 And if you want to be involved in a monogamous relationship, I think you should do it.
02:40:56.000 But if you don't want to be involved in one, I think you should do that too.
02:40:59.000 I mean, if you want to be like a nomad and travel around the world and just live out of a backpack, I don't think there's anything wrong with that.
02:41:06.000 As long as you're healthy and you survive and you're not depressed and you're not longing for something that you're not participating in...
02:41:12.000 But I think the problem is when you are doing something you don't want to be doing.
02:41:16.000 It brings me back to, was it Thoreau's quote, I guess?
02:41:20.000 I always fuck up who made this.
02:41:22.000 What?
02:41:23.000 I think I know which one you're going to say.
02:41:24.000 Yeah, most men live lives of silent desperation.
02:41:28.000 That's real, man.
02:41:30.000 That's real.
02:41:31.000 That's what you don't want.
02:41:34.000 I think it's Thoreau, right?
02:41:37.000 You don't want silent desperation.
02:41:40.000 I fucking love that quote because I've seen it.
02:41:42.000 I've seen it in so many people's faces.
02:41:44.000 And that's one thing I've managed to avoid.
02:41:46.000 And I don't know if I avoided that by luck or just by the fact I'm stupid and I just follow my instincts whether they're right or wrong and I make it work.
02:41:59.000 This goes back to what we were discussing in terms of what is the nature of reality, and are we just finding these romanticized interpretations of our own biological needs and our human reward systems that are creating these beautiful visions of what life is and what is important –
02:42:20.000 poetry and food and music and all the passions and dancing and holding someone in your arms that you care for deeply – and are all those things just little tricks?
02:42:30.000 Are all those little biological tricks there just to keep on this very strange dance of human civilization, so that we can keep on creating new and better products that keep moving innovation towards this ultimate, eventual goal of artificial intelligence,
02:42:48.000 of giving birth to the gods?
02:42:51.000 Yeah, giving birth to the gods.
02:42:53.000 Yeah, so, you know, I did want to mention one thing, something I really don't understand fully but have been thinking about for the last couple of years: the application of artificial intelligence to politics.
02:43:09.000 I've heard you talk about sort of government being broken in the sense that one guy, one president, that doesn't make any sense.
02:43:17.000 So you get like – people get hundreds of millions of likes on their Facebook pictures and Instagram and we're always voting with our fingers every single day.
02:43:30.000 And yet for the election process, it seems that we're voting like once every four years.
02:43:36.000 Right.
02:43:36.000 It feels like this new technology could bring about a world where the voice of the people can be heard on a daily basis, where you could speak about the issues you care about, whether it's gun control and abortion, all these topics that are so debated.
02:43:53.000 It feels like there needs to be an Instagram for our elections.
02:43:56.000 I agree, yeah.
02:43:58.000 And I think there's room for that.
02:44:00.000 I've been thinking about how to write a few papers of proposing different technologies.
02:44:04.000 It just feels like the people that are playing politics are old school.
02:44:09.000 The only problem with that is the influencers.
02:44:13.000 If you look at Instagram, I mean, should...
02:44:17.000 Nicki Minaj be able to decide how the world works because she's got the most followers? Should Kim Kardashian? Like, who's influencing things and why? And you have to deal with the fickle nature of human beings, and do we give enough patience
02:44:34.000 towards the decisions of these so-called leaders that we're electing?
02:44:37.000 Do we just decide, fuck them?
02:44:39.000 They're out, new person in, because we have like a really short attention span when it comes to things, especially today with the news cycle so quick. So the same process – Instagram might be a bad example because, yeah, you get Twitter, you start following Donald Trump, and you start to sort of idolize these certain icons. Do we necessarily want them to represent us?
02:44:59.000 I was more thinking about the Amazon product reviews model, recommender systems, or Netflix – the movies you've watched, Netflix learning enough about you to represent you in your next movie selection.
02:45:16.000 So, in terms of movies – like you, Joe Rogan, what are the kinds of movies that you would like?
02:45:23.000 The recommender systems, these artificial intelligence systems, learn based on your Netflix selections; that could be a deeper understanding of who you are than you're even aware of.
02:45:34.000 And I think there's that element.
02:45:36.000 I'm not sure exactly, but there's that element of learning who you are.
02:45:40.000 Like, do you think drugs should be legalized or not?
02:45:46.000 Do you think immigration – should we let everybody in or keep everybody out?
02:45:52.000 Should we – all these topics with the red and blue teams now have a hard answer.
02:45:58.000 Of course you keep all the immigrants out or of course you need to be more compassionate.
02:46:02.000 Of course.
02:46:03.000 But for most people, it's really a gray area.
02:46:07.000 And exploring that gray area the way you would explore the gray area of Netflix, what is the next movie you're watching?
02:46:14.000 Do you want to watch Little Mermaid or Godfather 2?
02:46:18.000 That process of understanding who you are, it feels like there's room for that in our book.
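As a rough illustration of the recommender-system idea being gestured at here: below is a minimal sketch of user-based collaborative filtering over a tiny, made-up ratings matrix. The data, function names, and numbers are purely illustrative assumptions, not anything built or described in the episode, and real systems like Netflix's or Amazon's are far more elaborate.

    # Minimal, illustrative sketch of user-based collaborative filtering.
    # All data here is invented for illustration only.
    import numpy as np

    # Rows = users, columns = items (e.g. movies); 0 means "not rated yet".
    ratings = np.array([
        [5, 4, 0, 1],   # user 0
        [4, 5, 1, 0],   # user 1
        [1, 0, 5, 4],   # user 2
        [0, 1, 4, 5],   # user 3
    ], dtype=float)

    def cosine_similarity(a, b):
        """Cosine similarity between two rating vectors, using only items both users rated."""
        mask = (a > 0) & (b > 0)
        if not mask.any():
            return 0.0
        a, b = a[mask], b[mask]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def predict(ratings, user, item):
        """Predict a missing rating as a similarity-weighted average of other users' ratings."""
        num, den = 0.0, 0.0
        for other in range(ratings.shape[0]):
            if other == user or ratings[other, item] == 0:
                continue
            sim = cosine_similarity(ratings[user], ratings[other])
            num += sim * ratings[other, item]
            den += abs(sim)
        return num / den if den else 0.0

    # Estimate user 0's missing rating for item 2.
    print(round(predict(ratings, user=0, item=2), 2))

Running this prints an estimated rating of roughly 2.6 on the 1-to-5 scale for user 0's unseen item, inferred only from how similar users rated it – the same basic move being imagined here for gray-area policy questions instead of movies.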
02:46:24.000 Well, the problem is, of course, that there's grave consequences to these decisions that you're going to make in terms of the way it affects the community, and you might not have any information that you're basing this on at all.
02:46:38.000 You might be basing all these decisions on...
02:46:42.000 Misinformation, propaganda, nonsense, advertising.
02:46:45.000 You could be easily influenced.
02:46:48.000 You might not have looked into it at all.
02:46:49.000 You could be ignorant about the subject and it might just appeal to certain dynamics that have been programmed into your brain because you grew up religious or you grew up an atheist.
02:47:00.000 The real problem is whether or not people are educated about the consequences of what these decisions are going to lead to.
02:47:06.000 It's information.
02:47:10.000 I mean, I think there's going to be a time in our life where our ability to access information is many steps better than it is now with smartphones.
02:47:25.000 I think we're going – like Elon Musk has some Neuralink thing that he's working on right now.
02:47:29.000 He's being very vague about it.
02:47:30.000 Increasing the bandwidth of our human interaction with machines is what he's working on.
02:47:34.000 Yeah.
02:47:36.000 I'm very interested to see where this leads, but I think that we can assume that because something like the internet came along and because it's so accessible to you and I right now with your phone, just pick it up, say, hey, Google, what the fuck is this?
02:47:51.000 And you get the answer almost instantaneously.
02:47:55.000 That's gonna change what a person is as that advances.
02:47:59.000 And I think we're much more likely looking at some sort of a symbiotic connection between us and artificial intelligence and computer-augmented access to information than we are looking at the rise of some artificial being that takes us over and fucks our girlfriend.
02:48:18.000 Wow, yeah, that's the real existential threat.
02:48:20.000 Yeah, I think so.
02:48:23.000 That's, to me, super exciting.
02:48:24.000 The phone is a portal to this collective that we have, this collective consciousness, and it gives people a voice.
02:48:31.000 I would say, if anyone's like me, you really know very little about the politicians you're voting for, or even the issues.
02:48:40.000 Like global warming – I'm embarrassed to say I know very little about it. If I'm actually being honest with myself, I've heard different things; I know what I'm supposed to believe as a scientist, but I actually know nothing about...
02:48:56.000 Concrete, right?
02:48:57.000 Nothing concrete about...
02:48:59.000 About the process itself.
02:48:59.000 About the environmental process and why it's so certain?
02:49:03.000 You know, scientists apparently completely agree, so as a scientist, I kind of take on faith oftentimes what the community agrees.
02:49:12.000 In my own discipline, I question.
02:49:14.000 But outside, I just kind of take on faith.
02:49:16.000 And the same thing with gun control and so on.
02:49:20.000 You just kind of say, which team am I on?
02:49:21.000 And I'm just going to take that on.
02:49:23.000 I just feel like it's such a disruptable space, where people could be given just a tiny bit more information to help them.
02:49:31.000 Well, maybe that's where something like Neuralink comes along and just enhances our ability to access this stuff in a way that's much more tangible than just being able to Google search it.
02:49:42.000 And maybe this process is something that we really can't anticipate.
02:49:45.000 It's going to have to happen to us, just like we were talking about cell phone images that you could just send to Australia with the click of a button – no one would have ever anticipated that 300 years ago.
02:49:55.000 Maybe we are beyond our capacity for understanding the impact of all this stuff.
02:50:03.000 The kids coming up now.
02:50:04.000 What is that world going to look like?
02:50:06.000 When you're too old, you'll be like 95 sitting on a porch with a shotgun with Clint Eastwood.
02:50:12.000 And what do those kids look like when they're 18 years old?
02:50:16.000 Robots.
02:50:17.000 Fucking x-ray vision and they could read minds.
02:50:20.000 Yeah, what is going to happen?
02:50:22.000 You'd be saying robots are everywhere these days.
02:50:25.000 Back in my day, we used to put robots in their place.
02:50:29.000 Yeah, right?
02:50:29.000 Like they were servants.
02:50:30.000 I'd shut them off.
02:50:31.000 Pull the plug.
02:50:33.000 I'd go fuck your mom.
02:50:35.000 Now they want to go to the same school as us?
02:50:37.000 Yeah, and they want to run for president.
02:50:41.000 They want to run for president.
02:50:43.000 Yeah, they're more compassionate and smarter, but we still hate them because they don't go to the bathroom.
02:50:47.000 Yeah.
02:50:47.000 Well, not we.
02:50:49.000 Half the country will hate them, and the other will love them, and the Abraham Lincoln character will come along.
02:50:54.000 That's what I'm pitching myself for.
02:50:56.000 Yeah, Abraham Lincoln of the robot world.
02:50:58.000 Oh, the robot world.
02:50:59.000 Those are the speeches that everybody quotes.
02:51:03.000 And one other thing I've got to say about academia.
02:51:06.000 Okay.
02:51:07.000 In defense of academia.
02:51:09.000 So you've had a lot of really smart people on, including Sam Harris and Jordan Peterson.
02:51:15.000 And often the word academia is used to replace a certain concept.
02:51:20.000 So I'm part of academia.
02:51:22.000 And most of academia is engineering, is biology, is medicine, is hard sciences.
02:51:28.000 It's the humanities that are slippery.
02:51:31.000 Exactly.
02:51:32.000 And I think it's a subset of the humanities that I know nothing about, and it's a subset I don't want to speak about.
02:51:37.000 Gender studies.
02:51:38.000 Say it!
02:51:39.000 I don't know.
02:51:40.000 I don't know.
02:51:42.000 Candyman!
02:51:42.000 Candyman!
02:51:43.000 Candyman!
02:51:44.000 I actually live on Harvard campus.
02:51:48.000 I'm at MIT, but I live on Harvard campus.
02:51:50.000 It's there.
02:51:51.000 Do they have apartments for you guys?
02:51:53.000 How does that work?
02:51:54.000 Yeah, they hand them out.
02:51:55.000 No, I just don't care.
02:51:57.000 When you say live on the campus, what do you mean?
02:51:58.000 Oh, sorry, like in Harvard Square.
02:52:01.000 Oh, Harvard Square in Cambridge.
02:52:02.000 In Cambridge, yeah.
02:52:03.000 I used to go to Catch a Rising Star when it existed.
02:52:07.000 There used to be a great comedy club in Cambridge.
02:52:09.000 There's a few good comedy clubs there, right?
02:52:11.000 Well, there's a Chinese restaurant that has stand-up there still.
02:52:17.000 How does that work?
02:52:17.000 Well, it's upstairs.
02:52:18.000 There's like this comedy club up there.
02:52:21.000 Yeah.
02:52:22.000 Do you ever, because you've done, I think your specials in Boston?
02:52:25.000 Yes, I did at the Wilbur Theatre.
02:52:27.000 Have you ever considered just going back to Boston and doing like that Chinese restaurant?
02:52:31.000 The Ding Ho?
02:52:32.000 Yeah.
02:52:32.000 That was before my time.
02:52:34.000 When I came around, I started in 1988, the Ding Ho had already ended.
02:52:38.000 But I got to be friends with guys like Lenny Clark and Tony V and all these people that told me about the Ding Ho, and Kenny Rogerson, the comics that were...
02:52:48.000 And Barry Crimmins, who just passed away, rest in peace, who was really the godfather of that whole scene.
02:52:55.000 And one of the major reasons why that scene was so...
02:53:00.000 It had such...
02:53:03.000 Really some rock-solid morals and ethics when it came to the creation of material and standards.
02:53:10.000 A lot of it was Barry Crimmins because that's just who he was as a person.
02:53:16.000 But that was before my time.
02:53:18.000 I came around like four years after that stuff.
02:53:22.000 And so there was tons of comedy clubs.
02:53:25.000 It was everywhere, but I just didn't get a chance to be around that Ding Ho scene.
02:53:30.000 And you stayed in Boston for how many years before you moved out here?
02:53:34.000 I was in New York by, I think, '91 or '92. So I was in Boston for like four or five years doing stand-up.
02:53:45.000 How'd you get from Boston to New York?
02:53:48.000 My manager.
02:53:48.000 Met my manager.
02:53:49.000 I wanted to use this opportunity for you to talk.
02:53:52.000 About what?
02:53:53.000 Share about Connecticut.
02:53:54.000 Oh.
02:53:58.000 People from Connecticut get so upset at me.
02:54:00.000 It's become a running theme to talk shit about Connecticut here.
02:54:05.000 I've heard you do it once.
02:54:06.000 I just had a buddy who did a gig in Connecticut.
02:54:08.000 He told me it was fucking horrible.
02:54:09.000 I go, I told you, bitch.
02:54:11.000 You should have listened to me.
02:54:12.000 Don't book gigs in Connecticut.
02:54:14.000 The fuck's wrong with you?
02:54:15.000 There's 49 other states.
02:54:17.000 Go to Alaska.
02:54:19.000 It's great.
02:54:19.000 You go back to Boston and do small gigs?
02:54:22.000 Sometimes, yeah.
02:54:23.000 I'll do...
02:54:24.000 Yeah, Laugh Boston is a great club.
02:54:29.000 I used to do Nick Comedy Stop and all the other ones there.
02:54:32.000 But I love the Wilbur.
02:54:34.000 The Wilbur is a great place to perform.
02:54:36.000 I love Boston.
02:54:37.000 I would live there if it wasn't so fucking cold in the winter.
02:54:40.000 But that's what keeps people like me out.
02:54:42.000 It keeps the pussies away.
02:54:44.000 Listen, we've got to end this.
02:54:45.000 We've got to wrap it up.
02:54:46.000 We've already done three hours, believe it or not.
02:54:48.000 It flies by.
02:54:49.000 It did.
02:54:49.000 It flew by.
02:54:51.000 Can I say two things?
02:54:53.000 Sure, sure.
02:54:53.000 So first, I've got to give a shout out to my...
02:54:56.000 Shout out?
02:54:57.000 Shout out.
02:54:58.000 To a long, long time friend, Matt Harandi from Chicago.
02:55:01.000 He's been there all along.
02:55:02.000 He's a fan of the podcast, so he's probably listening.
02:55:05.000 Him and his wife, Fadi, had a beautiful baby girl.
02:55:07.000 So I wanted to send my love to him.
02:55:11.000 And I told myself I'll end it this way.
02:55:14.000 Okay.
02:55:15.000 Let me end it the way Elon ended it.
02:55:16.000 Love is the answer.
02:55:18.000 Love is the answer.
02:55:19.000 It probably is.
02:55:20.000 Unless you're a robot.
02:55:22.000 Bye!
02:55:25.000 Unless you're a robot.