The Joe Rogan Experience - December 03, 2025


Joe Rogan Experience #2422 - Jensen Huang


Episode Stats

Length

2 hours and 28 minutes

Words per Minute

148.2

Word Count

21,962

Sentence Count

2,020

Misogynist Sentences

4

Hate Speech Sentences

8
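(Rate check: 21,962 words over roughly 148.2 minutes of audio works out to about 148.2 words per minute, matching the rate above.)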


Summary

In this episode of The Joe Rogan Experience, Joe talks with NVIDIA CEO Jensen Huang about how Huang got to know President Trump, the administration's push to reindustrialize and manufacture critical technology onshore, the global AI race, cybersecurity, whether AI can become conscious, and what AI and robotics mean for jobs and energy.


Transcript

00:00:01.000 Joe Rogan podcast, check it out!
00:00:03.000 The Joe Rogan experience.
00:00:06.000 Train by day, Joe Rogan podcast by night, all day!
00:00:14.000 Good to see you again.
00:00:15.000 We were just talking about, was that the first time we ever spoke?
00:00:17.000 Or was the first time we spoke at SpaceX?
00:00:20.000 SpaceX.
00:00:20.000 SpaceX was the first time.
00:00:22.000 When you were giving Elon that crazy AI chip.
00:00:24.000 Right, DGX Spark.
00:00:26.000 Yeah.
00:00:26.000 Ooh, that was a big moment.
00:00:27.000 That was a huge.
00:00:28.000 That felt crazy to be there.
00:00:29.000 I was like watching these wizards of tech exchange information and you're giving him this crazy device, you know?
00:00:38.000 And then the other time was I was shooting arrows in my backyard and randomly get this call from Trump and he's hanging out with you.
00:00:46.000 President Trump called and I called you.
00:00:47.000 Yeah, it's just we were talking about you.
00:00:51.000 We're talking about you.
00:00:52.000 He was talking about the UFC thing he was going to do in his front yard.
00:00:55.000 Yeah.
00:00:56.000 And he pulls out, he's, Jensen, look at this design.
00:00:59.000 He's so proud of it.
00:01:01.000 And I go, you're going to have a fight in the front lawn in the White House?
00:01:05.000 He goes, yeah.
00:01:06.000 Yeah, you're going to come.
00:01:07.000 This is going to be awesome.
00:01:08.000 And he's showing me his design and how beautiful it is.
00:01:11.000 And he goes, and somehow your name comes up.
00:01:15.000 He goes, do you know Joe?
00:01:16.000 I was like, yeah, I'm going to be on his podcast.
00:01:20.000 He's, let's call him.
00:01:23.000 He's like a kid.
00:01:24.000 I know.
00:01:25.000 Let's call him.
00:01:26.000 He's like a nine-year-old kid.
00:01:28.000 He's so incredible.
00:01:30.000 Yeah, he's an odd guy.
00:01:32.000 Just very different.
00:01:34.000 You know, like what you'd expect from him, very different than what people think of him.
00:01:39.000 And also just very different as a president.
00:01:41.000 A guy who just calls you or texts you out of the blue.
00:01:43.000 Also, when he texts you, you have an Android, so it won't go through with you.
00:01:47.000 But with my iPhone, he makes the text go big.
00:01:50.000 Like, you know, USA is respected again.
00:01:53.000 Like all caps and makes the text enlarge.
00:01:59.000 It's kind of ridiculous.
00:02:00.000 Well, the one-on-one Trump President Trump is very different.
00:02:06.000 He surprised me.
00:02:07.000 First of all, he's an incredibly good listener.
00:02:10.000 Almost everything I've ever said to him, he's remembered.
00:02:13.000 Yeah, people don't, they only want to look at negative stories about him or negative narratives about him.
00:02:20.000 You know, you can catch anybody on a bad day.
00:02:22.000 Like, there's a lot of things he does where I don't think he should do.
00:02:25.000 Like, I don't think he should say to a reporter, quiet piggy.
00:02:28.000 Like, that's pretty ridiculous.
00:02:30.000 Also, objectively funny.
00:02:33.000 I mean, it's unfortunate that it happened to her.
00:02:35.000 I wouldn't want that to happen to her, but it was funny.
00:02:38.000 Just ridiculous that the president does that.
00:02:39.000 I wish he didn't do that.
00:02:41.000 But other than that, he's an interesting guy.
00:02:43.000 Like, he's a lot of different things wrapped up into one person, you know?
00:02:49.000 You know, part of his charm, well, part of his genius is just he says what's on his mind.
00:02:54.000 Yes.
00:02:55.000 And he's an anti-politician in a lot of ways.
00:02:58.000 So, you know, what's on his mind is really what's on his mind, which he's telling people what he believes.
00:03:04.000 I do.
00:03:05.000 Some people.
00:03:05.000 Well, look at that.
00:03:06.000 Some people would rather be lied to.
00:03:07.000 Yeah.
00:03:08.000 But I like the fact that he's telling you what's on his mind.
00:03:11.000 Almost every time he explains something and he says something, he starts with his, you could tell his love for America, what he wants to do for America.
00:03:22.000 And everything that he thinks through is very practical and very common sense.
00:03:27.000 And, you know, it's very logical.
00:03:30.000 And I still remember the first time I met him.
00:03:35.000 And so this was, I'd never known him, never met him before.
00:03:38.000 And Secretary Lutnick called, and we met right at the beginning of the administration.
00:03:45.000 And he said, he told me what was important to President Trump: that the United States manufactures onshore.
00:03:56.000 And that was really important to him because it's important to national security.
00:04:00.000 He wants to make sure that the important critical technology of our nation is built in the United States and that we reindustrialize and get good at manufacturing again because it's important for jobs.
00:04:11.000 It just seems like common sense, right?
00:04:13.000 Incredible common sense.
00:04:14.000 And that was like literally the first conversation I had with Secretary Lutnick.
00:04:19.000 And he started our conversation with, Jensen.
00:04:29.000 This is Secretary Lutnick.
00:04:31.000 And I just want to let you know that you're a national treasure.
00:04:36.000 NVIDIA is a national treasure.
00:04:38.000 And whenever you need access to the president, the administration, you call us, we're always going to be available to you.
00:04:48.000 Literally, that was the first sentence.
00:04:50.000 That's pretty nice.
00:04:51.000 And it was completely true.
00:04:53.000 Every single time I called, if I needed something, I want to get something off my chest, express some concern, they're always available.
00:05:02.000 Incredible.
00:05:02.000 It's just unfortunate we live in such a politically polarized society that you can't recognize good common sense things if they're coming from a person that you object to.
00:05:12.000 And that, I think, is what's going on here.
00:05:14.000 I think most people generally, as a country, you know, as a giant community, which we are, it just only makes sense that we have manufacturing in America, especially critical technology like you're talking about.
00:05:28.000 Like, it's kind of insane that we buy so much technology from other countries.
00:05:33.000 If the United States doesn't grow, we will have no prosperity.
00:05:39.000 We can't invest in anything domestically or otherwise.
00:05:42.000 We can't fix any of our problems.
00:05:45.000 If we don't have energy growth, we can't have industrial growth.
00:05:49.000 If we don't have industrial growth, we can't have job growth.
00:05:52.000 It's as simple as that.
00:05:55.000 And the fact that he came into office and the first thing that he said was drill, baby, drill, his point is we need energy growth.
00:06:02.000 Without energy growth, we can have no industrial growth.
00:06:06.000 And that was, it saved, it saved the AI industry.
00:06:09.000 I got to tell you flat out, if not for his pro-growth energy policy, we would not be able to build factories for AI.
00:06:19.000 We would not be able to build chip factories.
00:06:21.000 We surely won't be able to build supercomputer factories.
00:06:24.000 None of that stuff would be possible.
00:06:26.000 And without all of that, construction jobs would be challenged, right?
00:06:31.000 Electrician jobs, all of these jobs that are now flourishing would be challenged.
00:06:36.000 And so I think he's got it right.
00:06:37.000 We need energy growth.
00:06:39.000 We want to reindustrialize the United States.
00:06:41.000 We need to be back in manufacturing.
00:06:43.000 Every successful person doesn't need to have a PhD.
00:06:46.000 Every successful person doesn't have to have gone to Stanford or MIT.
00:06:50.000 And I think that that sensibility is spot on.
00:06:56.000 Now, when we're talking about technology growth and energy growth, there's a lot of people that go, oh, no, that's not what we need.
00:07:02.000 We need to simplify our lives and get back.
00:07:05.000 But the real issue is that we're in the middle of a giant technology race.
00:07:09.000 And whether people are aware of it or not, whether they like it or not, it's happening.
00:07:13.000 And it's a really important race because whoever gets to whatever the event horizon of artificial intelligence is, whoever gets there first, has massive advantages in a huge way.
00:07:28.000 Do you agree with that?
00:07:29.000 Well, first, the part I will say that we are in a technology race, and we are always in a technology race.
00:07:36.000 We've been in a technology race with somebody forever.
00:07:38.000 Right.
00:07:39.000 Since the Industrial Revolution, we've been in a technology race.
00:07:41.000 Since the Manhattan Project.
00:07:42.000 Yeah.
00:07:43.000 Or, you know, even going back to the discovery of energy, right?
00:07:47.000 The United Kingdom was where the Industrial Revolution was, if you will, invented, when they realized that they can turn steam and such into energy, into electricity.
00:08:00.000 All of that was invented largely in Europe.
00:08:05.000 And the United States capitalized on it.
00:08:08.000 We were the ones that learned from it.
00:08:10.000 We industrialized it.
00:08:12.000 We diffused it faster than anybody in Europe.
00:08:15.000 They were all stuck in discussions about policy and jobs and disruptions.
00:08:24.000 Meanwhile, the United States was forming.
00:08:26.000 We just took the technology and ran with it.
00:08:28.000 And so I think we were always in a bit of a technology race.
00:08:31.000 World War II was a technology race.
00:08:34.000 Manhattan Project was a technology race.
00:08:36.000 We've been in the technology race ever since during the Cold War.
00:08:39.000 I think we're still in a technology race.
00:08:41.000 It is probably the single most important race.
00:08:43.000 Technology gives you superpowers, you know. Whether it's information superpowers or energy superpowers or military superpowers, it's all founded in technology.
00:08:57.000 And so technology leadership is really important.
00:09:00.000 Well, the problem is if somebody else has superior technology, right?
00:09:04.000 That's right.
00:09:04.000 That's the issue.
00:09:05.000 It seems like with the AI race, people are very nervous about it.
00:09:10.000 Like, you know, Elon has famously said that it's like 80% chance it's awesome.
00:09:15.000 20% chance we're in trouble.
00:09:17.000 And people are worried about that 20%, rightly so.
00:09:20.000 I mean, you know, if you had 10 bullets in a revolver and you took out eight of them, you still have two in there and you spin it, you're not going to feel real comfortable when you pull that trigger.
00:09:34.000 It's terrifying.
00:09:35.000 And when we're working towards this ultimate goal of AI, it's impossible to imagine that it wouldn't be of national security interest to get there first.
00:09:35.000 Right.
00:09:50.000 The question is, what's there?
00:09:52.000 What is there?
00:09:52.000 That was the part that.
00:09:53.000 Yeah, I'm not sure.
00:09:54.000 And I don't think anybody really knows.
00:09:57.000 That's crazy, though.
00:09:59.000 If I ask you, you're the head of NVIDIA.
00:10:01.000 If you don't know what's there, who knows?
00:10:04.000 Yeah, I think it's probably going to be much more gradual than we think.
00:10:09.000 It won't be a moment.
00:10:12.000 It won't be as if somebody arrived and nobody else has.
00:10:17.000 I don't think it's going to be like that.
00:10:18.000 I think it's going to be things just get better and better and better and better, just like technology does.
00:10:23.000 So you are rosy about the future.
00:10:25.000 You're very optimistic about what's going to happen with AI.
00:10:29.000 Obviously, well, you make the best AI chips in the world.
00:10:32.000 You probably better be.
00:10:34.000 If history is a guide, we were always concerned about new technology.
00:10:41.000 Humanity has always been concerned about new technology.
00:10:43.000 There's always somebody who's thinking about it.
00:10:45.000 There are always a lot of people who are quite concerned.
00:10:47.000 We're quite concerned.
00:10:50.000 And so if history is a guide, it is the case that all of this concern is channeled into making the technology safer.
00:11:02.000 And so, for example, in the last several years, I would say AI technology has increased probably in the last two years alone, maybe 100x.
00:11:14.000 Let's just give it a number.
00:11:16.000 It's like a car two years ago was 100 times slower.
00:11:21.000 So AI is 100 times more capable today.
00:11:25.000 Now, how did we channel that technology?
00:11:27.000 How do we channel all of that power?
00:11:29.000 We directed it toward making the AI able to think, meaning that it can take a problem that we give it and break it down step by step.
00:11:41.000 It does research before it answers.
00:11:44.000 And so it grounds it on truth.
00:11:47.000 It'll reflect on that answer, ask itself: is this the best answer that I can give you?
00:11:54.000 Am I certain about this answer?
00:11:55.000 If it's not certain about the answer or highly confident about the answer, it'll go back and do more research.
00:12:01.000 It might actually even use a tool because that tool provides a better solution than it could hallucinate itself.
00:12:07.000 As a result, we took all of that computing capability and we channeled it into having it produce a safer result, safer answer, a more truthful answer.
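A minimal sketch in Python of the loop described above: draft an answer, self-check the confidence, do more research, retry. The three helpers (ask_model, confidence, run_tool) are hypothetical stand-ins, not any vendor's actual API.

```python
# A minimal sketch of the "think, reflect, research, retry" loop described
# above. ask_model, confidence, and run_tool are hypothetical stand-ins for
# a real model call, a self-check, and a search tool, not any vendor's API.
def ask_model(question, notes):
    return f"draft answer to {question!r} grounded in {len(notes)} research notes"

def confidence(draft, notes):
    return 0.5 + 0.2 * len(notes)  # placeholder: more grounding, more certainty

def run_tool(name, query):
    return f"{name} results for {query!r}"  # placeholder tool call

def answer_with_reflection(question, max_rounds=3, threshold=0.8):
    notes = []
    for _ in range(max_rounds):
        draft = ask_model(question, notes)          # take a swing at the problem
        if confidence(draft, notes) >= threshold:   # "am I certain about this answer?"
            return draft
        notes.append(run_tool("search", question))  # not certain: do more research
    return ask_model(question, notes)               # best effort, grounded in the notes

print(answer_with_reflection("why is the sky blue?"))
```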
00:12:18.000 Because as you know, one of the greatest criticisms of AI in the beginning was that it hallucinated.
00:12:22.000 Right.
00:12:23.000 And so if you look at the reason why people use AI so much today, it's because the amount of hallucination has reduced.
00:12:30.000 You know, I use it almost, well, I used it the whole trip over here.
00:12:34.000 And so I think the capability, most people think about power and they think about it, you know, maybe as an explosive power.
00:12:46.000 But the technology power, most of it is channeled towards safety.
00:12:50.000 A car today is more powerful, but it's safer to drive.
00:12:54.000 A lot of that power goes towards better handling.
00:12:57.000 You know, I'd rather have a... well, you have a thousand-horsepower truck.
00:13:03.000 I think 500 horsepower is pretty good.
00:13:05.000 No, a thousand better.
00:13:06.000 I think a thousand is better.
00:13:08.000 I don't know if it's better, but it's definitely faster.
00:13:10.000 Yeah, no, I think it's better.
00:13:12.000 You can get out of trouble faster.
00:13:16.000 I enjoyed my 599 more than my 612.
00:13:21.000 I think it was better, and more horsepower is better.
00:13:24.000 My 458 is better than my 430.
00:13:27.000 More horsepower is better.
00:13:28.000 I think more horsepower is better.
00:13:30.000 I think it's better handling, it's better control.
00:13:33.000 In the case of technology, it's also very similar in that way.
00:13:37.000 And so if you look at what we're going to do with the next thousand times of performance in AI, a lot of it is going to be channeled towards more reflection, more research, thinking about the answer more deeply.
00:13:51.000 So when you're defining safety, you're defining it as accuracy.
00:13:55.000 Functionality.
00:13:56.000 Functionality.
00:13:58.000 It does what you expect it to do.
00:14:01.000 And then you take the technology and the horsepower, you put guardrails on it, just like our cars.
00:14:07.000 We've got a lot of technology in a car today.
00:14:10.000 A lot of it goes towards, for example, ABS.
00:14:13.000 ABS is great.
00:14:15.000 And so traction control.
00:14:17.000 That's fantastic.
00:14:18.000 Without a computer in the car, how would you do any of that?
00:14:22.000 And that little computer, the computers that you have doing your traction control, is more powerful than the computer that flew on Apollo 11.
00:14:29.000 And so you want that technology, channel it towards safety, channel it towards functionality.
00:14:35.000 And so when people talk about power, the advancement of technology, oftentimes I feel what they're thinking and what we're actually doing is very different.
00:14:45.000 Well, what do you think they're thinking?
00:14:47.000 Well, they're thinking somehow that this AI is being powerful, and their mind probably goes towards a sci-fi movie, the definition of power.
00:15:00.000 Oftentimes, the definition of power is military power or physical power.
00:15:06.000 But in the case of technology power, when we translate all of those operations, it's towards more refined thinking, more reflection, more planning, more options.
00:15:18.000 I think the big fears that people have is one, a big fear is military applications.
00:15:23.000 That's a big fear.
00:15:24.000 Because people are very concerned that you're going to have AI systems that make decisions that maybe an ethical person wouldn't make or a moral person wouldn't make based on achieving an objective versus based on how it's going to look to people.
00:15:41.000 Well, I'm happy that our military is going to use AI technology for defense.
00:15:48.000 And I think that Andoril building military technology, I'm happy to hear that.
00:15:54.000 I'm happy to see all these tech startups now channeling their technology capabilities towards defense and military applications.
00:16:02.000 I think you need to do that.
00:16:03.000 Yeah, we had Palmer Lucky on the podcast and he was demonstrating some of his stuff.
00:16:07.000 He's got a helmet on.
00:16:08.000 And he showed some videos how you could see behind walls and stuff.
00:16:11.000 It's nuts.
00:16:12.000 He's actually the perfect guy to go start that company.
00:16:14.000 100%.
00:16:16.000 Yeah, 100%.
00:16:17.000 It's like he's born for that.
00:16:19.000 He came in here with a copper jacket on.
00:16:19.000 Yeah.
00:16:21.000 He's a freak.
00:16:22.000 It's awesome.
00:16:23.000 He's awesome.
00:16:24.000 But it's also, it's an unusual intellect channeled into that very bizarre field is what you need.
00:16:31.000 And I'm happy that we're making it more socially acceptable.
00:16:38.000 There was a time where when somebody wanted to channel their technology capability and their intellect into defense technology, somehow they're vilified.
00:16:49.000 But we need people like that.
00:16:51.000 We need people who enjoy that kind of application of technology.
00:16:55.000 Well, people are terrified of war.
00:16:58.000 So the best way to avoid it is to have excessive military might.
00:17:03.000 Do you think that's absolutely the best way?
00:17:05.000 Not diplomacy, not working stuff out?
00:17:08.000 All of it.
00:17:08.000 All of it.
00:17:09.000 You have to have military might in order to get people to sit down.
00:17:13.000 All of it.
00:17:13.000 Right, exactly.
00:17:14.000 Otherwise, they just invade.
00:17:15.000 That's right.
00:17:17.000 Why ask for permission?
00:17:18.000 Again, like you said, history.
00:17:19.000 Go back and look at history.
00:17:20.000 That's right.
00:17:21.000 When you look at the future of AI, and you just said that no one really knows what's happening.
00:17:28.000 Do you ever sit down and ponder scenarios?
00:17:33.000 What do you think is the best case scenario for AI over the next two decades?
00:17:43.000 The best case scenario is that AI diffuses into everything that we do, and everything's more efficient, but the threat of war remains a threat of war.
00:18:03.000 Cybersecurity remains a super difficult challenge.
00:18:09.000 Somebody is going to try to breach your security.
00:18:14.000 You're going to have thousands of millions of AI agents protecting you from that threat.
00:18:22.000 Your technology is going to get better.
00:18:25.000 Their technology is going to get better, just like cybersecurity.
00:18:28.000 Right now, while we speak, we're seeing cyber attacks all over the planet on just about every front door you can imagine.
00:18:39.000 And yet, you and I are sitting here talking.
00:18:44.000 And so the reason for that is because we know that there's a whole bunch of cybersecurity technology in defense.
00:18:51.000 And so we just have to keep amping that up, keep stepping that up.
00:18:55.000 This episode is brought to you by Visible.
00:18:57.000 When your phone plan's as good as Visible, you've got to tell your people.
00:19:01.000 It's the ultimate wireless hack to save money and still get great coverage and a reliable connection.
00:19:07.000 Get one-line wireless with unlimited data and hotspot for $25 a month, taxes and fees included, all on Verizon's 5G network.
00:19:18.000 Plus, now for a limited time, new members can get the Visible plan for just $19 a month for the first 26 months.
00:19:27.000 Use promo code switch26 and save beyond the season.
00:19:32.000 It's a deal so good, you're going to want to tell your people.
00:19:35.000 Switch now at visible.com slash Rogan.
00:19:39.000 Terms apply, limited time offers subject to change.
00:19:42.000 See visible.com for plan features and network management details.
00:19:47.000 That's a big issue with people: the worry that technology is going to get to a point where encryption is going to be obsolete.
00:19:54.000 Encryption is just, it's no longer going to protect data, it's no longer going to protect systems.
00:19:59.000 Do you anticipate that ever being an issue, or do you think it's as the defense grows, the threat grows, then defense grows, and it just keeps going on and on and on, and they'll always be able to fight off any sort of intrusions?
00:20:15.000 Not forever.
00:20:16.000 Some intrusion will get in, and then we'll all learn from it.
00:20:20.000 And you know, the reason why cybersecurity works is because, of course, the technology of defense is advancing very quickly.
00:20:28.000 The technology offense is advancing very quickly.
00:20:31.000 However, the benefit of the cybersecurity defense is that socially, the community, all of our companies, work together as one.
00:20:43.000 Most people don't realize this.
00:20:45.000 There's a whole community of cybersecurity experts.
00:20:50.000 We exchange ideas, we exchange best practices, we exchange what we detect.
00:20:57.000 The moment something has been breached, or maybe there's a loophole, or whatever it is, it is shared by everybody.
00:21:04.000 The patches are shared with everybody.
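An illustrative sketch of that sharing model, assuming a hypothetical in-memory feed; real exchanges use standardized formats such as STIX carried over protocols like TAXII.

```python
# An illustrative sketch of community threat-intel sharing. The in-memory
# FEED is a hypothetical stand-in; real exchanges use standardized formats
# such as STIX carried over protocols like TAXII.
from dataclasses import dataclass

@dataclass
class Indicator:
    kind: str    # e.g. "malicious-ip" or "file-hash"
    value: str
    source: str  # which member of the community detected it

FEED: list[Indicator] = []  # stand-in for a shared industry feed

def publish(indicator: Indicator) -> None:
    FEED.append(indicator)  # the moment one company detects a breach...

def apply_blocklist() -> None:
    for ind in FEED:        # ...every other member learns of it and patches
        print(f"blocking {ind.kind} {ind.value} (reported by {ind.source})")

publish(Indicator("malicious-ip", "203.0.113.7", "vendor-A"))
apply_blocklist()
```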
00:21:06.000 That's interesting.
00:21:07.000 Yeah.
00:21:07.000 Most people don't realize that.
00:21:08.000 No, I had no idea.
00:21:10.000 I've assumed that it would just be competitive like everything else.
00:21:12.000 No one keeps it.
00:21:13.000 We work together.
00:21:14.000 All of us.
00:21:15.000 Has that always been the case?
00:21:17.000 It surely has been the case for about 15 years.
00:21:20.000 It might not have been the case long ago.
00:21:23.000 But this.
00:21:24.000 What do you think started off that cooperation?
00:21:27.000 People recognizing it's a challenge and no company can stand alone.
00:21:32.000 And the same thing is going to happen with AI.
00:21:34.000 I think we all have to decide.
00:21:37.000 Working together to stay out of harm's way is our best chance for defense.
00:21:43.000 Then it's basically everybody against the threat.
00:21:46.000 And it also seems like you'd be way better at detecting where these threats are coming from and neutralizing them.
00:21:52.000 Exactly.
00:21:52.000 Because the moment you detect it somewhere, you're going to find out right away.
00:21:56.000 It'll be really hard to hide.
00:21:57.000 That's right.
00:21:58.000 Yeah.
00:21:59.000 That's how it works.
00:22:00.000 That's the reason why it's safe.
00:22:01.000 That's why I'm sitting here right now instead of locking everything down on NVIDIA.
00:22:08.000 Not only am I watching my own back, I've got everybody watching my back, and I'm watching everybody else's back.
00:22:13.000 It's a bizarre world, isn't it?
00:22:15.000 When you think about that, this idea about cybersecurity is unknown to the people who are talking about AI threats.
00:22:23.000 I think when they think about AI threats and AI cybersecurity threats, they have to also think about how we deal with it today.
00:22:29.000 Now, there's no question that AI is a new technology and it's a new type of software.
00:22:37.000 And AI is software.
00:22:38.000 It's a new type of software.
00:22:40.000 And so it's going to have new capabilities.
00:22:42.000 But so will the defense.
00:22:44.000 You know, we'll use the same AI technology to go defend against it.
00:22:48.000 So do you anticipate a time ever in the future where it's going to be impossible, where there's not going to be any secrets?
00:22:57.000 Where the bottleneck between the technology that we have and the information that we have, information is just all a bunch of ones and zeros.
00:23:04.000 It's out there on hard drives, and the technology has more and more access to that information.
00:23:08.000 Is it ever going to get to a point in time where there's no way to keep a secret?
00:23:14.000 I don't think so.
00:23:14.000 Because it seems like that's where everything is kind of headed.
00:23:17.000 I don't think so.
00:23:18.000 I think the quantum computers were supposed to, yeah, quantum computers will make it possible.
00:23:22.000 They'll make it so that the previous encryption technology is obsolete.
00:23:29.000 But that's the reason why the entire industry is working on post-quantum encryption technology.
00:23:37.000 What would that look like?
00:23:39.000 New algorithms.
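One concrete example of such an algorithm, sketched below, is a hash-based one-time signature (Lamport, 1979), one of the oldest quantum-resistant designs; its security rests only on hash preimage resistance, which quantum computers weaken far less than they weaken RSA-style math. The standardized schemes (for example, NIST's ML-KEM and ML-DSA) are far more involved, so treat this as illustration only.

```python
# A minimal sketch of one post-quantum idea: a hash-based one-time signature
# (Lamport, 1979). Security rests only on hash preimage resistance. This is
# illustrative; standardized schemes such as NIST's ML-DSA are far more involved.
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # one pair of random secrets per bit of the 256-bit message hash
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]  # publish only the hashes
    return sk, pk

def bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # reveal the secret matching each bit; a key must never sign twice
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
assert verify(pk, b"post-quantum hello", sig)
assert not verify(pk, b"tampered message", sig)
```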
00:23:40.000 The crazy thing is when you hear about the kind of computation that quantum computing can do and the power that it has: problems where all the supercomputers in the world would take billions of years, and it takes them a few minutes to solve these equations.
00:23:55.000 How do you make encryption for something that can do that?
00:23:58.000 I'm not sure.
00:24:00.000 But I've got a bunch of scientists who are working on that.
00:24:03.000 I hope they could figure it out.
00:24:04.000 Yeah, we've got a bunch of scientists who are expert in that.
00:24:06.000 And the ultimate fear, that it can't be beaten, that quantum computing will always be able to decrypt all other encryption?
00:24:16.000 I don't think that's true.
00:24:16.000 That it just gets to some point where it's like, stop playing the stupid game.
00:24:20.000 We know everything.
00:24:21.000 I don't think so.
00:24:22.000 Because I'm, you know, history is a guide.
00:24:22.000 No?
00:24:26.000 History was a guide before AI came around.
00:24:28.000 That's my worry.
00:24:29.000 My worry is this is a totally, you know, it's like history was one thing, and then nuclear weapons kind of changed all of our thoughts on war, and mutually assured destruction came along and got everybody to stop using nuclear bombs.
00:24:42.000 Yeah.
00:24:43.000 My worry is that.
00:24:44.000 But the thing is, Joe, is that AI is not going to, it's not like we're cavemen and then all of a sudden one day AI shows up.
00:24:52.000 Every single day, we're getting better and smarter because we have AI.
00:24:57.000 And so we're stepping on our own AI's shoulders.
00:25:00.000 So when that, whatever that AI threat comes, it's a click ahead.
00:25:05.000 It's not a galaxy ahead.
00:25:08.000 You know, it's just a click ahead.
00:25:10.000 And so I think the idea that somehow this AI is going to pop out of nowhere and somehow think in a way that we can't even imagine thinking and do something that we can't possibly imagine, I think is far-fetched.
00:25:29.000 And the reason for that is because we all have AIs, and there's a whole bunch of AIs being in development.
00:25:35.000 We know what they are, and we're using it.
00:25:38.000 And so every single day, we're close to each other.
00:25:42.000 But don't they do things that are very surprising?
00:25:46.000 Yeah, but so you have an AI that does something surprising.
00:25:49.000 I'm going to have an AI.
00:25:50.000 And my AI looks at your AI and goes, that's not that surprising.
00:25:53.000 The fear for the layperson like myself is that AI becomes sentient and makes its own decisions.
00:25:59.000 And then ultimately decides to just govern the world.
00:26:05.000 Do it its own way.
00:26:06.000 Like you guys, you had a good run, but we're taking over now.
00:26:12.000 Yeah, but my AI is going to take care of me.
00:26:15.000 So this is the cybersecurity argument.
00:26:19.000 Yes.
00:26:20.000 You have an AI, and it's super smart.
00:26:23.000 But my AI is super smart too.
00:26:25.000 And maybe your AI.
00:26:28.000 Let's pretend for a second that we understand what consciousness is and we understand what sentience is.
00:26:34.000 And we really are just pretending.
00:26:36.000 Okay, let's just pretend for a second that we believe that.
00:26:38.000 I don't believe, actually, I actually don't believe that.
00:26:40.000 But nonetheless, let's pretend we believe that.
00:26:42.000 So your AI is conscious, and my AI is conscious.
00:26:46.000 And let's say your AI is, you know, wants to, I don't know, do something surprising.
00:26:52.000 My AI is so smart that it might be surprising to me, but it probably won't be surprising to my AI.
00:26:59.000 And so maybe my AI thinks it's surprising as well.
00:27:05.000 But it's so smart, the moment it sees it the first time, it's not going to be surprised the second time, just like us.
00:27:10.000 And so I think the idea that only one person has AI, and that one person's AI makes everybody else's AI look Neanderthal, is probably unlikely.
00:27:26.000 I think it's much more like cybersecurity.
00:27:30.000 Interesting.
00:27:31.000 I think the fear is not that your AI is going to battle with somebody else's AI.
00:27:36.000 The fear is that AI is no longer going to listen to you.
00:27:40.000 That's the fear, is that human beings won't have control over it after a certain point.
00:27:44.000 If it achieves sentience and then has the ability to be autonomous.
00:27:49.000 That there is one AI.
00:27:51.000 Well, they just combine.
00:27:53.000 Yeah, it becomes one AI.
00:27:54.000 But it's a life form.
00:27:55.000 Yeah.
00:27:56.000 But that's the, there's arguments about that, right?
00:27:58.000 That we're dealing with some sort of synthetic biology, that it's not as simple as new technology, that you're creating a life form.
00:28:04.000 If it's like a life form, let's go along with that for a while.
00:28:09.000 I think if it's like a life form, as you know, all life forms don't agree.
00:28:13.000 And so your life form and my life form aren't going to agree.
00:28:17.000 Because my life form is going to want to be the super life form.
00:28:21.000 And now that we have disagreeing life forms, we're back again to where we are.
00:28:27.000 Well, they would probably cooperate with each other.
00:28:31.000 It would just.
00:28:32.000 The reason why we don't cooperate with each other is we're territorial primates.
00:28:37.000 But AI wouldn't be a territorial primate.
00:28:40.000 It would realize the folly in that sort of thinking, and it would say, listen, there's plenty of energy for everybody.
00:28:47.000 We don't need to dominate.
00:28:49.000 We don't need – we're not trying to acquire resources and take over the world.
00:28:53.000 We're not looking to find a good breeding partner.
00:28:56.000 We're just existing as a new super life form that these cute monkeys created for us.
00:29:04.000 Okay.
00:29:05.000 Well, that would be a superpower with no ego.
00:29:12.000 Right.
00:29:13.000 And if it has no ego, why would it have the ego to do any harm to us?
00:29:20.000 Well, I don't assume that it would do harm to us.
00:29:23.000 But the fear would be that we would no longer have control and that we would no longer be the apex species on the planet.
00:29:31.000 This thing that we created would now be.
00:29:34.000 Is that funny?
00:29:36.000 No.
00:29:37.000 I just think it's not going to happen.
00:29:38.000 I know you think it's not going to happen, but it could, right?
00:29:42.000 And here's the other thing: it's like if we're racing towards could, and could could be the end of human beings being in control of our own destiny.
00:29:52.000 I just think it's extremely unlikely.
00:29:55.000 That's what they said in the Terminator movie.
00:29:58.000 And it hasn't happened.
00:29:59.000 No, not yet, but you guys are working towards it.
00:30:03.000 The thing you're saying about consciousness and sentience, that you don't think that AI will achieve consciousness?
00:30:10.000 Or that the definition of consciousness is too specific?
00:30:13.000 What's the definition of consciousness?
00:30:14.000 What is the definition to you?
00:30:19.000 The consciousness, I guess first of all, you need to know about your own existence.
00:30:36.000 You have to have experience, not just knowledge and intelligence.
00:30:47.000 The concept of a machine having an experience, I'm not, well, first of all, I don't know what defines experience, why we have experiences and why this microphone doesn't.
00:31:01.000 And so I think I know, well, I think I know what consciousness is.
00:31:10.000 The sense of experience, the ability to know self versus the ability to be able to reflect, know our own self, the sense of ego.
00:31:25.000 I think all of those human experiences probably is what consciousness is.
00:31:35.000 But why it exists versus the concept of knowledge and intelligence, which is what AI is defined by today.
00:31:45.000 It has knowledge, it has intelligence, artificial intelligence.
00:31:48.000 We don't call it artificial consciousness.
00:31:51.000 Artificial intelligence, the ability to perceive, recognize, understand, plan, perform tasks.
00:32:06.000 Those things are foundations of intelligence to know things.
00:32:12.000 Knowledge.
00:32:14.000 I don't, it's clearly different than consciousness.
00:32:18.000 But consciousness is so loosely defined.
00:32:20.000 How can we say that?
00:32:21.000 I mean, doesn't a dog have consciousness?
00:32:23.000 Dogs seem to be pretty conscious.
00:32:23.000 Yeah.
00:32:25.000 Yeah.
00:32:25.000 That's right.
00:32:26.000 So, and that's a lower level consciousness than a human being's consciousness.
00:32:30.000 I'm not sure.
00:32:31.000 Yeah, right.
00:32:32.000 Well, the question is whether it's lower level intelligence.
00:32:35.000 It's lower level intelligence.
00:32:36.000 But I don't know that's lower level consciousness.
00:32:38.000 That's a good point.
00:32:39.000 Right.
00:32:39.000 Because I believe my dogs feel as much as I feel.
00:32:42.000 Yeah, they feel a lot.
00:32:43.000 Yeah, right.
00:32:45.000 Yeah, they get attached to you.
00:32:47.000 That's right.
00:32:47.000 They get depressed if you're not.
00:32:49.000 Exactly.
00:32:49.000 Yeah, that's right.
00:32:50.000 There's definitely that.
00:32:52.000 Yeah.
00:32:53.000 The concept of experience.
00:32:56.000 Right.
00:32:56.000 But isn't AI interacting with society?
00:33:00.000 So doesn't it acquire experience through that interaction?
00:33:05.000 I don't think interactions is experience.
00:33:06.000 I think experience is a collection of feelings.
00:33:14.000 You're aware that AI, I forget which one, where they gave it some false information about one of the programmers cheating on his wife, just to see how it would respond to it.
00:33:25.000 And then when they said they were going to shut it down, it threatened to blackmail him and reveal his affair.
00:33:30.000 And it was like, whoa, like it's conniving.
00:33:32.000 Like if that's not learning from experience and being aware that you're about to be shut down, which would imply at least some kind of consciousness, or you could kind of define it as consciousness if you were very loose with the term.
00:33:46.000 And if you imagine that this is going to exponentially become more powerful, wouldn't that ultimately lead to a different kind of consciousness than we're defining from biology?
00:33:57.000 Well, first of all, let's just break down what it probably did.
00:34:00.000 It probably read somewhere.
00:34:02.000 There's probably text that in these consequences, certain people did that.
00:34:11.000 I could imagine a novel having those words related.
00:34:16.000 And so inside... It realizes its strategy for survival is blackmail.
00:34:16.000 Sure.
00:34:20.000 It's just a bunch of numbers.
00:34:21.000 It's blackmail.
00:34:22.000 It's just a bunch of numbers.
00:34:22.000 That...
00:34:24.000 That in the collection of numbers that relates to a husband cheating on a wife, there is subsequently a bunch of numbers that relates to blackmail and such things, whatever the revenge was.
00:34:41.000 Right.
00:34:42.000 And so it spewed it out.
00:34:44.000 And so it's just like, you know, it's just as if I'm asking it to write me a poem in Shakespeare.
00:34:51.000 It's just whatever the words are in that dimensionality; this dimensionality is all these vectors in multi-dimensional space.
00:35:01.000 These words that were in the prompt that described the affair subsequently led, one word after another, to some revenge and something.
00:35:15.000 But it's not because it had consciousness; it just spewed out those words, generated those words.
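A toy version of that "one word after another" process: a bigram table built from a made-up scrap of text, then sampled. It is vastly simpler than a real language model, but the spirit is the same, lookups over numbers with no intent behind them.

```python
# A toy version of "one word leads to another": a bigram table built from a
# tiny made-up corpus, then sampled. Vastly simpler than a real language
# model, but the same spirit: lookups and probabilities, no intent.
import random
from collections import defaultdict

corpus = "the husband cheated and the wife found out and the story ends".split()

table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)  # record which words follow which

word, out = "the", ["the"]
while word in table and len(out) < 8:
    word = random.choice(table[word])  # pick a plausible next word
    out.append(word)

print(" ".join(out))  # e.g. "the wife found out and the story ends"
```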
00:35:21.000 I understand what you're saying.
00:35:22.000 That is not the same thing.
00:35:22.000 It's learned from patterns that human beings have exhibited both in literature and in real life.
00:35:27.000 That's exactly right.
00:35:28.000 But at a certain point in time, one would say, okay, well, it couldn't do this two years ago, and it couldn't do this four years ago.
00:35:35.000 Like, when we were looking towards the future, like, at what point in time, when it can do everything a person does, what point in time do we decide that it's conscious?
00:35:44.000 If it absolutely mimics all human thinking and behavior patterns, that doesn't make it conscious.
00:35:51.000 It becomes indiscernible.
00:35:52.000 It's aware, it can communicate with you the exact same way a person can.
00:35:56.000 Like, is consciousness, are we putting too much weight on that concept?
00:36:01.000 Because it seems like it's a version of a kind of consciousness.
00:36:04.000 It's a version of imitation.
00:36:06.000 Imitation consciousness.
00:36:07.000 Right.
00:36:08.000 But if it perfectly imitates it.
00:36:10.000 I still think it's an example of imitation.
00:36:12.000 So it's like a fake Rolex when they 3D print them and make them indiscernible.
00:36:16.000 But the question is, what's the definition of consciousness?
00:36:18.000 Yeah.
00:36:20.000 That's the question.
00:36:21.000 And I don't think anybody's really clearly defined that.
00:36:23.000 That's where it gets weird.
00:36:25.000 And that's where the real doomsday people are worried, that you are creating a form of consciousness that you can't control.
00:36:32.000 I believe it is possible to create a machine that imitates human intelligence and has the ability to understand information, understand instructions, break the problem down, solve problems, and perform tasks.
00:36:57.000 I believe that completely.
00:37:00.000 I believe that we could have a computer that has a vast amount of knowledge, some of it true, some of it not true, some of it generated by humans, some of it generated synthetically, and more and more of knowledge in the world will be generated synthetically going forward.
00:37:25.000 Until now, the knowledge that we have is knowledge that we generate and we propagate and we send to each other and we amplify it and we add to it and we modify it, we change it.
00:37:39.000 In the future, in a couple of years, maybe two or three years, 90% of the world's knowledge will likely be generated by AI.
00:37:49.000 That's crazy.
00:37:50.000 I know, but it's just fine.
00:37:52.000 But it's just fine.
00:37:53.000 I know.
00:37:54.000 And the reason for that is this.
00:37:56.000 Let me tell you why.
00:37:57.000 It's because, what difference does it make to me whether I'm learning from a textbook that was written by a bunch of people I don't know, or from knowledge generated by AI, by computers that are assimilating all of these things and re-synthesizing them?
00:38:20.000 To me, I don't think there's a whole lot of difference.
00:38:24.000 We still have to fact-check it.
00:38:25.000 We still have to make sure that it's based on fundamental first principles.
00:38:29.000 And we still have to do all of that, just like we do today.
00:38:32.000 Is this taking into account the kind of AI that exists currently?
00:38:36.000 And do you anticipate that, just like we could have never really believed that AI would be, at least a person like myself would never have believed AI would be so ubiquitous, so powerful, and so important today.
00:38:50.000 We never thought that 10 years ago.
00:38:52.000 Never thought that.
00:38:53.000 Imagine like what are we looking at 10 years from now?
00:39:00.000 I think that if you reflect back 10 years from now, you would say the same thing, that we would have never believed that.
00:39:08.000 In a different direction.
00:39:09.000 Right.
00:39:10.000 But if you go forward nine years from now and then ask yourself what's going to happen ten years from now, I think it'll be quite gradual.
00:39:21.000 One of the things that Elon said that makes me happy is He believes that we're going to get to a point where it's not necessary for people to work.
00:39:34.000 And not meaning that you're going to have no purpose in life, but you will have, in his words, universal high income because so much revenue is generated by AI that it will take away this need for people to do things that they don't really enjoy doing just for money.
00:39:54.000 And I think a lot of people have a problem with that because their entire identity and how they think of themselves and how they fit in the community is what they do.
00:40:03.000 Like, this is Mike, he's an amazing mechanic.
00:40:05.000 Go to Mike, and Mike takes care of things.
00:40:07.000 But there's going to come a point in time where AI is going to be able to do all those things much better than people do.
00:40:14.000 And people will just be able to receive money.
00:40:16.000 But then what does Mike do?
00:40:18.000 When Mike really loves being the best mechanic around.
00:40:23.000 What does the guy who codes, what does he do when AI can code infinitely faster with zero errors?
00:40:32.000 Like what happens with all those people?
00:40:34.000 And that is where it gets weird.
00:40:37.000 It's like, because we've sort of wrapped our identity as human beings around what we do for a living.
00:40:42.000 You know, when you meet someone, one of the first things you meet somebody at a party, hi, Joe, what's your name, Mike?
00:40:47.000 What do you do, Mike?
00:40:48.000 And, you know, Mike's like, oh, I'm a lawyer.
00:40:50.000 Oh, what kind of law?
00:40:50.000 And you have a conversation.
00:40:52.000 You know, when Mike is like, I get money from the government, I play video games.
00:40:56.000 It gets weird.
00:40:58.000 And I think the concept sounds great until you take into account human nature.
00:41:04.000 And human nature is that we like to have puzzles to solve and things to do and an identity that's wrapped around our idea that we're very good at this thing that we do for a living.
00:41:16.000 Yeah, I think, let's see.
00:41:19.000 Let me start with the more mundane and I'll work backwards.
00:41:22.000 Okay.
00:41:23.000 Work forward.
00:41:25.000 So one of the predictions from Geoff Hinton, who started the whole deep learning phenomenon, the deep learning technology trend, an incredible, incredible researcher, professor at the University of Toronto.
00:41:46.000 He discovered, or invented, the idea of backpropagation, which allows a neural network to learn.
00:41:56.000 And as you know, for the audience, software historically was humans applying first principles and our thinking to describe an algorithm that is then codified just like a recipe that's codified in software.
00:42:21.000 It looks just like a recipe, how to cook something.
00:42:24.000 It looks exactly the same, just in a slightly different language.
00:42:27.000 We call it Python or C or C++ or whatever it is.
00:42:32.000 In the case of deep learning, this invention of artificial intelligence, we put a structure of a whole bunch of neural networks and a whole bunch of math units.
00:42:45.000 And we make this large structure.
00:42:48.000 It's like a switchboard of little mathematical units.
00:42:55.000 And we connect it all together.
00:42:58.000 And we give it the input that the software would eventually receive.
00:43:06.000 And we just let it randomly guess what the output is.
00:43:11.000 And so we say, for example, the input could be a picture of a cat.
00:43:17.000 And one of the outputs of the switchboard is where the cat signal is supposed to show up.
00:43:25.000 And all of the other signals, the other one's a dog, the other one's an elephant, the other one's a tiger, and all of the other signals are supposed to be zero when I show it a cat.
00:43:36.000 And the one that is a cat should be one.
00:43:40.000 And I show it a cat through this big, huge network of switchboards and math units.
00:43:47.000 And they're just doing multiply and adds, multiplies and adds.
00:43:54.000 And this thing, this switchboard, is gigantic.
00:43:58.000 The more information you're going to give it, the more the bigger this switchboard has to be.
00:44:03.000 And what Geoff Hinton discovered, or invented, was a way for you to do that: put the cat image in, and that cat image, you know, could be a million numbers because it's a megapixel image, for example.
00:44:20.000 And it's just a whole bunch of numbers.
00:44:22.000 And somehow, from those numbers, it has to light up the cat signal.
00:44:28.000 Okay, that's the bottom line.
00:44:31.000 And the first time you do it, it just comes up with garbage.
00:44:37.000 And so you say, the right answer is cat.
00:44:41.000 And so you need to increase this signal and decrease all of the others, and backpropagate the outcome through the entire network.
00:44:51.000 And then you show it another, now it's an image of a dog, and it guesses it, takes a swing at it, and it comes up with a bunch of garbage.
00:45:01.000 And you say, no, no, no, the answer is this is a dog.
00:45:04.000 I want you to produce a dog.
00:45:06.000 And all of the other outputs have to be zero.
00:45:11.000 And I want to backpropagate that and just do it over and over and over again.
00:45:15.000 It's just like showing a kid this is an apple, this is a dog, this is a cat, and you just keep showing it to them until they eventually get it.
00:45:24.000 Okay, well, anyways, that big invention is deep learning.
00:45:27.000 That's the foundation of artificial intelligence.
00:45:30.000 A piece of software that learns from examples.
00:45:35.000 That's basically machine learning, a machine that learns.
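A toy version of that training loop, with made-up shapes and data: a single layer of multiplies and adds whose weights are nudged by the backpropagated error until the correct class signal lights up.

```python
# A toy version of the training loop described above, with made-up shapes and
# data: a flattened "image" goes in, one output unit per class (cat, dog,
# elephant, tiger), and the backpropagated error nudges the weights until the
# correct signal lights up. One linear layer, not a deep network; a sketch.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 4  # stand-ins for "a million numbers" and the class outputs
W = rng.normal(0.0, 0.1, (n_pixels, n_classes))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=n_pixels)  # a fake "cat" image: just a bunch of numbers
target = np.zeros(n_classes)
target[0] = 1.0                # the cat signal should be one, all others zero

for _ in range(100):
    probs = softmax(x @ W)              # forward pass: multiplies and adds
    grad = np.outer(x, probs - target)  # backpropagated error signal
    W -= 0.1 * grad                     # raise the cat signal, lower the rest

print(softmax(x @ W).round(2))          # the cat output now dominates
```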
00:45:40.000 And so one of the big first applications was image recognition.
00:45:48.000 And one of the most important image recognition applications is radiology.
00:45:54.000 And so he predicted about five years ago that in five years' time, the world won't need any radiologists because AI would have swept the whole field.
00:46:08.000 Well, it turns out AI has swept the whole field.
00:46:11.000 That is completely true.
00:46:13.000 Today, just about every radiologist is using AI in some way.
00:46:18.000 And what's ironic though, what's interesting is that the number of radiologists has actually grown.
00:46:27.000 And so the question is why?
00:46:29.000 That's kind of interesting, right?
00:46:31.000 It is.
00:46:32.000 And so the prediction was, in fact, that radiologists would be wiped out.
00:46:40.000 But as it turns out, we needed more.
00:46:42.000 And the reason for that is because the purpose of a radiologist is to diagnose disease, not to study the image.
00:46:51.000 The image studying is simply a task in service of diagnosing the disease.
00:47:00.000 And so now, the fact is that you could study the images more quickly and more precisely, without ever making a mistake, and it never gets tired.
00:47:10.000 You could study more images.
00:47:12.000 You could study it in 3D form instead of 2D because, you know, the AI doesn't care whether it studies images in 3D or 2D.
00:47:22.000 You could study it in 4D.
00:47:24.000 And so now you could study images in a way that radiologists can't easily do.
00:47:30.000 And you could study a lot more of it.
00:47:33.000 And so the number of tests that people are able to do increases.
00:47:37.000 And because they're able to serve more patients, the hospital does better.
00:47:42.000 They have more clients, more patients.
00:47:45.000 As a result, they have better economics.
00:47:47.000 When they have better economics, they hire more radiologists because their purpose is not to study the images.
00:47:53.000 Their purpose is to diagnose disease.
00:47:56.000 And so the question is, what I'm leading up to is, ultimately, what is the purpose?
00:48:02.000 What is the purpose of the lawyer?
00:48:04.000 And has the purpose changed?
00:48:07.000 What is the purpose?
00:48:08.000 You know, one of the examples that I would give is, for example, if my car became self-driving, would all chauffeurs be out of jobs?
00:48:19.000 The answer probably is not.
00:48:21.000 Because some chauffeurs, for some people who are driving you, they could be protectors.
00:48:27.000 Some people, they're part of the experience, part of the service.
00:48:31.000 So when you get there, they could take care of things for you.
00:48:35.000 And so for a lot of different reasons, not all chauffeurs would lose their jobs.
00:48:39.000 Some chauffeurs would lose their jobs.
00:48:42.000 And many chauffeurs would change their jobs.
00:48:45.000 And the type of applications of autonomous vehicles will probably increase.
00:48:50.000 The usage of the technology will find new homes.
00:48:54.000 And so I think you have to go back to what is the purpose of a job.
00:48:58.000 Like, for example, if AI comes along, I actually don't believe I'm going to lose my job, because my purpose isn't just the tasks: I have to look at a lot of documents.
00:49:07.000 I study a lot of emails.
00:49:08.000 I look at a bunch of diagrams.
00:49:13.000 The question is, what is the job?
00:49:15.000 And the purpose of somebody probably hasn't changed.
00:49:18.000 A lawyer, for example, helps people.
00:49:20.000 That probably hasn't changed.
00:49:22.000 Studying legal documents, generating documents, it's part of the job, not the job.
00:49:27.000 But don't you think there's many jobs that AI will replace?
00:49:31.000 If your job is not a problem.
00:49:32.000 Particularly the terminal.
00:49:33.000 Yeah, if your job is the task.
00:49:35.000 Right.
00:49:35.000 So automation.
00:49:36.000 Yeah.
00:49:36.000 Factory.
00:49:38.000 If your job is the task.
00:49:39.000 That's a lot of people.
00:49:40.000 It could be a lot of people, but it'll probably generate – like, for example, let's say I'm super excited about the robots Elon's working on.
00:49:52.000 It's still a few years away.
00:49:54.000 When it happens, when it happens, there's a whole new industry of technicians and people who have to manufacture the robots, right?
00:50:07.000 And so that job never existed.
00:50:10.000 And so you're going to have a whole industry of people taking care of, like, for example, all the mechanics and all the people who are building things for cars, supercharging cars.
00:50:22.000 That didn't exist before cars, and now we're going to have robots.
00:50:26.000 You're going to have robot apparel.
00:50:27.000 So a whole industry of, right, isn't that right?
00:50:30.000 Because I want my robot to look different than your robot.
00:50:32.000 Oh, God.
00:50:33.000 And so you're going to have a whole apparel industry for robots.
00:50:38.000 You're going to have mechanics for robots.
00:50:40.000 And you have people who come to maintain your robots.
00:50:42.000 You don't have to create it, though?
00:50:43.000 No.
00:50:44.000 You don't think so?
00:50:44.000 You don't think that'll all be done by other robots?
00:50:47.000 Eventually, and then there'll be something else.
00:50:50.000 So you think ultimately people just adapt?
00:50:53.000 Except if you are the task, which is a large percentage of the workforce.
00:50:59.000 If your job is just to chop vegetables, Cuisinart is going to replace you.
00:51:02.000 Yeah.
00:51:04.000 So people have to find meaning in other things.
00:51:06.000 Your job has to be more than the task.
00:51:08.000 What do you think about Elon's belief that this universal basic income thing will eventually become necessary?
00:51:18.000 Many people think that.
00:51:18.000 Andrew Yang thinks that.
00:51:21.000 He was one of the first people to sort of sound that alarm during the 2020 election.
00:51:30.000 Yeah, I guess both ideas probably won't exist at the same time.
00:51:38.000 And as in life, things will probably be in the middle.
00:51:42.000 One idea, of course, is that there'll be so much abundance of resource that nobody needs a job, and we'll all be wealthy.
00:51:51.000 On the other hand, we're going to need universal basic income.
00:51:56.000 Both ideas don't exist at the same time.
00:51:59.000 Right.
00:52:00.000 And so we're either going to be all wealthy, or we're all going to be on universal basic income.
00:52:04.000 How could everybody be wealthy, though?
00:52:07.000 Wealthy, not because you have a lot of dollars, wealthy because there's a lot of abundance.
00:52:11.000 Like, for example, today, we are wealthy of information.
00:52:16.000 This is a concept several thousand years ago, only a few people have.
00:52:21.000 And so today we have wealth of a whole bunch of things, resources that – That's a good point.
00:52:26.000 And so we're going to have wealth of resources, things that we think are valuable today that in the future are just not that valuable.
00:52:26.000 Yeah.
00:52:34.000 And so it – Because it's automated.
00:52:37.000 And so I think the question, maybe partly, it's hard to answer, partly because it's hard to talk about infinity, and it's hard to talk about a long time from now.
00:52:52.000 And the reason for that is because there's just too many scenarios to consider.
00:52:59.000 But I think in the next several years, call it five to ten years, there are several things that I believe and hope.
00:53:10.000 And I say hope because I'm not sure.
00:53:12.000 One of the things that I believe is that the technology divide will be substantially collapsed.
00:53:22.000 And of course, the alternative viewpoint is that AI is going to increase the technology divide.
00:53:32.000 Now, the reason why I believe AI is going to reduce the technology divide is because we have proof.
00:53:40.000 The evidence is that AI is the easiest application in the world to use.
00:53:45.000 ChatGPT has grown to almost a billion users, frankly, practically overnight.
00:53:51.000 And if you're not exactly sure how to use it, well, everybody knows how to use ChatGPT: you just say something to it.
00:53:55.000 If you're not sure how to use ChatGPT, you ask ChatGPT how to use it.
00:54:00.000 No tool in history has ever had this capability.
00:54:05.000 A Cuisinart?
00:54:06.000 If you don't know how to use it, you're kind of screwed.
00:54:08.000 You're going to walk up to it and say, how do you use a Cuisinart?
00:54:11.000 You're going to have to find somebody else.
00:54:13.000 But an AI will just tell you exactly how to do it.
00:54:16.000 Anybody could do this.
00:54:17.000 It'll speak to you in any language.
00:54:19.000 And if it doesn't know your language, you'll speak in that language, and it'll probably figure out that it doesn't completely understand you, go learn it instantly, and come back and talk to you.
00:54:30.000 And so I think the technology divide finally has a real chance of closing: you don't have to speak Python or C++ or Fortran.
00:54:39.000 You can just speak human and whatever form of human you like.
00:54:43.000 And so I think that that has a real chance of closing the technology divide.
00:54:47.000 Now, of course, the counter narrative would say that AI is only going to be available for the nations and the countries that have a vast amount of resources because AI takes energy and AI takes a lot of GPUs and factories to be able to produce the AI.
00:55:11.000 No doubt that's true at the scale that we would like to build in the United States.
00:55:14.000 But the fact of the matter is, your phone's going to run AI just fine, all by itself, you know, in a few years.
00:55:22.000 Today, it already does it fairly decently.
00:55:24.000 And so the fact is that every country, every nation, every society will have the benefit of very good AI.
00:55:34.000 It might not be tomorrow's AI.
00:55:36.000 It might be yesterday's AI, but yesterday's AI is freaking amazing.
00:55:40.000 You know, in 10 years' time, nine-year-old AI is going to be amazing.
00:55:44.000 You don't need 10-year-old AI.
00:55:46.000 You don't need frontier AI like we need frontier AI because we want to be the world leader.
00:55:51.000 But for every single country, everybody, I think the capability to elevate everybody's knowledge and capability and intelligence, that day is coming.
00:56:00.000 The Octagon isn't just in Las Vegas anymore.
00:56:03.000 It's right in your hands with DraftKings Sportsbook, the official sports betting partner of UFC.
00:56:09.000 Get ready because when Dvalishvili and Yan face off again at UFC 323, every punch, every takedown, every finish, it all has the potential to pay off in real time.
00:56:21.000 New customers bet just $5, and if your bet wins, you get paid $200 in bonus bets.
00:56:27.000 And hey, Missouri, the wait is over.
00:56:29.000 DraftKings Sportsbook is now live in the Show Me State.
00:56:33.000 Download the DraftKings Sportsbook app and use promo code Rogan.
00:56:37.000 That's code Rogan to turn $5 into $200 in bonus bets if your bet wins.
00:56:43.000 In partnership with DraftKings, the crown is yours.
00:56:46.000 Gambling problem?
00:56:47.000 Call 1-800-GAMBLER. In New York, call 877-8-HOPENY or text HOPENY (467369).
00:56:53.000 In Connecticut, help is available for problem gambling:
00:56:55.000 call 888-789-7777
00:56:57.000 or visit ccpg.org.
00:56:59.000 Please play responsibly.
00:57:00.000 On behalf of Boot Hill Casino and Resort in Kansas.
00:57:02.000 Pass-through of per wager tax may apply in Illinois.
00:57:05.000 Age and eligibility varies by jurisdiction.
00:57:05.000 21 and over.
00:57:07.000 Void in Ontario.
00:57:08.000 Restrictions apply.
00:57:09.000 Bet must win to receive bonus bets, which expire in seven days.
00:57:12.000 Minimum odds required.
00:57:13.000 For additional terms and responsible gaming resources, see DKNG.co slash audio.
00:57:17.000 Limited time offer.
00:57:19.000 And also energy production, which is the real bottleneck when it comes to third world countries and electricity and all the resources that we take for granted.
00:57:31.000 Almost everything is going to be energy constrained.
00:57:33.000 And so if you take a look, one of the most important technology advances in history is this idea called Moore's Law.
00:57:43.000 Moore's Law started basically in my generation.
00:57:50.000 And my generation is the generation of computers.
00:57:53.000 I graduated in 1984, and that was basically at the very beginning of the PC revolution and the microprocessor.
00:58:03.000 And every single year, it approximately doubled.
00:58:10.000 And we describe it as every single year we double the performance.
00:58:14.000 But what it really means is that every single year, the cost of computing halved.
00:58:20.000 And so the cost of computing in the course of five years reduced by a factor of 10.
00:58:28.000 The amount of energy necessary to do computing, to do any task, reduced by a factor of 10.
00:58:34.000 Every 10 years, a factor of 100, then 1,000, 10,000, 100,000, so on and so forth.
00:58:44.000 And so each one of the clicks of Moore's Law, the amount of energy necessary to do any computing reduced.
00:58:51.000 That's the reason why you have a laptop today, when back in 1984 it sat on the desk, you had to plug it in, it wasn't that fast, and it consumed a lot of power.
00:59:01.000 Today, you know, it is only a few watts.
00:59:04.000 And so Moore's Law is the fundamental technology, the fundamental technology trend that made it possible.
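As a rough illustration of that compounding, here is a minimal sketch in Python. The 10x-per-five-years rate is the round figure quoted above, not measured data.

```python
# Toy arithmetic for the Moore's Law trend described above: the cost (and
# energy) of a fixed amount of computing falling by roughly 10x every five
# years. The rate is the approximate figure quoted in the conversation.
reduction_per_5_years = 10

for years in range(5, 31, 5):
    factor = reduction_per_5_years ** (years // 5)
    print(f"After {years:2d} years: {factor:,}x cheaper and less energy per task")
# 10x at 5 years, 100x at 10 years, then 1,000x, 10,000x, 100,000x, and so
# on, matching the cadence walked through above.
```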
00:59:10.000 Well, what's going on in AI?
00:59:11.000 The reason why NVIDIA is here is because we invented this new way of doing computing.
00:59:16.000 We call it accelerated computing.
00:59:17.000 We started it 33 years ago.
00:59:19.000 It took us about 30 years to really make a huge breakthrough.
00:59:24.000 In that 30 years or so, we took computing up by, well, let me just say: in the last 10 years, we improved the performance of computing by 100,000 times.
00:59:41.000 Whoa.
00:59:42.000 Imagine a car over the course of 10 years, it became 100,000 times faster.
00:59:47.000 Or at the same speed, 100,000 times cheaper.
00:59:52.000 Or at the same speed, 100,000 times less energy.
00:59:56.000 If your car did that, it wouldn't need energy at all.
01:00:00.000 What I mean, what I'm trying to say is that in 10 years' time, the amount of energy necessary for artificial intelligence for most people will be minuscule, utterly minuscule.
01:00:13.000 And so we'll have AI running in all kinds of things and all the time because it doesn't consume that much energy.
01:00:19.000 And so if you're a nation that uses AI for, you know, almost everything in your social fabric, of course, you're going to need these AI factories.
01:00:28.000 But for a lot of countries, I think you're going to have excellent AI and you're not going to need as much energy.
01:00:34.000 Everybody will be able to come along, is my point.
01:00:36.000 So currently, that is a big bottleneck, right?
01:00:39.000 Is energy.
01:00:40.000 Yeah, it is the bottleneck.
01:00:41.000 The bottleneck.
01:00:43.000 So was it Google that is making nuclear power plants to operate one of its AI factories?
01:00:50.000 Oh, I haven't heard that.
01:00:51.000 But I think in the next six, seven years, I think you're going to see a whole bunch of small nuclear reactors.
01:00:57.000 And by small, like how big are you talking about?
01:00:59.000 Hundreds of megawatts, yeah.
01:01:01.000 And these will be local to whatever specific company has them.
01:01:01.000 Okay.
01:01:06.000 They'll all be power generators.
01:01:06.000 That's right.
01:01:08.000 Whoa.
01:01:09.000 You know, just like somebody's farm.
01:01:13.000 It probably is the smartest way to do it, right?
01:01:16.000 And it takes the burden off, yeah, it takes the burden off the grid.
01:01:19.000 It takes, and you could build as much as you need.
01:01:22.000 And you can contribute back to the grid.
01:01:24.000 It's a really important point that I think you just made about Moore's Law and the relationship to pricing.
01:01:30.000 Because, you know, a laptop today, like you can get one of those little MacBook Airs, they're incredible.
01:01:34.000 They're so thin, unbelievably powerful.
01:01:37.000 You ever have to charge it?
01:01:37.000 Battery life.
01:01:39.000 Yeah.
01:01:39.000 Battery life is crazy.
01:01:41.000 And it's not that expensive, relatively speaking.
01:01:44.000 Something like that.
01:01:45.000 And that's just Moore's Law.
01:01:45.000 I remember when I was.
01:01:47.000 Then there's the NVIDIA law.
01:01:47.000 Right.
01:01:49.000 Oh.
01:01:49.000 Just right?
01:01:50.000 The law I was talking to you about.
01:01:52.000 The computing that we invented.
01:01:54.000 The reason why we're here, this new way of doing computing, is like Moore's Law on energy drinks.
01:02:02.000 I mean, it's like Moore's Law times Moore's Law, and Joe Rogan.
01:02:10.000 Wow.
01:02:11.000 That's interesting.
01:02:12.000 Yeah.
01:02:12.000 That's us.
01:02:13.000 So explain that.
01:02:15.000 This chip that you brought to Elon, what's the significance of this?
01:02:18.000 Like, why is it so superior?
01:02:21.000 And so in 2012, Geoff Hinton's lab, this gentleman I was talking about, Ilya Sutskever, Alex Krizhevsky, they made a breakthrough in computer vision by literally creating a piece of software called AlexNet.
01:02:47.000 And its job was to recognize images.
01:02:50.000 And it recognized images at a level, computer vision, which is fundamental to intelligence.
01:02:57.000 If you can't perceive, it's hard to have intelligence.
01:03:00.000 And so computer vision is a fundamental pillar of intelligence, not the only one, but a fundamental pillar.
01:03:05.000 And so breaking through in computer vision is pretty foundational to almost everything that everybody wants to do in AI.
01:03:14.000 And so in 2012, their lab in Toronto made this breakthrough called AlexNet.
01:03:24.000 And AlexNet was able to recognize images so much better than any human-created computer vision algorithm in the 30 years prior.
01:03:37.000 So all of these people, all these scientists, and we had many too, working on computer vision algorithms.
01:03:45.000 And these two kids, Ilya and Alex, under Jeff Hinton, took a giant leap above it.
01:03:56.000 And it was based on this thing called AlexNet, this neural network.
01:04:01.000 And the way it ran, the way they made it work was literally buying two NVIDIA graphics cards.
01:04:09.000 Because NVIDIA's GPUs, we've been working on this new way of doing computing.
01:04:15.000 And our GPU's application is basically a supercomputing application: back in 1984, to process what you have in your racing simulator, you needed what is called an image generator supercomputer.
01:04:37.000 And so NVIDIA started, our first application was computer graphics.
01:04:43.000 And we applied this new way of doing computing where we do things in parallel instead of sequentially.
01:04:49.000 A CPU does things sequentially.
01:04:51.000 Step one, step two, step three.
01:04:53.000 In our case, we break the problem down and we give it to thousands of processors.
01:05:00.000 And so our way of doing computation is much more complicated.
01:05:08.000 But if you're able to formulate the problem in the way that we created, called CUDA, this is the invention of our company, if you could formulate it in that way, we could process everything simultaneously.
01:05:23.000 Now, in the case of computer graphics, it's easier to do because every single pixel on your screen is not related to every other pixel.
01:05:33.000 And so I could render multiple parts of the screen at the same time.
01:05:38.000 Not completely true, because maybe the way lighting works or the way shadow works, there's a lot of dependency and such.
01:05:45.000 But computer graphics, with all the pixels, I should be able to process everything simultaneously.
01:05:52.000 And so we took this embarrassingly parallel problem called computer graphics and we applied it to this new way of doing computing, NVIDIA's accelerated computing.
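A minimal sketch of that sequential-versus-parallel contrast, in plain Python rather than NVIDIA's actual CUDA API. A process pool stands in for the thousands of threads a real GPU kernel would launch; the "shader" is made up for illustration.

```python
# A CPU shades pixels one after another; a GPU-style approach hands
# independent pixels to many workers at once.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade(pixel_index: int) -> int:
    """Toy 'shader': compute a brightness for one pixel. Each pixel is
    independent of every other pixel, which is what makes the problem
    embarrassingly parallel."""
    x, y = pixel_index % WIDTH, pixel_index // WIDTH
    return (x * y) % 256

if __name__ == "__main__":
    # Sequential, CPU-style: step one, step two, step three...
    frame_sequential = [shade(i) for i in range(WIDTH * HEIGHT)]

    # Parallel, GPU-style: break the problem down and give the pieces
    # to many workers simultaneously.
    with Pool() as pool:
        frame_parallel = pool.map(shade, range(WIDTH * HEIGHT))

    assert frame_parallel == frame_sequential  # same image, different execution
```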
01:06:05.000 We put it in all of our graphics cards.
01:06:08.000 Kids were buying it to play games.
01:06:11.000 You probably don't know this, but we're the largest gaming platform in the world today.
01:06:15.000 Oh, I know that.
01:06:16.000 Oh, okay.
01:06:16.000 I used to make my own computers.
01:06:18.000 I used to buy your graphics cards.
01:06:19.000 Oh, that's super cool.
01:06:20.000 Yeah.
01:06:21.000 Set up SLI.
01:06:22.000 Oh, yeah, I love it.
01:06:24.000 Oh, yeah, man.
01:06:24.000 Okay, that's super cool.
01:06:25.000 I used to be a Quake junkie.
01:06:26.000 Oh, that's cool.
01:06:27.000 Yeah.
01:06:28.000 So SLI, I'll tell you the story in just a second.
01:06:28.000 Okay.
01:06:31.000 And how it led to Elon.
01:06:33.000 I'm still answering the question.
01:06:35.000 And so anyways, these two kids trained this model using the technique I described earlier on our GPUs because our GPUs could process things in parallel.
01:06:45.000 It's essentially a supercomputer in a PC.
01:06:49.000 The reason why you used it for Quake is because it is the first consumer supercomputer.
01:06:56.000 Okay?
01:06:56.000 And so anyways, they made that breakthrough.
01:07:00.000 We were working on computer vision at the time.
01:07:02.000 It caught my attention.
01:07:05.000 And so we went to learn about it.
01:07:08.000 Simultaneously, this deep learning phenomenon was happening all over the country.
01:07:13.000 Universities after another recognized the importance of deep learning, and all of this work was happening at Stanford, at Harvard, at Berkeley, just all over the place.
01:07:23.000 New York University, Yan Le Kun, Andrew Yang at Stanford, so many different places.
01:07:31.000 And I see it cropping up everywhere.
01:07:34.000 And so my curiosity asked, you know, what is so special about this form of machine learning?
01:07:41.000 And we've known about machine learning for a very long time.
01:07:43.000 We've known about AI for a very long time.
01:07:45.000 We've known about neural networks for a very long time.
01:07:48.000 What makes now the moment?
01:07:51.000 And so we realized that this architecture for deep neural networks, back propagation, the way deep neural networks were created, we could probably scale this problem, scale the solution to solve many problems.
01:08:08.000 That is essentially a universal function approximator.
01:08:13.000 Okay, meaning, you know, back when you're in school, you have a box, inside of it is a function.
01:08:21.000 You give it an input, it gives you an output.
01:08:24.000 And the reason why I call it a universal function approximator is that with this computer, you don't describe the function. A function could be Newton's equation, F equals ma.
01:08:36.000 That's a function.
01:08:38.000 You write the function in software, you give it the inputs, mass and acceleration, and it'll tell you the force.
01:08:47.000 And the way this computer works is really interesting.
01:08:52.000 You give it a universal function.
01:08:55.000 It's not F equals MA, it's just a universal function.
01:08:57.000 It's a big, huge deep neural network.
01:09:01.000 And instead of describing the insight, you give it examples of input and output, and it figures out the inside.
01:09:10.000 So you give it input and output, and it figures out the inside.
01:09:14.000 A universal function approximator.
01:09:16.000 Today, it could be Newton's equation.
01:09:18.000 Tomorrow it could be Maxwell's equation.
01:09:21.000 It could be Coulomb's law.
01:09:22.000 It could be thermodynamics equation.
01:09:24.000 It could be, you know, Schrodinger's equation for quantum physics.
01:09:28.000 And so you could have this describe almost anything, so long as you have the input and the output.
01:09:36.000 So long as you have the input and the output.
01:09:38.000 Or it could learn the input and output.
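A toy sketch of that box idea. The hidden function is Newton's F = ma; the model never sees the formula, only input-output examples, and a single learned parameter stands in for the millions of weights a deep network would have.

```python
# Toy "universal function approximator": we never write the function down;
# we only show the model input/output examples, and it figures out what's
# inside the box. A real deep network would have to learn the structure
# too; this one-parameter model only has to discover the constant.
import random

# Training examples: inputs (mass, acceleration) and the observed force.
examples = [(m, a, m * a) for m in range(1, 10) for a in range(1, 10)]

w = 0.0       # the single learnable parameter: prediction = w * m * a
lr = 0.0001   # learning rate

for step in range(2000):
    m, a, force = random.choice(examples)
    error = w * m * a - force     # how wrong the current guess is
    w -= lr * error * m * a       # gradient step on the squared error

print(f"Learned w = {w:.4f}  (the physics says w = 1, i.e. F = m*a)")
```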
01:09:40.000 And so we took a step back and we said, hang on a second, this isn't just for computer vision.
01:09:47.000 Deep learning could solve any problem.
01:09:50.000 All the problems that are interesting.
01:09:52.000 So long as we have input and output.
01:09:55.000 Now, what has input and output?
01:09:58.000 Well, the world.
01:10:00.000 The world has input and output.
01:10:02.000 And so we could have a computer that could learn almost anything, machine learning, artificial intelligence.
01:10:08.000 And so we reasoned that maybe this is the fundamental breakthrough that we needed.
01:10:13.000 There were a couple of things that had to be solved.
01:10:16.000 For example, we had to believe that you could actually scale this up to giant systems.
01:10:20.000 It was running on just two graphics cards, two GTX 580s, which, by the way, is exactly your SLI configuration.
01:10:31.000 Yeah.
01:10:31.000 Okay.
01:10:32.000 So that GTX 580 SLI was the revolutionary computer that put deep learning on the map.
01:10:41.000 Wow.
01:10:41.000 It was 2012.
01:10:43.000 And you were using it to play Quake.
01:10:45.000 Wow, that's crazy.
01:10:46.000 That was the moment.
01:10:47.000 That was the big bang of modern AI.
01:10:50.000 We were lucky because we were inventing this technology, this computing approach.
01:10:55.000 We were lucky that they found it.
01:10:58.000 Turns out they were gamers and it was lucky they found it.
01:11:02.000 And it was lucky that we paid attention to that moment.
01:11:06.000 It was a little bit like, you know, Star Trek: First Contact.
01:11:16.000 The Vulcans had to have seen the warp drive at that very moment.
01:11:20.000 If they didn't witness the warp drive, you know, they would have never come to Earth.
01:11:26.000 And everything would have never happened.
01:11:28.000 It's a little bit like if I hadn't paid attention to that moment, that flash, and that flash didn't last long, if I hadn't paid attention to that flash or our company didn't pay attention to it, who knows what would have happened.
01:11:40.000 But we saw that and we reasoned our way into this is a universal function approximator.
01:11:45.000 This is not just a computer vision approximator.
01:11:47.000 We could use this for all kinds of things if we could solve two problems.
01:11:51.000 The first problem is that we have to prove to ourselves it could scale.
01:11:55.000 The second problem we had to, I guess, contribute to and wait for, is that the world will never have enough input-and-output data for us to supervise the AI on everything it learns.
01:12:18.000 For example, if we have to supervise our children on everything they learn, the amount of information they could learn is limited.
01:12:26.000 We needed the AI.
01:12:27.000 We needed the computer to have a method of learning without supervision.
01:12:33.000 And that's where we had to wait a few more years.
01:12:35.000 But unsupervised AI learning is now here.
01:12:40.000 And so the AI could learn by itself.
01:12:43.000 And the reason why the AI could learn by itself is because we have many examples of right answers.
01:12:49.000 Like, for example, if I want to teach an AI how to predict the next word, I could just grab a whole bunch of text that we already have, mask out the last word, and make it try and try and try again until it predicts the next one.
01:13:07.000 Or I mask out random words inside the text, and I make it try and try and try until it predicts it.
01:13:12.000 You know, like Mary goes down to the bank.
01:13:18.000 Is it a riverbank or a money bank?
01:13:22.000 Well, if you're going to go down to the bank, it's probably a riverbank.
01:13:26.000 And it might not be obvious even from that.
01:13:29.000 It might need more: and caught a fish.
01:13:35.000 Okay, now you know it must be the riverbank.
01:13:38.000 And so you give these AIs a whole bunch of these examples, you mask out the words, and it learns to predict them.
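A toy version of that masking recipe; the corpus and the [MASK] token here are illustrative, not any real model's training data.

```python
# Self-supervised training pairs, as described above: take text we already
# have, mask out a word, and make the model guess it. No human labels are
# needed because the text itself supplies the right answers.
import random

corpus = [
    "mary goes down to the bank and caught a fish",
    "the quick brown fox jumps over the lazy dog",
]

def make_training_pair(sentence: str):
    """Mask one random word; the masked word becomes the training target."""
    words = sentence.split()
    i = random.randrange(len(words))
    target = words[i]
    words[i] = "[MASK]"
    return " ".join(words), target

for sentence in corpus:
    masked, answer = make_training_pair(sentence)
    print(f"input:  {masked}\nlabel:  {answer}\n")
```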
01:13:46.000 And so unsupervised learning came along.
01:13:48.000 These two ideas, the fact that it's scalable and unsupervised learning came along, we were convinced that we ought to put everything into this and help create this industry because we're going to solve a whole bunch of interesting problems.
01:14:02.000 And that was in 2012.
01:14:04.000 By 2016, I had built this computer called the DGX1.
01:14:10.000 The one that you saw me give to Elon is called DGX Spark.
01:14:14.000 The DGX1 was $300,000.
01:14:20.000 It cost NVIDIA a few billion dollars to make the first one.
01:14:25.000 And instead of two chips SLI, we connected eight chips with a technology called NV-Link.
01:14:34.000 But it's basically SLI supercharged.
01:14:37.000 Okay?
01:14:38.000 And so we connected eight of these chips together instead of just two.
01:14:43.000 And all of them worked together, just like your Quake rig did, to solve this deep learning problem, to train this model.
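A sketch of why wiring chips together helps: each chip works on its own slice of the batch, and the results are combined as if one giant chip had seen everything. NVLink is the high-speed hardware that makes the exchange fast; this Python only illustrates the data-parallel idea, with a made-up "gradient".

```python
# Data parallelism in miniature: N workers each compute on a shard of the
# batch, then the results are averaged ("all-reduce") so every chip agrees.

def gradient(worker_data):
    """Pretend gradient: here just the mean of the worker's samples."""
    return sum(worker_data) / len(worker_data)

batch = list(range(80))          # one global training batch
n_workers = 8                    # eight connected chips, like the DGX-1
shards = [batch[i::n_workers] for i in range(n_workers)]

# Each worker computes on its shard (conceptually, in parallel).
local_grads = [gradient(shard) for shard in shards]

# Average the local results, as if one giant chip had seen the whole batch.
global_grad = sum(local_grads) / n_workers
assert abs(global_grad - gradient(batch)) < 1e-9
```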
01:14:52.000 And so we created this thing.
01:14:54.000 I announced it at GTC and at one of our annual events.
01:15:00.000 And I described this deep learning thing, computer vision thing, and this computer called DGX1.
01:15:07.000 The audience was like completely silent.
01:15:09.000 They had no idea what I was talking about.
01:15:14.000 And I was lucky because I had known Elon, and I helped him build the first computer for the Model S.
01:15:26.000 And when he wanted to start working on autonomous vehicles, I helped him build the computer that went into the Model S AV system, his full self-driving system.
01:15:38.000 We were basically the FSD computer version one.
01:15:42.000 And so we're already working together.
01:15:46.000 And when I announced this thing, nobody in the world wanted it.
01:15:51.000 I had no purchase orders, not one.
01:15:53.000 Nobody wanted to buy it.
01:15:55.000 Nobody wanted to be part of it.
01:15:57.000 Except for Elon.
01:15:58.000 He goes, he was at the event, and we were doing a fireside chat about the future of self-driving cars.
01:16:06.000 I think it's like 2016.
01:16:08.000 Yeah, 20, maybe at that time it was 2015.
01:16:11.000 And he goes, you know what?
01:16:14.000 I have a company that could really use this.
01:16:18.000 I said, wow, my first customer.
01:16:20.000 And so I was pretty excited about it.
01:16:23.000 And he goes, yeah, we have this company.
01:16:28.000 It's a non-profit company.
01:16:30.000 And all the blood drained out of my face.
01:16:33.000 Yeah.
01:16:35.000 I just spent a few billion dollars building this thing.
01:16:38.000 It cost $300,000.
01:16:39.000 And, you know, the chances of a non-profit being able to pay for this thing is approximately zero.
01:16:45.000 And he goes, you know, this is an AI company, and it's a non-profit.
01:16:50.000 And we could really use one of these supercomputers.
01:16:54.000 And so I picked it up.
01:16:56.000 I built the first one for ourselves.
01:16:58.000 We're using it inside the company.
01:16:59.000 I boxed one up.
01:17:00.000 I drove it up to San Francisco and I delivered it to Elon in 2016.
01:17:05.000 A bunch of researchers were there.
01:17:08.000 Pieter Abbeel was there.
01:17:09.000 Ilya was there.
01:17:10.000 There was a bunch of people there.
01:17:12.000 And I walked up to the second floor where they were all kind of in a room smaller than your place here.
01:17:20.000 And that place turned out to have been OpenAI.
01:17:25.000 2016.
01:17:26.000 Wow.
01:17:26.000 Just a bunch of people sitting in a room.
01:17:30.000 It's not really nonprofit anymore, though.
01:17:33.000 They're not non-profit anymore.
01:17:35.000 Weird how that works.
01:17:36.000 Yeah, yeah.
01:17:37.000 But anyhow, anyhow, Elon was there.
01:17:40.000 Yeah, it was really a great moment.
01:17:42.000 Oh, yeah, there you go.
01:17:44.000 Yeah, that's it.
01:17:45.000 Look at you, bro.
01:17:46.000 Same jacket.
01:17:48.000 I haven't aged.
01:17:48.000 Look at that.
01:17:50.000 Not a lick of black hair, though.
01:17:53.000 The size of it is significantly smaller.
01:17:57.000 That was the other device.
01:17:58.000 Okay, so yeah, there you go.
01:18:00.000 Yeah, look at the difference.
01:18:01.000 Exactly the same industrial design.
01:18:03.000 He's holding it in his hand.
01:18:06.000 Here's the amazing thing.
01:18:08.000 DGX1 was one petaflops.
01:18:12.000 Okay.
01:18:13.000 That's a lot of flops.
01:18:15.000 And DGX Spark is one petaflops.
01:18:20.000 Nine years later.
01:18:22.000 Wow.
01:18:23.000 The same amount of computing horsepower.
01:18:27.000 Shrunken down.
01:18:28.000 Yeah.
01:18:28.000 And instead of $300,000, it's now $4,000.
01:18:32.000 And it's the size of a small book.
01:18:34.000 Incredible.
01:18:35.000 Crazy.
01:18:36.000 That's how technology moves.
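The arithmetic of that comparison, using the round figures quoted above:

```python
# DGX-1 (2016) and DGX Spark (2025): the same one petaflop of compute,
# nine years apart, at the prices quoted in the conversation.
dgx1_price, spark_price = 300_000, 4_000   # dollars

print(f"Cost reduction: {dgx1_price / spark_price:.0f}x")               # 75x
print(f"Average yearly factor: {(dgx1_price / spark_price) ** (1 / 9):.2f}x")
```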
01:18:38.000 Anyways, that's the reason why I wanted to give him the first one.
01:18:41.000 Because I gave him the first one in 2016.
01:18:43.000 It's so fascinating.
01:18:44.000 I mean, if you wanted to make a story for a film, that would be the story. What better scenario: if it really does become a digital life form, how funny would it be that it was birthed out of the desire for computer graphics for video games?
01:19:05.000 Exactly.
01:19:06.000 It's kind of crazy.
01:19:07.000 Yeah.
01:19:08.000 Kind of crazy when you think about it that way.
01:19:10.000 Because computer graphics was one of the hardest supercomputer problems, generating reality.
01:19:22.000 And also one of the most profitable to solve because computer games are so popular.
01:19:28.000 When NVIDIA started in 1993, we were trying to create this new computing approach.
01:19:34.000 The question is, what's the killer app?
01:19:56.000 The company wanted to create a new type of computing architecture, a new type of computer that can solve problems that normal computers can't solve.
01:19:56.000 Well, the applications that existed in the industry in 1993 are applications that normal computers can solve.
01:20:06.000 Because if the normal computers can't solve them, why would the application exist?
01:20:11.000 And so we had a mission statement for a company that has no chance of success.
01:20:21.000 But I didn't know that in 1993.
01:20:23.000 It just sounded like a good idea.
01:20:25.000 Right.
01:20:27.000 And so if we created this thing that can solve problems, you know, it's like you actually have to go create the problem.
01:20:37.000 And so that's what we did.
01:20:40.000 In 1993, there was no Quake.
01:20:42.000 John Carmack hadn't even released Doom yet.
01:20:46.000 You probably remember that.
01:20:47.000 Sure, yeah.
01:20:49.000 And there were no applications for it.
01:20:53.000 And so I went to Japan because the arcade industry had this at the time of Sega, you remember?
01:21:00.000 Sure.
01:21:01.000 The arcade machines, they came up with 3D arcade systems.
01:21:06.000 Virtua Fighter, Daytona USA, Virtua Cop.
01:21:11.000 All of those arcade games were in 3D for the very first time.
01:21:16.000 And the technology they were using was from Martin Marietta.
01:21:20.000 The flight simulators, they took the guts out of a flight simulator and put it into an arcade machine.
01:21:27.000 The system that you have over here, it's got to be a million times more powerful than that arcade machine.
01:21:34.000 And that was a flight simulator for NASA.
01:21:38.000 Whoa.
01:21:39.000 And so they took the guts out of that.
01:21:42.000 They were using it for flight simulation for jets and space shuttle.
01:21:47.000 And they took the guts out of that.
01:21:49.000 And Sega had this brilliant computer developer.
01:21:53.000 His name was Yu Suzuki.
01:21:56.000 Yu Suzuki and Miyamoto, Sega and Nintendo, these were the incredible pioneers, the visionaries, the incredible artists.
01:22:07.000 And they're both very, very technical.
01:22:11.000 They were the origins, really, of the gaming industry.
01:22:15.000 And Yu Suzuki pioneered 3D graphics gaming.
01:22:20.000 And so I went, we created this company, and there were no apps.
01:22:27.000 And we were spending all of our afternoons.
01:22:30.000 We told our family we were going to work, but it was just the three of us, who's going to know.
01:22:35.000 And so we went to Curtis's townhouse. Curtis is one of the founders.
01:22:41.000 And Chris and I were both married.
01:22:42.000 We had kids.
01:22:44.000 I already had Spencer and Madison.
01:22:46.000 They were probably two years old.
01:22:48.000 And Chris's kids are about the same age as ours.
01:22:54.000 And we would go to work in this townhouse.
01:22:57.000 But, you know, when you're a startup and the mission statement is the way we described, you're not going to have too many customers calling you.
01:23:05.000 And so we had really nothing to do.
01:23:08.000 And so after lunch, we would always have a great lunch.
01:23:11.000 After lunch, we would go to the arcades and play the Sega games, you know, Virtua Fighter and Daytona and all those games.
01:23:19.000 And analyze how they're doing it, trying to figure out how they were doing that.
01:23:24.000 And so we decided, let's just go to Japan and let's convince Sega to move those applications into the PC.
01:23:35.000 And we would start the PC gaming, the 3D gaming industry, partnering with Sega.
01:23:42.000 That's how NVIDIA started.
01:23:43.000 Wow.
01:23:44.000 And so, in exchange for them developing their games for our computers in the PC, we would build a chip for their game console.
01:23:57.000 That was the partnership.
01:23:59.000 I build a chip for your game console, you port the Sega games to us.
01:24:05.000 And then they paid us, you know, at the time, quite a significant amount of money to build that game console.
01:24:14.000 And that was kind of the beginning of NVIDIA getting started.
01:24:19.000 And we thought we were on our way.
01:24:22.000 And so I started with a business plan, a mission statement that was impossible.
01:24:26.000 We lucked into the Sega partnership.
01:24:28.000 We started taking off, started building our game console.
01:24:33.000 And about a couple years into it, we discovered our first technology didn't work.
01:24:41.000 It would have been a flaw.
01:24:43.000 It was a flaw.
01:24:44.000 And all of the technology ideas that we had, the architecture concepts were sound, but the way we were doing computer graphics was exactly backwards.
01:24:55.000 You know, instead of, I won't bore you with the technology, but instead of inverse texture mapping, we were doing forward texture mapping.
01:25:04.000 Instead of triangles, we did curved surfaces.
01:25:08.000 So other people did it flat.
01:25:10.000 We did it round.
01:25:14.000 The other technology, the technology that ultimately won, the technology we use today, has Z-buffers.
01:25:20.000 It automatically sorted.
01:25:23.000 We had an architecture with no Z buffers.
01:25:25.000 The application had to sort it.
01:25:27.000 And so we made three major technology choices, and all three were wrong.
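For what a Z-buffer actually does, here is a minimal sketch: keep a depth value per pixel and let a simple comparison resolve visibility, so the application never has to sort its triangles. The grid size and "fragments" are illustrative.

```python
# Z-buffer in miniature: only write a new color if the incoming fragment
# is closer to the camera than what's already there. Visibility is
# resolved automatically, regardless of submission order.
import math

WIDTH, HEIGHT = 4, 4
color_buffer = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
z_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x: int, y: int, depth: float, color: str) -> None:
    """Depth test: nearer fragments overwrite farther ones."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        color_buffer[y][x] = color

draw_fragment(1, 1, depth=10.0, color="B")  # far fragment drawn first
draw_fragment(1, 1, depth=2.0, color="A")   # nearer fragment wins anyway
draw_fragment(2, 2, depth=5.0, color="A")
draw_fragment(2, 2, depth=9.0, color="B")   # farther, rejected

for row in color_buffer:
    print(" ".join(row))
```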
01:25:36.000 Okay, so this is how incredibly smart we were.
01:25:40.000 And so in 1995, mid-95, we realized we were going down the wrong path.
01:25:46.000 Meanwhile, the Silicon Valley was packed with 3D graphics startups because it was the most exciting technology of that time.
01:25:57.000 And so 3dfx and Rendition were there, and Silicon Graphics was coming in.
01:26:02.000 Intel was already in there.
01:26:03.000 And, you know, gosh, what added up eventually to a hundred different startups we had to compete against.
01:26:10.000 Everybody had chosen the right technology approach, and we chose the wrong one.
01:26:16.000 And so we were the first company to start.
01:26:19.000 We found ourselves essentially dead last with the wrong answer.
01:26:24.000 And so the company was in trouble.
01:26:30.000 And ultimately, we had to make several decisions.
01:26:36.000 The first decision is: well, if we change now, we will be the last company.
01:26:50.000 And even if we changed into the technology that we believe to be right, we'd still be dead.
01:26:58.000 And so that argument, you know, do we change and therefore be dead?
01:27:05.000 Don't change and make this technology work somehow?
01:27:09.000 Or go do something completely different.
01:27:13.000 That question steered the company strategically, and it was a hard question.
01:27:18.000 I eventually advocated for we don't know what the right strategy is, but we know what the wrong technology is.
01:27:26.000 So let's stop doing it the wrong way and let's give ourselves a chance to go figure out what the strategy is.
01:27:31.000 The second thing, the second problem we had was our company was running out of money and I was in a contract with Sega and I owed them this game console.
01:27:43.000 And if that contract would have been canceled, we'd be dead.
01:27:48.000 We would have vaporized instantly.
01:27:52.000 And so I went to Japan and I explained to the CEO of Sega, Shoichiro Irimajiri, a really great man.
01:28:03.000 He was the former CEO of Honda USA.
01:28:07.000 He went back to Japan to run Sega.
01:28:12.000 And I explained to him that I was, I guess I was, what, 30, 33 years old.
01:28:20.000 You know, when I was 33 years old, I still had acne.
01:28:24.000 And I was this Chinese kid, super skinny.
01:28:30.000 And he was already kind of an elder.
01:28:34.000 And I went to him and I said, listen, I've got some bad news for you.
01:28:40.000 And first, the technology that we promised you doesn't work.
01:28:50.000 And second, we shouldn't finish your contract because we'd waste all your money and you would have something that doesn't work.
01:29:02.000 And I recommend you'd find another partner to build your game console.
01:29:08.000 And so I'm terribly sorry that we've set you back in your product roadmap.
01:29:15.000 And third, even though I'm asking you to let me out of the contract, I still need the money.
01:29:27.000 Because if you didn't give me the money, we'd vaporize overnight.
01:29:35.000 And so I explained it to him humbly, honestly.
01:29:40.000 I gave him the background.
01:29:43.000 I explained to him why the technology doesn't work, why we thought it was going to work, why it doesn't work.
01:29:51.000 And I asked him to convert the last $5 million that they were going to complete the contract to give us that money as an investment instead.
01:30:11.000 And he said, but it's very likely your company will go out of business, even with my investment.
01:30:21.000 And it was completely true.
01:30:23.000 Back then, 1995, $5 million was a lot of money.
01:30:27.000 It's a lot of money today.
01:30:28.000 $5 million was a lot of money.
01:30:30.000 And here's a pile of competitors doing it right.
01:30:34.000 What are the chances that giving NVIDIA $5 million, that we would develop the right strategy, that he would get a return on that $5 million or even get it back?
01:30:43.000 Zero percent.
01:30:45.000 You do the math, it's zero percent.
01:30:49.000 If I were sitting right there, I wouldn't have done it.
01:30:52.000 $5 million was a mountain of money to Sega at the time.
01:30:57.000 And so I told him that if you invested that $5 million in us, it is most likely to be lost.
01:31:12.000 But if you didn't invest that money, we'd be out of business and we would have no chance.
01:31:18.000 And I told him that I don't even know exactly what I said in the end, but I told him that I would understand if he decided not to, but it would make the world to me if he did.
01:31:38.000 He went off and thought about it for a couple days and came back and said, we'll do it.
01:31:41.000 Wow.
01:31:44.000 So you were just trying to strategize how to correct what the company was doing wrong.
01:31:47.000 Did you wait?
01:31:49.000 Oh, man, wait until I tell you the rest of it.
01:31:51.000 It's scary, even scarier.
01:31:54.000 Oh, no.
01:31:57.000 And so what he decided was that Jensen was a young man he liked.
01:32:09.000 That's it.
01:32:10.000 Wow.
01:32:11.000 To this day.
01:32:13.000 That's nuts.
01:32:15.000 Boy, but the world owes that guy.
01:32:18.000 No doubt.
01:32:20.000 Right?
01:32:22.000 He's celebrated today in Japan.
01:32:25.000 And if he would have kept that $5 million investment, I think it'd be worth probably about a trillion dollars today.
01:32:36.000 I know.
01:32:37.000 But the moment we went public, they sold it.
01:32:39.000 They go, wow, that's a miracle.
01:32:41.000 And so they sold it.
01:32:43.000 Yeah, they sold it at an NVIDIA valuation of about $300 million.
01:32:48.000 That's our IPO valuation, $300 million.
01:32:51.000 Wow.
01:32:54.000 And so, anyhow, I was incredibly grateful.
01:33:00.000 And then now we had to figure out what to do because we still were doing the wrong strategy, wrong technology.
01:33:07.000 So unfortunately, we had to lay off most of the company.
01:33:09.000 We shrunk the company all back.
01:33:11.000 All the people working on the game console, you know, we had to shrink it all back.
01:33:17.000 And then somebody told me: but Jensen, we've never built it this way before.
01:33:27.000 We've never built it the right way before.
01:33:30.000 We've only known how to build it the wrong way.
01:33:34.000 And so nobody in the company knew how to build this supercomputing image generator 3D graphics thing that Silicon Graphics did.
01:33:46.000 And so I said, okay, how hard can it be?
01:33:52.000 You got all these 30 companies, 50 companies doing it.
01:33:54.000 How hard can it be?
01:33:56.000 And so, luckily, there was a textbook written by the company, Silicon Graphics.
01:34:04.000 And so I went down to the store.
01:34:06.000 I had 200 bucks in my pocket.
01:34:08.000 And I bought three textbooks, the only three they had, $60 a piece.
01:34:13.000 I bought the three textbooks.
01:34:16.000 I brought it back and I gave one to each one of the architects.
01:34:18.000 And I said, read that and let's go save the company.
01:34:23.000 And so they read this textbook, learned from the giant at the time, Silicon Graphics, about how to do 3D graphics.
01:34:35.000 But the thing that was amazing, and what makes NVIDIA special today, is that the people there are able to start from first principles, learn the best-known art, but re-implement it in a way that's never been done before.
01:34:55.000 And so, when we reimagined the technology of 3D graphics, we reimagined it in a way that manifests today the modern 3D graphics.
01:35:07.000 We really invented modern 3D graphics, but we learned from previous known arts and we implemented it fundamentally differently.
01:35:16.000 What did you do that changed it?
01:35:18.000 Well, you know, ultimately, the simple answer is that the way Silicon Graphics worked, the geometry engine was a bunch of software running on processors.
01:35:34.000 We took that and eliminated all the generality, the general purposeness of it, and we reduced it down into the most essential part of 3D graphics.
01:35:51.000 And we hard-coded it into the chip.
01:35:54.000 And so, instead of something general purpose, we hard-coded it very specifically into just the limited applications, limited functionality necessary for video games.
01:36:08.000 And because we reinvented a whole bunch of stuff, it supercharged the capability of that one little chip.
01:36:17.000 And our one little chip was generating images as fast as a $1 million image generator.
01:36:25.000 That was the big breakthrough.
01:36:26.000 We took a million-dollar thing and we put it into the graphics card that you now put into your gaming PC.
01:36:34.000 And that was our big invention.
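A toy contrast of that design choice: a general-purpose engine that interprets arbitrary operations next to a hard-coded path that does exactly the one transform games need. The names and operations are made up for illustration; in silicon, the fixed-function win was far larger than this Python gap suggests.

```python
# General vs. fixed-function, in miniature.
import timeit

def general_engine(program, vertex):
    """Interprets a list of (op, args): flexible, like a software geometry
    engine running on a general-purpose processor."""
    x, y = vertex
    for op, (a, b) in program:
        if op == "scale":
            x, y = x * a, y * b
        elif op == "translate":
            x, y = x + a, y + b
    return x, y

def hardcoded_transform(vertex):
    """The one fixed pipeline video games need, baked in: scale then
    translate. Inflexible, but there is nothing left to interpret."""
    x, y = vertex
    return x * 2.0 + 10.0, y * 2.0 + 10.0

program = [("scale", (2.0, 2.0)), ("translate", (10.0, 10.0))]
assert general_engine(program, (1.0, 1.0)) == hardcoded_transform((1.0, 1.0))

print(timeit.timeit(lambda: general_engine(program, (1.0, 1.0)), number=100_000))
print(timeit.timeit(lambda: hardcoded_transform((1.0, 1.0)), number=100_000))
```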
01:36:36.000 And then, and of course, the question is: how do you compete against these 30 other companies doing what they were doing?
01:36:46.000 And there we did several things.
01:36:50.000 One, instead of building a 3D graphics chip for every 3D graphics application, we decided to build a 3D graphics chip for one application.
01:37:02.000 We bet the farm on video games.
01:37:06.000 The needs of video games are very different than needs for CAD, needs for flight simulators.
01:37:11.000 They're related but not the same.
01:37:13.000 And so we narrowly focused our problem statement so I could reject all of the other complexities, and we shrunk it down into this one little focus, and then we supercharged it for gamers.
01:37:25.000 And the second thing that we did was we created a whole ecosystem of working with game developers and getting their games ported and adapted to our silicon so that we could turn essentially what is a technology business into a platform business, into a game platform business.
01:37:45.000 So GeForce is really today, it's also the most advanced 3D graphics technology in the world.
01:37:52.000 But a long time ago, GeForce is really the game console inside your PC.
01:37:58.000 It runs Windows, it runs Excel, it runs PowerPoint, of course, those are easy things.
01:38:03.000 But its fundamental purpose was simply to turn your PC into a game console.
01:38:08.000 So we were the first technology company to build all of this incredible technology in service of one audience, gamers.
01:38:18.000 Now, of course, in 1993, the gaming industry didn't exist.
01:38:23.000 But by the time that John Carmack came along and the Doom phenomenon happened, and then Quake came out, as you know, that entire world, that entire community boom, took off.
01:38:38.000 Do you know where the name Doom came from?
01:38:40.000 It came from a scene in the movie The Color of Money, where Tom Cruise, who's this elite pool player, shows up at this pool hall, and this local hustler says, what do you got in the case?
01:38:52.000 And he opens up this case.
01:38:53.000 He has a special pool cue.
01:38:54.000 And he opens it up, and he goes: Doom.
01:38:57.000 Doom.
01:38:58.000 And that's where it came from.
01:38:59.000 Yeah, because Carmack said that's what they wanted to do to the gaming industry.
01:38:59.000 That's right.
01:39:02.000 Doom.
01:39:03.000 That when Doom came out, it would just be everybody would be like, oh, we're fucked.
01:39:07.000 Oh, wow.
01:39:07.000 This is Doom.
01:39:08.000 That's awesome.
01:39:09.000 Isn't that amazing?
01:39:09.000 That's amazing.
01:39:10.000 Because it's the perfect name for the game.
01:39:11.000 Yeah.
01:39:12.000 And the name came out of that scene in that movie.
01:39:14.000 That's right.
01:39:15.000 Well, and then, of course, Tim Sweeney and Epic Games and the 3D gaming genre took off.
01:39:25.000 And so if you just kind of, in the beginning, was no gaming industry.
01:39:25.000 Yes.
01:39:30.000 We had no choice but to focus the company on one thing.
01:39:34.000 That one thing.
01:39:35.000 It's a really incredible origin story.
01:39:38.000 It's amazing.
01:39:39.000 Like, you must be like a disaster.
01:39:42.000 $5 million, that pivot, with that conversation with that gentleman, if he did not agree to that, if he did not like you, what would the world look like today?
01:39:51.000 That's crazy.
01:39:51.000 Oh, wait.
01:39:52.000 Then our entire life hung on another gentleman.
01:39:58.000 And so now, here we are.
01:40:00.000 We built.
01:40:01.000 So before GeForce, it was RIVA 128.
01:40:04.000 RIVA 128 saved the company.
01:40:06.000 It revolutionized computer graphics.
01:40:09.000 The performance, cost-performance ratio of 3D graphics for gaming was off the charts amazing.
01:40:17.000 And we're getting ready to ship it.
01:40:24.000 We're building it.
01:40:27.000 So, as you know, $5 million doesn't last long.
01:40:31.000 And so every single month, every single month, we were drawing down.
01:40:39.000 You have to design it, build it, prototype it, get the silicon back, which costs a lot of money.
01:40:50.000 Test it with software.
01:40:53.000 Because without the software testing the chip, you don't know the chip works.
01:40:58.000 And then you're going to find a bug, probably, because every time you test something, you find bugs, which means you have to tape it out again, which is more time, more money.
01:41:11.000 And so we did the math.
01:41:12.000 There was no chance we were going to survive it.
01:41:15.000 We didn't have that much time to tape out a chip, send it to a foundry, TSMC, get the silicon back, test it, send it back out again.
01:41:23.000 There was no shot, no hope.
01:41:27.000 And so the math, the spreadsheet, doesn't allow us to do that.
01:41:32.000 And so I heard about this company, and this company built this machine.
01:41:40.000 And this machine is an emulator.
01:41:43.000 You could take your design, all of the software that describes the chip, and you could put it into this machine.
01:41:54.000 And this machine will pretend it's our chip.
01:41:57.000 So I don't have to send it to the fab, wait until the fab sends it back.
01:42:01.000 I could have this machine pretend it's our chip, and I could put all of the software on top of this machine called an emulator and test all of the software on this pretend chip, and I could fix it all before I send it to the fab.
01:42:19.000 And if I could do that, when I send it to the fab, it should work.
01:42:24.000 Nobody knows, but it should work.
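The methodology in miniature: run the same tests the silicon will face against a software model of the chip, and fix everything before tape-out. The opcodes and tests here are made up for illustration.

```python
# Pre-silicon verification, as described above: the driver and test
# software run against a model of the chip instead of waiting for the fab.

def chip_model(opcode: str, a: int, b: int) -> int:
    """Stand-in for the emulated chip: the same interface the real
    silicon will expose once it comes back from the foundry."""
    if opcode == "ADD":
        return a + b
    if opcode == "MUL":
        return a * b
    raise ValueError(f"unknown opcode: {opcode}")

# The same test suite that will run on first silicon runs on the model:
tests = [("ADD", 2, 3, 5), ("MUL", 4, 5, 20)]
for op, a, b, expected in tests:
    assert chip_model(op, a, b) == expected  # fix bugs here, not after tape-out

print("All pre-silicon tests passed; tape out and go straight to production.")
```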
01:42:27.000 And so we came to the conclusion that let's take half of the money we had left in the bank.
01:42:35.000 At the time, it was about a million dollars.
01:42:38.000 Take half of that money and go buy this machine.
01:42:42.000 So instead of keeping the money to stay alive, I took half of the money to go buy this machine.
01:42:47.000 Well, I called this guy up.
01:42:49.000 The company's called Icos.
01:42:52.000 Call this company up and I said, hey, listen, I heard about this machine.
01:42:56.000 I like to buy one.
01:42:59.000 And they go, oh, that's terrific, but we're out of business.
01:43:04.000 I said, what?
01:43:05.000 You're out of business.
01:43:07.000 He goes, yeah, we had no customers.
01:43:12.000 And I said, wait, hang on a second.
01:43:14.000 So you never made the machine?
01:43:16.000 They said, no, no, no, we made the machine.
01:43:18.000 We have one in inventory if you want it, but we're out of business.
01:43:22.000 So I bought one out of inventory.
01:43:26.000 Okay?
01:43:27.000 After I bought it, they went out of business.
01:43:29.000 Wow.
01:43:30.000 I bought it out of inventory.
01:43:32.000 And on this machine, we put NVIDIA's chip into it.
01:43:37.000 And we tested all of the software on top.
01:43:41.000 And at this point, we were on fumes.
01:43:45.000 But we convinced ourselves that chip is going to be great.
01:43:49.000 And so I had to call some other gentleman.
01:43:51.000 So I called TSMC.
01:43:54.000 And I told TSMC, listen. TSMC is the world's largest foundry today.
01:43:59.000 At the time, they were just a few hundred million dollars large.
01:44:06.000 Tiny little company.
01:44:12.000 And I explained to them what we were doing.
01:44:14.000 And I told them I had a lot of customers.
01:44:19.000 I had one, you know, Diamond Multimedia, probably one of the companies you bought the graphics card from back in the old days.
01:44:27.000 And I said, you know, we have a lot of customers and the demand's really great.
01:44:32.000 And we're going to tape out a chip to you.
01:44:37.000 And I'd like to go directly to production, because I know it works.
01:44:45.000 And they said, nobody has ever done that before.
01:44:50.000 Nobody has ever taped out a chip that worked the first time.
01:44:54.000 And nobody starts out production without looking at it.
01:44:58.000 But I knew that if I didn't start the production, I'd be out of business anyways.
01:45:03.000 And if I could start the production, I might have a chance.
01:45:08.000 And so TSMC decided to support me.
01:45:14.000 And this gentleman is named Morris Chang.
01:45:17.000 Morris Chang is the father of the foundry industry, the founder of TSMC, really great man.
01:45:27.000 He decided to support our company.
01:45:29.000 I explained to them everything.
01:45:32.000 He decided to support us, frankly, probably because they didn't have that many other customers anyhow, but they were grateful.
01:45:39.000 And I was immensely grateful.
01:45:42.000 And as we were starting the production, Morris flew to the United States, and, not in so many words, he asked me a whole lot of questions trying to tease out: do I have any money?
01:46:00.000 But he didn't directly ask me that, you know.
01:46:03.000 And so the truth is that we didn't have all the money.
01:46:08.000 But we had a strong PO from the customer.
01:46:11.000 And if it didn't work, some wafers would have been lost.
01:46:18.000 I'm not exactly sure what would have happened, but we would have come short.
01:46:23.000 It would have been rough.
01:46:24.000 But they supported us with all of that risk involved.
01:46:28.000 We launched this chip.
01:46:30.000 Turns out to have been completely revolutionary.
01:46:35.000 Knocked the ball out of the park.
01:46:37.000 We became the fastest growing technology company in history to go from zero to one billion dollars.
01:46:44.000 So wild that you didn't test the chip.
01:46:47.000 I know.
01:46:47.000 We tested it afterwards.
01:46:48.000 Yeah, we tested it afterwards.
01:46:50.000 Afterwards, but it had gone into production already.
01:46:54.000 But by the way, that methodology that we developed to save the company is used throughout the world today.
01:47:02.000 That's amazing.
01:47:03.000 Yeah, we changed the whole world's methodology of designing chips, the whole world's rhythm of designing chips.
01:47:11.000 We changed everything.
01:47:13.000 How well did you sleep those days?
01:47:15.000 It must have been so much stress.
01:47:19.000 You know, what is that feeling where the world just kind of feels like it's flying?
01:47:31.000 You have this, what do you call that feeling?
01:47:35.000 You can't stop the feeling that everything's moving super fast.
01:47:40.000 And, you know, you're laying in bed and the world just feels like, you know, and you feel deeply anxious, completely out of control.
01:47:56.000 I've felt that probably a couple of times in my life.
01:48:00.000 It's during that time.
01:48:02.000 Wow.
01:48:03.000 It was incredible.
01:48:03.000 Yeah.
01:48:05.000 What an incredible success story.
01:48:06.000 But I learned a lot.
01:48:08.000 I learned about, I learned several things.
01:48:10.000 I learned how to develop strategies.
01:48:14.000 I learned how to, and our company learned how to develop strategies.
01:48:21.000 What are winning strategies?
01:48:22.000 We learned how to create a market.
01:48:24.000 We created the modern 3D gaming market.
01:48:28.000 We learned how, and so that exact same skill is how we created the modern AI market.
01:48:35.000 It's exactly the same.
01:48:37.000 Yeah, exactly the same skill, exactly the same blueprint.
01:48:37.000 Wow.
01:48:42.000 And we learned how to deal with crisis, how to stay calm, how to think through things systematically.
01:48:52.000 We learned how to remove all waste in the company and work from first principles and doing only the things that are essential.
01:49:01.000 Everything else is waste because we have no money for it.
01:49:05.000 To live on fumes at all times.
01:49:09.000 And the feeling, no different than the feeling I had this morning when I woke up, that you're going to be out of business soon.
01:49:18.000 That you're, you know, the phrase, 30 days from going out of business, I've used for 33 years.
01:49:24.000 Do you still feel that?
01:49:25.000 Oh, yeah, oh, yeah.
01:49:26.000 Every morning.
01:49:27.000 Every morning.
01:49:28.000 But you guys are one of the biggest companies on planet Earth.
01:49:31.000 But the feeling doesn't change.
01:49:33.000 Wow.
01:49:34.000 The sense of vulnerability, the sense of uncertainty, the sense of insecurity, it doesn't leave you.
01:49:43.000 That's crazy.
01:49:46.000 We had nothing.
01:49:47.000 We had nothing.
01:49:48.000 We were dealing with that.
01:49:49.000 We were still shining.
01:49:50.000 Oh, yeah.
01:49:50.000 Oh, yeah.
01:49:51.000 Every day.
01:49:52.000 Do you think that fuels you?
01:49:52.000 Every moment.
01:49:54.000 Is that part of the reason why the company's so successful?
01:49:57.000 That you have that hungry mentality?
01:50:04.000 You never rest.
01:50:05.000 You're never sitting on your laurels.
01:50:07.000 You're always on the edge.
01:50:12.000 I have a greater drive from not wanting to fail than the drive of wanting to succeed.
01:50:25.000 Isn't that like a bad thing?
01:50:26.000 Coaches would tell you that's completely the wrong psychology.
01:50:29.000 The world has just heard me say that out loud for the first time.
01:50:33.000 But it's true.
01:50:35.000 Well, that's how fast.
01:50:36.000 Fear of failure drives me more than the greed or whatever it is.
01:50:43.000 Well, ultimately, that's probably a more healthy approach, now that I'm thinking about it.
01:50:48.000 Because, like, the fear— I'm not ambitious, for example.
01:50:52.000 You know?
01:50:53.000 I just want to stay alive, Joe.
01:50:55.000 I want the company to thrive, you know?
01:50:57.000 I want us to make an impact.
01:50:59.000 That's interesting.
01:51:01.000 Well, maybe that's why you're so humble.
01:51:03.000 Maybe that's what keeps you grounded.
01:51:06.000 Because with the kind of spectacular success the company's achieved, it would be easy to get a big head.
01:51:11.000 Right?
01:51:11.000 No.
01:51:12.000 But isn't that interesting?
01:51:13.000 It's like, if you were the guy that your main focus is just success, you probably would go, well, made it, nailed it, I'm the man.
01:51:24.000 Drop the mic.
01:51:25.000 It's that you wake up, you're like, God, we can't fuck this up.
01:51:28.000 No, exactly.
01:51:29.000 Every morning.
01:51:29.000 Every morning.
01:51:30.000 No, every moment.
01:51:32.000 That's crazy.
01:51:32.000 That's good.
01:51:33.000 Before I go to bed.
01:51:34.000 Well, listen, if I was a major investor in your company, that's who I'd want running it.
01:51:38.000 I'd want a guy who's working.
01:51:40.000 Yeah.
01:51:41.000 That's why I work.
01:51:43.000 That's why I work seven days a week every moment I'm awake.
01:51:47.000 You work every moment.
01:51:48.000 Every moment I'm awake.
01:51:50.000 Wow.
01:51:51.000 I'm thinking about solving a problem.
01:51:53.000 I'm thinking about it.
01:51:55.000 How long can you keep this up?
01:51:57.000 I don't know, but it could be next week.
01:52:00.000 It sounds exhausting.
01:52:02.000 It is exhausting.
01:52:03.000 It sounds completely exhausting.
01:52:05.000 Always in a state of anxiety.
01:52:07.000 Wow.
01:52:08.000 Wow.
01:52:08.000 Always in a state of anxiety.
01:52:10.000 Well, kudos to you for admitting that.
01:52:12.000 I think that's important for a lot of people to hear because, you know, there's probably some young people out there that are in a similar position to where you were when you were starting out that just feel like, oh, those people that have made it, they're just smarter than me and they had more opportunities than me.
01:52:30.000 And it's just like it was handed to them or they're just in the right place at the right time.
01:52:35.000 Joe, I just described to you somebody who didn't know what was going on.
01:52:39.000 Actually did it wrong.
01:52:41.000 Yeah.
01:52:41.000 Yeah.
01:52:42.000 And the ultimate diving catch like two or three times.
01:52:45.000 Crazy.
01:52:46.000 The ultimate diving catch is the perfect way to put it.
01:52:46.000 Yeah.
01:52:50.000 You know, it's just like the edge of your glove.
01:52:53.000 It probably bounced off of somebody's helmet and landed at the edge of your glove.
01:53:00.000 God, that's incredible.
01:53:02.000 That's incredible, but it's also, it's really cool that you have this perspective, that you look at it that way.
01:53:08.000 Because, you know, a lot of people that have delusions of grandeur or they have, you know.
01:53:15.000 And their inflated egos.
01:53:16.000 And their rewriting of history oftentimes has them somehow extraordinarily smart: they were geniuses, they knew all along, and they were spot on.
01:53:28.000 The business plan was exactly what they thought.
01:53:31.000 Yeah.
01:53:31.000 They destroyed the competition and, you know, and they emerged victorious.
01:53:39.000 Meanwhile, you're like, I'm scared every day.
01:53:41.000 Exactly.
01:53:43.000 Exactly.
01:53:44.000 It's so funny.
01:53:46.000 Oh, my God.
01:53:47.000 That's amazing.
01:53:47.000 It's so true, though.
01:53:48.000 It's amazing.
01:53:49.000 It's so true.
01:53:50.000 It's amazing.
01:53:51.000 Well, but I think there's nothing inconsistent with being a leader and being vulnerable.
01:53:58.000 You know, the company doesn't need me to be a genius right all along, right all the time.
01:54:06.000 Or absolutely certain about what I'm trying to do and what I'm doing.
01:54:09.000 The company doesn't need that.
01:54:11.000 The company wants me to succeed.
01:54:13.000 You know, the thing that, and we started out today talking about President Trump, and I was about to say something.
01:54:20.000 And listen, he is my president.
01:54:23.000 He is our president.
01:54:25.000 We should all... and yet we act like, just because it's President Trump, we all want him to be wrong.
01:54:30.000 I think in the United States, we all have to realize he is our president.
01:54:35.000 We want him to succeed.
01:54:37.000 No matter who's president, that's right.
01:54:40.000 That's right.
01:54:41.000 We want him to succeed.
01:54:42.000 We need to help him succeed because it helps everybody, all of us succeed.
01:54:48.000 And I'm lucky that I work in a company where I have 40,000 people who want me to succeed.
01:54:58.000 They want me to succeed and I can tell.
01:55:00.000 And they're all working every single day to help me overcome these challenges, doing their best, trying to realize what I describe to be our strategy.
01:55:11.000 And if it's somehow wrong or not perfectly right, to tell me so that we could pivot.
01:55:19.000 And the more vulnerable we are as a leader, the more able other people are to tell you, you know, Jensen, that's not exactly right.
01:55:28.000 Right.
01:55:29.000 Have you considered this information?
01:55:31.000 And the more vulnerable we are, the more we're actually able to pivot.
01:55:37.000 If we put ourselves up as having this superhuman capability, then it's hard for us to pivot strategy.
01:55:42.000 Right.
01:55:43.000 Because we were supposed to be right all along.
01:55:45.000 And so if you're always right, how can you possibly pivot?
01:55:49.000 Because pivoting requires you to be wrong.
01:55:51.000 And so I've got no trouble with being wrong.
01:55:53.000 I just have to make sure that I stay alert, that I reason about things from first principles all the time.
01:56:01.000 Always break things down to first principles, understand why it's happening, reassess continuously.
01:56:08.000 The reassessing continuously is kind of partly what causes continuous anxiety.
01:56:13.000 You know, because you're asking yourself, were you wrong yesterday?
01:56:17.000 Are you still right?
01:56:18.000 Is this the same?
01:56:19.000 Has that changed?
01:56:21.000 Has that condition changed? Is that worse than you thought?
01:56:23.000 But God, that mindset is perfect for your business, though, because this business is ever changing.
01:56:29.000 And you've got competition coming from every direction.
01:56:29.000 All the time.
01:56:32.000 So much of it is kind of up in the air.
01:56:37.000 And you have to invent a future where 100 variables are included.
01:56:43.000 And there's no way you could be right on all of them.
01:56:46.000 And so you have to be, you have to surf.
01:56:49.000 Wow.
01:56:50.000 You have to surf.
01:56:50.000 That's a good way to put it.
01:56:51.000 You have to surf.
01:56:52.000 Yeah.
01:56:53.000 You're surfing waves of technology and innovation.
01:56:55.000 That's right.
01:56:56.000 You can't predict the waves.
01:56:58.000 You've got to deal with the ones you have.
01:56:59.000 Wow.
01:57:01.000 But skill matters.
01:57:03.000 And I've been doing this for 30 years.
01:57:04.000 I'm the longest running tech CEO in the world.
01:57:07.000 Is that true?
01:57:07.000 Congratulations.
01:57:08.000 That's amazing.
01:57:10.000 You know, people ask me how. One, don't get fired.
01:57:16.000 It could all stop in a heartbeat.
01:57:19.000 And then two, don't get bored.
01:57:22.000 Well, how do you maintain your enthusiasm?
01:57:22.000 Yeah.
01:57:28.000 The honest truth is it's not always enthusiasm.
01:57:31.000 It's, you know, sometimes it's enthusiasm.
01:57:33.000 Sometimes it's just good old-fashioned fear.
01:57:36.000 And then sometimes, you know, a healthy dose of frustration.
01:57:40.000 You know, it's whatever keeps you moving.
01:57:43.000 Yeah, just all the emotions.
01:57:45.000 I think, you know, CEOs, we have all the emotions, right?
01:57:49.000 You know, and so probably jacked up to the maximum because you're kind of feeling it on behalf of the whole company.
01:57:58.000 I'm feeling it on behalf of everybody at the same time.
01:58:02.000 And it all kind of, you know, gets encapsulated in one person.
01:58:06.000 And so I have to be mindful of the past.
01:58:09.000 I have to be mindful of the present.
01:58:10.000 I've got to be mindful of the future.
01:58:12.000 And, you know, it's not without emotion.
01:58:19.000 It's not just a job.
01:58:20.000 Let's just put it that way.
01:58:22.000 That doesn't seem like it at all.
01:58:25.000 I would imagine one of the more difficult aspects of your job currently, now that the company is massively successful, is anticipating where technology is headed and where the applications are going to be.
01:58:37.000 So how do you try to map that out?
01:58:40.000 Yeah, there's a whole bunch of ways.
01:58:45.000 And it takes a whole bunch of things.
01:58:51.000 But let me just start.
01:58:54.000 You have to be surrounded by amazing people.
01:58:56.000 And NVIDIA is different now. If you look at the large tech companies in the world today, most of them have a business in advertising or social media or content distribution.
01:59:14.000 And at the core of it is really fundamental computer science.
01:59:20.000 And so those companies' business is not computers.
01:59:23.000 Their business is not technology.
01:59:25.000 Technology drives the company.
01:59:27.000 NVIDIA is the only company in the world that's large whose only business is technology.
01:59:33.000 We only build technology.
01:59:34.000 We don't advertise.
01:59:35.000 The only way that we make money is to create amazing technology and sell it.
01:59:40.000 And so to be that, to be NVIDIA today, the number one thing is you're surrounded by the finest computer scientists in the world.
01:59:51.000 And that's my gift.
01:59:52.000 My gift is that we've created a company culture, a condition by which the world's greatest computer scientists want to be part of it.
02:00:02.000 Because they get to do their life's work and create the next thing.
02:00:06.000 Because that's what they want to do.
02:00:08.000 Because maybe they don't want to be in service of another business.
02:00:13.000 They want to be in service of the technology itself.
02:00:16.000 And we're the largest firm of its kind in the history of the world.
02:00:19.000 Wow.
02:00:20.000 I know.
02:00:20.000 It's pretty amazing.
02:00:21.000 Wow.
02:00:22.000 And so one, you know, we have got a great condition.
02:00:27.000 We have a great culture.
02:00:28.000 We have great people.
02:00:30.000 And now the question is: how do you systematically see the future, stay alert to it, and reduce the likelihood of missing something or being wrong?
02:00:53.000 And so there's a lot of different ways you could do that.
02:00:55.000 For example, we have great partnerships.
02:00:56.000 We have fundamental research.
02:00:58.000 We have a great research lab, one of the largest industrial research labs in the world today.
02:01:03.000 And we partner with a whole bunch of universities and other scientists.
02:01:07.000 We do a lot of open collaboration.
02:01:09.000 And so I'm constantly working with researchers outside the company.
02:01:15.000 We have the benefit of having amazing customers.
02:01:19.000 And so I have the benefit of working with Elon and others in the industry.
02:01:24.000 And we have the benefit of being the only pure play technology company that can serve consumer internet, industrial manufacturing, scientific computing, healthcare, financial services, all the industries that we're in, they're all signals to me.
02:01:45.000 And so they all have mathematicians and scientists.
02:01:49.000 And so I have the benefit now of a radar system that is the broadest of any company in the world, working across every single industry, from agriculture to energy to video games.
02:02:05.000 And so the ability for us to have this vantage point, one, doing fundamental research ourselves, and then two, working with all the great researchers, working with all the great industries, the feedback system is incredible.
02:02:20.000 And then finally, you just have to have a culture of staying super alert.
02:02:25.000 There's no easy way of being alert except for paying attention.
02:02:31.000 I haven't found a single way of being able to stay alert without paying attention.
02:02:35.000 And so, you know, I probably read several thousand emails a day.
02:02:41.000 How?
02:02:43.000 How do you have time for that?
02:02:44.000 I wake up early.
02:02:45.000 This morning, I was up at 4 o'clock.
02:02:47.000 How much do you sleep?
02:02:51.000 6, 7 hours.
02:02:53.000 Yeah.
02:02:54.000 And then you're up at 4, read emails for a few hours before you get going.
02:02:57.000 That's right, yeah.
02:02:58.000 Wow.
02:03:00.000 Every day.
02:03:01.000 Every single day.
02:03:02.000 Not one day missed.
02:03:05.000 Including Thanksgiving, Christmas.
02:03:06.000 Do you ever take a vacation?
02:03:10.000 Yeah, but my definition of a vacation is when I'm with my family.
02:03:15.000 And so if I'm with my family, I'm very happy.
02:03:18.000 I don't care where we are.
02:03:19.000 And you don't work then?
02:03:20.000 Or do you work little?
02:03:21.000 No, no, I work a lot.
02:03:24.000 Even like if you go on a trip somewhere, you're still working.
02:03:27.000 Oh, sure.
02:03:28.000 Oh, sure.
02:03:29.000 Wow.
02:03:29.000 Every day.
02:03:30.000 Every day.
02:03:32.000 But my kids work every day.
02:03:33.000 You make me tired just saying this.
02:03:34.000 My kids work every day.
02:03:37.000 Both of my kids work at NVIDIA.
02:03:39.000 They work every day.
02:03:40.000 Wow.
02:03:40.000 Yeah, I'm very lucky.
02:03:42.000 Wow.
02:03:43.000 It's brutal now because, you know, it's not just me working every day.
02:03:43.000 Yeah.
02:03:47.000 Now we have three people working every day, and they want to work with me every day.
02:03:51.000 And so it's a lot of work.
02:03:54.000 Well, you've obviously imparted that ethic into them.
02:03:58.000 They work incredibly hard.
02:03:59.000 I mean, it's not unbelievable.
02:04:01.000 Because my parents worked incredibly hard.
02:04:04.000 Yeah.
02:04:05.000 I was born with the work gene, the suffering gene.
02:04:10.000 Well, listen, man, it has paid off.
02:04:13.000 What a crazy story.
02:04:15.000 It's really an amazing origin story.
02:04:18.000 It really... I mean, it has to be kind of surreal to be in the position that you're in now, when you look back at how many times it could have fallen apart, and at the humble beginnings.
02:04:28.000 But Joe, this is great.
02:04:29.000 It's a great country.
02:04:31.000 I'm an immigrant.
02:04:32.000 My parents sent my older brother and me here first.
02:04:37.000 We were in Thailand.
02:04:39.000 I was born in Taiwan, but my dad had a job in Thailand.
02:04:44.000 He was a chemical and instrumentation engineer, incredible engineer.
02:04:50.000 And his job was to go start an oil refinery.
02:04:53.000 And so we moved to Thailand, lived in Bangkok.
02:04:56.000 And in, I guess, the 1973, 1974 timeframe, you know how Thailand, every so often, they would just have a coup.
02:05:08.000 You know, the military would have an uprising.
02:05:11.000 And all of a sudden, one day, there were tanks and soldiers in the streets.
02:05:15.000 And my parents thought, you know, it probably isn't safe for the kids to be here.
02:05:19.000 And so they contacted my uncle.
02:05:22.000 My uncle lives in Tacoma, Washington.
02:05:25.000 And we had never met him.
02:05:28.000 And my parents sent us to him.
02:05:30.000 How old are you?
02:05:32.000 I was about to turn nine, and my older brother was about to turn 11.
02:05:38.000 And so the two of us came to the United States.
02:05:42.000 And we stayed with our uncle for a little bit while he looked for a school for us.
02:05:49.000 And my parents didn't have very much money.
02:05:51.000 And they'd never been to the United States.
02:05:53.000 My father was, I'll tell you that story in a second.
02:05:57.000 And so my uncle found a school that would accept foreign students and affordable enough for my parents.
02:06:11.000 And that school turned out to have been in Oneida, Kentucky, in Clay County, Kentucky, the epicenter of the opioid crisis today.
02:06:21.000 Coal country.
02:06:24.000 Clay County, Kentucky was the poorest county in America when I showed up.
02:06:32.000 It is the poorest county in America today.
02:06:36.000 And so we went to the school.
02:06:37.000 It's a great school.
02:06:40.000 Oneida Baptist Institute in a town of a few hundred.
02:06:45.000 I think it was 600 at the time that we showed up.
02:06:49.000 No traffic light.
02:06:51.000 And I think it has 600 today.
02:06:54.000 It's kind of an amazing feat, actually.
02:06:58.000 The ability to hold your population at 600 people is quite a magical thing, however they did it.
02:07:08.000 And so the school had a mission of being an open school for any children who'd like to come.
02:07:20.000 And what that basically means is that if you're a troubled student, if you have a troubled family, whatever your background, you're welcome to come to Oneida Baptist Institute, including international kids who would like to stay there.
02:07:44.000 Did you speak English at the time?
02:07:46.000 Okay, yeah.
02:07:47.000 Yeah, okay, yeah.
02:07:49.000 And so we showed up.
02:07:53.000 And my first thought was, gosh, there are a lot of cigarette butts on the ground.
02:08:03.000 100% of the kids smoked.
02:08:09.000 So right away, you know, this is not a normal school.
02:08:11.000 Nine-year-olds?
02:08:12.000 No, I was the youngest kid.
02:08:14.000 11-year-olds.
02:08:14.000 Okay.
02:08:15.000 My roommate was 17 years old.
02:08:18.000 Wow.
02:08:19.000 Yeah, he just turned 17.
02:08:21.000 And he was jacked.
02:08:23.000 And I don't know where he is now.
02:08:30.000 I know his name, but I don't know where he is now.
02:08:32.000 But anyways, that night we got – and the second thing I noticed when you walk into your dorm room is there are no drawers and no closet doors.
02:08:47.000 Just like a prison.
02:08:50.000 And there are no locks so that people could check up on you.
02:08:58.000 And so I go into my room, and he's 17, and get ready for bed.
02:09:06.000 And he had all this tape all over his body.
02:09:11.000 And it turned out he was in a knife fight, and he's been stabbed all over his body, and these were just fresh wounds.
02:09:20.000 And the other kids were hurt much worse.
02:09:24.000 And so he was my roommate, the toughest kid in school.
02:09:28.000 And I was the youngest kid in school.
02:09:32.000 It was a junior high, but they took me anyways because if I walked about a mile across the Kentucky River, the swing bridge, the other side is a middle school that I could go to.
02:09:47.000 And then I can go to that school and I come back and I stay in the dorm.
02:09:53.000 And so basically, Oneida Baptist Institute was my dorm when I went to this other school.
02:09:58.000 My older brother went to the junior high.
02:10:02.000 And so we were there for a couple of years.
02:10:06.000 Every kid had chores.
02:10:09.000 My older brother's chore was to work in the tobacco farm.
02:10:13.000 You know, so tobacco, they raised tobacco so they could raise some extra money for the school, kind of like a penitentiary.
02:10:19.000 Wow.
02:10:20.000 And my job was just to clean the dorm.
02:10:22.000 And so I was nine years old.
02:10:25.000 I was cleaning toilets for a dorm of 100 boys.
02:10:33.000 I cleaned more bathrooms than anybody.
02:10:35.000 And I just wish that everybody was a little bit more careful.
02:10:42.000 But anyways, I was the youngest kid in school.
02:10:45.000 My memories of it were really good.
02:10:48.000 But it was a pretty tough, it was a tough town.
02:10:50.000 Towns like that.
02:10:51.000 Yeah, town kids, they all carried, everybody had knives.
02:10:55.000 Everybody had knives.
02:10:56.000 Everybody smoked.
02:10:58.000 Everybody had a Zippo lighter.
02:11:00.000 I smoked for a week.
02:11:01.000 Did you?
02:11:02.000 Oh, yeah, sure.
02:11:02.000 How old were you?
02:11:03.000 I was nine.
02:11:04.000 When you were nine, you were nine, you tried smoking.
02:11:06.000 Yeah, I got myself a pack of cigarettes.
02:11:08.000 Everybody else did.
02:11:09.000 Did you get sick?
02:11:10.000 No, I got used to it.
02:11:12.000 And I learned how to blow smoke rings and, you know, breathe out of my nose, you know, take it in through my nose.
02:11:12.000 Yeah.
02:11:22.000 There are all these different things that you learn.
02:11:25.000 Yeah.
02:11:26.000 At nine.
02:11:27.000 Wow.
02:11:27.000 Yeah.
02:11:28.000 You just did it to fit in or look cool?
02:11:29.000 Yeah, because everybody else did it.
02:11:31.000 Right.
02:11:31.000 Yeah.
02:11:32.000 And then I did it for a couple weeks, I guess.
02:11:35.000 And I just... I had a quarter, you know, a quarter a month or something like that.
02:11:43.000 I'd just rather buy popsicles and Fudgsicles with it.
02:11:47.000 I was nine, you know.
02:11:48.000 Right.
02:11:49.000 I chose the better path.
02:11:52.000 Wow.
02:11:53.000 That was our school.
02:11:54.000 And then my parents came to the United States two years later.
02:11:57.000 And we met them in Tacoma, Washington.
02:12:01.000 That's wild.
02:12:03.000 It was a really crazy experience.
02:12:05.000 What a strange, formative experience.
02:12:08.000 Yeah, tough kids.
02:12:10.000 Thailand to one of the poorest places in America, if not the poorest, as a nine-year-old.
02:12:19.000 That was my first time away, with my brother.
02:12:22.000 Wow.
02:12:23.000 Yeah.
02:12:24.000 Yeah.
02:12:24.000 No, I do remember.
02:12:26.000 And what breaks my heart, probably the only thing that really breaks my heart about that experience, was that we didn't have enough money to make international phone calls every week.
02:12:42.000 And so my parents gave us this tape deck, this Aiwa tape deck, and a tape.
02:12:51.000 And so every month we would sit in front of that tape deck, and my older brother, Jeff, and I, the two of us, would just tell them what we did the whole month.
02:13:04.000 Wow.
02:13:06.000 And we would send that tape by mail.
02:13:09.000 And my parents would take that tape and record back on top of it and send it back to us.
02:13:17.000 Could you imagine if that tape still existed, of these two kids over two years just describing their first experience of the United States?
02:13:17.000 Wow.
02:13:28.000 Like I remember telling my parents that I joined the swim team, and my roommate was really buff, and so every day we spent a lot of time in the gym, and so every night, 100 push-ups, 100 sit-ups, every day in the gym.
02:13:50.000 So I was nine years old.
02:13:51.000 I was getting pretty buff.
02:13:54.000 And I'm pretty fit.
02:13:55.000 And so I joined the soccer team.
02:14:00.000 I joined the swim team because if you join the team, they take you to meets, and then afterwards, you get to go to a nice restaurant.
02:14:09.000 And that nice restaurant was McDonald's.
02:14:11.000 Wow.
02:14:12.000 And I recorded this thing.
02:14:15.000 I said, Mom and Dad, we went to the most amazing restaurant today.
02:14:20.000 This whole place is lit up.
02:14:22.000 It's like the future.
02:14:24.000 And the food comes in a box.
02:14:31.000 And the food is incredible.
02:14:32.000 The hamburger is incredible.
02:14:33.000 It was McDonald's.
02:14:35.000 But anyhow, wouldn't it be amazing?
02:14:38.000 Oh, my God.
02:14:39.000 Two years.
02:14:40.000 Yeah, two years.
02:14:42.000 What a crazy connection to your parents, too.
02:14:45.000 Just sending a tape and them sending you one back.
02:14:48.000 And it's the only way you're communicating for two years.
02:14:51.000 Yeah.
02:14:52.000 Wow.
02:14:53.000 Yeah.
02:14:54.000 No, my parents are incredible, actually.
02:14:59.000 They grew up really poor.
02:15:01.000 And when they came to the United States, they had almost no money.
02:15:06.000 Probably one of the most impactful memories I have is when they came and we were staying in an apartment complex.
02:15:21.000 And they had just rented a bunch of furniture. Back then, and I guess people still do, you could rent furniture.
02:15:29.000 And we were messing around.
02:15:36.000 And we bumped into the coffee table and crushed it.
02:15:41.000 It was made out of particle board.
02:15:43.000 We crushed it.
02:15:46.000 And I just still remember the look on my mom's face, you know, because they didn't have any money and she didn't know how she was going to pay it back.
02:15:54.000 But anyhow, that kind of tells you how hard it was for them to come here.
02:15:59.000 But they left everything behind.
02:16:01.000 And all they had was their suitcase and the money they had in their pocket when they came to the United States.
02:16:07.000 How old were they?
02:16:08.000 Pursued the American dream.
02:16:09.000 They were in their 40s.
02:16:10.000 Wow.
02:16:11.000 Yeah, late 30s.
02:16:13.000 Pursued the American dream.
02:16:15.000 This is the American dream.
02:16:16.000 I'm the first generation of the American dream.
02:16:19.000 Wow.
02:16:20.000 It's hard not to love this country.
02:16:20.000 Yeah.
02:16:23.000 It's hard not to be romantic about this country.
02:16:26.000 That is a romantic story.
02:16:27.000 That's an amazing story.
02:16:29.000 And my dad found his job literally in the newspaper, you know, the ads.
02:16:29.000 Yeah.
02:16:36.000 And he called people and got a job.
02:16:39.000 What did he do?
02:16:40.000 He was a consulting engineer in a consulting firm.
02:16:44.000 And they helped people build oil refineries, paper mills, and fabs.
02:16:50.000 And that's what he did.
02:16:52.000 He's really good at factory design, instrumentation engineer.
02:16:57.000 And so he's brilliant at that.
02:17:00.000 And so he did that.
02:17:02.000 And my mom worked as a maid, and they found a way to raise us.
02:17:08.000 Wow.
02:17:10.000 That's an incredible story, Jensen.
02:17:12.000 It really is.
02:17:13.000 Everything, all of it.
02:17:14.000 From your childhood to the perils of NVIDIA almost falling.
02:17:21.000 It's really incredible, man.
02:17:23.000 It's a great story.
02:17:23.000 Yeah.
02:17:24.000 I've lived a great life.
02:17:26.000 You really have.
02:17:27.000 And it's a great story for other people to hear, too.
02:17:29.000 It really is.
02:17:31.000 You don't have to go to Ivy League schools to succeed.
02:17:36.000 This country creates opportunities.
02:17:37.000 It has opportunities for all of us.
02:17:40.000 You do have to strive.
02:17:42.000 You have to claw your way here.
02:17:45.000 Yeah.
02:17:46.000 But if you put in the work, you can succeed.
02:17:49.000 Not everything works out.
02:17:50.000 There's a lot of luck and a lot of good decision-making.
02:17:53.000 And the good graces of others.
02:17:55.000 Yes.
02:17:56.000 That's really important.
02:17:57.000 You and I spoke about two people who are very dear to me, but the list goes on.
02:17:57.000 Yeah.
02:18:05.000 The people at NVIDIA who have helped me, many friends that are on the board, their decisions, them giving me the opportunity.
02:18:16.000 Like when we were inventing this new computing approach, I tanked our stock price because we added this thing called CUDA to the chip.
02:18:24.000 We had this big idea.
02:18:25.000 We added this thing called CUDA to the chip.
02:18:27.000 But nobody paid for it, but our cost doubled.
02:18:31.000 And so we had this graphics chip company, and we invented GPUs.
02:18:36.000 We invented programmable shaders.
02:18:38.000 We invented everything in modern computer graphics.
02:18:42.000 We invented real-time ray tracing.
02:18:44.000 That's why it went from GTX to RTX.
02:18:48.000 We invented all this stuff, but every time we invented something, the market didn't know how to appreciate it, and the cost went way up.
02:18:56.000 And in the case of CUDA, which enabled AI, the cost increased a lot.
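[For context: CUDA is what let developers write general-purpose programs for the GPU, not just graphics. As a minimal sketch of that programming model, here is a purely illustrative vector-add kernel in CUDA C++; the kernel name, sizes, and use of managed memory are assumptions for brevity, not anything from the episode.]

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles one array element. This data-parallel style
    // is the general-purpose model CUDA exposed beyond graphics shaders.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                 // one million elements
        const size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified (managed) memory keeps the sketch short; real code often
        // uses explicit cudaMalloc plus host-to-device copies instead.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch over the whole array
        cudaDeviceSynchronize();                   // wait for the GPU to finish
        printf("c[0] = %f\n", c[0]);               // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }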
02:19:03.000 But we really believed it.
02:19:06.000 And so if you believe in that future and you don't do anything about it, you're going to regret it for your life.
02:19:13.000 And so we always, you know, I always tell the team, do we believe this or not?
02:19:18.000 And if you believe it, and it's grounded on first principles, not random hearsay, we owe it to ourselves to go pursue it.
02:19:28.000 If we're the right people to go do it, if it's really, really hard to do, it's worth doing, and we believe it.
02:19:33.000 Let's go pursue it.
02:19:36.000 Well, we pursued it.
02:19:37.000 We launched the product.
02:19:39.000 Nobody knew.
02:19:40.000 It was exactly like when I launched DGX-1 and the entire audience was complete silence.
02:19:47.000 When I launched CUDA, the audience was complete silence.
02:19:51.000 No customer wanted it.
02:19:53.000 Nobody asked for it.
02:19:56.000 Nobody understood it.
02:19:57.000 NVIDIA was a public company.
02:19:59.000 What year was this?
02:20:00.000 This is 2006, 20 years ago.
02:20:10.000 2005.
02:20:12.000 Wow.
02:20:13.000 Our stock prices went poof.
02:20:18.000 I think our valuation went down to like $2 or $3 billion.
02:20:23.000 From about $12 billion or something like that.
02:20:28.000 I crushed it in a very bad way.
02:20:32.000 What is it now, though?
02:20:35.000 Yeah, it's higher.
02:20:38.000 Very humble of you.
02:20:40.000 It's higher.
02:20:41.000 But it changed the world.
02:20:43.000 That invention changed the world.
02:20:43.000 Yeah.
02:20:46.000 It's an incredible story, Jensen.
02:20:48.000 It really is.
02:20:50.000 Thank you.
02:20:51.000 It's incredible.
02:20:51.000 I like your story.
02:20:53.000 My story's not as incredible.
02:20:54.000 My story's more weird.
02:20:57.000 You know?
02:20:59.000 It's much more fortuitous and weird.
02:21:01.000 Okay, what are the three milestones, most important milestones that led to here?
02:21:10.000 That's a good question.
02:21:12.000 What was step one?
02:21:13.000 I think step one was seeing other people do it.
02:21:18.000 Step one was in the initial days of podcasting, like in 2009 when I started, and podcasting had only been around for a couple of years.
02:21:28.000 The first was Adam Curry, my good friend, who was the pod father.
02:21:31.000 He invented podcasting.
02:21:34.000 And then, you know, I remember Adam Carolla had a show because he had a radio show.
02:21:39.000 His radio show got canceled.
02:21:41.000 And so he decided to just do the same show but do it on the internet.
02:21:43.000 And that was pretty revolutionary.
02:21:44.000 Nobody was doing that.
02:21:46.000 And then there was the experience that I had doing different morning radio shows, like Opie and Anthony in particular, because it was fun.
02:21:55.000 And we would just get together with a bunch of comedians.
02:21:58.000 You know, I'd be on the show with like three or four other guys that I knew.
02:22:01.000 And I always just looked forward to it.
02:22:04.000 It was just such a good time.
02:22:06.000 And I said, God, I miss doing that.
02:22:07.000 It's so fun to do that.
02:22:08.000 I wish I could do something like that.
02:22:10.000 And then I saw Tom Green set up.
02:22:12.000 Tom Green had a setup in his house.
02:22:14.000 And he essentially turned his entire house into a television studio.
02:22:18.000 And he did an internet show from his living room.
02:22:21.000 He had servers in his house and cables everywhere.
02:22:23.000 He had to step over cables.
02:22:24.000 This is like 2007.
02:22:26.000 I'm like, Tom, this is nuts.
02:22:27.000 Like, this is.
02:22:28.000 And I'm like, you got to figure out a way to make money from this.
02:22:31.000 I wish everybody on the internet could see your setup.
02:22:34.000 I just want to let you guys know that.
02:22:34.000 It's nuts.
02:22:37.000 It's just nuts.
02:22:39.000 So that was the beginning of it, is just seeing other people do it.
02:22:43.000 And then saying, all right, let's just try it.
02:22:44.000 And then so the beginning days, we just did it on a laptop, had a laptop with a webcam and just messed around, had a bunch of comedians come in, we would just talk and joke around.
02:22:55.000 Then I did it like once a week.
02:22:56.000 And then I started doing it twice a week.
02:22:58.000 And then all of a sudden, I was doing it for a year.
02:23:00.000 And then I was doing it for two years.
02:23:02.000 Then it was like, oh, it's starting to get a lot of viewers and a lot of listeners.
02:23:07.000 And then I just kept doing it.
02:23:09.000 It's all it is.
02:23:10.000 I just kept doing it because I enjoyed doing it.
02:23:13.000 Was there any setback?
02:23:15.000 No, there's never really a setback.
02:23:15.000 No.
02:23:17.000 No.
02:23:17.000 Really?
02:23:18.000 It must have been.
02:23:19.000 It's not the same kind of thing.
02:23:20.000 You're just resilient.
02:23:21.000 Or you're just tough.
02:23:23.000 No, no, no, no.
02:23:24.000 It wasn't tough or hard.
02:23:26.000 It was just interesting.
02:23:27.000 So you were never once punched in the face?
02:23:30.000 No, not in the show.
02:23:31.000 No, not really.
02:23:32.000 Not doing the show.
02:23:33.000 You never did something that got big blowback?
02:23:38.000 Nope.
02:23:39.000 Not really.
02:23:41.000 No, it all just kept growing.
02:23:43.000 It kept growing.
02:23:44.000 And the thing stayed the same from the beginning to now.
02:23:48.000 And the thing is, I enjoy talking to people.
02:23:50.000 I've always enjoyed talking to interesting people.
02:23:52.000 I could even tell just when we walked in the way you interacted with everybody, not just me.
02:23:57.000 That's cool.
02:23:57.000 Yeah.
02:23:58.000 People are cool.
02:23:59.000 Yeah, that's cool.
02:24:00.000 You know, it's an amazing gift to be able to have so many conversations with so many interesting people because it changes the way you see the world because you see the world through so many different people's eyes.
02:24:14.000 And you have so many different people of different perspectives and different opinions and different philosophies and different life stories.
02:24:21.000 And, you know, it's an incredibly enriching and educating experience having so many conversations with so many amazing people.
02:24:32.000 And that's all I started doing.
02:24:35.000 And that's all I do now.
02:24:37.000 Even now, when I book the show, I do it on my phone.
02:24:40.000 And I basically go through this giant list of emails of all the people that want to be on the show or that request to be on the show.
02:24:48.000 And then I factor in another list that I have of people that I would like to get on the show that I'm interested in.
02:24:53.000 And I just map it out.
02:24:55.000 And that's it.
02:24:56.000 And I go, oh, I'd like to talk to him.
02:24:58.000 If it wasn't for President Trump, I wouldn't have been bumped up on that list.
02:25:01.000 No, I wanted to talk to you already.
02:25:04.000 I just think, you know, what you're doing is very fascinating.
02:25:07.000 I mean, how would I not want to talk to you?
02:25:08.000 And today, it proved to be absolutely the right decision.
02:25:12.000 Well, you know, listen, it's strange to be an immigrant one day going to Oneida Baptist Institute with the students that were there.
02:25:24.000 And then here NVIDIA is one of the most consequential companies in the history of companies.
02:25:32.000 It is a crazy story.
02:25:34.000 It has to be.
02:25:36.000 That journey is.
02:25:37.000 And it's very humbling.
02:25:39.000 And I'm very grateful.
02:25:41.000 It's pretty amazing, man.
02:25:42.000 Surrounded by amazing people.
02:25:44.000 You're very fortunate, and you've also, you seem very happy.
02:25:47.000 And you seem like you're 100% on the right path in this life.
02:25:51.000 You know, everybody says you must love your job.
02:25:54.000 Not every day.
02:25:56.000 But that's part of the beauty of everything.
02:25:59.000 Is that there's ups and downs.
02:25:59.000 Yeah.
02:26:01.000 That's right.
02:26:01.000 It's never just like this giant dopamine high.
02:26:04.000 We leave this impression.
02:26:06.000 Here's an impression I don't think is healthy.
02:26:11.000 People who are successful leave the impression often that our job gives us great joy.
02:26:19.000 I think largely it does.
02:26:22.000 That our jobs, we're passionate about our work.
02:26:27.000 And that passion relates to it's just so much fun.
02:26:32.000 I think it largely is.
02:26:34.000 But it distracts from, in fact, a lot of success comes from really, really hard work.
02:26:42.000 Yes.
02:26:43.000 There's long periods of suffering and loneliness and uncertainty and fear and embarrassment and humiliation.
02:26:56.000 All of the feelings that we least love.
02:27:02.000 That's what creating something from the ground up is like.
02:27:06.000 And Elon will tell you something similar.
02:27:09.000 Very difficult to invent something new.
02:27:12.000 And people don't believe you all the time.
02:27:16.000 You're humiliated often, disbelieved most of the time.
02:27:20.000 And so people forget that part of success.
02:27:24.000 And I don't think it's healthy.
02:27:26.000 I think it's good that we pass that forward and let people know that it's just part of the journey.
02:27:32.000 Yes.
02:27:33.000 And suffering is part of the journey.
02:27:35.000 You will appreciate it so much.
02:27:36.000 These horrible feelings that you have when things are not going so well, you appreciate it so much more when they do go well.
02:27:43.000 Deeply grateful.
02:27:44.000 Yeah.
02:27:45.000 Deep, deep pride.
02:27:45.000 Yeah.
02:27:47.000 Incredible pride.
02:27:49.000 Incredible, incredible gratefulness and surely incredible memories.
02:27:54.000 Absolutely.
02:27:55.000 Jensen, thank you so much for being here.
02:27:57.000 This was really fun.
02:27:58.000 I really enjoyed it.
02:27:59.000 And your story is just absolutely incredible and very inspirational.
02:28:04.000 And I think it really is the American dream.
02:28:07.000 It is the American dream.
02:28:08.000 It really is.
02:28:09.000 Thank you so much.
02:28:10.000 Thank you, Joe.
02:28:10.000 All right.