The Joe Rogan Experience - March 12, 2024


Joe Rogan Experience #2117 - Ray Kurzweil


Episode Stats

Length

2 hours and 2 minutes

Words per Minute

145.46

Word Count

17,860

Sentence Count

1,432

Misogynist Sentences

3


Summary

In this episode, Joe sits down with Ray Kurzweil, the inventor, author, and futurist who has worked in artificial intelligence for over 60 years and whose first invention was a computer that could compose music. They discuss AI-generated art and music and whether machines will match human creativity; the exponential growth in the price-performance of computation; Kurzweil's longstanding prediction that AI will match human-level ability by 2029; solar power, battery storage, and the prospect of fully renewable energy within a decade; large language model hallucinations and how to fix them; AI-driven drug discovery, including the Moderna vaccine; longevity escape velocity and Kurzweil's supplement regimen; merging with machines, Neuralink, and the singularity he projects for 2045; and the dangers of these capabilities falling into the wrong hands.


Transcript

00:00:11.000 Good to see you, sir.
00:00:13.000 Great to see you.
00:00:14.000 I was telling you before I'm admiring your suspenders, and you told me you have how many pairs of these things?
00:00:19.000 Thirty of them, yeah.
00:00:21.000 I wear them every day.
00:00:22.000 Do you really?
00:00:23.000 Every day?
00:00:24.000 Why do you like suspenders?
00:00:25.000 A practicality thing?
00:00:29.000 No, it expresses my personality.
00:00:35.000 And different ones have different personalities that express how I feel that day.
00:00:48.000 I see.
00:00:48.000 So it's just another style point.
00:00:50.000 You don't see any hand-painted suspenders.
00:00:55.000 Have you ever seen one?
00:00:56.000 I don't know.
00:00:58.000 I would have not noticed.
00:00:59.000 I only noticed because you were here.
00:01:01.000 I'm not really a suspender aficionado.
00:01:04.000 But the reason why I'm asking is because you're basically a technologist.
00:01:08.000 I mean, you know a lot about technology.
00:01:10.000 You would think that suspenders are kind of outdated tech.
00:01:17.000 Well, people like them.
00:01:19.000 Clearly.
00:01:19.000 Yeah.
00:01:20.000 And I'm surprised they haven't caught on.
00:01:24.000 But you have somebody who can actually paint them.
00:01:27.000 I mean, these are hand-painted suspenders.
00:01:30.000 So the ones that you have, right here, these are hand-painted?
00:01:33.000 Yeah.
00:01:33.000 Interesting.
00:01:34.000 Okay, so that's part of it.
00:01:35.000 So you're wearing art.
00:01:37.000 Exactly.
00:01:37.000 Got it.
00:01:39.000 And art is part of technology.
00:01:41.000 I mean, we're using technology to create art now, so...
00:01:44.000 That's true.
00:01:45.000 And it's...
00:01:46.000 In fact, the very first...
00:01:48.000 I mean, I've been now in AI for 61 years, which is actually a record.
00:01:55.000 And the first thing I did was create something that could write music.
00:02:03.000 Writing music now with AI is a major field today, but this was actually the first time that had ever been done.
00:02:13.000 Yeah, that was one of your many inventions.
00:02:15.000 That was the first one, yeah.
00:02:17.000 So why did you go about doing that?
00:02:20.000 What was your desire to create artificial intelligence music?
00:02:24.000 Well, my father was a musician, and I felt this would be a good way to relate to him.
00:02:30.000 And he actually worked with me on it.
00:02:35.000 And you could feed in music, like it could feed in, let's say, Mozart or Chopin, and it would figure out how they created melodies and then write melodies in the same style.
00:02:49.000 So you could actually tell this is Mozart, this is Chopin.
00:02:53.000 It wasn't as good, but it's the first time that that had been done.
00:03:00.000 It wasn't as good then.
00:03:02.000 What are the capabilities now?
00:03:04.000 Because now they can do some pretty extraordinary things.
00:03:07.000 Yeah, it's still not up to what humans can do, but it's getting there, and it's actually pleasant to listen to.
00:03:16.000 We still have a ways to go with art, both visual art, music, and so on.
00:03:23.000 Well, one of the main arguments against AI art comes from actual artists who are upset that what it's essentially doing is, like, you could say, draw or create a painting in the style of Frank Frazetta,
00:03:38.000 for instance.
00:03:39.000 And what it would be would be they would take all of Frazetta's work that he's ever done, which is all documented on the internet, and then you create an image that's representative of that.
00:03:52.000 So you're essentially, in one way or another, you're kind of taking from the art.
00:03:57.000 Right.
00:03:58.000 But it's not quite as good.
00:04:00.000 It will be as good.
00:04:02.000 I think we'll match human experience by 2029. That's been my idea.
00:04:11.000 It's not as good.
00:04:14.000 Which is the best image generator right now, Jamie?
00:04:16.000 Something.
00:04:16.000 Pull one up.
00:04:17.000 They really change almost from day to day right now, but Midjourney was the most popular one at first, and then...
00:04:26.000 DALL-E, I think, is a really good one, too.
00:04:28.000 Midjourney is incredibly impressive.
00:04:29.000 Incredibly impressive graphics.
00:04:31.000 I've seen some of the Midjourney stuff.
00:04:33.000 It's mind-blowing.
00:04:36.000 Still not quite as good.
00:04:37.000 But, boy, it's so much better than it was five years ago.
00:04:40.000 That's what's scary.
00:04:41.000 It's so quick.
00:04:42.000 I mean, it's never going to reach its limit.
00:04:44.000 We're not going to get to a point, okay, this is how good it's going to be.
00:04:48.000 It's going to keep getting better.
00:04:51.000 And what would that look like?
00:04:52.000 If it can get to a certain point, it will far exceed what human creativity is capable of.
00:04:58.000 Yes.
00:04:59.000 I mean, when we reach the ability of humans, it's not going to just match one human.
00:05:06.000 It's going to match all humans, and it's going to do everything that any human can do.
00:05:11.000 If it's playing a game like Go, it's going to play it better than any human.
00:05:16.000 Right.
00:05:17.000 Well, that's already been proven, right?
00:05:18.000 That they have invented moves.
00:05:20.000 AI has invented moves that have now been implemented by humans in a very complex game that they never thought AI was going to be able to play, because it requires so much creativity.
00:05:29.000 Right.
00:05:30.000 Art, though, we're not quite there, but we will be there.
00:05:34.000 And by 2029, it will match any person.
00:05:41.000 That's it?
00:05:42.000 2029. That's just a few years away.
00:05:46.000 Yeah, well I'm actually considered conservative.
00:05:48.000 People think that will happen like next year or the year after.
00:05:52.000 I actually said that in 1999. I said we would match any person by 2029, so 30 years.
00:06:03.000 People thought that was totally crazy.
00:06:07.000 And in fact, Stanford had a...
00:06:12.000 They invited several hundred people from around the world to talk about my prediction.
00:06:17.000 And people came in and people thought that this would happen, but not by 2029. They thought it would take a hundred years.
00:06:25.000 Yeah, I've heard that.
00:06:26.000 I've heard that, but I think people are amending those.
00:06:29.000 Is it because human beings have a very difficult time grasping the concept of exponential growth?
00:06:36.000 That's exactly right.
00:06:39.000 In fact, still, economists have a linear view.
00:06:43.000 And if you say, well, it's going to grow exponentially, they say, yeah, but maybe 2% a year.
00:06:51.000 It actually doubles in 14 years.
00:06:56.000 And I brought a chart I can show you that really illustrates this.
00:07:06.000 Is this chart available online so we can show people?
00:07:08.000 Yeah, it's in the book.
00:07:09.000 But is it available online, that chart, where Jamie can pull it up and someone can see it?
00:07:16.000 Just so the folks watching the podcast could see it too.
00:07:18.000 But I could just hold it up to the camera.
00:07:20.000 What's it called?
00:07:21.000 What's the title of it?
00:07:23.000 It says Price Performance of Computation 1939 to 2023. You have it.
00:07:29.000 Okay, great.
00:07:30.000 Jamie already has it.
00:07:31.000 Yeah, the climb is insane.
00:07:34.000 It's like the San Juan Mountains.
00:07:36.000 What's interesting is that it's an exponential curve and a straight line represents exponential growth.
00:07:44.000 And that's an absolute straight line for 80 years.
00:07:49.000 The very first point, this is the speed of computers, it was 0.0000007 calculations per second per constant dollar.
00:08:06.000 The last point is 35 billion calculations per second.
00:08:10.000 So there's a 20 quadrillion-fold increase in those 80 years.
00:08:16.000 But the speed with which it gained is actually the same throughout the entire 80 years.
00:08:24.000 Because if it was sometimes better and sometimes worse, this curve would bend.
00:08:30.000 It would bend up and down.
00:08:32.000 It's really very much a straight line.
00:08:36.000 So the speed with which we increased it was the same regardless of the technology used.
00:08:41.000 And the technology was radically different at the beginning versus the end, and yet it increased the speed exactly the same for 80 years.
00:08:53.000 In fact, the first 40 years, nobody even knew this was happening.
00:08:56.000 So it's not like somebody was in charge and saying, okay, next year we have to get to here, and people would try to match that.
00:09:02.000 We didn't even know this was happening for 40 years.
00:09:05.000 40 years later, I noticed this.
00:09:08.000 For various reasons, I predicted it would stay the same, the same speed increase each year, which it has.
00:09:15.000 In fact, we just put the last dot like two weeks ago, and it's exactly where it should be.
00:09:23.000 So, technology, and computation is certainly a prime form of technology, increases at the same speed.
00:09:32.000 And this goes through war and peace.
00:09:34.000 You might say, well, maybe it's greater doing war.
00:09:37.000 No, it's exactly the same.
00:09:39.000 You can't tell when there's war, peace, or anything else on here.
00:09:43.000 It just matches from one type of technology to the next.
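
[A quick back-of-the-envelope check, not from the episode: taking the two endpoints Kurzweil quotes at face value, a constant yearly growth factor falls out, which is why the curve plots as a straight line on a logarithmic axis.]

    # Back-of-the-envelope check using the two endpoints quoted above
    # (0.0000007 and 35 billion calculations per second per constant
    # dollar, 80 years apart). Figures are as read on air.
    import math

    first, last, years = 7e-7, 35e9, 80
    ratio = last / first                       # total fold increase, ~5e16
    annual = ratio ** (1 / years)              # constant yearly growth factor
    doubling = math.log(2) / math.log(annual)  # implied doubling time

    print(f"{ratio:.1e}-fold over {years} years")  # on the order of quadrillions
    print(f"{annual:.2f}x per year")               # ~1.62x
    print(f"doubles every {doubling:.2f} years")   # ~1.4 years
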
00:09:50.000 And it's also true of other things, like, for example, getting energy from the sun.
00:09:59.000 That's also exponential.
00:10:01.000 It's also just like this.
00:10:04.000 It's increased.
00:10:09.000 We now are getting about a thousand times as much energy from the sun as we did 20 years ago.
00:10:22.000 Because of the implementation of solar panels and the like?
00:10:25.000 Yes.
00:10:25.000 Has the function of it increased exponentially as well?
00:10:30.000 Because what I had understood was that there was a bottleneck in the technology as far as how much you could extract from the Sun from those panels.
00:10:40.000 No, not at all.
00:10:41.000 I mean, it's increased 99.7% since we started.
00:10:49.000 And it does the same every year.
00:10:52.000 It's an exponential curve.
00:10:54.000 And if you look at the curve, we'll be getting 100% of all the energy we need in 10 years.
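
[The arithmetic behind a claim like this, sketched with assumed inputs; the episode gives neither solar's current share of demand nor its doubling time, so both numbers below are illustrative.]

    # Illustrative only: the starting share and doubling time are
    # assumptions, not figures from the episode.
    share = 0.03          # assume solar/wind meet ~3% of demand today
    doubling_years = 2    # assume deployed capacity doubles every 2 years

    years = 0
    while share < 1.0:    # keep doubling until it covers all demand
        share *= 2
        years += doubling_years

    print(years)  # 12 -- a handful of doublings closes the gap quickly
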
00:11:01.000 The person who told me that was Elon, and Elon was telling me that this is the reason why you can't have a fully solar-powered electric car, because it's not capable of absorbing that much from the sun with a small panel like that.
00:11:11.000 He said there's a physical limitation in the panel size.
00:11:15.000 No, I mean, it's increased 99.7% since we started.
00:11:20.000 Since what year?
00:11:22.000 This is about...
00:11:27.000 35 years ago.
00:11:28.000 35 years ago.
00:11:29.000 And 99% of the ability of it, as well as the expansion of use?
00:11:38.000 I mean, you might have to store it.
00:11:40.000 We're also making exponential gains in the storage of electricity.
00:11:43.000 Right.
00:11:44.000 Battery technology.
00:11:46.000 So you don't have to get it all from a solar panel that fits in a car.
00:11:52.000 The concept was, like, could you make a solar-paneled car, a car that has solar panels on the roof, and would that be enough to power the car?
00:12:00.000 And he said no.
00:12:02.000 He said it's just not really there yet.
00:12:05.000 Right.
00:12:05.000 It's not there yet, but it will be there in 10 years.
00:12:09.000 You think so?
00:12:10.000 Yeah.
00:12:10.000 Yeah, he seemed to doubt that.
00:12:12.000 He thought that there's a limitation of the amount of energy you can get from the sun, period, how much it gives out and how much those solar panels can absorb.
00:12:20.000 Well, you're not gonna be able to get it all from the solar panel that fits in a car.
00:12:24.000 You're gonna have to store some of that energy.
00:12:26.000 Right.
00:12:26.000 So you wouldn't just be able to drive indefinitely on solar power.
00:12:31.000 Yeah, that was what he was saying.
00:12:33.000 But you can obviously power a house, especially if you have a roof.
00:12:38.000 Tesla has those solar-powered roofs now.
00:12:40.000 But you can also store the energy for a car.
00:12:45.000 I mean, we're going to go to all renewable energy, wind and sun, within 10 years, including our ability to store the energy.
00:12:56.000 All renewable in 10 years?
00:12:58.000 So what are they going to do with all these nuclear plants and coal-powered plants?
00:13:02.000 That's completely unnecessary.
00:13:04.000 People say we need nuclear power, which we don't.
00:13:08.000 We can get it all from the sun and the wind within 10 years.
00:13:14.000 So in 10 years you'll be able to power Los Angeles with sun and wind?
00:13:19.000 Yes.
00:13:19.000 Really?
00:13:20.000 Yeah.
00:13:21.000 I was not aware that we were anywhere near that kind of timeline.
00:13:25.000 Well, that's because people are not taking into account exponential growth.
00:13:30.000 So the exponential growth also of the grid?
00:13:33.000 Because just to pull the amount of power that you would need to charge, you know, X amount of million, if everyone has an electric vehicle by 2035, let's say then, just the amount of change you would need on the grid would be pretty substantial.
00:13:49.000 Well, we're making exponential gains on that as well.
00:13:52.000 Are we?
00:13:52.000 Yeah?
00:13:53.000 Yeah.
00:13:54.000 I wasn't aware.
00:13:55.000 I had this impression that there was a problem with that, especially in Los Angeles.
00:14:01.000 They've actually asked people at certain times when it's hot out to not charge your car.
00:14:05.000 They're not looking at the future.
00:14:07.000 That's true now, but it's growing exponentially.
00:14:10.000 In every field of technology then, essentially.
00:14:14.000 Yeah.
00:14:15.000 Is the bottleneck a battery technology?
00:14:18.000 And how close are they to solving some of these problems, like conflict minerals and the things that we need in order to power these batteries?
00:14:28.000 I mean, our ability to store energy is also growing exponentially.
00:14:33.000 So putting all that together, we'll be able to power everything we need within 10 years.
00:14:41.000 Wow.
00:14:42.000 Most people don't think that.
00:14:43.000 So you're thinking that based on this idea that people have a limited idea?
00:14:48.000 I never imagined that computation would grow like this.
00:14:51.000 It's just continuing to do that.
00:14:54.000 And so we have large language models, for example.
00:14:58.000 No one expected that to happen like five years ago.
00:15:02.000 Right.
00:15:02.000 And we had them two years ago, but they didn't work very well.
00:15:05.000 So it began a little less than two years ago that we could actually do large language models.
00:15:12.000 And that was very much a surprise to everybody.
00:15:16.000 So that's probably the primary example of exponential growth.
00:15:21.000 We had Sam Altman on.
00:15:23.000 One of the things that he and I were talking about was that AI figured out a way to lie.
00:15:28.000 That they used AI to go through a CAPTCHA system and the AI told the system that it was vision impaired, which is not technically a lie.
00:15:37.000 But it used it to bypass, are you a robot?
00:15:41.000 What we don't know now is for large language models to say they don't know something.
00:15:46.000 So you ask it a question, and if the answer to that question is not in the system, it still comes up with an answer.
00:15:54.000 So it'll look at everything and give you its best answer.
00:15:57.000 And if the best answer is not there, it still gives you an answer, but that's considered a hallucination.
00:16:06.000 A hallucination?
00:16:07.000 Yeah, that's what it's called.
00:16:08.000 Really?
00:16:09.000 AI hallucination.
00:16:11.000 So they cannot be wrong.
00:16:13.000 They have to be able to answer things.
00:16:14.000 So far, we're actually working on being able to tell if it doesn't know something.
00:16:18.000 So if you ask it something, say, oh, I don't know that.
00:16:21.000 Right now, it can't do that.
00:16:23.000 Oh wow, that's interesting.
00:16:26.000 So it gives you some answer And if the answer's not there, it just makes something up.
00:16:34.000 It's the best answer, but the best answer isn't very good because it doesn't know the answer.
00:16:40.000 And the way to fix hallucinations is to actually give it more capabilities to memorize things and give it more information so it knows the answer to it.
00:16:51.000 If you tell an answer to a question, it will remember that and give you that correct answer.
00:16:59.000 But these models, we don't know everything.
00:17:07.000 We have to be able to scan an answer to every single question, which we can't quite do.
00:17:14.000 It would be actually better if it could actually answer, well, gee, I don't know that.
00:17:18.000 Right.
00:17:19.000 Like, in particular, like, say, when it comes to exploration of the universe, if there's a certain amount of, I mean, a vast amount of the universe we have not explored.
00:17:29.000 So if it has to answer questions about that, it would just come up with an answer?
00:17:33.000 Right, it'll just come up with an answer, which will likely be wrong.
00:17:37.000 Hmm, that's interesting.
00:17:39.000 But that would be a real problem if someone was counting on the AI to have a solution for something too soon, right?
00:17:47.000 Right.
00:17:47.000 They don't know everything.
00:17:49.000 Search engines actually are pretty well vetted, and if it actually answers something, it's usually correct.
00:17:59.000 Unless it's curated.
00:18:01.000 But large language models don't have that capability.
00:18:06.000 So it'd be good, actually, if they knew that they were wrong.
00:18:09.000 They'd also tell us what we have to fix.
00:18:13.000 What about the idea that AI models are influenced by ideology?
00:18:19.000 That AI models have been programmed with certain ideologies?
00:18:23.000 I mean, they do learn from people, and people have ideologies, some of which are not correct, and that's a large way in which it will make things up,
00:18:39.000 because it's learning from people.
00:18:44.000 So right now, if somebody has access to a good search engine, they will check before they actually answer something with a search engine to make sure that it's correct.
00:18:58.000 Because search engines are generally much more accurate.
00:19:02.000 Generally.
00:19:03.000 Right.
00:19:04.000 When it comes to this idea that people enter information into a computer and the computer relies on ideology, do you anticipate that with artificial general intelligence that will be agnostic to ideology, that it will be able to reach a point where instead of deciding things based on social norms or whatever the culture is accepted currently,
00:19:28.000 that it would look at things more objectively and rationally?
00:19:32.000 Well, eventually.
00:19:33.000 But we still call it artificial general intelligence, even if it didn't do that.
00:19:39.000 And people certainly are influenced by whatever the people that they respect feel is correct, and it will be as influenced as people are.
00:19:59.000 And we'll still call it artificial general intelligence.
00:20:05.000 We are starting to check what large language models come up with with search engines and that's actually making them more correct.
00:20:16.000 But we have to actually continue on this curve.
00:20:18.000 We need more data to be able to store everything.
00:20:22.000 This is not enough data to be able to store everything correctly.
00:20:26.000 There's a large amount of data in these large language models for which we don't have the storage.
00:20:36.000 So that's what's holding us back is data and storage?
00:20:38.000 Yeah, we also have to have the correct storage.
00:20:43.000 So that's really where the effort is going, to be able to get rid of these hallucinations.
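
[A minimal sketch of the verify-before-answering idea Kurzweil describes, checking a model's draft against vetted search results; every function name here is a hypothetical placeholder, not a real API.]

    # Sketch of "check the model against a search engine" as described
    # above. generate(), search(), and supported() are hypothetical
    # placeholders passed in by the caller, not a real library API.

    def answer(question, generate, search, supported):
        draft = generate(question)        # the model's best guess
        evidence = search(question)       # generally better-vetted results
        if supported(draft, evidence):    # is the guess backed by evidence?
            return draft
        return "I don't know that."       # refuse instead of hallucinating
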
00:20:51.000 That's a fun thing to say, hallucinations in terms of artificial intelligence.
00:20:56.000 Well, we usually come up with wrong things.
00:20:58.000 Like large language models is not really the correct way to talk about this.
00:21:03.000 It does know language, but there's a lot of other things it knows.
00:21:08.000 We're using them now to come up with medicines.
00:21:20.000 For example, the Moderna vaccine, we wrote down every possible type of medicine that might work.
00:21:40.000 It was actually several billion mRNA sequences.
00:21:44.000 And we then tested them all and did that in two days.
00:21:50.000 So it actually came up with, tested several billion, and decided on one in two days.
00:21:59.000 We then tested it with people.
00:22:02.000 We'll be able to overcome that as well because we'll be able to test it with machines.
00:22:09.000 But we actually did test it with people for 10 months.
00:22:13.000 There was still a record.
00:22:15.000 So for machines, when they start testing medications with machines, how will they audit that?
00:22:21.000 So the concept will be that you take into account biological variability, all the different factors that would lead to a person to have an adverse reaction to a certain compound, and then you program all the known data about how things interact with the body?
00:22:38.000 Right.
00:22:38.000 I mean, you need to be able to simulate all the different possibilities.
00:22:43.000 And then come up with, like, a number of how many people will be adversely affected by something.
00:22:48.000 That's one of the things you would look at.
00:22:50.000 And then efficacy based on age, health.
00:22:54.000 But that could be done literally in a matter of days rather than years.
00:22:58.000 Right.
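
[A schematic of the in-silico screen being described: score every candidate and keep the best, with the leading candidate still going to human trials. The scoring function is a stand-in; real pipelines model binding, stability, immune response, and more.]

    # Schematic only. predicted_efficacy() is a stand-in for a full
    # biological simulation; candidates would be billions of mRNA
    # sequences in the case described above.

    def screen(candidates, predicted_efficacy):
        best = max(candidates, key=predicted_efficacy)
        return best  # the leading candidate then goes to human trials
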
00:23:03.000 But the question would be, like, who's in charge of that data and, like, how does that get resolved?
00:23:09.000 And if artificial intelligence is still prone to hallucinations and they start using those hallucinations to justify medications, that could be a bit of an issue, especially if it's controlled by a corporation that wants to make a lot of money.
00:23:23.000 Well, that's the issue, to be able to do it correctly.
00:23:27.000 So there's going to have to be a point in time where we all decide that artificial intelligence has reached this place where we can trust it implicitly.
00:23:36.000 Right.
00:23:36.000 Well, that's why they take now the leading candidate and actually test it with people.
00:23:44.000 But we'll be able to get rid of the testing with people once we can have reliance on the simulation.
00:23:54.000 So we've got to make the simulations correct.
00:23:58.000 But, like, right now we actually test it with people, and that takes, well, it took 10 months in this case.
00:24:07.000 When you look at artificial intelligence and you look at the expansion of it and the ultimate place that it will eventually be, what do you see happening inside of our lifetime, like inside of 20 years?
00:24:19.000 What kind of revolutionary changes on society would this have?
00:24:24.000 Well, one thing I feel will happen in five years, by 2029, is we'll reach longevity escape velocity.
00:24:35.000 So right now you go through a year and you use up a year of your longevity.
00:24:40.000 You're then a year older.
00:24:42.000 However, we do have scientific progress, and we're coming up with new cures for diseases and so on.
00:24:50.000 Right now you're getting back about four months.
00:24:53.000 So you lose a year, but through scientific progress you're getting back four months.
00:24:59.000 So you're only losing eight months.
00:25:01.000 However, the scientific progress is progressing exponentially, and by 2029, you'll get back a full year.
00:25:09.000 So you lose a year, but you get back a year, and you pretty much stay in the same place.
00:25:14.000 So by 2029, you'll be static.
00:25:17.000 And past 2029, you'll actually get back more than a year.
00:25:21.000 You'll get back...
00:25:23.000 Can I be a baby again?
00:25:25.000 Uh...
00:25:27.000 No, but in terms of your longevity, you'll get back more than a year.
00:25:32.000 Right.
00:25:33.000 So you'll be able to essentially go back in biological age.
00:25:37.000 Lengthening of the telomeres, changing the elasticity of the skin, muscle density.
00:25:45.000 It doesn't guarantee you living forever.
00:25:48.000 I mean, you could have a 10-year-old and you could compute, okay, he's got many decades of longevity, and he could die tomorrow.
00:25:56.000 Sure.
00:25:58.000 But overall, there'd be an expansion of the age that most people die.
00:26:04.000 And that's something that we're going to get.
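
[The longevity-escape-velocity arithmetic as a toy model. Only the starting value, about four months returned per year right now, comes from the conversation; the 25% yearly growth rate is an assumption chosen so the payback crosses twelve months around 2029.]

    # Toy model of the arithmetic above. You spend 12 months of
    # longevity per calendar year; research currently pays back ~4
    # months, and the payback is assumed to grow 25% per year.
    payback = 4.0   # months returned per year (from the conversation)
    growth = 1.25   # assumed exponential improvement in research

    for year in range(2024, 2031):
        net = 12 - payback          # months of longevity actually lost
        print(year, round(payback, 1), round(net, 1))
        payback *= growth
    # payback passes 12 months around 2029 -- "escape velocity"
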
00:26:06.000 And also using the same type of logic as large language models, but that's not language.
00:26:13.000 You're actually creating medications.
00:26:16.000 So we should call that large event models, not large language models, because it's not just dealing with language.
00:26:22.000 It's dealing with all kinds of things.
00:26:24.000 When I talked to you 10 years ago, you were telling me about this pretty extensive supplement routine that you're on.
00:26:31.000 Are you still doing that?
00:26:32.000 I'm trying to get to the point where we have longevity escape velocity in good shape.
00:26:39.000 Right.
00:26:39.000 And yes, I do follow that.
00:26:42.000 I take maybe 80 pills a day and some injections and so on.
00:26:50.000 Peptides?
00:26:51.000 Yes, peptides.
00:26:52.000 So far it works.
00:26:56.000 Have you ever gone off of it to see what you feel like normally?
00:27:01.000 No.
00:27:01.000 Well, I do that, right?
00:27:03.000 Yeah.
00:27:04.000 I mean, it seems to work, and there's evidence behind it.
00:27:08.000 How old are you now?
00:27:10.000 76. You look good.
00:27:13.000 You look good for 76, man.
00:27:15.000 That's great.
00:27:15.000 So it's doing something.
00:27:17.000 Yeah, I think it's working.
00:27:20.000 And so your goal is to get to that point where they start doing the, you live a year, you stay static, and then eventually get back to youthfulness.
00:27:31.000 Right, and it's not that far off.
00:27:33.000 If you're diligent, I think we'll get there by 2029. Now, not everybody's diligent.
00:27:40.000 Right, of course.
00:27:41.000 Now, past that, this is for life extension, which is great, but what about how AI is going to change society?
00:27:52.000 Yes, well, that's a very big issue, and it's already doing lots of things that make some people uncomfortable.
00:28:00.000 What we're actually doing is increasing our intelligence.
00:28:04.000 I mean, right now you have a brain, and it has different modules in it that deal with different things, but really it's able to connect one concept to another concept, and that's what your brain does.
00:28:20.000 We can actually increase that by, for example, carrying around a phone.
00:28:24.000 This has connections in it.
00:28:26.000 It's a little bit of a hassle to use.
00:28:28.000 If I ask you to do something, you've got to kind of mess with it.
00:28:33.000 Actually, it would be good if this actually listened to your conversation.
00:28:36.000 Oh, it does.
00:28:38.000 And without saying anything, you're just talking, and it says, oh, the name of that actress is so-and-so. Yeah, but then it's a busybody.
00:28:47.000 It's like interfering with your life, talking to you all the time.
00:28:50.000 Well, there's ways of dealing with that, too.
00:28:52.000 You shut it off.
00:28:54.000 So we haven't done that yet, but that's a way of expanding your connections.
00:29:04.000 Yeah.
00:29:07.000 What a large language model does, it has connections in it as well.
00:29:11.000 And in fact, it's getting now to a point that's getting fairly comparable to the human brain.
00:29:18.000 We have about a trillion connections in our brain.
00:29:23.000 Things like the top model from Google or GPT-4, they have about 400 billion connections approximately.
00:29:37.000 They'll be at a trillion probably within a year.
00:29:40.000 That's pretty comparable to what the human brain does.
00:29:45.000 Eventually it'll go beyond that, and we'll have access to that.
00:29:50.000 So it's basically making us smarter.
00:29:53.000 So if you have the ability to be smarter, that's something that's positive, really.
00:30:05.000 I mean, if we were like mice today and we had the opportunity to become like humans, we wouldn't object to that.
00:30:16.000 In fact, we are humans and we don't object to that.
00:30:19.000 We used to be shrews.
00:30:23.000 And this is going to basically make us smarter.
00:30:27.000 Eventually we'll be much smarter than we are today.
00:30:31.000 And that's a positive thing.
00:30:33.000 We'll be able to do things that we find bothersome today in a way that's much more palatable.
00:30:45.000 The idea of us getting smarter sounds great.
00:30:48.000 Great.
00:30:49.000 It'd be great to be smarter.
00:30:50.000 Right, but people object to that because it's like competition.
00:30:55.000 In what way?
00:30:58.000 Well, I mean, Google has, I don't know, 60,000, 70,000 programmers, and how many programmers exist in the world?
00:31:08.000 How much longer is that going to be a viable career?
00:31:12.000 Because large language models already can code, not quite as good as a real expert coder.
00:31:22.000 But how long is that going to be?
00:31:24.000 It's not going to be 100 years.
00:31:26.000 It's going to be a few years.
00:31:30.000 So people see it as competition.
00:31:34.000 I have a slightly different view of that.
00:31:36.000 I see these things as actually adding to our own intelligence and we're merging with these kinds of computers and making ourselves smarter by merging with it.
00:31:49.000 And eventually it'll go inside our brain and be able to make us smarter instantly, just like we had more connections inside our own brain.
00:32:00.000 Well, I think people have reservations always when it comes to great change.
00:32:04.000 And this is probably the greatest change.
00:32:07.000 The greatest change we've ever experienced in our lifetimes for sure has been the internet.
00:32:11.000 And this will make that look like nothing.
00:32:14.000 It'll change everything.
00:32:17.000 And it seems inevitable.
00:32:20.000 I understand that people are upset about it, but it just seems like what human beings were sort of designed to do.
00:32:27.000 Right.
00:32:27.000 We're the only animal that actually creates technology.
00:32:30.000 It's a combination of our brain and something else, which is our thumb.
00:32:35.000 So I can imagine something.
00:32:38.000 Oh, if I take that leaf from a tree, I could create a tool with it.
00:32:45.000 Other animals have actually a bigger brain, like the whale.
00:32:49.000 Dolphins.
00:32:53.000 Dolphins, elephants, they have a larger brain than we do, but they don't have something equivalent to the thumb.
00:33:00.000 A monkey has a thing that looks like a thumb, but it's actually an inch down, and it doesn't actually work very well.
00:33:05.000 So they can actually create a tool, but they don't create a tool that's powerful enough to create the next tool.
00:33:13.000 So we're actually able to use our tools and create something that's that much more significant.
00:33:21.000 So we can create tools, and that's really part of who we are.
00:33:27.000 It makes us that much more intelligent, and that's a good thing.
00:33:35.000 I mean, here's...
00:33:49.000 So here's U.S. personal income per capita.
00:33:53.000 So this is the average amount that we make per person in constant dollars.
00:34:02.000 There it is right here.
00:34:03.000 It's on the screen.
00:34:05.000 We make a lot more money, but things cost a lot more money too, right?
00:34:08.000 No.
00:34:09.000 This is constant dollars.
00:34:11.000 Constant dollars in relation to the inflation?
00:34:14.000 Yeah.
00:34:14.000 So this does not show you inflation.
00:34:17.000 These are constant dollars.
00:34:19.000 And so we're actually making that much more each year on average.
00:34:26.000 Right, but it doesn't take into account inflation, correct?
00:34:28.000 So it's not taking into account the rise of cost of things.
00:34:31.000 No, it is taking into account.
00:34:33.000 Oh, it is.
00:34:34.000 Okay.
00:34:35.000 So we're making that much more in constant dollars.
00:34:40.000 If you look over the past hundred years, we've made about ten times as much.
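
[What "constant dollars" means in that chart, with made-up numbers: nominal income deflated by a price index, so inflation is already netted out. The incomes and CPI levels below are illustrative, chosen to roughly reproduce the tenfold figure.]

    # Illustrative numbers only, picked to land near the "ten times as
    # much" figure quoted above; not the chart's actual data.
    nominal_1924, nominal_2024 = 380.0, 70_000.0  # assumed incomes
    cpi_1924, cpi_2024 = 17.1, 313.0              # rough CPI levels

    # Restate 1924 income in 2024 dollars, then compare real incomes.
    real_1924 = nominal_1924 * (cpi_2024 / cpi_1924)
    print(nominal_2024 / real_1924)  # ~10x growth in constant dollars
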
00:34:45.000 I wonder if there's a similar chart about consumerism, like just about material possessions.
00:34:52.000 I wonder if like how much more we're purchasing and creating.
00:34:55.000 I've always felt like that's one of the things that materialism is one of those instincts that human beings sort of look down upon and this aimless pursuit of buying things.
00:35:09.000 But I feel like that motivates technology because The constant need for the newest, greatest thing is one of the things that fuels the creation and innovation of new things.
00:35:22.000 But if you were to go back a hundred years, you'd be very unhappy.
00:35:25.000 Oh, yeah.
00:35:26.000 Because you wouldn't have...
00:35:27.000 I mean, you wouldn't have a computer, for example.
00:35:30.000 You wouldn't have anything.
00:35:32.000 You wouldn't have most things you've grown accustomed to.
00:35:34.000 Yeah.
00:35:34.000 I mean...
00:35:39.000 Also, we didn't live very long.
00:35:42.000 Right.
00:35:43.000 Medical advancements.
00:35:45.000 Average life expectancy was 48 years in 1900. It was 35 years in 1800. Right.
00:35:54.000 Go back a thousand years, it was 20 years.
00:35:57.000 Right.
00:35:58.000 That takes into account child mortality, too, though, right?
00:36:01.000 But it's also injuries, death.
00:36:03.000 Some people did live long.
00:36:05.000 There was people that lived back then.
00:36:07.000 If nothing happened to you, you did live to be 80 like a normal person.
00:36:11.000 That was actually very rare.
00:36:13.000 Because most things happen to people.
00:36:15.000 Most people, by the time you get to 80, you've had at least one hospital visit.
00:36:19.000 Something's gone wrong.
00:36:20.000 Broken arm, broken this, broken that.
00:36:22.000 It was very rare to make it to 80, 200 years ago.
00:36:28.000 But the human body was physically capable of doing it.
00:36:32.000 Well, our human body can go on forever if you fix things properly.
00:36:40.000 There's nothing in our body that means that you have to die at 100 or even 120. We can go on really indefinitely.
00:36:49.000 Well, that's the groundbreaking work today, right?
00:36:51.000 They're treating disease or, excuse me, age as if it is a disease, not just an inevitability.
00:36:58.000 And our FDA doesn't accept that, but they're actually beginning to accept it now.
00:37:04.000 Well, as they get older.
00:37:05.000 Exactly.
00:37:07.000 They're forced into it.
00:37:09.000 The concept of artificial general intelligence scares a lot of people also because of Hollywood, right?
00:37:15.000 Because of the Terminator films and things along those lines.
00:37:17.000 Like, how far away are we, do you think, from actual artificial humans, or will we ever get there?
00:37:24.000 Will we integrate before that takes place?
00:37:28.000 I mean, all of this additional intelligence that we're creating is something that we use.
00:37:36.000 And it's just like it came with us.
00:37:38.000 So we're actually making ourselves more intelligent.
00:37:43.000 And ultimately, that's a good thing.
00:37:45.000 And if we have it, and then we say, well, gee, we don't really like this, let's take it away, people would never accept that.
00:37:53.000 They may be against the idea of general intelligence, but once they get it, nobody wants to give that up.
00:38:02.000 And it will be beneficial.
00:38:11.000 The Luddite movement started 200 years ago because the spinning jenny came out, and all these people who were making money from weaving were against it, and they would actually destroy these machines at night.
00:38:27.000 And they said, gee, if this keeps going, all jobs are going to go away.
00:38:32.000 And indeed, the way people made money before the spinning jenny, that did go away.
00:38:39.000 But we actually made more money because we created things that didn't exist then.
00:38:44.000 We didn't have anything like electronics, for example.
00:38:50.000 And as we can actually see, we make 10 times as much in constant dollars as we did 100 years ago.
00:39:01.000 And if you were to ask, well, what are people going to be doing?
00:39:04.000 You couldn't answer it because we didn't understand the internet, for example.
00:39:11.000 And there's probably some technologies down the pipe that are going to have a similar impact.
00:39:16.000 Exactly.
00:39:17.000 And they're going to extend life, for example.
00:39:20.000 But are they going to create life?
00:39:25.000 Well...
00:39:29.000 We know how to create life.
00:39:37.000 Well, that's an interesting question.
00:39:44.000 What do you mean by create life?
00:39:46.000 What I think is that human beings are some sort of a biological caterpillar that makes a cocoon that gives birth to an electronic butterfly.
00:39:56.000 I think we are creating a life form and that we're merely conduits for this thing and that all of our instincts and ego and emotions and all these things feed into it.
00:40:06.000 Materialism feeds into it.
00:40:08.000 We keep buying and keep innovating.
00:40:11.000 And technology keeps increasing exponentially and eventually it's going to be artificial intelligence and artificial intelligence is going to create better artificial intelligence and a form of being that has no limitations in terms of what's capable of doing.
00:40:26.000 And capable of traveling anywhere, not having any biological limitations in terms of...
00:40:31.000 But that's going to be ourselves.
00:40:32.000 I mean, we're going to be able to create life that is like humans, but far greater than we are today.
00:40:41.000 With an integration of technology.
00:40:43.000 Yeah.
00:40:43.000 If we choose to go that route.
00:40:46.000 But that's the prediction that you have, that we will go that route, like a Neuralink-type deal, something along those lines.
00:40:52.000 Right.
00:40:52.000 So I don't see this competition...
00:40:56.000 No, I don't think it's competition.
00:40:57.000 Well, it will seem like that.
00:41:00.000 I mean, if you have a job doing coding, and suddenly they don't really want you anymore because they can do coding with a large language model, it's going to feel like it's competition.
00:41:11.000 Well, there's an issue now with films.
00:41:13.000 Tyler Perry, who was building an $800 million television studio, and he stopped production.
00:41:21.000 What is it called?
00:41:22.000 Sora?
00:41:22.000 Is that what it's called, Jamie?
00:41:24.000 He stopped production when he saw the capabilities of AI just for creating visuals, scenes, movies.
00:41:34.000 There's one that's incredibly impressive.
00:41:36.000 It's Tokyo.
00:41:37.000 They're walking down the street of Tokyo in the winter.
00:41:40.000 So it's snowing and they're walking down the street and you look at it and you go, this is insane.
00:41:46.000 This looks like a film.
00:41:48.000 See if you can find that film.
00:41:49.000 Because it's incredible.
00:41:51.000 But would you want to get rid of that?
00:41:53.000 Get rid of what?
00:41:54.000 That capability.
00:41:56.000 No.
00:41:56.000 No, I don't want to get rid of the capability.
00:41:58.000 Right.
00:41:58.000 But people do want to get rid of it.
00:42:01.000 Well, people that make movies, people that actually film things with cameras and use actors are going to be very upset.
00:42:08.000 So this.
00:42:09.000 This is all fake.
00:42:11.000 Which is insane.
00:42:12.000 Beautiful snowy Tokyo city is bustling, the camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls.
00:42:20.000 Gorgeous sakura petals are flying through the wind along with snowflakes.
00:42:25.000 And this is what you get.
00:42:26.000 I mean, this is insanely good.
00:42:29.000 The variability, like just the way people are dressed.
00:42:33.000 If you saw this somewhere else, look at this, a robot's life in a cyberpunk setting.
00:42:39.000 If you saw this, You would say, oh, they filmed this.
00:42:44.000 But just look at what they're able to do with animation and kids' movies and things along those lines.
00:42:48.000 And it's going to get better.
00:42:49.000 Yeah.
00:42:50.000 It's just incredible.
00:42:51.000 I mean, it's a new art form.
00:42:54.000 So right there, the smoke looks a little uniform.
00:42:57.000 But, yeah.
00:42:58.000 I mean, there's some problems with this, but...
00:43:00.000 But not much.
00:43:02.000 Yeah.
00:43:02.000 And you imagine what it was like five years ago, and then imagine what it's going to be like five years from now.
00:43:07.000 Yes, absolutely.
00:43:08.000 And it's insane.
00:43:09.000 I mean, no one took into consideration the idea that kids are going to be cheating on their school papers using ChatGPT, but my kids tell me that's a real problem in school now.
00:43:21.000 Yes, definitely.
00:43:23.000 So no one saw that coming?
00:43:25.000 No one saw this coming?
00:43:26.000 And what we're at now is with ChatGPT-4, right?
00:43:30.000 4.5?
00:43:31.000 Is that what it is?
00:43:32.000 Well, 4.5 is coming.
00:43:34.000 4.5 is coming.
00:43:35.000 5 is supposed to be the massive leap.
00:43:40.000 It'll be a leap, just like three to four was a massive leap.
00:43:44.000 But it's going to continue.
00:43:46.000 It's never going to be finished.
00:43:49.000 Right.
00:43:49.000 It'll keep going.
00:43:50.000 And it will also be able to make better versions of itself, correct?
00:43:55.000 Yes.
00:43:56.000 Well, we do that.
00:43:57.000 I mean, technology does that already.
00:43:59.000 Right.
00:43:59.000 But if you scale that out 100 years from now, what are you looking at?
00:44:04.000 You're looking at a god.
00:44:06.000 Well, it'll be less than 100 years.
00:44:08.000 I mean...
00:44:09.000 So you're looking at a god in 50 years?
00:44:13.000 Less than that.
00:44:14.000 I mean, once we have an ability to emulate everything that humans can do, and not just one human, but all humans, and that's only like 2029. That's only five years from now.
00:44:26.000 And then it will make better versions of that.
00:44:29.000 So it will probably solve a lot of the problems that we have in terms of energy storage, data storage, data speeds, computation speeds.
00:44:38.000 And also medications.
00:44:40.000 For us.
00:44:41.000 For humans, yeah.
00:44:43.000 Wouldn't it be better to just, Ray, just download yourself into this beautiful electronic body?
00:44:47.000 Why do you want to be biological?
00:44:50.000 I mean...
00:44:53.000 Ultimately, that's what we're going to be able to do.
00:44:56.000 You think that's going to happen?
00:44:57.000 Yeah.
00:44:58.000 So do you think that we'll be able to...
00:45:00.000 I mean, we'll be able to create...
00:45:03.000 I mean, the singularity is when we multiply our intelligence a million-fold, and that's 2045. So that's not that long from now.
00:45:11.000 That's like 20 years from now.
00:45:13.000 Right.
00:45:16.000 And therefore, most of your intelligence will be handled by the computer part of ourselves.
00:45:26.000 The only thing that won't be captured is what comes with our body originally.
00:45:33.000 We'll ultimately be able to do that as well.
00:45:35.000 It'll take a little longer, but we'll be able to actually capture what comes with our normal body and be able to recreate that.
00:45:47.000 That also has to do with How long we live.
00:45:53.000 Because if everything is backed up, I mean, right now, anytime you put anything into a phone or any kind of electronics, it's backed up.
00:46:02.000 So, I mean, this has a lot of data.
00:46:05.000 I could flip it and it ends up in a river and we can't capture it anymore.
00:46:12.000 I can recreate it because it's all backed up.
00:46:15.000 And you think that's going to be the case with consciousness?
00:46:17.000 That's going to be the case of our normal biological body as well.
00:46:22.000 What's to stop someone like Donald Trump from just making a hundred thousand versions of himself?
00:46:29.000 Like if you can back someone up, could you duplicate it?
00:46:33.000 Couldn't you have three or four of them?
00:46:34.000 Couldn't you have a bunch of them?
00:46:35.000 Couldn't you live multiple lives?
00:46:39.000 Yes.
00:46:40.000 Would you be interacting with each other while you're living multiple lives, having consultations about what is St. Louis Ray doing?
00:46:46.000 Well, I don't know.
00:46:47.000 Let's talk to San Francisco Ray.
00:46:48.000 San Francisco Ray is talking to Florida Ray.
00:46:53.000 It's basically a matter of increasing our intelligence and being able to multiply Donald Trump, for example, that comes with that.
00:47:02.000 Do you think there'll be regulations on that to stop people from making 100,000 versions of themselves that operate a city?
00:47:08.000 There'll be lots of regulations.
00:47:10.000 There's lots of regulations we have already.
00:47:12.000 You can't just create a medication And sell it to people that it cures its disease.
00:47:17.000 Right.
00:47:18.000 We have tremendous amount of regulation.
00:47:20.000 Sure, but we don't really with phones.
00:47:21.000 Like with your phone, you could essentially, if you had the money, you could make as many copies of that as you wanted.
00:47:27.000 Yeah.
00:47:30.000 There are some regulations.
00:47:32.000 We regulate everything, but you're right.
00:47:36.000 Generally, electronics doesn't have as much regulation.
00:47:42.000 Right.
00:47:43.000 And when you get to a certain point, we will be electronics.
00:47:48.000 Yes, yes.
00:47:49.000 I mean, certainly if we multiply our intelligence a million-fold, everything of that additional million-fold of yours is not regulated.
00:48:01.000 Right.
00:48:02.000 When you think about the concept of integration and technological integration, when do you think that will start taking place and what will be the initial usage of it?
00:48:14.000 Like, what will be the first versions and what would they provide?
00:48:19.000 Well, we have it now.
00:48:21.000 Large language models are pretty impressive.
00:48:23.000 I mean, if you look at what they can do...
00:48:25.000 I mean, I'm talking about physical integration with the human body, like a Neuralink type thing.
00:48:31.000 Right.
00:48:31.000 Some people feel that we can actually understand what's going on in your brain and actually put things into your brain without actually going into the brain with something like Neuralink.
00:48:42.000 So something that, like, sits on the outside of your head?
00:48:45.000 Yeah.
00:48:47.000 It's not clear to me if that's feasible or not.
00:48:50.000 I've been assuming that you have to actually go in.
00:48:53.000 Neuralink isn't exactly what we want because it's too slow.
00:48:59.000 And it actually will do what it's advertised to do.
00:49:04.000 I actually know some people like this who were active people and they completely lost the ability to speak and to understand language and so on.
00:49:19.000 And so they can't actually say anything to you.
00:49:25.000 And we can use something like Neuralink to actually have them express something.
00:49:31.000 They could think something and then have it be expressed to you.
00:49:35.000 Right.
00:49:35.000 And they're doing that, right?
00:49:36.000 They had the first patient.
00:49:37.000 The first patient that was...
00:49:39.000 Yeah.
00:49:39.000 Yeah.
00:49:40.000 And apparently that person can move a cursor around on a screen.
00:49:43.000 Right.
00:49:44.000 And therefore you can do anything.
00:49:45.000 It's fairly slow, though.
00:49:47.000 And Neuralink is slow.
00:49:49.000 And if you really want to extend your brain, you need to do it at a much faster pace.
00:49:54.000 But isn't that going to increase exponentially as well?
00:49:57.000 Yes, absolutely.
00:49:58.000 So how long do you think it'll be before it's implemented?
00:50:01.000 Well, it's got to be by 2045 because that's when the singularity exists and we can actually multiply our intelligence on the order of a million fold.
00:50:18.000 And when you say 2045, what is the source of that estimation?
00:50:29.000 Because we'll be able to, based actually on this chart and also the increase in the ability of software to also expand,
00:50:45.000 we'll be able to multiply our intelligence a million-fold and we'll be able to put that inside of our brain.
00:50:56.000 It would be just like it's part of our brain.
00:50:58.000 So this is just following the current graph of progress?
00:51:01.000 Yeah, exactly.
00:51:02.000 So if you follow the current graph of progress, and if you do understand exponential growth, then what we're looking at in 2045 is inevitable.
00:51:10.000 Right.
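
[A quick check of the million-fold figure, counting from today; the start year is our assumption, while the 2045 target and the million-fold multiplier are Kurzweil's.]

    # How fast must capability double for a million-fold gain by 2045?
    # Start year is an assumption.
    import math

    fold, start, target = 1e6, 2024, 2045
    doublings = math.log2(fold)                  # ~19.9 doublings
    per_doubling = (target - start) / doublings  # years per doubling

    print(f"{doublings:.1f} doublings, one every {per_doubling:.2f} years")
    # ~1.05 years per doubling, in the ballpark of the chart's ~1.4-year pace
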
00:51:12.000 Does that concern you at all?
00:51:14.000 Are you excited about it?
00:51:16.000 Do you think it's just a thing that is happening and you're a part of it and you're experiencing it?
00:51:21.000 I think we'll be enthusiastic about it.
00:51:29.000 I mean, imagine if you were to ask a mouse, would you like to actually be as intelligent as a human?
00:51:37.000 Right.
00:51:41.000 It's hard to know what people would say, but generally that's a positive thing.
00:51:45.000 Generally, yeah.
00:51:47.000 And that's what it's going to be like.
00:51:49.000 We're going to be that much smarter.
00:51:51.000 And once we're there, is someone going to say, no, I don't really like this.
00:51:56.000 I want to be stupid like human beings used to be.
00:52:02.000 Nobody's really going to say that.
00:52:05.000 Do human beings now say, gee, I'm really too smart.
00:52:08.000 I'd really like to be like a mouse.
00:52:11.000 Not necessarily, but what people do say is that technology is too invasive and that it's too much a part of my life and I'd like to sort of have a bit of an electronic vacation and separate from it.
00:52:24.000 And there's a lot of people that I know that have gone to...
00:52:26.000 But nobody does that.
00:52:28.000 I mean nobody becomes stupid like we used to be when we were mice.
00:52:35.000 Right, but I'm not saying stupid.
00:52:36.000 I'm saying some people just like being a human the way humans are now.
00:52:40.000 Because one of the complications that comes with the integration of technology is what we're seeing now with people.
00:52:45.000 Massive increases in anxiety from social media use, being manipulated by algorithms, the effect that it has on culture, misinformation and disinformation and propaganda.
00:52:56.000 There's so many different factors that are at play now that make people more anxious and more depressed statistically than ever.
00:53:04.000 I'm not sure we had more anxiety today than we used to have.
00:53:12.000 Well, we certainly had more when the Mongols were invading.
00:53:15.000 We certainly had more anxiety when we were worried constantly about war.
00:53:19.000 But I think people have a pretty heightened level of social anxiety.
00:53:22.000 Well, take war.
00:53:23.000 I mean, 80 years ago, we had 100 million people die in Europe and Asia from World War II. We're very concerned about wars today, and they're terrible.
00:53:37.000 But we're not losing millions of people.
00:53:41.000 Right.
00:53:42.000 But we could.
00:53:43.000 We most certainly could.
00:53:45.000 With what's going on with Israel and Gaza, what's going on with Ukraine and Russia, it could easily escalate.
00:53:52.000 But it's thousands of people.
00:53:54.000 It's not millions of people.
00:53:56.000 For now.
00:53:57.000 Yeah.
00:53:58.000 But if it escalates to a hot war where it's involving the entire world.
00:54:03.000 What would really cause a tremendous amount of danger is something that's not really artificial intelligence.
00:54:10.000 It was invented when I was a child, which is atomic weapons.
00:54:15.000 Right.
00:54:15.000 I remember when I was like five or six, we'd actually go outside, put our hands behind our backs to protect us from a nuclear war.
00:54:26.000 Yeah, drills.
00:54:27.000 And it seemed to work.
00:54:29.000 We're still here, so...
00:54:31.000 Do you remember those things they tell kids to get under the desk?
00:54:34.000 Yes, that's right.
00:54:36.000 We went under the desk and put our...
00:54:37.000 Which is hilarious, as if a desk is going to protect you from a nuclear bomb.
00:54:42.000 Right, but that's not AI. Right.
00:54:45.000 No, but AI applied to nuclear weapons makes them significantly more dangerous.
00:54:50.000 And isn't one of the problems with AI is that AI will find a solution to a problem.
00:54:55.000 Say if you have AI running your military and AI says, what do you want me to do?
00:55:01.000 And you say, well, I'd like to take over Taiwan.
00:55:03.000 And AI says, well, this is how to do it.
00:55:06.000 And it just implements it with no morals, no thought of...
00:55:11.000 Any sort of diplomacy or just force?
00:55:18.000 Right.
00:55:19.000 It hasn't happened yet because we do have people in charge, and the people are enhanced with AI, and AI can actually help us to avoid that kind of problem.
00:55:29.000 By thinking through the implications of different solutions.
00:55:35.000 Sure, if it has some sort of autonomy.
00:55:38.000 But if we get to the point where one superpower has AI, artificial general intelligence, and the other one doesn't, how much of a significant advantage would that be?
00:55:50.000 I mean, I do think there are problems.
00:55:53.000 Basically, there's problems with intelligence.
00:55:56.000 And we like to say stupid.
00:56:03.000 But actually, it's better to be intelligent.
00:56:08.000 I believe it's better to have greater intelligence.
00:56:11.000 Overall, sure.
00:56:12.000 Right.
00:56:12.000 But my question was, if there's a race to achieve AGI, how close is this race?
00:56:20.000 Is it neck and neck?
00:56:22.000 Who's at the lead?
00:56:23.000 And how much capital is being put into these companies that are at the lead?
00:56:28.000 And whoever achieves it first, If that is under the control of a government, it's completely dependent upon what are the morals and ethics of that government?
00:56:38.000 What is the constitution?
00:56:39.000 What if it happens in China?
00:56:41.000 What if it happens in Russia?
00:56:42.000 What if it happens somewhere other than the United States?
00:56:44.000 And even if it does happen in the United States, who's controlling it?
00:56:49.000 I mean, the knowledge of how to create these things is pretty widespread.
00:56:54.000 It's not like somebody can just capitalize on a way to do it and nobody else understands it.
00:57:03.000 The knowledge of how to create a large language model or how to create the type of chips that would enable you to create this is actually pretty widespread.
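[Editor's note: the point that this knowledge is widespread holds down to the core math. The scaled dot-product attention at the heart of every large language model is publicly documented and fits in a few lines; a minimal NumPy sketch, purely illustrative and not any particular lab's code:]

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each output position is a weighted
    # average of the values V, weighted by how well its query matches
    # every key. This is the core operation of a transformer LLM.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

# Toy self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```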
00:57:19.000 So do you think essentially the competition is pretty even in all the countries currently?
00:57:24.000 And there's also probably espionage.
00:57:26.000 There's espionage where they're stealing information and sharing information and selling information.
00:57:34.000 In terms of differences, the United States actually has superior AI compared to other places.
00:57:46.000 Well, that's good for us.
00:57:51.000 I mean, we're actually way ahead of China, I would say.
00:57:55.000 Right, but China has a way of figuring out what we're doing and copying it.
00:57:59.000 We're pretty good at that.
00:58:01.000 They have been, yeah.
00:58:02.000 Yeah.
00:58:04.000 So do you have any concern whatsoever in the idea that AI gets in the hands of the wrong people?
00:58:11.000 So when it first gets implemented, that's the big problem, is before it exists, before artificial general intelligence really exists, it doesn't, and then it does, and who has it?
00:58:22.000 And then once it does, can that AGI stop other people from getting it?
00:58:26.000 Can you program it to make sure?
00:58:30.000 Can you sabotage grids?
00:58:31.000 Can you do whatever you can to take down the internet in these opposing places?
00:58:35.000 Could you inject their computations with viruses?
00:58:39.000 What could you do to stop other people from getting to where you're at if you have an infinitely superior intelligence?
00:58:46.000 First.
00:58:47.000 If that's what your goal is, then yes, you could do that.
00:58:52.000 Are you worried about that at all?
00:58:53.000 Yes, I worry about it.
00:58:55.000 What is your main worry when you worry about the implementation of artificial intelligence?
00:58:59.000 What's your main worry?
00:59:09.000 I mean, I'm worried if people who have a destructive...
00:59:17.000 idea of how to use these capabilities get into control.
00:59:23.000 Right.
00:59:26.000 And that could happen.
00:59:28.000 And I've got a chapter in the book about perils that are like what we're talking about.
00:59:37.000 And what do you think that could look like if the wrong people got a hold of this technology?
00:59:43.000 Well, you know, if you look at actually who controls atomic weapons, which is not AI, it's some of the worst people in the world.
00:59:53.000 Right.
00:59:56.000 And if you were to ask people right after we used two atomic weapons within a week, 80 years ago, what's the likelihood that we're going to go another 80 years and not have that happen again?
01:00:11.000 Everybody would say zero.
01:00:13.000 But it actually has happened.
01:00:16.000 Shockingly.
01:00:17.000 Yeah.
01:00:18.000 Yeah.
01:00:19.000 And I think there's actually some message there.
01:00:24.000 Mutual assured destruction.
01:00:26.000 But the thing is, would artificial general intelligence...
01:00:30.000 But that has not happened.
01:00:31.000 Right.
01:00:52.000 It has not happened yet.
01:00:53.000 And if human beings were capable of doing it because no one else had it, if artificial general intelligence reaches that sentient level and is in control of the wrong people, what's to stop them from doing it?
01:01:08.000 There's no mutually assured destruction if you're the one who's got it.
01:01:11.000 You're the only one who's got it.
01:01:15.000 My concern is that whoever gets it could possibly stop it from being spread everywhere else and control it completely.
01:01:23.000 And then you're looking at a completely dystopian world.
01:01:27.000 Right.
01:01:28.000 So that's, if you ask me what I'm concerned about, it's along those lines.
01:01:32.000 Along those lines, yeah.
01:01:33.000 Because that's what I always want to get out of you guys.
01:01:35.000 Because there's so many people that are rightfully so, so high on this technology and the possibilities for enhancing our lives.
01:01:42.000 But the concern that a lot of people have is that at what cost and what are we signing up for?
01:01:49.000 Right.
01:01:50.000 But, I mean, if we want to, for example, live indefinitely, this is what we need to do.
01:01:56.000 We can't do...
01:01:57.000 What if you're denying yourself heaven?
01:02:00.000 You ever thought of that possibility?
01:02:02.000 I know that's a ridiculous abstract concept, but if heaven is real, if the idea of the afterlife is real, and it's the next level of existence, and you're constantly going through these cycles of life, what if you're stepping in and artificially denying that?
01:02:17.000 It's hard to imagine.
01:02:19.000 It is hard to imagine, but so is life.
01:02:21.000 So is the universe itself.
01:02:22.000 So is the Big Bang.
01:02:23.000 So is the black holes.
01:02:25.000 My father died when I was 22, so that's more than 50, 60 years ago.
01:02:35.000 And...
01:02:38.000 And he was actually a great musician and he created fantastic music, but he hasn't done that since he died.
01:02:50.000 And there's nothing that exists that is at all creative
01:02:59.000 based on him. We have his memories.
01:03:02.000 I actually created a large language model that represented him.
01:03:06.000 I can actually talk to him.
01:03:07.000 You do that now?
01:03:09.000 Yeah, it's in the book.
01:03:12.000 When you do that, have you thought about implementing some sort of a Sora-type deal where you're talking to him?
01:03:21.000 Well, you can do that now with language.
01:03:23.000 Right, but I mean physically, like looking at him like you're on a Zoom call with him.
01:03:29.000 That's a little bit in the future to be able to actually capture the way he looks.
01:03:34.000 But that's also feasible.
01:03:37.000 It seems pretty feasible.
01:03:39.000 Certainly it could be something representative of what he looks like, based on photographs that you have, right?
01:03:44.000 So things like that are a reason to continue, so that we can create that and create our own ability to continue to exist.
01:03:57.000 You talk to people and they say, well, I don't really want to live past 90 or whatever, 100. But in my mind, if you don't exist, there's nothing for you to experience.
01:04:15.000 That's true, in this dimension.
01:04:17.000 My thought on that, people saying that I don't want to live past 90, it's like, okay, are you alive now?
01:04:23.000 Do you like being alive now?
01:04:24.000 What's the difference between now and 90?
01:04:26.000 Is it just a number or is it a deterioration of your physical body?
01:04:29.000 And how much effort have you put into mitigating the deterioration of your natural body so that you can enjoy life now?
01:04:38.000 Exactly.
01:04:38.000 And we've actually seen who would want to take their lives.
01:04:42.000 People do take their lives.
01:04:45.000 If they are experiencing something that's miserable, if they're suffering physically, emotionally, mentally, spiritually, and they just cannot stand the way life is carrying on,
01:05:03.000 then they want to take their lives.
01:05:05.000 Otherwise, people don't.
01:05:08.000 If they're enjoying their lives, they continue.
01:05:12.000 And people say, I don't want to live past 100. But then when they get to be 99.9, they don't want to disappear unless they're suffering.
01:05:26.000 Unless they're suffering.
01:05:27.000 That's what's interesting about the positive aspects of AI. Once we can manipulate human neurochemistry to the point where we figure out what is causing depression?
01:05:38.000 What is causing anxiety?
01:05:39.000 What is causing schizophrenia in a lot of these people?
01:05:43.000 And we definitely had that before.
01:05:44.000 We didn't have the terms.
01:05:46.000 We didn't understand schizophrenia, but people definitely had it.
01:05:48.000 For sure.
01:05:49.000 But what if we get to a point where we can mitigate that with technology?
01:05:52.000 Where we can say, this is what's going on in the human brain.
01:05:55.000 That's why we're continuing.
01:05:56.000 Right.
01:05:57.000 I was saying, that's a good thing.
01:05:59.000 That's a positive aspect of this technology.
01:06:03.000 Profoundly.
01:06:03.000 Profoundly.
01:06:04.000 Think about how many people do take their lives and with this technology would not just live happily but also be productive and also contribute to whatever society is doing.
01:06:15.000 That's why we're carrying on with this.
01:06:19.000 But in order to do that, we do have to overcome some of the problems that you've articulated.
01:06:24.000 Yeah.
01:06:26.000 I think what a lot of people are terrified of is that these people that are creating this technology, there's oversight, but it's oversight by people that don't necessarily understand it the way the people that are creating it do.
01:06:40.000 And they don't know what guardrails are in place.
01:06:43.000 How safe is this?
01:06:44.000 Especially when it's implemented with some sort of weapons technology, you know, or some sort of a military application, especially a military application that can be insanely profitable.
01:06:56.000 And the motivations behind utilizing that are that profit.
01:07:01.000 And then we do horrible things and somehow or another justify it.
01:07:05.000 I mean, I think democracy is actually an important issue here because democratic nations tend not to go to war with each other.
01:07:15.000 And, I mean, you look at the way we're...
01:07:23.000 handling military technology, if everybody was a democracy, I think there'd be much less war.
01:07:31.000 As long as it's a legitimate democracy that's not controlled by money.
01:07:35.000 Right.
01:07:36.000 As long as it's a legitimate democracy that's not controlled by the military-industrial complex or the pharmaceutical industry or whoever puts the people that are in elected places, who puts them in there?
01:07:47.000 How do they get funded?
01:07:48.000 And what do they represent once they get in there?
01:07:51.000 Are they there for the will of the people?
01:07:52.000 Are they there for their own career?
01:07:54.000 Do they bypass the safety and the future of the people for their own personal gain, which we've seen politicians do?
01:08:01.000 There's certain problems with every system that involves human beings.
01:08:06.000 This is another thing that technology may be able to do.
01:08:09.000 One of the things, if you think about the worst attributes of humans, whether it's war, crime, some of the horrible things that human beings are capable of.
01:08:23.000 Imagine that technology can find what causes those thoughts and behaviors in human beings and mitigate them.
01:08:31.000 You know, I've joked around about this, but if we came up with something that would elevate dopamine just 300% worldwide.
01:08:38.000 There would be no more war.
01:08:39.000 It'd be over.
01:08:40.000 Everybody would be loving everybody.
01:08:42.000 We'd be interacting with each other.
01:08:44.000 Well, that's the point of doing this.
01:08:45.000 But there would also be no sad songs.
01:08:49.000 You need some blues in your life.
01:08:51.000 You need a little bit of that too.
01:08:53.000 Or do we?
01:08:54.000 Maybe we don't.
01:08:55.000 Maybe that's just a byproduct of our monkey minds and that one day we'll surpass that and get to this point of enlightenment.
01:09:05.000 Enlightenment seems possible without technological innovation, but maybe not.
01:09:12.000 I've never really met a truly enlightened person.
01:09:14.000 I've met some people that are pretty close.
01:09:16.000 But if you could get there with technology, if technology just completely elevated the human consciousness to the point where all of our conflicts become erased.
01:09:25.000 Just for starters, if you could actually live longer... Quite aside from the motivations of people, most people die not because of people's motivations, but because our bodies just won't last that long.
01:09:43.000 And a lot of people say, you know, I don't want to live longer, which makes no sense to me.
01:09:50.000 Why would you want to disappear and not be able to have any kind of experience?
01:09:57.000 Well, I think some people don't think you're disappearing.
01:09:59.000 I mean, there is a long-held thought in many cultures that this life is but one step.
01:10:09.000 And that there is an afterlife, and maybe that exists to comfort us because we deal with existential angst and the reality of our own inevitable demise; or maybe it's a function of consciousness being something that we don't truly understand, and what you are is a soul contained in a body, and we have a very primitive understanding of the existence of life itself and of the existence of everything.
01:10:37.000 Well, I guess that makes sense.
01:10:41.000 But I don't really accept it.
01:10:43.000 I mean if you— Well, there's no evidence, right?
01:10:44.000 Yeah.
01:10:45.000 Right.
01:10:46.000 But is it there's no evidence because we're not capable of determining it yet and understanding it?
01:10:54.000 Or is it just because it doesn't exist?
01:10:57.000 That's the real question.
01:10:59.000 Is this it?
01:11:00.000 Is this everything?
01:11:02.000 Or is this merely a stage?
01:11:04.000 And are we monkeying with that stage by interfering with the process of life and death?
01:11:11.000 Well, it makes sense, but I don't really see the evidence for that.
01:11:16.000 I could see from your perspective.
01:11:19.000 I don't see the evidence of it either, but it's a concept that is not – look, just when you start talking to string theorists and they start talking about things existing and not existing at the same time, particles in superposition, you're talking about magic.
01:11:35.000 You're talking about something that's impossible to wrap your head around.
01:11:40.000 Even just the structure of an atom.
01:11:42.000 Like, what?
01:11:42.000 What's that?
01:11:43.000 What's in there?
01:11:44.000 Nothing?
01:11:45.000 How much of it is space?
01:11:47.000 The entire existence of everything in the universe seems preposterous.
01:11:53.000 But it's all real.
01:11:54.000 And we only have a limited grasp of understanding of what this is really all about and what processes are really in place.
01:12:03.000 Right.
01:12:03.000 But if you look at people... if somebody gets a disease and it's kind of known they can only live like another six months, people are not happy with that.
01:12:16.000 No.
01:12:16.000 Well, they're scared.
01:12:18.000 They're scared to die.
01:12:18.000 It's a natural human instinct.
01:12:20.000 It's what kept us alive for all these hundreds of millions of years.
01:12:23.000 Yes, but very few people would be happy with that.
01:12:25.000 And if you then had something, gee, we have this new device, you could take this, and you won't die, almost everybody would do that.
01:12:36.000 Sure.
01:12:37.000 But would they appreciate life if they knew it had no end?
01:12:41.000 Would it be the same thing?
01:12:42.000 Or would it be like a lottery winner just goes nuts and spends all their money and loses their marbles because they can't believe they can't die?
01:12:51.000 Well, first of all, it's not guaranteed that you live forever.
01:12:55.000 Sure, you can get in an accident.
01:12:57.000 Something can happen.
01:12:58.000 You can get injured.
01:12:59.000 But if we get to a point where you have automated cars that significantly reduce the amount of automobile accidents...
01:13:06.000 Well, also, we can back up everything, everything in our physical body as well as...
01:13:11.000 How far away are we from that?
01:13:13.000 That idea of...
01:13:14.000 I mean, we don't really truly understand what consciousness is, correct?
01:13:20.000 Right.
01:13:20.000 So how would we be able to manipulate it or duplicate it to the point where you're putting it inside of some kind of a computation device?
01:13:30.000 Well, we know how to create computation that matches what our brain does.
01:13:44.000 That's what we're doing with these large language models.
01:13:46.000 Right.
01:13:47.000 And we're actually very close now to what our brain can do with these large language models, and we'll be there like within a year.
01:13:58.000 And we can back up the electronic version, and we'll get to the point where we can back up what our...
01:14:11.000 brain normally does.
01:14:13.000 So we'll be able to actually back that up as well.
01:14:16.000 We'll be able to detect what it is and back that up just like our computers.
01:14:20.000 So we'll create it in the form of an artificial version of everything that it is to be a human being.
01:14:27.000 Right, exactly.
01:14:27.000 In terms of emotions, love, excitement.
01:14:30.000 And that's going to happen over the next 20 years.
01:14:33.000 It's not a thousand years.
01:14:36.000 But will that be a person?
01:14:38.000 Or will it be some sort of a zombie?
01:14:41.000 What motivations will it have?
01:14:44.000 If you can take human consciousness and duplicate it, much like you could duplicate your phone, and you make this new thing, what does that thing feel like?
01:14:52.000 Does that thing live in hell?
01:14:53.000 What is that experience like for that thing?
01:14:56.000 What about large language models?
01:14:59.000 Do they really exist?
01:15:00.000 I mean, they can talk.
01:15:03.000 They certainly do, but would you want to be one?
01:15:06.000 Are we different than that?
01:15:08.000 Yeah, we're people.
01:15:09.000 We shake hands.
01:15:10.000 I give you a hug.
01:15:11.000 You pet my dog.
01:15:12.000 You listen to music.
01:15:14.000 We'll be able to do all of that as well.
01:15:16.000 Right, but will you want to?
01:15:17.000 Will you even care?
01:15:18.000 The thing is, like, a lot of what gives us joy in life is biological motivations.
01:15:23.000 There's human reward systems that are put in place that allow us to...
01:15:26.000 Well, it's going to be part of who we are.
01:15:28.000 Right.
01:15:28.000 It'll be just like a person, and we'll also have our physical bodies as well.
01:15:34.000 And that will also be able to be backed up.
01:15:37.000 And we'll be doing the things that we do now except we'll be able to have them continue.
01:15:43.000 So if you get hit by a car and you die, there's another Ray that just pops up.
01:15:47.000 Oh, we got the backup Ray.
01:15:49.000 And the backup Ray will have no feelings at all about having had died and come back to life.
01:15:57.000 Well, that's a question.
01:15:58.000 I mean, why wouldn't it be just like Ray is now?
01:16:04.000 Why wouldn't it?
01:16:05.000 If we figure out that biological life is essentially a kind of technology that the universe has created, and we can manipulate that to the point where we understand it, we get it, we've optimized it, and then replicate it.
01:16:23.000 Physically replicate it.
01:16:24.000 Not just replicate it in form of a computer, but an actual physical being.
01:16:29.000 Right.
01:16:30.000 Well, that's where we're headed.
01:16:32.000 Do you anticipate that people will be happy with whatever they have?
01:16:37.000 If you decide, I don't like being 5'6", I wish I was 6'6".
01:16:41.000 I don't like being a woman.
01:16:42.000 I want to be a man.
01:16:44.000 I don't want to be Asian.
01:16:46.000 I want to be, you know, whatever.
01:16:48.000 I want to be a black person.
01:16:49.000 I want to be...
01:16:50.000 We'll actually be able to do all of those things.
01:16:55.000 Simultaneously and so on.
01:16:56.000 We're not going to be limited by those kinds of happenstance.
01:17:01.000 Which is going to be very strange.
01:17:02.000 Like, what will human beings look like if you give people the ability to manipulate your physical form?
01:17:07.000 Well, we do things now that were impossible even ten years ago.
01:17:11.000 We certainly do, but we don't change races, size, sex, gender, height.
01:17:16.000 We don't radically increase your intelligence.
01:17:21.000 Like, what is that going to look like?
01:17:23.000 What kind of an interaction is it going to be between two human beings when you have a completely new form?
01:17:29.000 You know, you're much different physically than you ever were when you were alive.
01:17:33.000 You're taller, you're stronger, you're smarter, you're faster.
01:17:37.000 You're basically not really a human anymore.
01:17:40.000 You're a new thing.
01:17:42.000 I mean, we're expanding who we are.
01:17:44.000 We've already expanded who we are from, you know...
01:17:47.000 Sure.
01:17:48.000 Right.
01:17:48.000 Over the course of hundreds of thousands of years, we've gone from being Australopithecus to what we are now.
01:17:53.000 That has to do with the...
01:17:59.000 pace at which we make changes.
01:18:01.000 We can make changes now much more quickly than we could 100,000 years ago.
01:18:09.000 Right, but if we can manipulate our physical form with no limitations, are we going to have six-armed people that can fly?
01:18:18.000 What is it going to look like?
01:18:19.000 Well, do you have a problem with that?
01:18:21.000 Yeah, I would discriminate against six-armed people that can fly.
01:18:24.000 That's the one area I allow myself to give prejudice to.
01:18:27.000 Okay.
01:18:27.000 No, I'm just curious as to how much time you've spent.
01:18:32.000 Seven-armed people would be okay?
01:18:34.000 Yeah, seven-armed people is cool because it's like, you know, maybe five on one side, two on the other.
01:18:40.000 No, I'm just curious as to how much time you've spent thinking about what this could look like.
01:18:48.000 And I don't think it's going to be as simple as, you know, it's going to be Ray Kurzweil, but Ray Kurzweil as like a 30-year-old man 50 years from now.
01:18:59.000 I think it's probably going to be, you're going to be all kinds of different things.
01:19:02.000 You could be kind of whatever you want.
01:19:04.000 You could be a bird.
01:19:05.000 I mean, what's to stop?
01:19:06.000 If we can get to manipulate the physical form and we can take consciousness and put it into a physical form...
01:19:12.000 But that's a description, I think, of something that's positive rather than negative.
01:19:16.000 You could be a giant eagle.
01:19:18.000 I mean, negative is...
01:19:21.000 People that wanted to destroy things getting power.
01:19:25.000 And that is a problem.
01:19:28.000 Well, it's certainly an improvement in terms of viability.
01:19:31.000 Having seven arms and being like an eagle and so on.
01:19:37.000 And you can also change that.
01:19:39.000 Right.
01:19:40.000 So I think that's a positive aspect, and we will be able to do that kind of thing.
01:19:46.000 Sure.
01:19:47.000 If you want to look at it in a binary fashion, positive and negative, but it's also going to be insanely strange.
01:19:54.000 Like, it's not going to be as simple as there'll be people that are living in 2069... Well, it seems strange once it's first reported.
01:20:04.000 If it's been reported now for five years and people are constantly doing it, you won't find it that strange.
01:20:10.000 It'll just be life.
01:20:11.000 Yeah.
01:20:12.000 Yeah.
01:20:12.000 So that's what I'm asking.
01:20:14.000 When you think about the implementation of this technology to its fullest, what does the world look like?
01:20:20.000 What does the world look like in 2069?
01:20:27.000 I mean, the kind of things that you can imagine right now we'll be able to do.
01:20:34.000 And it might seem strange when it first happens, but when it happens for the, you know, millionth time, it won't seem that strange.
01:20:42.000 And maybe you'll like being an eagle for a few minutes.
01:20:49.000 It's certainly interesting.
01:20:51.000 It's certainly interesting.
01:20:53.000 I just wonder how much time you've spent thinking about what this world looks like with the full implementation of the kind of exponential growth of technology that would exist if we do make it to 2069... Well, I did write a book,
01:21:08.000 Danielle, and this young girl has fantastic capabilities, and no one really can figure out how she does this.
01:21:23.000 She actually takes over China at age 15, and she makes it a democracy, and then she actually becomes president of the United States at 19.
01:21:38.000 She has, of course, created a constitutional amendment so that she can become president at 19. That sounds like what a dictator would do.
01:21:52.000 Right, but unlike a dictator, she's very popular and she writes very good music.
01:21:59.000 And this is one artificial intelligence creature?
01:22:03.000 Yes.
01:22:03.000 And how was she created?
01:22:05.000 It never says that she gets these capabilities through AI. I didn't want to spell that out.
01:22:14.000 But that would be the only way that she could do this.
01:22:18.000 Right.
01:22:19.000 Unless it's some insane freak of genetics.
01:22:22.000 And she's like a very positive person.
01:22:25.000 She's very popular.
01:22:28.000 Yeah, but she's the only one that has that.
01:22:31.000 She doesn't give it to everybody, which is where it gets really weird.
01:22:35.000 You have a cell phone.
01:22:36.000 I have a cell phone.
01:22:37.000 Pretty much everybody has one now.
01:22:38.000 What happens when everybody gets the kind of technology we're discussing?
01:22:42.000 Well, it shows you the benefit that she has it, and if everybody gets it, that would be even more positive, right?
01:22:50.000 Perhaps, yeah.
01:22:52.000 I mean, that's the best way of looking at it, that we become a completely altruistic, positive, beneficial to each other society of integrated minds.
01:23:03.000 I mean, that is a benefit.
01:23:04.000 If you have more intelligence, you'd be more likely to do this.
01:23:08.000 Yes.
01:23:09.000 Yeah, for sure.
01:23:12.000 That's the benefit.
01:23:13.000 Yeah.
01:23:14.000 So we live longer and we're also smarter than making more rational decisions towards each other.
01:23:23.000 So overall, when you're looking at this, you just don't concentrate really on the negative possibilities?
01:23:30.000 Well, no.
01:23:31.000 I mean, I do focus on that as well.
01:23:34.000 But you think overall it's net positive?
01:23:37.000 Yes, it's called intelligence.
01:23:41.000 And if you have more intelligence, we'll be doing things that are more beneficial to ourselves and other people.
01:23:48.000 Do you think that the experiences that we're having right now...
01:23:51.000 I mean, like right now, we have much less crime than we did 50 years ago.
01:23:57.000 Now, if you listen to people debating presidential politics, they'll say crime is worse than it's ever been.
01:24:04.000 But if you look at the actual...
01:24:12.000 statistics, it's gone way down.
01:24:14.000 And if you actually go back like a few hundred years, crime and murder and so on was far, far higher than it is today.
01:24:23.000 It's actually pretty rare.
01:24:26.000 So the kind of additional intelligence that we've created is actually good for people, if you look at the actual data.
01:24:36.000 Sure.
01:24:37.000 If you look at Steven Pinker's work, right, and you scale it from hundreds of years ago to today, things generally seem to be moving in a better direction.
01:24:47.000 Right.
01:24:47.000 Well, Pinker didn't credit this to technology.
01:24:52.000 He just looks at the data and says it's gotten better.
01:24:57.000 What I try to do in the current book is to show how it's related to technology, and as we have more technology, we're actually moving in this direction.
01:25:06.000 So you feel it's a function of technology that we're moving in this direction?
01:25:10.000 Absolutely.
01:25:11.000 That's why.
01:25:14.000 I mean, look at the technology.
01:25:16.000 In 80 years, we've multiplied the amount of computation 20 quadrillion times.
01:25:23.000 And so we have things that didn't exist two years ago.
01:25:27.000 Right.
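[Editor's note: taking the 20-quadrillion figure at face value, a quick back-of-envelope check of the growth rate it implies over 80 years:]

\[
\text{doublings} = \log_2\!\left(2\times 10^{16}\right) = 1 + 16\log_2 10 \approx 1 + 16(3.32) \approx 54.2
\]
\[
\text{doubling time} \approx \frac{80\ \text{years}}{54.2} \approx 1.5\ \text{years}
\]

[That is a doubling roughly every 18 months, the Moore's-law-style cadence behind Kurzweil's price-performance charts.]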
01:25:30.000 When you think about the idea of life on Earth and that this is happening and that we are on this journey to 2045 to the singularity, do you consider whether or not this is happening elsewhere in the universe or whether it's already happened?
01:25:48.000 Yeah, we see no evidence
01:25:51.000 that there's any form of life, let alone intelligent life, anywhere else.
01:25:57.000 And I can say, well, we're not in touch with these other people.
01:26:01.000 It is possible.
01:26:03.000 But it seems...
01:26:05.000 I mean, given the exponential impact of this type of technology, we would be spaced out over a large period of time.
01:26:38.000 So some people that might be ahead of us could be ahead of us certainly thousands of years, even millions of years.
01:26:47.000 And so they'd be like way ahead of us.
01:26:50.000 And they'd be doing galaxy-wide engineering.
01:26:54.000 How is it that we look out there and we don't see anybody doing galaxy-wide engineering?
01:26:59.000 Maybe we don't have the capability to actually see it.
01:27:02.000 Yes, it's possible.
01:27:04.000 What is it, 13.7 billion years old or whatever it is?
01:27:08.000 But even just incidental capabilities would affect galaxies.
01:27:16.000 We would see that somehow.
01:27:18.000 Would we if we were at the peak?
01:27:20.000 If there is intelligent life in the universe, some form of that intelligent life has to be the most advanced.
01:27:29.000 And what if we are underestimating our position in the universe?
01:27:33.000 Well, that's what I'm saying.
01:27:35.000 But maybe there's something that's like 10 years.
01:27:37.000 Maybe there's an industrial age.
01:27:39.000 I think there's a good argument that we are ahead of other people.
01:27:44.000 But we don't have the capability of observing the goings-on of a planet 5,000 light-years away.
01:27:50.000 We can't see into their atmosphere.
01:27:53.000 We can't, like, look at high-resolution video of activity on that planet.
01:27:58.000 Yeah, but if they were doing galaxy-wide engineering, I think we would notice that.
01:28:02.000 If they were more advanced than us, maybe we would.
01:28:04.000 But what if they're not?
01:28:05.000 What if they're at the level that we're at?
01:28:07.000 Well, that's what I'm saying.
01:28:08.000 What if we're at the peak?
01:28:10.000 I think it's an argument that we aren't at the peak.
01:28:13.000 What if it gets to the point where artificial intelligence gets implemented and then that becomes the primary form of life and it doesn't have the desire to do anything in terms of like galactic engineering?
01:28:28.000 But even just incidental things would affect whole galaxies.
01:28:34.000 Like what things?
01:28:35.000 Like we're doing?
01:28:36.000 Are we affecting the whole galaxy?
01:28:37.000 No, not yet.
01:28:38.000 Right, but what if it's like us, but it gets to the point where it becomes artificial intelligence, and then it doesn't have emotions, it doesn't have desires, it doesn't have ambitions, so why would it decide to expand?
01:28:48.000 Why would it not have those things?
01:28:49.000 Well, we'd have to program it into it, but it would probably decide that that's foolish and that those things have caused all these problems, all the problems in the human race.
01:28:57.000 What's our number one issue?
01:28:59.000 War.
01:28:59.000 What is war caused by?
01:29:02.000 It's caused by ideologies.
01:29:04.000 It's caused by acquisition of resources, theft of resources, violence.
01:29:09.000 War is not the primary thing that we are motivated by.
01:29:13.000 It's not the primary thing we're motivated by, but it's existed in every single step of the way of human existence.
01:29:21.000 But it's actually getting better.
01:29:23.000 I mean, just look at the effect of war.
01:29:25.000 Sure.
01:29:26.000 I mean, we have a couple of wars going on.
01:29:28.000 They're not killing millions of people like they used to.
01:29:31.000 Right.
01:29:31.000 Right.
01:29:32.000 My point is that if artificial intelligence recognizes that the problem with human beings is these emotions, and a lot of it is fueled by these desires, like the desire to expand, the desire to acquire things,
01:29:48.000 the desire to... Well, the emotion is positive.
01:29:51.000 I mean, music and other things.
01:29:53.000 To us.
01:29:54.000 To us.
01:29:55.000 But if it gets to the point where artificial intelligence is no longer stimulated by mere human creations, creativity, all these different things, why would it even have the ambition to do any sort of galaxy-wide engineering?
01:30:10.000 Why would it want to?
01:30:14.000 Because it's based on us.
01:30:16.000 It is based on us until it decides it's not based on us anymore.
01:30:19.000 That's my point.
01:30:20.000 If it realizes that, like if we're based on a very violent chimpanzee, and we say, you know what, there's a lot of what we are because of our genetics that really are a problem.
01:30:30.000 And this is what's causing all of our violence, all of our crime, all of our war.
01:30:35.000 If we just step in and put a stop to all that, will we also put a stop to our ambition?
01:30:42.000 I would maintain that we're actually moving away from that.
01:30:45.000 We are moving away from that.
01:30:47.000 But that's just natural, right?
01:30:49.000 That's natural with our understanding and our mitigations of these social problems.
01:30:53.000 Right.
01:30:53.000 So if you expand that even more, we'll be even more in that direction.
01:30:58.000 As long as we're still we.
01:30:59.000 But as soon as you become something different, why would it even have the desire to expand?
01:31:03.000 If it was infinitely intelligent, why would it even want to physically go anywhere?
01:31:09.000 Why would it want to?
01:31:10.000 What's the reason for our motivation to expand?
01:31:14.000 What is it?
01:31:15.000 It's human.
01:31:16.000 The same humans that were tribal creatures that roamed, the same humans that stole resources from neighboring villages.
01:31:23.000 This is our genes, right?
01:31:24.000 This is what made us, that got us to this point.
01:31:27.000 If we create a sentient artificial intelligence that's far superior to us, and it can create its own version of artificial intelligence, the first thing it's going to engineer out is all these stupid emotions that get us in trouble.
01:31:40.000 If it just can create happiness and joy from programming, why would it create happiness and joy through the acquisition of other people's creativity, art, music, all those things?
01:31:55.000 And then why would it have any ambition at all to travel?
01:31:58.000 Why would it want to go anywhere?
01:32:00.000 Well, I mean, it's an interesting philosophical problem.
01:32:04.000 Right.
01:32:04.000 It is a problem because a lot of what we are and the things that we create is because of all these flaws that you would say.
01:32:12.000 If you were programming us, you'd say, well, what is the cause of all these issues that plague the human race?
01:32:17.000 I wouldn't necessarily say that there are flaws.
01:32:18.000 Murder is a flaw.
01:32:20.000 Isn't it a flaw?
01:32:21.000 But that's way down.
01:32:23.000 as technology moves ahead.
01:32:26.000 If it happens to you, it's a flaw.
01:32:28.000 Crime is a flaw.
01:32:30.000 Theft is a flaw.
01:32:32.000 Those are flaws.
01:32:33.000 If we could engineer those out, what would be the way that we do it?
01:32:38.000 Well, one of the things we do, we get rid of what it is to be a person.
01:32:41.000 Because what it is is corrupt people that go down these terrible paths and cause harm to other people, right?
01:32:48.000 You're taking a step there that our ability to feel emotion and so on is a flaw.
01:32:54.000 No, I'm not.
01:32:55.000 I'm saying that it's the root of these flaws.
01:32:58.000 That greed and envy and lust and anger are the root.
01:33:03.000 Would you like to go to the bathroom?
01:33:05.000 Yeah.
01:33:05.000 Okay.
01:33:06.000 Go to the bathroom.
01:33:07.000 We'll come back.
01:33:08.000 We'll talk about flaws.
01:33:09.000 And we're back.
01:33:10.000 Provide an answer to that.
01:33:13.000 I mean, as I think about myself now, it's when I have emotions that are positive emotions.
01:33:24.000 Like really getting off on a song or a picture or some new art form that didn't exist in the past.
01:33:33.000 That's positive.
01:33:35.000 That's what I live for.
01:33:38.000 Relating to another person in a way that's intimate.
01:33:46.000 So, I mean...
01:33:49.000 The idea, if we're actually more intelligent, would be not to get rid of that, but to actually enjoy that to a greater extent.
01:33:59.000 Hopefully.
01:34:01.000 What I'm saying is that...
01:34:03.000 Yes, there are things that can go wrong.
01:34:05.000 It might lead us in an incorrect direction.
01:34:09.000 I'm not even saying it's wrong.
01:34:12.000 I'm not saying that it's going to go wrong.
01:34:14.000 I'm saying that if you wanted to program away some of the issues that human beings have in terms of what keeps us from working with each other universally all over the globe, what keeps us from these things?
01:34:30.000 We're actually doing that more than we used to do.
01:34:33.000 Sure.
01:34:33.000 But also not.
01:34:34.000 You know, we're also like massive inequality.
01:34:37.000 You've got people in the Congo mining cobalt with sticks that powers your cell phones.
01:34:41.000 There's a lot of real problems with society today.
01:34:43.000 Right.
01:34:43.000 But there used to be even more of that.
01:34:45.000 There's a lot of that, though.
01:34:46.000 There's a lot of that.
01:34:48.000 And if you looked at greed and war and crime and all the problems with human beings, a lot of it has to do with these biological instincts, these instincts to control things, these built-in genetic codes that we have that are from our ancestors.
01:35:06.000 That's because we haven't gotten there yet.
01:35:09.000 Right.
01:35:09.000 But when we get there, you think we will be a better version of a human being and we will be able to experience all the good, the positive aspects of being a human being?
01:35:23.000 The art and the creativity and all these different things?
01:35:26.000 I hope so and actually if you look at what human beings have done already, we're moving in that direction.
01:35:35.000 Right.
01:35:37.000 It may not seem that way.
01:35:38.000 No, it does seem that way to me.
01:35:40.000 It does overall.
01:35:41.000 But it's also like if you look at a graph of temperatures, it goes up, it goes down, it goes up, it goes down.
01:35:48.000 But it's moving in a general direction.
01:35:50.000 We are moving in a generally positive direction.
01:35:52.000 So that's why we want to continue moving in this same direction.
01:35:57.000 Yeah, I don't think that we're...
01:35:58.000 But it's not a guarantee.
01:35:59.000 I mean, you can describe things that would...
01:36:04.000 be horrible, and it's feasible.
01:36:07.000 Yeah.
01:36:08.000 It could be the end of the human race, right?
01:36:12.000 Or it could be the beginning of the next race, of this new thing.
01:36:16.000 Well, I mean, when I was born, we created nuclear weapons, and very soon we had hydrogen weapons, and we have enough hydrogen weapons to wipe out all humanity.
01:36:30.000 We still have that.
01:36:33.000 That didn't exist like 100 years ago.
01:36:37.000 Well, it did exist 80 years ago.
01:36:39.000 Yeah.
01:36:41.000 So that is something that concerns me.
01:36:48.000 And you could do the same thing with artificial intelligence.
01:36:51.000 It could also create something that would be very negative.
01:36:55.000 But what I'm getting at is like, what do you think life looks like if it's engineered?
01:37:00.000 What do you think human life looks like if it's engineered by a far superior intelligence?
01:37:06.000 And what would it change about what it means to be a person?
01:37:13.000 I mean, first of all, we would base it on what human beings are already, so we'd become better versions of ourselves.
01:37:23.000 For example, we'd be able to overcome life-threatening diseases, and we're actually working on that, and that's going to go into high gear very soon.
01:37:38.000 Yes, but that's still being a human being.
01:37:41.000 If you're implementing large-scale artificial intelligence, you're essentially a superhuman.
01:37:52.000 You're a different thing.
01:37:53.000 You're not what we are.
01:37:56.000 If you have the computational power— Well, if you're superhuman, you have the human being as part of it.
01:38:01.000 For now.
01:38:02.000 But this is the thing.
01:38:03.000 If you're engineering this artificial intelligence and you're engineering this with essentially like a superior life form— Well, you're making
01:38:34.000 certain assumptions about what we'll create.
01:38:38.000 No, I'm just making an assumption.
01:38:41.000 I mean, in my mind, we would want to create better music and better art and better relationships.
01:38:51.000 Well, relationships should all be perfect eventually if we keep going in this general direction.
01:38:56.000 Well, it's not perfect.
01:38:58.000 I mean...
01:38:58.000 But if you get artificial intelligence, we're all reading each other's minds and everyone's working towards the same goal.
01:39:04.000 Well, no, you can't read each other's minds.
01:39:06.000 I mean...
01:39:06.000 Ever?
01:39:08.000 Yes, we can create privacy that's virtually unbreakable, and you could keep the privacy to yourselves.
01:39:15.000 But can you do that as technology scales upward, if it continues to move?
01:39:19.000 I mean, it's difficult.
01:39:20.000 Like, your phone.
01:39:21.000 Like, anyone can listen to you on your phone.
01:39:23.000 I mean, anyone who has a significant technology.
01:39:25.000 No, actually, it has pretty good technology already.
01:39:28.000 You can't really read someone else's phone.
01:39:31.000 You definitely could.
01:39:32.000 Yeah, if you have Pegasus, you could hack into your phone easily.
01:39:35.000 Not hard at all.
01:39:36.000 The new software that they have, all they need is your phone number.
01:39:39.000 All they need is your phone number, and they can look at every text message you send, every email you send, they can look at your camera, they can turn on your microphone.
01:39:47.000 Easy.
01:39:48.000 We have ways of keeping total privacy, and if it's not built into your phone now, it will be.
01:39:54.000 Right, but it's definitely not built into your phone now.
01:39:56.000 The security people that really understand the capabilities of intelligence agencies, they 100% can listen to your phone.
01:40:04.000 100% can turn on your camera.
01:40:06.000 100% can record your voice.
01:40:09.000 Yes and no.
01:40:10.000 I mean, we have an ability to keep total privacy in a device.
01:40:17.000 But from who?
01:40:18.000 You can keep privacy from me because I don't have access to your device.
01:40:21.000 But if I was working for an intelligence agency and I had access to a Pegasus program, I am in your device.
01:40:29.000 I've talked to people...
01:40:30.000 Only because it's not perfect.
01:40:32.000 We can actually build much better privacy than exists today.
01:40:37.000 But the privacy that we have today is far less than the privacy that we had before we had phones.
01:40:44.000 I don't really quite agree with that.
01:40:47.000 How so?
01:40:48.000 If you didn't have a phone, okay, and you were at home having a conversation, a sensitive conversation about maybe you didn't pay as much taxes as you should, there's no way anybody would hear that.
01:40:58.000 But now your phone hears that.
01:41:00.000 If you have an Alexa in your home, your Alexa hears you say that.
01:41:03.000 People have been charged with crimes because Alexa heard them committing murder.
01:41:12.000 We actually know how to create perfect privacy in your phone.
01:41:17.000 And if your phone doesn't have that, that's just an imperfection in the way we're building these things now.
01:41:24.000 But it's not just an imperfection.
01:41:25.000 It's sort of built into the program itself, because that's what fuels the algorithm, is that it has access to all of your data.
01:41:32.000 It has access to all of what you're interested in, what you like, what you don't like.
01:41:37.000 You can't opt out of it, especially you.
01:41:38.000 You've got a Google phone.
01:41:39.000 That thing is just a net scooping up information.
01:41:45.000 We know how to build perfect privacy.
01:42:02.000 How do we do it?
01:42:02.000 I mean, if it's not built into your phone now, it should be.
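[Editor's note: the "we know how" here presumably refers to end-to-end encryption, where only the endpoints ever hold the key, so whoever relays the message learns nothing. A minimal sketch of the principle using Python's widely used cryptography package; real messengers add key exchange, authentication, and forward secrecy on top of primitives like this:]

```python
# End-to-end encryption in miniature: only holders of `key` can read the
# message, so a carrier or platform relaying `token` sees only ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret held by the two endpoints
cipher = Fernet(key)

token = cipher.encrypt(b"a sensitive conversation")  # what the network sees
print(token)                                         # opaque bytes

print(Fernet(key).decrypt(token))  # b'a sensitive conversation'
```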
01:42:07.000 Unless they don't want it to be built in there because there's an actual business model in it not being built in there.
01:42:13.000 Okay.
01:42:13.000 But it can be done and if people want that, it will happen.
01:42:20.000 But you recognize the financial incentive in not doing that, right?
01:42:23.000 Because that's what... a company like Google, for instance, that's where they make the majority of their money, from data, or a lot of their money, I should say.
01:42:33.000 Well, I mean, there's actually a lot of effort that goes into keeping what's on your phone private.
01:42:45.000 It's not that easy.
01:42:46.000 Private from some people, but not really private.
01:42:50.000 It's only private until they want to listen.
01:42:53.000 And now the capability of listening to your phone is super easy.
01:42:58.000 Not really.
01:42:59.000 No?
01:43:00.000 With the Pegasus program, it's very easy.
01:43:04.000 Well, that has to do with imperfections in the way phones are created.
01:43:07.000 Right, but I think it's a feature.
01:43:10.000 I think part of the feature is that they want as much data from you and knowing about what you're doing, what you're talking about.
01:43:16.000 Have you ever had a conversation with someone and then you see an ad for that thing on Google?
01:43:24.000 It happens.
01:43:26.000 Yes, but...
01:43:27.000 So something's going on where it's listening to your conversations.
01:43:32.000 It's picking up on key words.
01:43:34.000 It's not picking up on everything.
01:43:36.000 Not yet.
01:43:37.000 Well, it's not unless it wants to.
01:43:38.000 Like I said, if they're using a program, an intelligence program, to gather information from your phone, it is.
01:43:44.000 And you're basically, you got a little spy that you carry around with you everywhere you go.
01:43:49.000 Unless you're using, I mean, there's...
01:43:51.000 I mean, if you think that's a major issue, we could build phones that are impossible to spy on.
01:44:00.000 Maybe.
01:44:02.000 There are some phones that run, like, GrapheneOS.
01:44:06.000 Do you know about that?
01:44:07.000 Do you know about people that take a Google phone and they put a different Linux-based operating system on it?
01:44:13.000 It makes it much more difficult to track and there's multi-levels of protection.
01:44:17.000 There's a bunch of phones that are being made that are security phones.
01:44:21.000 But you lose access to apps, you lose access to a lot of the features that people rely on when it comes to phones.
01:44:28.000 Like for instance, if you have GPS on your phone, as soon as you're using GPS, you're easy to find, right?
01:44:33.000 So you lose that privacy.
01:44:35.000 If they want to know where Ray's phone is, they know exactly where Ray's phone is.
01:44:39.000 And that's where you are, and you're with your phone.
01:44:42.000 They've got you tracked everywhere you go.
01:44:44.000 It's complicated.
01:44:45.000 If this were a major issue, we could definitely overcome that.
01:44:49.000 Do we not?
01:44:49.000 I think it's a major issue, but I don't think it's a major concern for most people.
01:44:54.000 Right.
01:44:54.000 But it's because they reap the benefits of it.
01:44:57.000 Like the algorithm is specifically tailored to their interests.
01:45:00.000 That's how we fund the kinds of things we put on phones.
01:45:03.000 Right.
01:45:04.000 But you can't opt out of it unless you just decide to get a flip phone.
01:45:08.000 But even if you do, they can figure out where you are.
01:45:11.000 They triangulate you from cell phone towers.
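[Editor's note: in its simplest idealized form, tower-based location is trilateration: solving for a position from distances to known points. A toy sketch with invented coordinates; real systems infer distances from noisy signal timing and strength:]

```python
# Toy 2D trilateration: recover a phone's position from its distances to
# three towers at known (invented) coordinates.
import numpy as np

towers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(towers - true_pos, axis=1)  # measured distances

# Subtracting the first circle equation |p - t_i|^2 = d_i^2 from the
# others cancels |p|^2 and leaves a linear system in the position p.
A = 2 * (towers[1:] - towers[0])
b = (towers[1:] ** 2).sum(axis=1) - (towers[0] ** 2).sum() - (d[1:] ** 2 - d[0] ** 2)
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)  # ~[3. 4.]
```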
01:45:17.000 I mean, we give up certain things in order to get the benefits of insurance.
01:45:23.000 Yeah, we do.
01:45:26.000 If what you're giving up is a grave concern, we could overcome that.
01:45:32.000 We know how to do that.
01:45:35.000 Yeah.
01:45:36.000 If people agree that the benefit of overcoming that outweighs the financial loss that you would have from not having access to everybody's data and information.
01:45:48.000 Well, I mean, what you're giving up is a certain type of data in exchange for a certain type of capability that you could buy, and so they can advertise that to you, and people feel that that's okay.
01:46:05.000 Yeah.
01:46:09.000 But, for example, keeping your email private is quite feasible.
01:46:17.000 It's possible.
01:46:19.000 But it's also easy to hack.
01:46:21.000 Like, people could be reading your emails all the time and you should probably assume that they do.
01:46:27.000 Well...
01:46:32.000 It's a complicated issue, but we keep, for example, your emails private.
01:46:39.000 And generally, we actually do do that.
01:46:43.000 Generally, for most people.
01:46:45.000 But my point is, as this technology scales upward, when you have greater and greater computational power, and then you're also integrated with this technology.
01:46:56.000 How does that keep whatever group is in charge from being able to essentially access the thing that is inside your head now?
01:47:09.000 If you have a technology that's going to be upgraded and you're going to get new software and it's going to keep improving as time goes on, what kind of privacy would be involved in that if you're literally having something that can get into your brain?
01:47:25.000 And if most people can't get into your brain, can intelligence agencies get into your brain?
01:47:29.000 Can foreign governments get into your brain?
01:47:33.000 What does that look like?
01:47:35.000 I'm not looking at this as a negative.
01:47:37.000 I'm just saying, if you're just looking at this completely objectively, what are the possibilities that this could look like?
01:47:44.000 I'm trying to paint a weird picture of what this could look like.
01:47:49.000 Well, a lot of things you want to share.
01:47:52.000 Music and so on.
01:47:55.000 It's desirable to share that.
01:47:57.000 You'd want that to be shared.
01:47:59.000 If you didn't share anything, you'd be pretty lonely.
01:48:04.000 Sure.
01:48:04.000 What do you think about the potential for a universal language?
01:48:08.000 Do you think that one of the things that holds people back is, you know, the Rosetta Stone, the Tower of Babel, the idea that we can't really understand what all these other people are saying.
01:48:20.000 We don't know how they think.
01:48:21.000 If we can develop a universal worldwide language through this, Do you think it's feasible?
01:48:27.000 I mean, all languages that we have were created.
01:48:30.000 They're all...
01:48:31.000 Well, we have a certain means of changing one language into another.
01:48:34.000 Right.
01:48:35.000 That's what I'm saying.
01:48:36.000 And we're doing that now with some, like Google does that with Translate, and the new Samsung phones do that in real time.
01:48:42.000 Yeah.
01:48:43.000 Yeah.
01:48:44.000 I wrote about that in 1989, that we'd be able to have universal translation between languages.
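[Editor's note: that 1989 prediction is now a few lines of code against openly published models. A minimal sketch using the Hugging Face transformers library and one of the public Helsinki-NLP Marian checkpoints; one checkpoint covers one language pair, so "universal" translation is a matter of routing between such models:]

```python
# Minimal machine translation with a public pretrained model
# (English to German here; other pairs use other checkpoints).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
out = translator("Universal translation between languages is here.")
print(out[0]["translation_text"])
```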
01:48:52.000 But do you think that the adoption of a universal language...
01:48:56.000 It's not perfect, but it's actually pretty good.
01:48:57.000 It's pretty good.
01:48:58.000 But there's also context that's missing, because there's different...
01:49:05.000 There's different ways that people say things.
01:49:08.000 There's gendered language that other nationalities use and other countries use.
01:49:13.000 You could try to get that into the language translation as well.
01:49:17.000 You can, but it's a little bit imperfect, right?
01:49:19.000 You might have something that's said very quickly and you'd have to translate it into much longer language in order to capture that.
01:49:29.000 Right.
01:49:29.000 But would a universal language be possible?
01:49:34.000 If you're creating something...
01:49:37.000 Why would you need that?
01:49:38.000 Because what we have, all of our language is pretty flawed.
01:49:44.000 Ultimately, I mean, we use it, but how many versions of "your" do we have?
01:49:48.000 There's a bunch of different weird things about language that's imperfect because it's old.
01:49:54.000 It's like old technology.
01:49:55.000 If we decided to make a better version of language through artificial technology and say, listen, instead of trying to translate everything, now that we're super powerful, intelligent beings that are enhanced by artificial intelligence,
01:50:11.000 let's create a better, more superior, universally adopted language.
01:50:17.000 Maybe.
01:50:18.000 I mean, do you see that as a major need?
01:50:21.000 Yeah, I do.
01:50:22.000 Yeah.
01:50:23.000 I think that would change a lot.
01:50:24.000 I mean, we'd lose all the amazing nuances of cultures, which I don't think is good for us as human beings, but we're not going to be human beings.
01:50:33.000 So maybe it would be better if we could communicate exactly the way we prefer to.
01:50:39.000 Well, we would be human beings, and in my mind a human being is someone who can change both ourselves and our means of communication to enjoy better means of expressing art and culture and so on.
01:51:01.000 No other animal really quite does that.
01:51:04.000 Right.
01:51:04.000 Except human beings.
01:51:05.000 So that is an essence of what it means to be a human being.
01:51:10.000 For now.
01:51:11.000 But when you're a mind-reading eagle and you're flying around, are you really a human being anymore?
01:51:17.000 Yes, because we are able to change ourselves.
01:51:20.000 So that's just a new definition of what a human being is.
01:51:23.000 Yeah.
01:51:23.000 What are your thoughts on simulation theory?
01:51:30.000 If you mean that we're living in a simulation, well, first of all, some people believe that we can express physics as formulas.
01:51:47.000 And that the universe is actually capable of computation, and therefore everything that happens is a result of some computation.
01:52:11.000 And therefore we are living in something that is computable.
01:52:24.000 And there's some debate about whether that's feasible, but that doesn't necessarily mean that we're living in a simulation.
01:52:34.000 Generally, if you say we're living in a simulation, you assume that there's some other place, and teenagers in that world like to create a simulation.
01:52:47.000 So they created a simulation that we live in, and you want to make sure that they don't turn the simulation off, so it'd have to be interesting to them, and so they keep the simulation going.
01:53:02.000 But the whole universe...
01:53:07.000 could be capable of simulating reality, and that's what we live in, and it's not a game, it's just the way the universe works.
01:53:19.000 I mean, what would the difference be if we lived in a simulation?
01:53:25.000 This is what I'm saying.
01:53:27.000 If we can and we're on our way to creating something that is indiscernible from reality itself, I don't think we're that far away from that, many decades away from having some sort of a virtual experience that's indiscernible from regular reality.
01:53:42.000 I mean, we try to do that with games and so on.
01:53:45.000 Right.
01:53:46.000 And those are far superior to what they were.
01:53:50.000 I mean, I'm younger than you, but I can remember Pong.
01:53:53.000 Remember Pong?
01:53:54.000 It was groundbreaking.
01:53:55.000 You could play a video game on your television.
01:53:57.000 This is crazy.
01:53:58.000 It was so nuts.
01:53:59.000 And we're way beyond that now.
01:54:01.000 Yeah.
01:54:01.000 Now you look at, like, Unreal Engine 5.
01:54:03.000 It's insane how beautiful it is and how incredible what the capabilities are.
01:54:08.000 So if you live in that, that's kind of a simulation.
01:54:08.000 Right, but as you expand that further and you get to the point where you're actually in a simulation, and your life is not this carbon-based biological life, with the feeling and texture that you think it is,
01:54:25.000 but that you're really a part of this thing that's been created.
01:54:29.000 This is where it gets real weird with like probability theory, right?
01:54:33.000 Because they think that if a simulation is possible, it's more likely it's already happened.
01:54:42.000 I mean, there's really an unlimited amount of things that we could simulate and experience.
01:54:49.000 So it's hard to say we're living in a simulation, because a lot of what we're doing is living in a computational world anyway, so it's basically being simulated.
01:55:02.000 In a way, yeah.
01:55:05.000 And if you were some sort of an alien life form, wouldn't that be the way you go instead of like taking physical metal crafts and shooting them off into space?
01:55:18.000 Wouldn't you sort of create artificial space?
01:55:23.000 Create artificial worlds?
01:55:25.000 Create something that exists in the sense that you experience it.
01:55:28.000 And it's indiscernible to the person experiencing it.
01:55:31.000 But if you're intelligent enough, you'll be able to tell what's being simulated and what's not.
01:55:36.000 Up to a point.
01:55:38.000 Until it actually does all the same things that regular reality does.
01:55:43.000 It just does it through technology.
01:55:45.000 And maybe that's what the universe is.
01:55:49.000 But that's okay.
01:55:50.000 We could still experience what's happening.
01:55:54.000 Yeah.
01:55:55.000 And we could also experience people doing galaxy-wide engineering, not all of which would be simulated.
01:56:03.000 So galaxy-wide engineering is the main thing that you look at, to the point where you don't see any evidence for life outside?
01:56:12.000 Well, there's definitely no real evidence that we've seen, other than these people that talk about UFOs and UAPs, the pilots and all these people who say they've seen these things...
01:56:20.000 Well, we basically don't see any evidence that life is simulated outside of our own life.
01:56:28.000 I mean, we can simulate things and experience it.
01:56:32.000 We don't see any evidence that other beings are doing that elsewhere.
01:56:38.000 Right, but this is based on such limited data, though, right?
01:56:42.000 I mean, look at what limited data we just have of Mars.
01:56:45.000 We have a rover running around, satellites in orbit.
01:56:48.000 It's very limited data with something that's just one planet over.
01:56:52.000 We don't really have the data to understand what's going on in Alpha Centauri.
01:56:56.000 It's possible that there's simulated life elsewhere.
01:57:00.000 I mean, we don't see any evidence for it, but it's possible.
01:57:06.000 Is it something that intrigues you, or do you just look at it like there's no evidence, so I'm not going to concentrate on that?
01:57:12.000 I'm very interested to see what we can achieve, because I can see that we're on that path.
01:57:23.000 So it doesn't take a lot of curiosity on my part to imagine other people simulating life and enjoying it.
01:57:35.000 I'm much more interested to see what will be feasible for us, and we're not that far away from it.
01:57:44.000 So over the next four or five years, you think we're going to be able to far surpass the abilities of human beings?
01:57:53.000 We're going to be able to stop aging and then eventually reverse aging.
01:57:58.000 And then 2045 comes along.
01:58:01.000 What does that look like?
01:58:05.000 Well, one of the reasons we call it the singularity is because we really don't know.
01:58:11.000 I mean, that's why it's called a singularity.
01:58:14.000 A singularity in physics is where you have a black hole.
01:58:19.000 No energy can get out of a black hole, and therefore we don't really know what's going on inside it, so we call it a singularity.
01:58:27.000 So this is a historical singularity based on the kinds of things we've been talking about.
01:58:33.000 And again, we don't really know what that will be like, and that's why we call it a singularity.
01:58:42.000 Do you have any theories?
01:58:44.000 Another way of looking at it, I mean, we have mice, and they have experiences.
01:58:58.000 It's a limited amount of complexity because that particular species hasn't really evolved very much.
01:59:10.000 And we'll be going beyond what human beings can do.
01:59:15.000 So, to ask a human being what it's like to be a human being in the singularity is like asking a mouse what it would be like to evolve to become like a human.
01:59:29.000 Now, if you ask a mouse that, it wouldn't understand the question, it wouldn't be able to formulate an answer, it wouldn't even be able to think about it.
01:59:43.000 And asking a current human being what it's going to be like to live in the singularity is a little bit like that.
01:59:51.000 So it's just, who knows?
01:59:54.000 It's going to be wild.
01:59:56.000 We'll be able to do things that we can't even imagine today, right?
02:00:01.000 Well, I'm very excited about it, even though it's scary.
02:00:04.000 I know I ask a lot of tough questions about this because these are my own questions.
02:00:09.000 This is like what bounces around inside my own head.
02:00:12.000 Well, that's why I'm excited about it also because it basically means more intelligence and we'll be able to think about things that we can't even imagine today.
02:00:22.000 And solve problems.
02:00:24.000 Yes.
02:00:25.000 Yeah.
02:00:25.000 Including like dying, for example.
02:00:29.000 Yeah.
02:00:30.000 Listen man, I'm glad you're out there.
02:00:32.000 It's very important that people have access to this kind of thinking, and you've dedicated your whole life to this. The book is Ray Kurzweil, The Singularity Is Nearer: When We Merge with AI. It's available now.
02:00:44.000 Did you do the audio version of it?
02:00:47.000 That's being worked on now.
02:00:49.000 Are you doing it?
02:00:50.000 It's coming out June...
02:00:50.000 No, no, I want to hear it in your voice.
02:00:59.000 It's your words.
02:01:00.000 Yeah, that's what people say.
02:01:01.000 Yeah, why don't you do it?
02:01:02.000 You know what you should do.
02:01:04.000 Just get AI to do it.
02:01:05.000 Why waste all that time sitting around doing it?
02:01:08.000 Basically, they could do it now.
02:01:08.000 We just talked about that yesterday.
02:01:10.000 100%.
02:01:11.000 Look, they could take your voice from this podcast and do this book in an audio version.
02:01:17.000 Easy.
02:01:19.000 Do you know what they're doing now with Spotify?
02:01:21.000 They're translating this podcast.
02:01:23.000 They're going to translate it to German, French, and Spanish.
02:01:28.000 And it's going to be like your voice in perfect Spanish, my voice in perfect Spanish.
02:01:32.000 This actually came up yesterday.
02:01:34.000 I'll think about that.
02:01:35.000 Pretty wild.
02:01:36.000 Yeah.
02:01:37.000 It's 100% you should do that.
02:01:38.000 Okay.
02:01:39.000 My friend Duncan does that all the time.
02:01:40.000 He'll text friends, or send them a voice message, a fake voice message.
02:01:45.000 That's ridiculous.
02:01:46.000 You know, talking about how he's marrying his cat or something like that.
02:01:49.000 But he does it with AI, and it sounds exactly like whoever that person is.
02:01:54.000 Okay.
02:01:55.000 So that's the solution.
02:01:57.000 Have AI read your...
02:01:58.000 Of course you should have AI read your book.
02:02:01.000 I can't believe we would even think of you sitting down for 40 hours or whatever it would take.
02:02:07.000 It would probably take more than that to read this whole book.
02:02:10.000 And then if you mess up, you got to go back and start again.
02:02:14.000 Well, certainly that's going to be feasible.
02:02:16.000 Whether it's feasible now to get all the nuances correct is another question.
02:02:21.000 I bet it's pretty close.
02:02:22.000 I bet it's pretty close right now.
02:02:24.000 But it would have to be very close, because we're doing it in the next month or so.
02:02:29.000 Don't you think they could do it, Jamie?
02:02:31.000 Yeah, I think they could do it right now.
02:02:33.000 Listen, Ray, I appreciate you very much.
02:02:35.000 Thank you very much for being here.
02:02:36.000 My pleasure.
02:02:37.000 And thank you for this book.
02:02:39.000 When is it available?
02:02:40.000 June 24th.
02:02:42.000 I got an early copy, kids.
02:02:44.000 Thank you, sir.
02:02:44.000 Really appreciate you.
02:02:45.000 Thank you very much.
02:02:46.000 My pleasure.
02:02:47.000 Bye, everybody.