The Joe Rogan Experience - September 11, 2019


Joe Rogan Experience #1350 - Nick Bostrom


Episode Stats

Length

2 hours and 32 minutes

Words per Minute

158.4

Word Count

24,180

Sentence Count

1,434

Misogynist Sentences

8


Summary

In this episode, we talk about artificial intelligence and what it means for the future of humanity, and why we should be worried about it. We also talk about the implications of super-intelligent machines replacing us, and how we can prepare for them.


Transcript

00:00:00.000 And here we go.
00:00:03.000 Alright Nick, this is one of the things that scares people more than anything.
00:00:09.000 Is the idea that we're creating something, or someone's going to create something that's going to be smarter than us.
00:00:14.000 That's going to replace us.
00:00:17.000 Is that something we should really be concerned about?
00:00:20.000 I presume you're referring to babies?
00:00:24.000 I'm referring to artificial intelligence.
00:00:26.000 Ah, yes.
00:00:28.000 Well, it's the big fear and the big hope, I think.
00:00:32.000 Both?
00:00:33.000 At the same time, yeah.
00:00:34.000 How is it the big hope?
00:00:36.000 Well, there are a lot of things wrong with the world as it is now.
00:00:39.000 Pull this up to your face if you would.
00:00:43.000 All the problems we have, most of them could be solved if we were smarter, or if we had somebody on our side who was a lot smarter, with better technology and so forth.
00:00:58.000 Also, I think if we want to imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's going to happen at all, after we have super intelligence that then develops the technology to make that possible.
00:01:18.000 The real question is whether or not we would be able to harness this intelligence or whether it would dominate.
00:01:24.000 Yeah, that certainly is one question.
00:01:27.000 Not the only.
00:01:29.000 You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history.
00:01:36.000 So I think there are really two challenges we need to meet.
00:01:40.000 One is to make sure we can align it with human values and then make sure that we together do something better with it than fighting wars or oppressing one another.
00:01:50.000 I think, well, what I'm worried about more than anything is that human beings are going to become obsolete.
00:01:54.000 That we're going to invent something that's the next stage of evolution.
00:01:58.000 I'm really concerned with that.
00:02:00.000 I'm really concerned with if we look back on ancient hominids, Australopithecus, just think of some primitive ancestor of man.
00:02:07.000 We don't want to go back to that.
00:02:10.000 That's a terrible way to live.
00:02:12.000 I'm worried that what we're creating is the next thing.
00:02:17.000 I think...
00:02:20.000 We don't necessarily want, or at least I wouldn't be totally thrilled with a future where humanity as it is now was the last and final word.
00:02:32.000 The ultimate version, nothing beyond that.
00:02:35.000 I think there's a lot of room for improvement.
00:02:37.000 But not anything that is different is an improvement.
00:02:40.000 So the key would be, I think, to find some...
00:02:44.000 [inaudible]
00:02:53.000 [inaudible]
00:03:12.000 Yeah, the idea that we're in a state of evolution, that we are just like we look at ancient hominids, that we are eventually going to become something more advanced or at least more complicated than we are now.
00:03:24.000 But what I'm worried is that biological life itself has so many limitations.
00:03:28.000 When we look at the evolution of technology, if you look at Moore's Law, or if you just look at new cell phones, like they just released a new iPhone yesterday and they're talking about all these incremental increases in the ability to take photographs, and wide-angle lenses, and night mode, and a new chip that works even faster.
00:03:45.000 These things, there's not, the word evolution is incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine biologically.
00:03:55.000 Like if we had a thing that we had created, if we had created, instead of artificial intelligence in terms of like something in a chip or computer, if we created a life form, a biological life form, but this biological life form was improving radically every year.
00:04:12.000 It didn't even exist.
00:04:13.000 The iPhone existed in 2007. That's when it was invented.
00:04:16.000 If we had something that was 12 years old, but all of a sudden was infinitely faster and better and smarter and wiser than it was 12 years ago, the newest version of it, version X1, we would start going, whoa, whoa, whoa, hit the brakes on this thing, man.
00:04:32.000 How many more generations before this thing's way smarter than us?
00:04:36.000 How many more generations before this thing thinks that human beings are obsolete?
00:04:40.000 It's coming at us fast, it feels like.
00:04:43.000 But some people think, oh, it's slowing down now.
00:04:48.000 Who thinks it's slowing down?
00:04:50.000 Well, you have, like, Tyler Cowen, and even Peter Thiel sometimes goes on about the pace of innovation not really being what it needs to be.
00:05:02.000 I mean, maybe it was faster in like 1890s, but still compared to almost all of human history, it seems like a period of unprecedented rapid progress right now.
00:05:17.000 Unprecedented.
00:05:18.000 I'd say so.
00:05:19.000 Yeah, I mean, except for maybe a couple of decades, a hundred years ago, when there was a lot of, you know, electricity, the whole thing.
00:05:25.000 Yeah.
00:05:26.000 No, I agree.
00:05:28.000 I just, I don't know that it's a concern so much as it's a curiosity.
00:05:34.000 I mean, I am concerned, but the more I look at it and go, well, this is, it seems inevitable.
00:05:40.000 Yeah.
00:05:40.000 That we're going to run into artificial intelligence.
00:05:44.000 But the questions are so open-ended.
00:05:46.000 We really don't know when.
00:05:47.000 We really don't know what form it's going to take.
00:05:49.000 And we really don't know what it's going to do to us.
00:05:54.000 Yeah, so I see it as not something that should be avoided, neither something that we should just be completely gung-ho about, but more like a kind of gate through which we will have to pass at some point.
00:06:07.000 All paths that are both plausible and lead to really great futures, I think, at some point involve the development of greater-than-human intelligence, machine intelligence.
00:06:17.000 And so our focus should be on getting our act together as much as we can in whatever period of time we have before that occurs.
00:06:25.000 Prepare ourselves.
00:06:27.000 Well, I mean, that might involve doing some research into various technical questions as how you build these systems so that we actually understand what they are doing.
00:06:38.000 [inaudible]
00:07:01.000 Well, that's certainly possible.
00:07:02.000 We're certainly capable of screwing it all up.
00:07:05.000 Where is the current state of technology now in regards to artificial intelligence and how far away do you think we are from AGI? Well, different people have different views on that.
00:07:17.000 I think the truth of the matter is that it's very hard to have accurate views about the timelines for these things, which still involve breakthroughs that have yet to happen.
00:07:32.000 Certainly, I mean, over the last eight or ten years, there has been a lot of excitement with the deep learning revolution.
00:07:39.000 Things that...
00:07:40.000 I mean, it used to be that people thought of AI as this kind of autistic savant, really good at logic and counting and memorizing facts, but...
00:07:50.000 With no intuition.
00:07:53.000 And this deep learning revolution, when you began to do these deep neural networks, you kind of solved perception in some sense.
00:08:01.000 You have computers that can see, that can hear, and that have visual intuition.
00:08:08.000 So that has enabled a whole wide suite of applications, which makes it commercially valuable, which then drives a lot of investment in it.
00:08:18.000 So there's now quite a lot of momentum in machine learning and trying to kind of stay ahead of that.
00:08:26.000 It's interesting that when we think about artificial intelligence and whatever potential form it's going to take, if you look at films like 2001, like HAL, like, open the door, HAL, you know?
00:08:38.000 We think of something that's communicating to us, like a person would, and maybe is a little bit colder and doesn't share our values and has a more pragmatic view of life and death and things.
00:08:54.000 When we think of intelligence, though, I think intelligence in our mind is almost inexorably connected to all the things that make us human, like emotions and ambition and all these things, like the reason why we innovate.
00:09:07.000 It's not really clear.
00:09:09.000 We innovate because we enjoy innovation and because we want to make the world a better place and because we want to fix some problems that we've created and we want to solve some limitations of the human body and the environment that we live in.
00:09:21.000 But we sort of assume that intelligence that we create will also have some motivations.
00:09:28.000 Well, there is a fairly large class of possible structures you could do.
00:09:34.000 If you want to do anything that has any kind of cognitive or intellectual capacity at all, a large class of those would be what we might call agents.
00:09:42.000 So these would be systems that interact with the world in pursuit of some goal.
00:09:49.000 And if they are a sophisticated class of agents, they can plan ahead a sequence of actions.
00:09:55.000 Like more primitive agents might just have reflexes.
00:09:59.000 But the sophisticated agent might have a model of the world where it can kind of think ahead before it starts doing stuff.
00:10:06.000 It can kind of think, what would I need to do in order to reach this desired state?
00:10:10.000 And then reason backwards from that.
00:10:12.000 So I think it's a fairly natural...
00:10:14.000 It's not the only possible cognitive system you could build, but it's also not this weird, bizarre, special case that, you know, it's a fairly natural thing to aim for.
00:10:23.000 If you're able to specify the goal, something you want to achieve, but you don't know how to achieve it, a natural way of trying to go about that is by building this system that has this goal and is an agent and then moves around and tries different things and eventually perhaps learns to solve that task.
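
A minimal sketch of the kind of agent described here, assuming a toy world of integer states (none of this code is from the episode): the system holds a model of which actions lead where, searches for a sequence of steps that reaches a specified goal, and only then acts. A simple forward search stands in for the backward reasoning Bostrom mentions.

```python
# Hypothetical planner-style agent: search a world model for a path to a
# goal state before acting. Toy illustration, not anything from the episode.
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search over the agent's model of the world.
    `successors` maps a state to the states its actions can reach."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                # the action sequence to execute
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                        # goal unreachable in this model

# Toy world: states are integers; the agent can add 1 or double.
print(plan(1, 10, lambda s: [s + 1, s * 2] if s <= 10 else []))
```
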
00:10:39.000 Do you anticipate different types of artificial intelligence?
00:10:44.000 Like artificial intelligence that mimics the human emotions?
00:10:49.000 Do you think that people will construct something that's very similar to us in a way that we can interact with it in common terms?
00:10:57.000 Or do you think it will be almost like communicating with an alien?
00:11:04.000 So there are different scenarios here.
00:11:07.000 My guess is that the first thing that actually achieves superintelligence would not be very human-like.
00:11:16.000 There are different possible ways you could try to get to this level of technology.
00:11:21.000 One would be by trying to reverse engineer the human brain.
00:11:23.000 We have an existence proof in the limiting case.
00:11:27.000 Imagine if you just made an exact duplicate in silicon of the human brain, like every neuron had some counterpart.
00:11:34.000 So that seems technologically very difficult to do, but it wouldn't require a big theoretical breakthrough to do it.
00:11:41.000 You could just, if you had sufficiently good microscopy and large enough computers and enough elbow grease, you could kind of...
00:11:48.000 But it seems to me plausible that what will work before we are able to do it that way will be some more synthetic approach.
00:11:56.000 That would bear only a very rough resemblance, maybe to the neocortex.
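
Some rough numbers behind "technologically very difficult": the figures below are common ballpark estimates of brain scale, assumed for illustration rather than cited in the episode.

```python
# Back-of-envelope scale of the "exact duplicate in silicon" idea.
# All figures are common ballpark estimates, not numbers from the episode.
neurons  = 8.6e10    # approximate neuron count in a human brain
synapses = 1.0e14    # approximate synapse count (~1,000 per neuron)
spike_hz = 1.0       # rough average firing rate, order of magnitude

bytes_per_synapse = 4                     # one 32-bit weight per synapse
memory_tb = synapses * bytes_per_synapse / 1e12
events_per_s = synapses * spike_hz        # synaptic events to simulate

print(f"~{memory_tb:.0f} TB just to store synaptic weights")
print(f"~{events_per_s:.0e} synaptic events per second in real time")
```
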
00:12:01.000 Okay.
00:12:01.000 Yeah, that's one of the big questions, right?
00:12:03.000 Whether or not we can replicate all the functions of the human brain in the way it functions and mimic it exactly, or whether we could have some sort of superior method that achieves the same results that the human brain does in terms of its ability to calculate and reason and do multiple tasks at the same time.
00:12:21.000 Yeah, and I also think that maybe once you have a sufficiently high level of this general form of intelligence, then you could use that maybe to emulate or mimic things that we do differently.
00:12:36.000 The cortex is quite limited, so we rely a lot on earlier neurological structures that we have.
00:12:42.000 We have to be guided by emotion because we can't just calculate everything out.
00:12:47.000 And instinct, and if we lost all of that, we would be helpless.
00:12:53.000 But maybe some system that had a sufficiently high level of this more abstract reasoning capability could maybe use that to substitute for things that weren't built in in the same way that we do.
00:13:03.000 Have you ever talked to Sam Harris about this?
00:13:05.000 Yeah, a little bit.
00:13:06.000 Have you ever had a podcast with him?
00:13:09.000 Yeah, actually, he had me on his podcast half a year ago.
00:13:12.000 I'll have to listen to it because he has the worst view of the future in terms of artificial intelligence.
00:13:21.000 He's terrified of it.
00:13:22.000 And when I talk to him, he terrifies me.
00:13:24.000 And Elon Musk is right up there.
00:13:26.000 He also has a terrifying view of what artificial intelligence could potentially be.
00:13:32.000 What do you say to those guys?
00:13:34.000 Well, I mean, I do think that there are these significant risks that will be associated with this transition to the machine intelligence era, including existential risks, threats to the very survival of humanity or what we care about.
00:13:50.000 So why are we doing this?
00:13:52.000 There are a lot of things we're doing that maybe globally it would be better if we didn't do.
00:13:57.000 Why do we build thousands of nuclear weapons?
00:14:00.000 Why do we overfish the oceans?
00:14:04.000 If I actually ask why different individuals work on AI research, or why different... [inaudible]
00:14:16.000 [inaudible]
00:14:30.000 Just like when you had steam engines and industrialization a few hundred years ago and electricity.
00:14:39.000 It's going to just open up a lot of economic opportunities.
00:14:42.000 You want to be in there.
00:14:43.000 You don't want to be the one kind of doing subsistence agriculture while the rest of the world is moving on.
00:14:52.000 It's kind of overdetermined.
00:14:53.000 You could remove some of these reasons and there would still be enough reasons for why people would be pushing forward with this.
00:14:59.000 One of the things that scares me the most is the idea that if we do create artificial intelligence, then it will improve upon our design and create far more sophisticated versions of itself.
00:15:09.000 And that it will continue to do that until it's unrecognizable, until it reaches literally a godlike potential.
00:15:18.000 I mean, I forget what the real numbers were, maybe you could tell us, but someone, some reputable source, had calculated the amount of improvement that sentient artificial intelligence would be able to create inside of a small window of time.
00:15:33.000 Like if it was allowed to innovate and then make better versions of itself and those better versions of itself were allowed to innovate and make better versions of itself.
00:15:40.000 You're talking about not an exponential increase of intelligence but an explosion.
00:15:45.000 Well, we don't know.
00:15:47.000 So it's hard enough to forecast the pace at which we will make advances in AI. Because we just don't know how hard the problems are that we haven't yet solved.
00:15:58.000 And, you know, once you get to human level or a little bit above, I mean, who knows?
00:16:03.000 It could be that there is some level where to get further, you would need to put in a lot of...
00:16:09.000 thinking time to kind of get there.
00:16:11.000 Now, what is easier to estimate is if you just look at the speed, because that's just a function of the hardware that you're running it on, right?
00:16:19.000 So there we know that there is a lot of room in principle.
00:16:23.000 If you look at the physics of computation, and you look at what an optimally arranged physical system optimized for computation would be, that would be many, many orders of magnitude above what we can do now.
00:16:36.000 And then you could have arbitrarily large systems like that.
00:16:39.000 So, from that point of view, we know that that could be things that would be like a million times faster than the human brain and with a lot more memory and stuff like that.
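
The "million times faster" headroom can be glossed with a standard physics-of-computation bound. This sketch uses Landauer's limit plus common ballpark figures for the brain; every number here is an assumption for illustration, not a figure from the episode.

```python
# Landauer's limit: minimum energy to erase one bit at temperature T is
# kT*ln2. Compare an ideal 20 W computer against rough brain estimates.
k_boltzmann = 1.38e-23          # J/K
temp = 300                      # room temperature, K
joules_per_bit = k_boltzmann * temp * 0.693   # ~2.9e-21 J per bit-op

brain_watts = 20                # rough brain power budget
brain_ops = 1e15                # rough estimate of brain ops/sec

ideal_ops = brain_watts / joules_per_bit      # ops/sec at the limit
print(f"ideal 20 W computer: ~{ideal_ops:.0e} bit-ops/sec")
print(f"headroom over the brain: ~{ideal_ops / brain_ops:.0e}x")
```
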
00:16:48.000 And then something, if it did have a million times more power than the human brain, it could create something with a million times more computational power than itself.
00:17:00.000 It could make better versions.
00:17:02.000 It could continue to innovate.
00:17:04.000 Like if we create something and we say, you are, I mean, it is sentient.
00:17:10.000 It is artificial intelligence.
00:17:12.000 Now, please go innovate.
00:17:14.000 Please go follow the same directive and improve upon your design.
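
A toy rendering of the loop being described here, purely illustrative: each generation designs a successor some assumed factor more capable than itself, so capability compounds geometrically. The 1.5x factor is invented; as Bostrom says next, nobody knows the real rate.

```python
# Recursive self-improvement as compound growth. The per-generation
# factor is a made-up assumption; an illustration, not a forecast.
capability, factor = 1.0, 1.5   # generation 0 = baseline capability
for generation in range(1, 36):
    capability *= factor        # each generation builds a better successor
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {capability:>12,.0f}x baseline")
# At a steady 1.5x per step, the million-fold mark arrives around
# generation 35 -- the point is the shape of the curve, not the numbers.
```
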
00:17:19.000 Yeah, well, we don't know how long that would take then to get to something.
00:17:23.000 We already have sort of millions of times more thinking capacity than a human has.
00:17:30.000 I mean, we have millions of humans.
00:17:32.000 So if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do.
00:17:39.000 But then that might still be quite a lot of orders of magnitude until it would be equivalent of the whole human species.
00:17:47.000 And maybe during that time other things happen, maybe we upgrade our own abilities in some way.
00:17:54.000 So there are some scenarios where it's so hard to get even to one human baseline that we kind of use this massive amount of resources just to barely create kind of a village agent using billions of dollars of compute,
00:18:09.000 right?
00:18:10.000 So if that's the way we get there, then, I mean, it might take quite a while, because you can't easily scale something that you've already spent billions of dollars building.
00:18:18.000 Yeah, some people think the whole thing is blown out of proportion, that we're so far away from creating artificial general intelligence that resembles human beings, that it's all just vaporware.
00:18:28.000 What do you say to those people?
00:18:30.000 Well, I mean, one would be that I would want to be more precise about just how far away does it have to be in order for us to be rational to ignore it.
00:18:41.000 It might be that if something is sufficiently important and high stakes, then even if it's not going to happen in the next 5, 10, 20, 30 years, it might still be wise for our pool of 7 billion plus people to have some people actually thinking about this ahead of time.
00:18:58.000 Yeah, for sure.
00:18:59.000 So some of these disagreements, I guess this is my point, are more apparent than real.
00:19:04.000 Like, some people say it's going to happen soon, and some other people say, no, it's not going to happen for a long time.
00:19:08.000 And then, you know, one person means by soon, five years, and another person means by a long time, five years.
00:19:15.000 And, you know, it's more of different attitudes rather than different specific beliefs.
00:19:19.000 So I would first want to make sure that there actually is a disagreement.
00:19:25.000 Now, if there is, if somebody is very confident that it's not going to happen in hundreds and hundreds of years, then I guess I would want to know their reasons for that level of confidence.
00:19:35.000 What's the evidence they're looking at?
00:19:37.000 Do they have some ground for being very sure about this?
00:19:41.000 Certainly, the history of technology prediction is not that great.
00:19:46.000 You can find a lot of other examples where even very eminent technologists and scientists were sure it's not going to happen in our lifetime.
00:19:55.000 In some cases, it actually already just happened in some other part of the world, or it happened a year or two later.
00:20:02.000 So I think some epistemic humility with these things would be wise.
00:20:09.000 I was watching a talk that you were giving, and you were talking about the growth of innovation, technology, and GDP over the last 100 years, and you were talking about the entire history of life on Earth, and what a short period of time humans have been here, and then, during what a short period of time, what a stunning amount of innovation and how much change we've enacted on the Earth in just a blink of an eye, and you had the scale of GDP over the course
00:20:39.000 of the last hundred years.
00:20:40.000 It's crazy, because it's so difficult for us with our current perspective, just being a person, living, going about the day-to-day life that seems so normal, to put it in perspective time-wise and see what an enormous amount of change has taken place in relatively an incredibly short amount of time.
00:21:01.000 Yeah.
00:21:03.000 We think of this as sort of the normal way for things to be.
00:21:06.000 The idea that the alarm wakes you up in the morning and then you commute in and sit in front of a computer all day and you try not to eat too much.
00:21:13.000 And that if you sort of imagine that, you know, maybe in 50 years or 100 years or at some point in the future, it's going to be very different.
00:21:20.000 That's like some radical hypothesis.
00:21:22.000 But, of course, this quote-unquote normal condition is a huge anomaly any which way you look at it.
00:21:29.000 I mean, if you look at it on a geological timescale, the human species is very young.
00:21:34.000 If you look at it historically, you know, for more than 90%, we were just hunter-gatherers running around and agriculturalists for...
00:21:47.000 The last couple of hundred years, when some parts of the world have escaped the Malthusian condition, where you basically only have as much income as you need to be able to produce two children.
00:21:59.000 And we have the population explosion.
00:22:01.000 All of this is very, very, very recent.
00:22:04.000 And in space as well, of course, almost everything is ultra-high vacuum, and we live on the surface of this little special crumb.
00:22:13.000 And yet we think this is normal and everything else is weird, but I think that's a complete inversion.
00:22:19.000 And so when you do plot, if you do plot, for example, world GDP, which is a kind of rough measure for the total amount of productive capability that we have, right?
00:22:32.000 Right.
00:22:33.000 If you plot it over 10,000 years, what you see is just a flat line and then a vertical line.
00:22:40.000 And you can't really see any other structure.
00:22:42.000 It's so extreme, the degree to which humanity's productive capacity has shot up.
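
The plot being described is easy to reproduce in shape. The sketch below uses illustrative, order-of-magnitude world-GDP figures, not a real dataset:

```python
# Flat line, then a vertical line: illustrative (made-up but roughly
# order-of-magnitude) world GDP over 10,000 years, not real data.
import matplotlib.pyplot as plt

years = [-8000, -5000, -2000, 0, 1000, 1500, 1800, 1900, 1950, 2000]
gdp   = [1, 2, 4, 18, 35, 60, 175, 1100, 4000, 63000]   # $bn, illustrative

plt.plot(years, gdp)
plt.xlabel("Year")
plt.ylabel("World GDP ($bn, illustrative)")
plt.title("Flat line, then a vertical line")
plt.show()
```
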
00:22:47.000 So if I look at this picture, now we imagine this is now the normal, this is the way it's going to be now, indefinitely.
00:22:55.000 It just seems...
00:22:57.000 prima facie implausible.
00:22:59.000 It sort of doesn't look like we are in a static period right now.
00:23:03.000 It looks like we're in the middle of some kind of explosion.
00:23:07.000 Explosion.
00:23:08.000 And oddly enough, everyone involved in the explosion, everyone that's innovating, everyone that's creating all this new technology, they're all a part
00:23:18.000 of this momentum that was created before they were even born.
00:23:21.000 So it does feel normal.
00:23:23.000 They're just a part of this whole spinning machine and they jump in, they're born, they go to college, next thing you know they have a job and they're contributing to making new technology and then more people jump in and add on to it and there's very little perspective in terms of like the historical significance of this incredible explosion technologically.
00:23:43.000 When you look at what you're talking about, that gigantic spike.
00:23:46.000 No one feels it, which is one of the weirdest things about it.
00:23:50.000 I mean, you kind of expect every year there will be a better iPhone or whatever, right?
00:23:54.000 Yes, if not, we'd be upset.
00:23:55.000 For almost all of human history,
00:23:56.000 people lived and died and saw absolutely no technological change.
00:24:00.000 And in fact, you could have many, many generations.
00:24:04.000 The very idea that there was some trajectory...
00:24:09.000 In the material conditions is a relatively new idea.
00:24:14.000 I mean, people thought of history either as, you know, some kind of descent from a golden age, or some people had a cyclical view.
00:24:22.000 But it was all in terms of political organization, that would be a great kingdom, and then a wise ruler would rule for a while.
00:24:29.000 And then like a few hundred years later, you know, their great-great-grandchildren would be too greedy, and it would come into anarchy, and then a few hundred years later it would come back together again.
00:24:40.000 So it would be all these pieces moving around, but no new pieces really entering.
00:24:44.000 Or if they did, it was at such a slow rate that you didn't notice.
00:24:49.000 But over the eons, the wheel slowly turns, and somebody makes a slightly better wheel, somebody figures out how to irrigate a lot better.
00:25:00.000 They breed better crops.
00:25:02.000 And eventually there is enough that you could have enough of a population, enough brains that then create more ideas at a quick enough rate that you get this industrial revolution.
00:25:16.000 And that's where we are now, I think.
00:25:19.000 Elon Musk had the most terrifying description of humanity.
00:25:22.000 He said that we are the biological bootloader for artificial intelligence.
00:25:30.000 That's what we're here for.
00:25:31.000 Well, bootloaders are important.
00:25:33.000 They are important, but I think...
00:25:37.000 There's like objectively and there's personally.
00:25:39.000 Like objectively, if you were outside of the human race and you were looking at all these various life forms competing on this planet for resources and for survival, you would look at humanity and you go, well, you know, clearly it's not finished.
00:25:54.000 So there's going to be another version of it.
00:25:56.000 It's like, when is this version going to take place?
00:25:58.000 Is it going to take place?
00:25:59.000 Over millions and millions of years like it has historically when it comes to biological organisms or is it going to invent something?
00:26:08.000 That takes over from there, and then that's the new thing.
00:26:11.000 Something that's not based on tissue, something that's not based on cells, it doesn't have the biological limitations that we have, nor does it have all the emotional attachments to things like breeding, social dominance, hierarchies; all those things are of no consequence to it.
00:26:27.000 It doesn't mean anything, because it's not biological.
00:26:30.000 Yeah, I mean, I don't think millions of years, I mean, a number of decades or whatever.
00:26:37.000 But it's interesting that even if we set that aside, we say machine intelligence is possible for some reason.
00:26:43.000 Let's just play with that.
00:26:45.000 I still think that would be very rapid change, including biological change.
00:26:50.000 I mean, we are doing great advances, making great advances in biotech as well, and we'll increasingly be able to control what our own organisms are doing through different means and enhance human capacities through biotechnology.
00:27:09.000 So even there, it's not going to happen overnight, but over an historically very short period of time, I think you would still see quite profound change just from applying bioscience to change human capacities.
00:27:25.000 Yeah, one of the technologies or one of the things that's been discussed to sort of mitigate the dangers of artificial intelligence is a potential merge.
00:27:35.000 Some sort of symbiotic relationship with technology that you hear discussed, like...
00:27:41.000 I don't know exactly how Elon's neural link works, but it seems like a step in that direction.
00:27:49.000 There's some sort of a brain implant that interacts with an external device, and all of this increases the bandwidth for available intelligence and knowledge.
00:28:01.000 Yeah, I'm sort of skeptical that that will work.
00:28:04.000 I mean, good that somebody tries it, you know, but I think it's quite technically hard to improve a normal, healthy human being's, say, cognitive capacity or other capacities by implanting things in them.
00:28:22.000 And get benefits that you couldn't equally well get by having the gadget outside of the body.
00:28:27.000 So I don't need to have an implant to be able to use Google, right?
00:28:32.000 Right.
00:28:33.000 And there are a lot of advantages to having it external.
00:28:36.000 You can kind of upgrade it very easily.
00:28:38.000 You can shut it off.
00:28:40.000 Well, hopefully you could do that even with an implant.
00:28:43.000 And once you start to look into the details, there's sort of these kind of demos, but then if you actually look at the papers, often you find, well, then there were these side effects, and the person had headaches, or they had some deficit in their speech, or, you know, like, an infection.
00:28:57.000 Like, it's just, biology is messy.
00:28:58.000 Yes.
00:29:00.000 So, maybe it will work better than I expect.
00:29:06.000 That could be good.
00:29:07.000 But otherwise, I think that the place where it will first become possible to enhance...
00:29:16.000 human biological capacities would be through genetic selection, which is technologically something very near.
00:29:27.000 You mean like CRISPR type?
00:29:28.000 So that would be editing, right?
00:29:30.000 When you actually go in and change things.
00:29:31.000 That also is moving.
00:29:33.000 What do you mean by selection?
00:29:34.000 Well, so this would just be in the context of, say, in vitro fertilization.
00:29:36.000 You have usually some half dozen or dozen embryos created during this fertility procedure, which is standardly used.
00:29:45.000 So rather than just a doctor kind of looking at these embryos and saying, well, that one looks healthy, I'm going to implant that, you could run some genetic test and then use that as a predictor and select the one you think has the most desirable attributes.
00:30:01.000 And so this could be a trend in terms of how human beings reproduce, that we...
00:30:06.000 Instead of just randomly having sex, woman gets pregnant, gives birth to a child, we don't know what it's going to be, what's going to happen.
00:30:15.000 We just hope that it's a good kid.
00:30:17.000 Instead of that, you start looking at all the various components that we can measure.
00:30:24.000 Yeah.
00:30:25.000 And so, I mean, to some extent, we already do this.
00:30:28.000 There are a lot of testing done for various chromosomal abnormalities that you can already check for.
00:30:37.000 But our ability to look beyond clear, stark diseases, that this one gene is wrong.
00:30:44.000 To look at more complex traits is increasing rapidly.
00:30:49.000 So obviously there are a lot of ethical issues and different views that come into that.
00:30:53.000 But if we're just talking about what is technologically feasible, I think that already you could do a very limited amount of that today.
00:31:00.000 And maybe you would get two or three IQ points in expectation more if you selected using current technology based on 10 embryos, let's say.
00:31:10.000 So very small.
00:31:11.000 But as genomics gets better at deciphering the genetic architecture of, whether it's intelligence or personality attributes, then you would have more selection power and you could do more.
00:31:25.000 And then there is a number of other technologies we don't yet have, but which if you did, would then kind of stack with that and enable much more powerful forms of enhancement.
00:31:35.000 So there, yeah, I don't think there are any major technological hurdles, really, in the way.
00:31:43.000 Just some small amount of incremental further improvement.
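
The "two or three IQ points in expectation" figure can be reproduced with a small Monte Carlo under stated assumptions: sibling IQ differences with a standard deviation around 12 points, and a polygenic score capturing only a few percent of that variance. Both parameters are assumptions for illustration, not figures given in the episode.

```python
# Monte Carlo sketch of embryo selection: score 10 embryos with a weak
# genetic predictor and implant the top scorer. All parameters assumed.
import random, statistics

def expected_gain(n_embryos=10, sibling_sd=12.0, r2=0.025, trials=20000):
    r = r2 ** 0.5                      # predictor's correlation with trait
    gains = []
    for _ in range(trials):
        true_iq = [random.gauss(0, sibling_sd) for _ in range(n_embryos)]
        # noisy score with correlation r to the true (standardized) trait
        score = [r * (t / sibling_sd) + random.gauss(0, (1 - r2) ** 0.5)
                 for t in true_iq]
        best = max(range(n_embryos), key=lambda i: score[i])
        gains.append(true_iq[best])    # gain over the average embryo
    return statistics.mean(gains)

print(f"expected gain: ~{expected_gain():.1f} IQ points")  # lands near 2-3
```
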
00:31:47.000 That's when you talk about doing something with genetics and human beings and selecting.
00:31:56.000 Selecting for the superior versions.
00:31:58.000 And then if everybody starts doing that.
00:32:01.000 The ethical concerns, when you start discussing that, people get very nervous.
00:32:04.000 Because they start to look at their own genetic defects.
00:32:07.000 And they go, oh my god, what if I didn't make the cut?
00:32:09.000 Like, I wouldn't be here.
00:32:10.000 And you start thinking about all the imperfect people that have actually contributed in some pretty spectacular ways to what our culture is.
00:32:17.000 And like, well, if everybody has perfect genes, would all these things even take place?
00:32:21.000 Like, what are we doing, really, if we're bypassing nature and we're choosing to select for the traits and the attributes that we find to be the most positive and attractive?
00:32:33.000 Like, what are, like, that gets slippery.
00:32:35.000 And you think what would have happened if, say, some earlier age...
00:32:40.000 had had this ability to kind of lock in their, you know, their prejudices, or if the Victorians had had this, maybe we would all be, whatever, pious and patriotic now or something.
00:32:55.000 Yeah, we know, like the Nazis.
00:32:57.000 So, in general, with all of these powerful technologies we are developing, I think the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools.
00:33:15.000 But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom.
00:33:22.000 But we haven't earned them.
00:33:24.000 The people that are using them are sort of...
00:33:27.000 Think about the technology that all of us use.
00:33:32.000 How many...
00:33:33.000 How many pieces of technology do you use in a day and how much do you actually understand any of those?
00:33:38.000 Most people have very little understanding of how any of the things they use work.
00:33:42.000 They put no effort at all into creating those things, but yet they've inherited the responsibility of the power that those things possess.
00:33:50.000 Yeah, I mean, that's the only way we can do it.
00:33:53.000 It's just way too complex for any person.
00:33:56.000 If you had to sort of learn how to build every tool you use, you wouldn't get very far.
00:34:01.000 Isn't that fascinating, though, when you think about human beings and all the different things we do?
00:34:06.000 We have very little understanding of the mechanisms behind most of what we need for day-to-day life, yet we just use them because there's so many of us and so many people are understanding various parts of all these different things that together, collectively,
00:34:21.000 we can utilize the intelligence of all these millions of people that have innovated and we, with no work whatsoever, just go into the Verizon store and pick up the new phone.
00:34:30.000 I mean, and not just technology, but worldviews and political ideas as well.
00:34:36.000 It's not as if most people sit down with an empty table, try to think from the basic principles of what would be the ideal configuration of the state or something like that.
00:34:47.000 You just kind of absorb it and go with it.
00:34:49.000 You float in the stream of culture.
00:34:51.000 Yeah.
00:34:53.000 And it's amazing just how little of that actually at any point channels through your sort of conscious attention, where you make some rational, or otherwise deliberate, decision.
00:35:02.000 Most you just get carried away with.
00:35:07.000 But that again, I mean, if this is what we have to work with, then there's no other way.
00:35:12.000 There's no other way.
00:35:14.000 There's no other way, and there's no way, even like you and I discussing this, like discussing the history of this incredible spike of evolution, or innovation rather, in technology.
00:35:28.000 It just doesn't feel like anything.
00:35:32.000 It feels normal.
00:35:33.000 So even though we can intellectualize it, even though we can have this conversation, talk about what an incredible time we're in and how terrifying it is that things are moving at such an incredibly rapid rate.
00:35:44.000 And no one's putting the brakes on it.
00:35:47.000 No one's thinking about the potential pros and cons.
00:35:50.000 We're just pushing ahead.
00:35:51.000 Yeah.
00:35:52.000 Well, not nobody.
00:35:53.000 I mean, there are a few people.
00:35:55.000 I've got my research group.
00:35:56.000 Yes.
00:35:56.000 There's actually increased...
00:35:58.000 I mean, when I got interested in these things in the 90s, and it was very much a fringe activity.
00:36:05.000 There was some internet mailing lists, some people exchanging ideas.
00:36:08.000 But since then, I mean, there's now a small...
00:36:22.000 Yeah.
00:36:23.000 Yeah.
00:36:36.000 Well, actually, the field of artificial intelligence sometimes is kind of dated to 1956. That was a conference, but I mean, it's somewhat arbitrary, but roughly that's when it got started.
00:36:49.000 But the pioneers, even right back at the beginning, They thought that they were going to be able to do all the things that the human brain does.
00:36:59.000 In fact, they were quite optimistic.
00:37:00.000 They thought maybe 10 years or something like that.
00:37:02.000 Back then?
00:37:03.000 Yeah, many of them.
00:37:04.000 Really?
00:37:04.000 Even before computers?
00:37:06.000 No, they had computers in 1956. How did they?
00:37:08.000 What kind of computers?
00:37:09.000 Well, slow.
00:37:11.000 Slow computers.
00:37:12.000 When was the computer invented?
00:37:14.000 Well, it's one of those things.
00:37:17.000 I think during the Second World War, they had computers that were useful for doing stuff.
00:37:26.000 Then before that, they had kind of tabulating machines.
00:37:30.000 And before that, they had designs for things that, if they had been put together, would have been able to calculate a lot of numbers.
00:37:37.000 And then before that, they had an Abacus.
00:37:39.000 It kind of...
00:37:42.000 There's a number of... like, the line from having some external tool like a notepad, with which you can calculate bigger numbers, right, if you can scribble on a piece of paper, to a modern-day supercomputer, you can break it down into small steps, and they happen gradually.
00:37:57.000 But, yeah, roughly since the 40s or so.
00:38:03.000 That's when they first invented code?
00:38:04.000 Like, electrical, yeah.
00:38:05.000 Yeah.
00:38:06.000 I think.
00:38:08.000 So even back then, they thought we're only about 10 years away.
00:38:12.000 Some people.
00:38:26.000 There was some summer project where they were going to have a few students or whatever work over the summer, and they thought, oh, maybe we can solve vision over the summer.
00:38:37.000 And now we've kind of solved vision, but that's like 60 years later.
00:38:43.000 It can be hard to know how hard the problem is until you've actually solved it.
00:38:46.000 But the really interesting thing to me is that even though I can understand why they were wrong about how difficult it is, because how would you know, right, if it's 10 years of work or 100 years of work?
00:38:57.000 Kind of hard to estimate at the outset.
00:38:59.000 But what is striking is that even the ones who thought it was 10 years away, they didn't think of what the obvious next step would be after that.
00:39:07.000 Like if you actually succeeded at mechanizing all the functions of the human mind.
00:39:12.000 They couldn't think, well, it's obviously not going to stop there once you get human equivalence.
00:39:17.000 You're going to get superintelligence.
00:39:21.000 But it was as if the imagination muscle had so exhausted itself thinking of this radical possibility.
00:39:25.000 You could have a machine that does everything that the human does.
00:39:28.000 You couldn't kind of take the next step.
00:39:31.000 Or for that matter, the immense ethical and social implications.
00:39:36.000 Even if all you could do is to replicate a human mind, like in a machine.
00:39:39.000 If you actually thought you were building that and you were 10 years away, it'd be crazy not to spend a lot of time thinking about how this is going to impact the world.
00:39:47.000 But that didn't really seem to have occurred much to them at all.
00:39:51.000 Well, sometimes it seems that people just want to do it.
00:39:55.000 Like, even with the creation of the atomic bomb, I mean, they felt like they had to do it because we had to develop it before the Germans did.
00:40:04.000 Right.
00:40:04.000 But that was a specific reason.
00:40:06.000 Like, it wasn't just, oh, it could be fun to do, right?
00:40:09.000 Sure.
00:40:10.000 And so with the Manhattan Project, obviously, it was during wartime and maybe Hitler had a program.
00:40:16.000 They thought you could easily see why that would motivate a lot of people.
00:40:22.000 But even before they actually started the Manhattan Project, so the guy who kind of first conceived of the idea that you could make a nuclear explosion, Leo Szilard, he was a kind of eccentric physicist who conceived of the idea of a chain reaction.
00:40:39.000 So it's been known before that that you could split the atom and a little bit of energy came out.
00:40:43.000 But if you're going to split one atom at a time, You're never going to get anything because it's too little.
00:40:49.000 So the idea of a chain reaction was that if you split an atom and it releases two neutrons, then each of those can split another two atoms that then release four neutrons and you get an exponential blow-up.
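
The arithmetic behind the chain reaction is just repeated doubling. A small worked example, with an illustrative figure for the number of atoms in a kilogram of uranium-235:

```python
# Each fission frees two neutrons, each of which can trigger another
# fission, so the count doubles every generation: 2^n after n steps.
atoms_in_kg_u235 = 2.56e24    # ~ (1000 g / 235 g per mol) * 6.022e23

generations, split = 0, 1
while split < atoms_in_kg_u235:
    split *= 2                # the exponential blow-up Szilard saw
    generations += 1
print(f"{generations} doublings to fission ~1 kg")   # ~82 generations
```
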
00:41:02.000 So he thought of this...
00:41:04.000 I forget exactly when.
00:41:05.000 It must have been in the early 30s, probably.
00:41:10.000 And he was a remarkable person because he didn't just think, oh, this is a fun idea.
00:41:15.000 I should publish it and get a lot of citations.
00:41:17.000 But he thought, what would this mean for the world?
00:41:19.000 Gee, this is...
00:41:22.000 This could be bad for civilization.
00:41:24.000 And so he then went to try to persuade some other of his colleagues who were also working in nuclear physics not to pursue this, not to publish related ideas, and he had some partial success.
00:41:38.000 So there was some partial success where his colleagues agreed.
00:41:41.000 Some things were not published immediately.
00:41:43.000 Not all of his colleagues listened to him.
00:41:45.000 Of course.
00:41:46.000 Isn't that the problem?
00:41:48.000 That is the problem.
00:41:48.000 Some people are always going to want to be the ones that sort of innovate.
00:41:52.000 That is the problem in those cases where you would actually prefer the innovation not to happen.
00:41:57.000 Historically, of course, we now look back and think there are a lot of dissenters that we are now glad could have their way because a lot of cultures were quite resistant to innovation and they wanted to do the way things had always been,
00:42:17.000 whether it's like social innovation or technological innovation.
00:42:21.000 The Chinese were at one point ahead in seafaring, exploring, and then they shut all of that down because the emperor at the time, I guess, didn't like it.
00:42:33.000 So there are many examples of kind of stasis, but as long as there were a lot of different places, a lot of different countries, a lot of different mavericks, then somebody would always do it.
00:42:41.000 And then once the others could see that it worked, they could kind of copy and...
00:42:46.000 Things move forward.
00:42:47.000 But of course if there is a technology you actually want not to be developed, then this multipolar situation makes it very, very hard to coordinate, to refrain from doing that.
00:43:01.000 Yeah, this I think is a kind of structural problem in the current human condition that is ultimately responsible for a lot of the existential risks that we will face in this century.
00:43:14.000 There's this kind of failure of ability to solve global coordination problems.
00:43:19.000 Yeah, and when you think about the people that did Oppenheimer and the people behind the Manhattan Project, they were inventing this to deal with this existential threat, this horrific threat from Nazi Germany, the Japanese and the World War II,
00:43:36.000 you know, this idea that this evil empire is going to try to take over the world, and this created the momentum and this created the motivation to develop this incredible technology that wound up making a great amount of our electricity and wound up creating enough nuclear weapons to destroy the entire world many times over.
00:43:57.000 And we're in this strange state now where it was motivated by this horrific moment in history, this evil empire that tries to take over the world, and we come up with this incredible technological solution, the ultimate weapon, that we detonate a couple of times on some cities, and then now we're in this weird state where,
00:44:17.000 you know, we're how many years later?
00:44:21.000 80 years later?
00:44:22.000 And we're not doing it anymore.
00:44:24.000 We don't drop any bombs on people anymore, but we all have them and we all have them pointed at each other.
00:44:29.000 Well, not all.
00:44:29.000 Well, yes.
00:44:30.000 Which is a good thing, I think.
00:44:31.000 Quite a few.
00:44:32.000 But it's incredible that the motivation for this incredible technology, this amazing technology, was actually to deal with something that was awful.
00:44:43.000 Yeah, I mean, war has had a way of focusing minds and stuff.
00:44:49.000 No, I think that nuclear energy we would have had anyway.
00:44:51.000 Maybe it would have been developed like five years or ten years later.
00:44:56.000 Reactors are not that difficult to do.
00:45:01.000 So I think we could have gotten to all the good uses of nuclear technology that we have today without having to have had kind of the nuclear bomb developed.
00:45:10.000 Now, you pay attention to Boston Dynamics and all these different robotic creations that they've made?
00:45:17.000 They seem to have a penchant for doing really sinister-looking bots.
00:45:22.000 I think all robots that are, you know, anything that looks autonomous is kind of sinister-looking.
00:45:28.000 Well, I mean, you see the Japanese have these big-eyed, sort of rounded, so it's a different...
00:45:34.000 They're trying to trick us.
00:45:34.000 Boston Dynamics is, I guess, they want the Pentagon to give them funding or something.
00:45:38.000 Right, DARPA. They look like they're developing Terminators.
00:45:42.000 Yeah.
00:45:42.000 Yeah.
00:45:43.000 But what I was thinking is...
00:45:46.000 If we do eventually come to a time where those things are going to war for us instead of us, like if we get involved in robot wars, our robots versus their robots,
00:46:01.000 and this becomes the next motivation for increased technological innovation to try to deal with superior robots by the Soviet Union or by China, right?
00:46:10.000 These are more things that could be threats that could push people to some crazy level of technological innovation.
00:46:18.000 Yeah, it could.
00:46:20.000 I mean, I think there are other drivers for technological innovation as well that seem plenty strong, commercial drivers, let us say, so that we wouldn't have to rely on war or the threat of war to kind of stay innovative.
00:46:41.000 I mean, there has been this effort to try to see if it would be possible to have some kind of ban on lethal autonomous weapons.
00:46:52.000 There are a few technologies that we have.
00:46:54.000 There has been a relatively successful ban on chemical and biological weapons, which have by and large been honored and upheld.
00:47:08.000 There are kind of treaties on nuclear weapons, which has limited proliferation.
00:47:12.000 Yes, there are now maybe, I don't know, a dozen.
00:47:15.000 I don't know the exact number.
00:47:17.000 But it's certainly a lot better than 50 or 100 countries.
00:47:20.000 Yes.
00:47:22.000 And some other weapons as well, blinding lasers, landmines, cluster munitions.
00:47:29.000 So some people think maybe we could do something like this with lethal autonomous weapons, killer bots. Is that really what humanity needs most now, like another arms race to develop killer bots?
00:47:41.000 It seems arguably the answer to that is no.
00:47:48.000 Though a lot of my friends are supportive, I've kind of stood a little bit on the sidelines on that particular campaign, being a little unsure exactly what it is.
00:48:00.000 I mean, certainly I think it'd be better if we refrained from having some arms race to develop these than not.
00:48:07.000 But if you start to look in more detail, what precisely is the thing that you're hoping to ban?
00:48:12.000 So if the idea is the autonomous bit, like the robot should not be able to make its own firing decision.
00:48:17.000 Well, if the alternative to that is...
00:48:22.000 There's some 19-year-old guy sitting in some office building and his job is whenever the screen flashes fire now, he has to press a red button.
00:48:31.000 And then exactly the same thing happens.
00:48:33.000 I mean, I'm not sure how much is gained by having that extra step.
00:48:37.000 But it is something, it feels better for us.
00:48:40.000 For some reason, someone is pushing the button.
00:48:42.000 Right.
00:48:42.000 But exactly what does that mean?
00:48:44.000 Like in every particular firing decision?
00:48:46.000 Or is it like some...
00:48:49.000 Well, you've got to attack this group of surface ships here, and here are the general parameters, and you're not allowed to fire outside these coordinates.
00:48:58.000 I don't know.
00:48:59.000 I mean, another is the question of, it would be better if we had no wars, but if there is going to be a war, maybe it is better if it's robots v.
00:49:08.000 robots, or if there's going to be bombing.
00:49:11.000 Maybe you want the bombs to have high precision rather than low precision, like get fewer civilian casualties.
00:49:18.000 And operating under artificial intelligence so it makes better decisions.
00:49:21.000 Well, it depends exactly on how.
00:49:23.000 So I don't know.
00:49:24.000 On the other hand, you could imagine it kind of reduces the threshold for going to war if you think that you wouldn't fear any casualties.
00:49:30.000 Maybe you would be more eager to do it.
00:49:33.000 Right.
00:49:34.000 Or if it proliferates and you have these kind of mosquito-sized killer bots that terrorists have, and... It doesn't seem like a good thing to have a society where you have a facial recognition thing and then the bot flies out, and you just have a kind of dystopia.
00:49:52.000 I think we're thinking rationally.
00:49:55.000 We're thinking rationally given the overall view of the human race that we want peace and everything to be well.
00:50:03.000 Realistically, if you were someone who is trying to attack someone militarily, you'd want the best possible weapons that give you the best possible advantage.
00:50:12.000 And that's why we had to develop the atomic bomb first.
00:50:17.000 It's probably why we'll try to develop the killer autonomous robot first.
00:50:24.000 Yeah, yeah.
00:50:25.000 Someone else would have it.
00:50:26.000 Right, the fear that the other is.
00:50:27.000 So this is why it's basically a coordination problem.
00:50:31.000 Like, it's hard for any one country unilaterally to make sure that the world is peaceful and...
00:50:39.000 Sure.
00:50:40.000 And kind, right?
00:50:40.000 It requires everybody to synchronize their actions.
00:50:44.000 And then you can have successes like we've had with some of these treaties.
00:50:48.000 Like, we've not had a big arms race in biological weapons or in chemical weapons.
00:50:53.000 I mean, there have been.
00:50:53.000 There were cheaters even on the biological warfare program, like the Soviet Union had massive efforts there, but still probably less use of that and less development than if there had been no such treaty.
00:51:08.000 Or just look at the amount of money being wasted every year to maintain these large arsenals so that we can kill one another if one day we decide to do it.
00:51:18.000 There's got to be a better way.
00:51:19.000 But getting there is hard.
00:51:21.000 We would hope that we would get to some point where all this would be irrelevant because there's no more war.
00:51:26.000 Yeah, and so if you look at the biggest efforts so far to make that happen, so after the First World War, people were really aware of this.
00:51:36.000 They said, this sucks, like war.
00:51:38.000 I mean, look at this.
00:51:40.000 Like a whole generation just ground up machine guns.
00:51:43.000 Got to make sure this never happens again.
00:51:45.000 So they tried to do the League of Nations, but then didn't really invest it with very much power.
00:51:52.000 And then the second war, second world war happened.
00:51:55.000 And so then again, just after that, it's fresh in people's memory saying, well, never again.
00:51:58.000 This is it.
00:52:00.000 The United Nations and in Europe, the European Union is kind of both designed as ways to try to prevent this.
00:52:07.000 But again, with kind of maybe in the case of the United Nations, quite limited powers to actually enforce the agreements.
00:52:13.000 And there's a veto, which makes it hard if it's two of the major powers that are at loggerheads.
00:52:19.000 So it might be that if there were a third big conflagration, that then people would say, well, this time, you know, we've got to really put some kind of institutional solution in place that has enough enforcement power that we don't try this yet again.
00:52:37.000 So we don't have a second robot war.
00:52:39.000 So once we get through the first robot war...
00:52:41.000 I mean, but the kind of memories fade, right?
00:52:43.000 Yes, that's the problem, right?
00:52:45.000 So even the Cold War, I mean, I grew up...
00:52:47.000 I'm Swedish, I remember.
00:52:49.000 We were kind of in between, right?
00:52:52.000 And we were taught in schools about nuclear fallout and stuff.
00:52:55.000 It was like a very palpable sense that at any given point in time, there could be some miscalculation or crisis or something.
00:53:06.000 And all the way up to senior statesmen at the time, these were very real and very serious concerns.
00:53:13.000 And I feel that memory of just how bad it is to live in that kind of hair-trigger nuclear arms race Cold War situation has kind of faded, and now we think, wow, maybe the world didn't blow up, so maybe it wasn't so bad after all.
00:53:27.000 Well, I think that would be the wrong lesson to learn.
00:53:30.000 It's a bit like you're...
00:53:32.000 Playing Russian roulette and you survive one and you say, well, it isn't so dangerous at all to play Russian roulette.
00:53:37.000 I think I'm going to have another go.
00:53:39.000 You've got to realize, well, maybe that was a 10% chance or a 30% chance that the world would blow up during the Cold War and we were lucky, but it doesn't mean we want to have another one.
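An aside on the probability point here: a minimal sketch in Python of the survivorship reasoning being described, assuming, purely for illustration, that a Cold War-style standoff carries some fixed chance of catastrophe per run. The 10% and 30% figures are just the numbers floated in the conversation, not estimates.

```python
import random

def fraction_surviving(p_catastrophe: float, n_worlds: int = 100_000) -> float:
    """Simulate n_worlds independent 'Cold Wars'; return the fraction that survive."""
    survivors = sum(1 for _ in range(n_worlds) if random.random() > p_catastrophe)
    return survivors / n_worlds

# Even at a 30% chance of catastrophe, most worlds survive, and their
# inhabitants are tempted to conclude 'it wasn't so dangerous after all'.
for p in (0.10, 0.30):
    print(f"catastrophe risk {p:.0%}: ~{fraction_surviving(p):.0%} of worlds survive")
```

The point of the sketch: conditioning on survival, the surviving observers see the same outcome whether the underlying risk was 10% or 30%, which is why surviving one round is weak evidence that the game was safe.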
00:53:48.000 When I was in high school, it was a real threat.
00:53:50.000 When I was in high school, everyone was terrified that we were going to go to war with Russia.
00:53:54.000 It was a big thing.
00:53:56.000 And you talk to people from my generation about that, and everybody remembers it.
00:54:00.000 Remember that feeling that you had in high school.
00:54:02.000 Like, at any day, something could go wrong, and we could be at war with another country that's a nuclear superpower.
00:54:11.000 But that's all gone now.
00:54:13.000 Like, that feeling, that fear.
00:54:15.000 People are so confident that that's not going to happen, that that's not even in people's consciousness.
00:54:20.000 Yeah.
00:54:23.000 And then a number of maneuvers are made, and then you find yourself in a kind of situation where there's honor at stake and reputation, and you feel you can't back down, and then another thing happens, and you get into this place where if you even say something kind about the other side,
00:54:40.000 you seem to be, you know, soft, a pinko, a lightweight.
00:54:44.000 And on both sides, on the other side as well, obviously, they're going to have the same internal dynamic.
00:54:48.000 And each side says bad things about the other.
00:54:50.000 It makes the other side hate them even more.
00:54:52.000 And these things are then hard to reverse.
00:54:54.000 Like, once you find this dynamic happening, it's kind of almost... well, it's not that it's too late.
00:54:58.000 You can still try, but it can be very hard to back out of that.
00:55:01.000 And so if you can prevent yourself from going down that path to begin with, that's much preferable.
00:55:07.000 When you see Boston Dynamics and you see those robots, is there something comparable that's being developed, either in Russia or in China or somewhere else in the world, where there are similar types of robots?
00:55:19.000 Well, I think a lot of the Boston Dynamics thing seems more showy than actually useful.
00:55:25.000 Really?
00:55:25.000 These kind of animal-like things that hop around at 150 decibels or something.
00:55:33.000 If I were a special ops trying to sneak in, I wouldn't want...
00:55:36.000 It's kind of a big alarm.
00:55:39.000 But I think a lot of action would be more in terms of flying drones, maybe submarine stuff, missiles, that kind of stuff.
00:55:51.000 But when you see these robots and you see the ones that look like dogs or insects, Couldn't you imagine those things being armed with guns?
00:56:06.000 I could.
00:56:07.000 When they are, then it doesn't really look showy anymore.
00:56:10.000 It seems pretty effective.
00:56:11.000 You can't even kick those things over.
00:56:13.000 Yeah, well, I mean, I think if it has a gun, it really doesn't matter whether it looks like a dog or if it's just a small flying platform.
00:56:22.000 I mean, in general with AI and robotics, the cooler something looks, usually the less technically impressive it is.
00:56:32.000 As you see, the extreme case of this is these robots that look exactly like a human, maybe shaped like a beautiful woman or something like that.
00:56:42.000 They're complete hype.
00:56:45.000 Like Ex Machina.
00:56:46.000 Well, the movies, obviously, they do it because that's what works on film.
00:56:50.000 But every once in a while, you have some press release.
00:56:53.000 I forget what the name is of this female-looking robot that got citizenship in Saudi Arabia a few years ago.
00:57:01.000 It's like a pure publicity stunt, but the media just laps it up.
00:57:04.000 Wow, they've created this.
00:57:06.000 It's exactly like a human.
00:57:07.000 What a big breakthrough.
00:57:09.000 And it's like nothing.
00:57:10.000 Do you anticipate, like when you see Ex Machina, do you think that's something that could realistically be implemented in a hundred years or so?
00:57:22.000 Like we really could have some form of artificial human that's indistinguishable?
00:57:29.000 Well, I think the action is not going to lie in the robotic part so much as in the brain part.
00:57:39.000 I think it's the AI part.
00:57:41.000 And robotics only insofar as it becomes enabled by having, say, much better learning algorithms.
00:57:47.000 So right now, if you have a robot, for the most part, in any one of these big factories, it's like a blind, dumb thing that executes a pre-programmed set of motions over and over again.
00:57:58.000 And if you want to change off the production, you need to get in some engineers to reprogram it.
00:58:02.000 But with a human, you could kind of show them how to do something once or twice, and then they can do it.
00:58:09.000 So it will be interesting to see over the next few years whether we can see some kind of progress in robotics that enable this kind of imitation learning.
00:58:19.000 To work well enough that you could actually start doing it.
00:58:22.000 There are demonstrations already, but robustly enough that it would be useful and you could replace a lot of these kind of industrial robotics experts by having this.
00:58:36.000 So I think in terms of making things look like human, I think that's more for Hollywood and for press releases than the actual driver of progress.
00:58:47.000 Maybe not the actual driver of progress, but someone is probably going to try to replicate a human being once the technology becomes viable.
00:58:55.000 Did you see the movie Ex Machina?
00:59:00.000 It's a little bit of a blur.
00:59:03.000 I've seen some of these and not others.
00:59:06.000 Ex Machina was the one where the guy lives in a very remote location.
00:59:10.000 Yeah, like a beautiful place in Norway.
00:59:13.000 He created this beautiful girl robot that seduces this man.
00:59:18.000 At the end of it, she leaves him locked up in this thing and just takes off and gets on the helicopter and flies away.
00:59:25.000 The thing that's disturbing is that she knew how to manipulate his emotions to achieve a desired result, which was him helping her escape.
00:59:34.000 But then once she did, she had no real emotions.
00:59:37.000 So he was screaming and she had no compassion and no empathy.
00:59:40.000 She just hopped on the helicopter and left him there to starve to death inside that locked box.
00:59:44.000 And that is what scares people.
00:59:47.000 This idea that we're going to create something that's intelligent, it has intelligence like us, but it doesn't have all the things that we have.
00:59:54.000 Like...
00:59:55.000 Caring, love, friendship, compassion, the need for other human beings.
01:00:00.000 If you develop an autonomous robot that's really autonomous, it has no need for other people, that's where we get weirded out.
01:00:07.000 Like, it doesn't need us.
01:00:09.000 Right, yeah.
01:00:10.000 I mean, I think...
01:00:10.000 The same would hold even if it were not a robot, but just a program inside a computer.
01:00:16.000 But yeah, the idea that you could have something that is strategic and deceptive and so forth.
01:00:22.000 But then other elements of the movie, of course, and in general, a reason why it's bad to get your kind of map of the future from Hollywood itself.
01:00:32.000 So if you think it's this one guy, presumably some genius, living out in the nowhere and kind of inventing this whole system, like in reality, it's like anything else.
01:00:42.000 There are hundreds of people programming away on their computers, writing on whiteboards, and sharing ideas with other people across the world.
01:00:50.000 It doesn't look like a lone-genius thing.
01:00:56.000 And there would often be some economic reason for doing it in the first place; it's not just, oh, we have this Promethean attitude that we want to bring something into being.
01:01:07.000 So all of those things don't make for such good plot lines, so they just get removed.
01:01:14.000 But then I wonder if people actually think of the future in terms of some kind of...
01:01:19.000 Super villain and some hero and it's going to come down to these two people and they're going to wrestle.
01:01:26.000 And it's going to be very personalized and concrete and localized.
01:01:30.000 Whereas a lot of things that determine what happens in the world are very spread out and bureaucracies churning away.
01:01:36.000 Sure.
01:01:38.000 Yeah, that was a big problem that a lot of people had with the movie: the idea that this one man could innovate
01:01:43.000 at such a high level and be so far beyond everyone else is ridiculous.
01:01:47.000 That he's just doing it by himself on this weird compound somewhere.
01:01:52.000 Come on.
01:01:53.000 But that makes a great movie, right?
01:01:56.000 Yeah.
01:01:56.000 Fly in in the helicopter, drop you off in a remote location.
01:01:59.000 This guy shows you something he's created that is going to change the whole world.
01:02:03.000 And it looked beautiful.
01:02:03.000 I mean, I could imagine doing some writer's retreat there or something.
01:02:08.000 Well, when...
01:02:09.000 The iconic image of aliens from another world is these little gray things with no sexual organs and large heads and black eyes.
01:02:21.000 This is the iconic thing that we imagine when we think about things from another planet.
01:02:27.000 I've often wondered if what we think of in terms of artificial life from another planet, or life from another planet, is that.
01:02:36.000 It's like an artificial creation.
01:02:38.000 Like, in our ideas, we understand the biological limitations of the body when it comes to traveling through space: dealing with radiation, death, the need for food, things along those lines. So what we would do is create some artificial thing to travel for us, like we've already done on Mars,
01:02:56.000 right?
01:02:56.000 We have the rover.
01:02:57.000 That roams around Mars.
01:02:59.000 The next step would be an artificial, autonomous, intelligent creature that has no biological limitations like we do in terms of its ability to absorb radiation from space.
01:03:10.000 And we create one of those little guys just like that with an enormous head.
01:03:15.000 No sex organs.
01:03:16.000 Doesn't need sex organs.
01:03:17.000 And we have this thing.
01:03:20.000 Pilot these ships that can defy our own physical limitations in terms of what would happen to us if we had to deal with 1 million G-force because it's moving at some preposterous rate through space.
01:03:34.000 When we think of these things coming from another planet, if we think of life on another planet, If they can innovate in a similar fashion the way we do, we would imagine they would create an artificial creature to do all their dirty work.
01:03:49.000 Like, why would they want to, like, risk their body?
01:03:52.000 Right.
01:03:52.000 Yeah, I mean, except I think creature might conjure up stuff that...
01:03:56.000 I mean, if you have this spaceship, you don't have to, like, build a little thing that sits and turns the steering wheel.
01:04:02.000 I mean, this could be all automated.
01:04:04.000 Sure.
01:04:04.000 And you'd imagine a technology...
01:04:07.000 That is spacefaring in a serious way would have nanotechnology.
01:04:11.000 So they'd have basically the ability to arbitrarily configure matter in whatever structure they wanted.
01:04:18.000 They would have like nanoscale probes and things that could shapeshift.
01:04:23.000 It would not be that there would be this person sitting in a seat behind the steering wheel.
01:04:28.000 If they wanted to, there could be invisible probes, I think, like nanoscale things hiding in a rock somewhere, just connecting with an information link up to some planetary-sized computer somewhere.
01:04:44.000 I think that's the way that space is most likely to get colonized.
01:04:50.000 It's not going to be like with meat sacks kind of driving spaceships around and having Star Trek adventures.
01:04:55.000 It's going to be some spherical frontier emanating from whatever the home planet was, moving at some significant fraction of the speed of light and converting everything in its path into infrastructure.
01:05:10.000 Of whatever type is maximally valuable for that civilization.
01:05:14.000 Maybe computers and launchers to launch more of these space probes so that the whole wavefront can continue to propagate.
01:05:24.000 But we are...
01:05:25.000 I mean, one of the things you brought up earlier is that if human beings are going to continue, and we're going to propagate through the universe, we're going to try to go to other places, we're going to try to populate other planets. Are we going to do that with just robots?
01:05:42.000 Or are we going to try to do that biologically?
01:05:44.000 We're probably going to try to do it biologically.
01:05:46.000 One of the things you were saying earlier is one of the things that artificial intelligence could possibly do is accelerate our ability to travel to other lands or other planets.
01:05:54.000 I mean, we're going to try.
01:05:55.000 I mean, in fact, some people are, right?
01:05:57.000 I just think that's going to not lead to anything important until those efforts become obsoleted.
01:06:06.000 By some radical new technology wave, probably triggered by machine superintelligence that then rapidly leads to something approximating technological maturity.
01:06:19.000 Once innovation happens at digital timescales rather than human timescales, then all these things that you could imagine we're doing, if we had 40,000 years to work on it, we would have space colonies and cures for aging and all of these things, right?
01:06:32.000 But if that thinking time happens in, you know, digital space, then that long future gets telescoped, and I think you fairly quickly reach a condition where you have close to optimal technology.
01:06:47.000 And then you can colonize the space cost-effectively.
01:06:50.000 You just need to send out one little probe that then can land on some resource and set up a production facility to make more probes, and then it spreads exponentially everywhere.
01:07:01.000 And then if you want to, you could then, like, after that initial infrastructuring has happened, you could transport biological human beings to other planets if you wanted to.
01:07:11.000 But it's not really where the action is going to be.
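A rough sketch of the exponential spread just described, under hypothetical numbers chosen only for illustration: each probe builds two copies per replication cycle, and the galaxy has on the order of 10^11 star systems. Nothing here comes from the conversation itself.

```python
import math

SYSTEMS_IN_GALAXY = 1e11   # assumed order of magnitude for star systems
COPIES_PER_CYCLE = 2       # assumed: each probe replicates itself twice

# With doubling, the probe count after n cycles is 2**n, so the number
# of cycles needed to seed every system grows only logarithmically.
cycles = math.ceil(math.log(SYSTEMS_IN_GALAXY, COPIES_PER_CYCLE))
print(f"~{cycles} replication cycles to seed ~1e11 systems")  # ~37
```

On these assumptions, manufacturing is never the bottleneck; travel time between systems dominates, which is why the picture is a wavefront moving outward at some fraction of light speed rather than a fleet being built at home.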
01:07:13.000 But what if we were concerned there's some sort of a threat to the Earth?
01:07:19.000 Like...
01:07:20.000 Some sort of asteroid impact, something.
01:07:23.000 I mean, at that stage of technology, averting some asteroid would be, I think, trivial.
01:07:28.000 Really?
01:07:29.000 It would be like a gift of free energy.
01:07:31.000 Like, oh, here comes an energy package.
01:07:33.000 Great.
01:07:33.000 That's a funny way to look at it.
01:07:36.000 Do you think we're going to eventually colonize Mars?
01:07:40.000 Well, I think the answer is if and only if we manage to get through these key technological transitions.
01:07:51.000 And then I think we will colonize not just Mars, but everything else that is accessible in the universe.
01:07:59.000 When you talk about these things, people always want to know when.
01:08:02.000 When do you think it's going to happen?
01:08:03.000 What's the timeline?
01:08:03.000 Yeah, so my guess would be after technological maturity, like after superintelligence.
01:08:08.000 Now, with Mars, it's possible that there would be like a little kind of prototype colonization thing because people are really excited about that.
01:08:17.000 So you could imagine some little demo projects.
01:08:21.000 Happening sooner.
01:08:22.000 But if we're talking about something, say, that would survive long term, even if the Earth disappeared, like some kind of self-sustaining civilization, I think that's going to be very difficult to do until you have super intelligence and then it's going to be trivial.
01:08:37.000 So you think superintelligence could potentially be what, I mean, one of the applications would be to terraform Mars, to change the atmosphere, to make it sustainable for biological life.
01:08:49.000 Yeah, for biological life.
01:08:50.000 So we have like a second spot.
01:08:52.000 Yeah, for example.
01:08:52.000 Like a vacation house.
01:08:54.000 Now, this is a very radical condition, technological maturity, because maybe there are additional technologies we can't even think of yet, but even just from what we already know about physics, etc.,
01:09:07.000 we can sort of see possible technologies that we're not yet able to build, but we can see that they would be consistent with physics, that they would be stable structures.
01:09:16.000 And already that creates a vast space of things you could do.
01:09:21.000 And so, for example, I think it would be possible at technological maturity to upload human minds into computers, for example.
01:09:32.000 You think that's going to happen, like Ray Kurzweil stuff?
01:09:34.000 Well, I think, again, it would be technologically possible at technological maturity to do it.
01:09:40.000 Now, whether it's actually going to happen then depends, A, do we reach technological maturity?
01:09:45.000 And B, are we interested in using our technology for that purpose at that time?
01:09:52.000 But both of those seem kind of reasonably...
01:09:58.000 Possible?
01:09:59.000 Yeah, reasonably possible.
01:10:01.000 Possible, yeah.
01:10:02.000 Especially in comparison to what we've already achieved.
01:10:04.000 If I had a time machine and it could jump you 1,000 years from now into the future, would you do it?
01:10:12.000 Would you jump in?
01:10:15.000 I mean, I think just going on a long jet flight is kind of already stretching my...
01:10:22.000 What if it was an instantaneous trip to 1,000 years?
01:10:24.000 Could I come back?
01:10:25.000 No.
01:10:26.000 Well...
01:10:30.000 I probably wouldn't.
01:10:32.000 I don't know.
01:10:33.000 I mean, I'm kind of a bit cautious with these things.
01:10:39.000 At the very least, I'd rather think about it for a long time before.
01:10:42.000 Also, I have attachments.
01:10:44.000 There are people I care about here and projects and maybe even opportunities to try to make some difference.
01:10:50.000 If we actually are in this weird time right now, different from all of earlier human history, when nothing really much was happening, and we're not yet...
01:11:23.000 And, you know, if you have some ambition to try to do some good in the world, then that kind of can be a very exciting prospect as well.
01:11:32.000 Like, there might be no other better time to exist if your goal is to do good.
01:11:37.000 Yeah, we might be in the golden years.
01:11:40.000 In terms of ability to have...
01:11:43.000 To take actions that have large consequences.
01:11:45.000 Also this very unique transitionary period between the times of old and the times of new.
01:11:51.000 Like we're really in the heat of the change in terms of like we, you know, the internet is only 20 plus years old.
01:11:59.000 Phones, you know, cell phones at least, with people carrying them all the time, that's only 15-plus years old.
01:12:06.000 This is very, very new.
01:12:08.000 Yeah.
01:12:09.000 So it's an exciting, crazy time where all these changes are taking place really rapidly.
01:12:14.000 Like, if you were from the future, this might be the place where you would travel to, to experience what it was like to see this immense change take place almost instantaneously.
01:12:25.000 Like, if you could go back in time to a specific time in history and experience what life was like, to me, I think I'd probably pick ancient Egypt, like, during the days of the pharaohs.
01:12:36.000 I would love to see what it was.
01:12:39.000 No, no, you just need to watch.
01:12:41.000 Just to see what it looks like, you know, what it's like to experience life back then.
01:12:46.000 But if I was from the future, where things were...
01:12:49.000 Just out of curiosity, what do you think it would look like?
01:12:52.000 Like, what do you imagine yourself seeing in this?
01:12:54.000 I would imagine, I mean, I've really thought long and hard about the construction methods of ancient Egypt.
01:13:04.000 I would love to see what it looked like when they were building the pyramids.
01:13:09.000 How long did it take?
01:13:11.000 What were they doing?
01:13:13.000 How did they do it?
01:13:14.000 We still don't know.
01:13:15.000 It's all really theoretical.
01:13:16.000 There's all these ideas of how they constructed it with incredible precision.
01:13:40.000 Because we really don't know.
01:13:41.000 It's all speculation.
01:13:43.000 During the burning of the Library of Alexandria, we lost so much information.
01:13:46.000 We've got hieroglyphs and the physical structures that are still daunting.
01:13:51.000 We have no idea.
01:13:53.000 They look at the Great Pyramid of Giza, the huge one with two million-plus stones in it.
01:14:01.000 Who made that?
01:14:02.000 How?
01:14:02.000 How did you guys do it?
01:14:04.000 What?
01:14:06.000 Did you draw it out first?
01:14:08.000 How did you get all the rocks there?
01:14:10.000 I mean, I think that would be probably the spot that I would want to go to.
01:14:15.000 I would want to be there in the middle of the construction of the pyramids just to watch.
01:14:19.000 So those certainly would be big, I guess, tourist destinations for time travelers.
01:14:25.000 In terms of if one is thinking, I'm just saying...
01:14:28.000 What was going on back then?
01:14:30.000 We think the pyramids and the slave trains.
01:14:33.000 But of course, for most Egyptians, most of the time, they would be picking weeds from their field or putting their baby to sleep or stuff like that.
01:14:41.000 So kind of the typical moment of human existence.
01:14:45.000 They don't even think it's slaves anymore, I don't think.
01:14:47.000 I think they think it's skilled labor based on their diet.
01:14:50.000 Based on the diet, the utensils that they found in these camps, these workers' camps...
01:14:54.000 They think that these were highly skilled craftspeople, that it wasn't necessarily slaves.
01:15:01.000 They used to think it was slaves, but now, because of the bones and the food, they know the workers were eating really well, and then there's also the level of sophistication involved.
01:15:12.000 This is not something you just get kind of slaves to do.
01:15:15.000 This seems to be that there was a population of structural engineers, that there was a population of skilled construction people, and that they tried to, you know, utilize all of these great minds that they had back then and put this thing together.
01:15:31.000 But it's still a mystery.
01:15:32.000 I think that's the spot that I would go to because I think it would be amazing to see so many different innovative times.
01:15:39.000 I mean, it would be amazing to be alive during the time of Genghis Khan or to be alive during some of the wars of 1,000, 2,000 years ago just to see what it was like.
01:15:54.000 The pyramids would be the big one.
01:15:56.000 But I think if I was in the future, some weird dystopian future where artificial intelligence runs everything and human beings are linked to some sort of neurological implant that connects us all together, and we long for the days of biological independence, we would want to see: what was it like when they first started inventing phones?
01:16:18.000 What was it like when the internet was first opened up for people?
01:16:22.000 What was it like when people saw, when someone had someone like you on a podcast and was talking about...
01:16:41.000 This really Goldilocks period of great change where we're still human, but we're worried about privacy.
01:16:47.000 We're concerned our phones are listening to us.
01:16:50.000 We're concerned about surveillance states, and people put little stickers over the laptop camera.
01:16:55.000 We see it coming, but it hasn't quite hit us yet.
01:16:59.000 We're just seeing the problems that are associated with this increased level of technology in our lives.
01:17:09.000 Which is, yeah, that is a strange thing.
01:17:12.000 If we add up all these pieces, it does put us in this very weirdly special position.
01:17:18.000 And you wonder, hmm, it's a little bit too much of a coincidence.
01:17:25.000 It might be the case, but yeah, it does put some strain on it.
01:17:28.000 When you say a little too much of a coincidence, how so?
01:17:32.000 I mean, I guess the intuitive way of thinking about it, like what are the chances that just by chance you would happen to be living in the most interesting time in history, being like a celebrity, like whatever, like that's pretty low prior probability.
01:17:48.000 Oh, you mean like for me?
01:17:49.000 Well, from you, I mean for all of us, really.
01:17:53.000 For all of us.
01:17:54.000 And so that could just be, I mean, if there's a lottery, somebody's got to have the ticket, right?
01:18:03.000 Or, yeah, or we are wrong about this whole picture, and there is some very different structure in place, which would make our experiences more typical.
01:18:16.000 That's what I was getting to.
01:18:17.000 Yeah, I gathered.
01:18:19.000 Yeah, so...
01:18:21.000 How much have you considered the possibility of a simulation?
01:18:25.000 Well, a lot.
01:18:26.000 I mean, I developed a simulation argument back in the early 2000s.
01:18:32.000 And so, yeah.
01:18:35.000 But I mean, I know that you developed this argument and I know that you've spent a great deal of time working on this.
01:18:41.000 But personally, the way you view the world How much does it play into your vision of what reality is?
01:18:54.000 Well, it's hard to say.
01:19:02.000 I mean, for the majority of my time, I'm not actively thinking about that.
01:19:08.000 I'm just living.
01:19:13.000 Now, I have this weird job where my work is actually to think about big-picture questions.
01:19:18.000 So it kind of comes in through my work as well.
01:19:23.000 When you're trying to make sense of our position, our possible future prospects, the levers which we might have available to affect the world, what would be a good and bad way of pulling those levers, then you have to try to put all of these constraints and considerations together.
01:19:40.000 And in that context, I think it's important.
01:19:44.000 I think if you are just going about your daily existence, then it might not really be very useful or relevant to constantly try to bring in hypotheses about the nature of our reality and stuff like that.
01:20:02.000 Because for most of the things you're doing on a day-to-day basis, they work the same, whether it's inside a simulation or in basement-level physical reality.
01:20:11.000 You still need to get your car keys out.
01:20:14.000 So in some sense, it kind of factors out and is irrelevant for many practical intents and purposes.
01:20:20.000 Do you remember when you started to contemplate the possibility of a simulation?
01:20:27.000 No, I mean, I remember when the simulation argument occurred to me, which is different. I mean, for as long as I can remember, sure, maybe it's a possibility: it could all be a dream, it could be a simulation. But there is this specific argument that kind of narrows down the range of possibilities, where the simulation hypothesis is then one of only three options. What are the three options?
01:20:54.000 Well, one is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity.
01:21:04.000 That's like option one.
01:21:06.000 Could you define technological maturity?
01:21:09.000 Well, say having developed at least all those technologies that we already have good reason to think are physically possible.
01:21:17.000 So that would include the technology to build extremely large and powerful computers on which you could run detailed computer simulations of conscious individuals.
01:21:32.000 So that would kind of be the pessimistic one: if almost all civilizations at our stage fail to get there, that's bad news, right?
01:21:41.000 Because then we'll fail as well, almost certainly.
01:21:45.000 That's one possibility.
01:21:46.000 Yeah, so that's option one.
01:21:49.000 Option two is that there is a very strong convergence among all technologically mature civilizations in that they all lose interest in creating ancestor simulations or these kinds of detailed computer simulations of conscious people like their historical predecessors or variations.
01:22:07.000 So maybe they have all of these computers that could do it, but for whatever reason, they all decide not to do it.
01:22:13.000 Maybe there's an ethical imperative not to do it or some other...
01:22:16.000 I mean, we don't really know much about these post-human creatures and what they want to do and don't want to do.
01:22:22.000 Post-human creatures.
01:22:23.000 Well, I'd imagine that by the time they have the technology to do this, they would also have enhanced themselves in many different ways.
01:22:30.000 Right.
01:22:31.000 Perhaps enhancing their ability to recognize the consequences.
01:22:34.000 Right.
01:22:35.000 Yeah.
01:22:35.000 Of creating some sort of simulation.
01:22:35.000 Yeah, they would almost certainly have cognitively enhanced themselves, for example.
01:22:39.000 Well, is the concept of...
01:22:42.000 Downloading consciousness into a computer, it almost ensures that there's going to be some type of simulation.
01:22:48.000 If you have the ability to download consciousness into a computer, once it's contained into this computer, what's to stop it from existing there?
01:22:58.000 As long as there's power and as long as these chips are firing and electricity is being transferred and data is being moved back and forth, you would essentially be in some sort of a simulation.
01:23:13.000 Well, I mean, if you have the capability to do that and also the motive...
01:23:16.000 It would have to simulate something that resembles some sort of a biological interface.
01:23:23.000 Otherwise, it's not going to know what to do, right?
01:23:24.000 Yeah.
01:23:25.000 So we have these kind of virtual reality environments now that are imperfect but improving.
01:23:33.000 And you could kind of imagine that they get better and better and then you have a perfect virtual reality environment.
01:23:39.000 But imagine also that your brain, instead of sitting in a box with big headphones and some glasses on, the brain itself also could be part of the simulation.
01:23:48.000 The Matrix.
01:23:50.000 Well, I think in The Matrix there are biological humans outside that plug in, right?
01:23:54.000 Right.
01:23:54.000 But you could include in the simulation, just as you have maybe simulated coffee mugs and cars, etc., you could have simulated brains.
01:24:06.000 And so...
01:24:08.000 Here is one assumption coming in from outside the simulation argument, and one can talk about it separately, but it's the idea I call the substrate-independence thesis: that you could in principle have conscious experiences implemented on different substrates.
01:24:26.000 It doesn't have to be carbon atoms, as is the case with the human brain.
01:24:30.000 It could be silicon atoms.
01:24:31.000 What creates conscious experiences is some kind of structural feature of the computation that is being performed.
01:24:38.000 Rather than the material that is used to underpin it.
01:24:42.000 So in that case, you could have a simulation with detailed simulations of brains in it, where maybe every neuron and synapse is simulated, and then those brains would be conscious.
01:24:53.000 And that's possibility number two?
01:24:55.000 Well, no, so the possibility number two is that these post-humans just are not at all interested in doing it.
01:25:00.000 And not just that some of them don't, but that of all these civilizations that reach technological maturity, pretty uniformly, they just don't do that.
01:25:09.000 And what's number three?
01:25:11.000 That we are in a simulation, the simulation hypothesis.
01:25:14.000 And where do you lean?
01:25:16.000 Well, I generally tend to punt on the question of precise probabilities there.
01:25:21.000 I mean, I think it would be a probability thing, right?
01:25:23.000 Yes.
01:25:24.000 You assign some to each.
01:25:25.000 But yeah, I've refrained from giving a very precise probability.
01:25:32.000 Partly because, I mean, if I said some particular number, it would get quoted, and it would create this maybe false sense of precision.
01:25:40.000 The argument doesn't allow you to derive that the probability is X, Y, or Z. It's just that at least one of these three has to obtain.
01:25:50.000 So, yeah, so that narrows it down.
01:25:52.000 Because you might think...
01:25:54.000 How do we know the future is big?
01:25:56.000 You could just make up any story and we have no evidence for it.
01:25:59.000 But it seems that there are actually, if you start to think everything through, quite tight constraints on what probabilistically coherent views you could have.
01:26:08.000 And it's kind of hard even to find one overall hypothesis that fits this and various other considerations that we think we know.
01:26:17.000 The idea would be that if there is one day the ability to create a simulation, that it would be indiscernible from reality itself.
01:26:27.000 Say if we're not in a simulation yet.
01:26:30.000 If this is just biological life, we're just extremely fortunate to be in this Goldilocks period.
01:26:35.000 But we're working on virtual reality in terms of like Oculus and all these companies are creating these consumer-based virtual reality things that are getting better and better and really kind of interesting.
01:26:47.000 You've got to imagine that 20 years ago there was nothing like that.
01:26:50.000 20 years from now, it might be indiscernible.
01:26:53.000 You might be able to create a virtual reality that's impossible to distinguish from reality...
01:27:24.000 If they figure out a way to do that, one day they will have an artificial reality that's indiscernible from reality itself.
01:27:32.000 And if that is the case, how do we know if we're in it?
01:27:35.000 Right.
01:27:35.000 That is roughly the gist of it.
01:27:40.000 Now, as I said, I think if you simulate the brain also, you have a cheaper overall system than if you have a biological component in the center surrounded by virtual reality gear.
01:27:55.000 So for a given cost, I think you could create many more ancestor simulations with simulated brains in them rather than biological brains with VR gear.
01:28:07.000 So most, in these scenarios where there would be a lot of simulations, most of those scenarios, it would be the kind of where everything is digital.
01:28:15.000 Because it's just cheaper with mature technology to do it that way.
01:28:20.000 This is one of the biggest, for lack of a better term, mindfucks.
01:28:27.000 When you really stop and think about reality itself.
01:28:30.000 That if we are living in a simulation, like, what is it?
01:28:34.000 And why?
01:28:35.000 And where does it go?
01:28:37.000 And how do I respond?
01:28:40.000 How do I move forward?
01:28:41.000 If I really do believe this is a simulation, what am I doing here?
01:28:45.000 Yeah, those are big questions.
01:28:47.000 Huge questions.
01:28:49.000 And some of them arise even if we're not in a simulation.
01:28:53.000 Yeah.
01:28:53.000 And aren't there people that have done some strange, impossible to understand calculations that are designed to determine whether or not there's a likelihood of us being involved in a simulation currently?
01:29:05.000 Yeah.
01:29:06.000 Yeah, I think it slightly misses the point.
01:29:12.000 So there are these attempts to try to figure out the computational resources that would be required if you wanted to simulate some physical system with perfect precision.
01:29:26.000 So if we have some human, a brain, a room, let's say, and we wanted to simulate every little part, every atom, every subatomic particle, the whole quantum wave function, what would be the computational load of that?
01:29:45.000 And would it be possible to build a computer powerful enough that you could actually do this?
01:29:51.000 Now, I think the way that this misses the point is that it's not necessary to simulate all the details of the environment that you want to create in an ancestor simulation.
01:30:04.000 You would only have to simulate it insofar as it is perceptible to the observer inside the simulation.
01:30:11.000 So, if some post-human civilization wanted to create a Joe Rogan doing a podcast simulation, they'd need to simulate...
01:30:21.000 Joe Rogan's brain, because that's where the experiences happen.
01:30:24.000 And then whatever parts of the environment that you are able to perceive.
01:30:28.000 So surface appearances, maybe of the table and walls.
01:30:32.000 Maybe they would need to simulate me as well, or at least a good enough simulacrum that I could sort of spit out words that would sound like they came from a real human, right?
01:30:41.000 I don't know.
01:30:42.000 Now we're getting quite good with this GPT-2, like this kind of AI that just spews out words with...
01:30:50.000 I don't know whether...
01:30:51.000 Anyway, but what is happening inside this table right now is completely irrelevant.
01:30:56.000 You have no way of knowing whether there even are atoms in there.
01:30:59.000 Now, you could...
01:31:01.000 Take a big electron microscope and look at finer structure and then you could take an atomic force microscope and you could see individual atoms even and you could perform all kinds of measurements.
01:31:13.000 And it might be important that if you did that you wouldn't see anything weird because physicists do these experiments and they don't see anything weird.
01:31:20.000 But then you could kind of fill in those details like if and when somebody were performing those experiments.
01:31:25.000 That would be vastly cheaper than continuously running all of this.
01:31:29.000 And so this is the way a lot of computer games are designed today, that they have a certain rendering distance.
01:31:35.000 You only actually simulate the virtual world when the character goes close enough that you could see it.
01:31:42.000 And so I imagine these kind of super-intelligent post-humans doing this.
01:31:45.000 Obviously, they would have figured that out and a lot of other optimizations.
01:31:50.000 So in other words, these calculations or experiments, I think, don't really bear on the hypothesis.
01:31:56.000 Right.
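The "rendering distance" point above is essentially lazy, on-demand evaluation with caching. Here is a minimal sketch; all names (LazyWorld, observe) are hypothetical, not from any real engine. Detail is generated only when something is actually observed, and cached so repeated measurements agree, which matches the remark that physicists never see anything weird when they look closer.

```python
import random

class LazyWorld:
    """Generate fine-grained detail only when an observer looks at it."""

    def __init__(self, seed: int = 42):
        self._cache = {}                 # (region, resolution) -> state
        self._rng = random.Random(seed)  # fixed seed keeps the world stable

    def observe(self, region, resolution):
        key = (region, resolution)
        if key not in self._cache:       # compute only what is looked at
            self._cache[key] = f"state-{self._rng.getrandbits(32):08x}"
        return self._cache[key]          # repeat observations stay consistent

world = LazyWorld()
world.observe((0, 0), "table-surface")          # coarse view, cheap
fine = world.observe((0, 0), "atomic")          # filled in only when probed
assert world.observe((0, 0), "atomic") == fine  # no visible 'glitch'
```

The design point: the cost scales with what observers actually measure, not with the full microscopic state of the world, which is why the "simulate every atom" calculations overestimate the requirement.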
01:31:58.000 Without assigning a probability to any one of those three scenarios, what makes you think,
01:32:06.000 if you do stop and think, that we're in a simulation? What are the things that are convincing to you?
01:32:13.000 Well, it would mainly go through the simulation argument.
01:32:17.000 To the extent that I think the two alternative hypotheses are improbable, that would kind of shift the probability mass onto the third remaining one.
01:32:26.000 Is it really only three?
01:32:27.000 So the ones are...
01:32:30.000 That human beings go extinct.
01:32:32.000 And also other civilizations at our stage in the cosmos or whatever.
01:32:37.000 Yes.
01:32:38.000 It's a strong filter.
01:32:41.000 That they either go extinct or they decide not to pursue it.
01:32:44.000 They all lose interest, yeah.
01:32:45.000 Or it becomes a simulation.
01:32:46.000 Is that really the only three options?
01:32:47.000 Well, I think the only three live options.
01:32:49.000 So you can...
01:32:51.000 I can kind of unfold the argument a little bit more and look at it in a more granular way.
01:32:55.000 So suppose that the first two options are false.
01:32:59.000 So some non-trivial fraction of civilizations at our stage do get through.
01:33:03.000 And some non-trivial fraction of those are still interested.
01:33:09.000 Then I think you can convincingly show that by using just a small portion of their resources they could create very, very many simulations.
01:33:20.000 And you can show that or argue for that by comparing the computational power of systems that we know are physically possible to build.
01:33:31.000 We can't currently build them, but we can see that you could build them with nanotech and planetary-sized resources, on the one hand.
01:33:38.000 And on the other hand, estimates of how much compute power it would take to simulate a human brain.
01:33:45.000 And you find that a mature civilization would have many, many orders of magnitude more.
01:33:50.000 So that even if they just used 1% of their compute power of one planet for one minute, they could still run thousands and thousands and thousands of these simulations.
01:34:00.000 And they might have billions of planets and they might last for billions of years.
01:34:04.000 So the numbers are quite extreme, it seems.
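A hedged back-of-envelope version of the "numbers are quite extreme" claim. All figures below are rough orders of magnitude of the kind used in the simulation-argument literature, not measurements, and none come from the conversation itself: a planetary-scale computer at roughly 10^42 operations per second, roughly 10^17 operations per second to simulate one brain, and roughly 10^20 brain-seconds across all of human history.

```python
OPS_PER_SEC_PLANET = 1e42        # assumed: planetary-scale computer
OPS_PER_BRAIN_SECOND = 1e17      # assumed: cost of simulating one brain
BRAIN_SECONDS_OF_HISTORY = 1e20  # assumed: ~1e11 people * ~30 yrs * ~3e7 s/yr

ops_per_history = OPS_PER_BRAIN_SECOND * BRAIN_SECONDS_OF_HISTORY  # ~1e37
budget = 0.01 * OPS_PER_SEC_PLANET * 60  # 1% of one planet for one minute

# On these assumptions: ~6e4 complete runs of human history per minute.
print(f"ancestor simulations per minute: {budget / ops_per_history:.0e}")
```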
01:34:07.000 So then what you get is this implication that if the first two options are false, it would follow that there would be many, many more simulated experiences of our kind than there would be original experiences of our kind.
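For reference, the published form of this step (Bostrom, "Are You Living in a Computer Simulation?", 2003) packages it as a simple fraction; the notation below is as best recalled from that paper, where $f_p$ is the fraction of civilizations that reach technological maturity, $\bar{N}$ the average number of ancestor simulations such a civilization runs, and $H$ the average number of pre-maturity individuals per civilization:

$$f_{\text{sim}} \;=\; \frac{f_p\,\bar{N}\,H}{f_p\,\bar{N}\,H + H} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}$$

If $f_p\,\bar{N} \gg 1$, then $f_{\text{sim}} \approx 1$: almost all experiences of our kind would be simulated, which is exactly the implication stated above.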
01:34:25.000 So the idea is that if we continue to innovate, if human beings or intelligent life in the cosmos continues to innovate, that creating a simulation is almost inevitable?
01:34:37.000 No, no.
01:34:37.000 I mean, the second might be...
01:34:39.000 That we decide not to.
01:34:40.000 Yeah, and others with the same capability.
01:34:43.000 But what if they don't decide not to?
01:34:45.000 If they don't decide not to...
01:34:48.000 The first option: if human beings do figure out a way to not die out and stay innovative, and we don't have any sort of natural or man-made disasters, then step two,
01:35:03.000 if we don't decide to not pursue this,
01:35:08.000 if we continue to pursue all the various forms of technological innovation, including simulations, then it becomes inevitable.
01:35:18.000 If we get past those first two options, it becomes inevitable that we pursue it.
01:35:25.000 Well, so if they have the capacity...
01:35:28.000 Then they will do it.
01:35:29.000 And the motive, or like the desire to do it.
01:35:32.000 Yes.
01:35:33.000 So then they would create hugely many of these.
01:35:37.000 So not just one simulation, right?
01:35:39.000 Because it's so cheap at technological maturity, if you have a cosmic empire of resources, they don't have to have a very big desire to do this.
01:35:48.000 They might just think, well, you know...
01:35:51.000 Well, that was the big question that Elon said he would ask artificial intelligence.
01:35:55.000 He said, what's beyond the simulation?
01:35:58.000 That's the real question.
01:35:59.000 If this is a simulation, if there's many, many simulations running currently, What's beyond the simulation?
01:36:07.000 Well, yeah, you might be curious about that.
01:36:09.000 I mean, I think the more important question would be, like, what do we, all things considered, have the most reason to do in our situation?
01:36:17.000 Like, what would it be wise for us to do?
01:36:19.000 Is there, like, some way that we can be helpful or have the best life or whatever your goal is?
01:36:26.000 Or is that ridiculous to even consider?
01:36:28.000 Maybe it's beyond us.
01:36:31.000 The question of what is outside?
01:36:32.000 Yes.
01:36:34.000 Well...
01:36:36.000 I mean, I don't think it's ridiculous to consider.
01:36:38.000 I think it might be beyond us, but maybe we would be able to form some abstract conception of what it is.
01:36:44.000 I mean, in fact, if the path to believing the simulation hypothesis is the simulation argument, then we have a bunch of structure there that gives us some idea.
01:36:54.000 Like, there would be some advanced civilization that would have developed a lot of technology over time, including compute technology.
01:37:15.000 Right, right.
01:37:26.000 And then for one reason or another, they would have decided to use some of the resources to create simulations.
01:37:32.000 And inside one of those simulations, perhaps, our experiences would be taking place.
01:37:39.000 So you could more speculatively fill in more details there.
01:37:44.000 But I still think that fundamentally our ability to grok this whole thing would be very limited.
01:37:50.000 And...
01:37:52.000 There might be other considerations that we are oblivious to.
01:37:57.000 I mean, if you think about the simulation argument, it's quite recent, right?
01:38:02.000 So it's less than 20 years old.
01:38:05.000 So if you think that...
01:38:08.000 So suppose it's correct, for the sake of argument.
01:38:10.000 Then up to this point, everybody was missing something like hugely important and fundamental, right?
01:38:16.000 Really smart people, hundreds of years, like this massive piece right in the center.
01:38:22.000 But what are the chances that we now have figured out the last big missing piece?
01:38:26.000 Presumably, there must be some further big, giant realization that is beyond us currently.
01:38:33.000 So I think having some...
01:38:35.000 Yeah, I mean, that looks kind of plausible, but maybe there are further big discoveries or revelations that would maybe not falsify the simulation argument, but change the interpretation; it's hard to know in advance what that would be.
01:38:49.000 Now, is the concept that if there is a simulation that all the historical record is simulated as well?
01:38:55.000 Or when did it kick in?
01:38:57.000 Well, there are different options there, and there might be many different simulations that are configured differently.
01:39:03.000 There could be ones that run for a very long time, ones that run for a short period of time, ones that simulate everything and everybody, others that just focus on some particular scene or person.
01:39:15.000 It's just a vast space of possibilities there.
01:39:19.000 And which ones of those would be most likely is really hard to say much about because it would depend on the reasons for creating these simulations, like what would the interests of these hypothetical post-humans be.
01:39:30.000 Have you ever had a conversation with a pragmatic, capable person who really understands what you're saying, but they disagree about even the possibility of a simulation?
01:39:41.000 Yeah.
01:39:45.000 It must have occurred, but it doesn't tend to be the place where the conversation usually goes.
01:39:52.000 Where does the conversation usually go?
01:39:55.000 Well, I mean, I move in kind of unrepresentative circles.
01:40:00.000 So I think amongst the folk I interact with a lot, I think a common reaction is that it's plausible and still there is some uncertainty because these things are always hard to figure out.
01:40:16.000 But we should assign it some probability.
01:40:21.000 But I'm not saying that would be the typical reaction if you kind of did a Gallup survey or something like that.
01:40:28.000 I mean, another common thing is, I guess, to misinterpret it in some way or another.
01:40:39.000 And there are different versions of that.
01:40:41.000 So one would be this idea that in order for the simulation hypothesis to be true, it has to be possible to simulate everything around us to perfect microscopic detail, which we discussed earlier.
01:40:58.000 Then some people might not immediately get this idea that the brain itself could be part of the simulation.
01:41:03.000 So they imagine it would be plugged in with a big cable, and that if you somehow could reach behind you, you would find it. That would be another possible common misconception, I guess.
01:41:21.000 Then I think a common thing is to conflate the simulation hypothesis with the simulation argument.
01:41:27.000 The simulation hypothesis is we are in a simulation.
01:41:30.000 The argument is that one of these three options is true, only one of which is the simulation hypothesis.
01:41:38.000 Some conflation there happens.
01:41:40.000 How do you factor dreams into the simulation hypothesis?
01:41:43.000 Well, I think they are irrelevant to it.
01:41:46.000 That is, whether or not we are in a simulation, people presumably still have dreams, and there are other reasons and explanations for why that happens.
01:41:56.000 So you have dreams even if you're in the simulation?
01:41:58.000 Well, why not?
01:42:01.000 Hmm.
01:42:03.000 Okay, okay.
01:42:04.000 Why not?
01:42:05.000 So I sometimes get these kind of random emails that are like, oh, well, you know, yes, thank you, Bostrom.
01:42:16.000 Your theory is very interesting, and I found proof.
01:42:19.000 And like, oh, when I looked in my bathroom mirror, I saw pixels.
01:42:22.000 Like, random things like that.
01:42:24.000 Crazy people.
01:42:26.000 Varying degrees.
01:42:26.000 I mean, maybe we're all crazy.
01:42:28.000 Yes, for sure.
01:42:29.000 But I think that those things are not evidence.
01:42:32.000 Generally speaking, you would expect...
01:42:35.000 Even if we're not in a simulation, there would still be various people who claim to perceive various things.
01:42:41.000 Sometimes people have hallucinations, sometimes they misremember, sometimes they make stuff up.
01:42:45.000 You just imagine that it would be...
01:42:47.000 So the most likely explanation for those things is not...
01:42:50.000 Even if we are in a simulation, the most likely explanation for those things is not that there was a glitch in the simulation.
01:42:55.000 It's that one of these normal psychological phenomena took place.
01:42:58.000 Right.
01:42:59.000 So, yeah, I would not be inclined to think that this would be an explanation.
01:43:05.000 If somebody has those kind of experiences, it's probably not because we are...
01:43:10.000 Even if the simulation hypothesis is true, it's probably not the explanation.
01:43:14.000 The concept of creativity, how does that play into a simulation?
01:43:20.000 If during the simulation you're coming up with these unique creative thoughts, are these unique creative thoughts your own or are these unique creative thoughts stimulated by the simulation?
01:43:34.000 They would be your own in the sense that it would be your brain that was producing them.
01:43:38.000 Something else would have produced your brain.
01:43:40.000 But obviously there are some incredible influences on your brain if you're inside some sort of an external simulation.
01:43:46.000 That's true in physical reality as well.
01:43:50.000 It doesn't come from nowhere.
01:43:53.000 But it's still your brain.
01:43:54.000 I think it would be potentially as much your own in the simulation as it would be outside the simulation.
01:44:01.000 I mean, unless the simulators had, for whatever reason, set it up because they just wanted to have, oh, this is Rogan coming up with this particular idea, and configured the initial conditions in just the right way to achieve that.
01:44:17.000 Maybe then, when you come up with it, maybe it's less your achievement than the people who set up the initial conditions.
01:44:24.000 But other than that, I think...
01:44:28.000 Because the reason I ask that is all ideas, everything that gets created, all innovation, initially comes from some sort of a point of someone figuring something out or coming up with a creative idea.
01:44:40.000 All of it.
01:44:41.000 Like everything that you see in the external world, like everything from televisions to automobiles, was an idea.
01:44:47.000 And then somebody implemented that idea or groups of people implemented the technology involved in that idea and then eventually it came to fruition.
01:44:54.000 If you're in a simulation, How much of that is being externally introduced into your consciousness by the simulation?
01:45:05.000 And is it pushing the simulation in a certain direction?
01:45:08.000 Yeah, I don't know.
01:45:09.000 I mean, you could imagine both kinds of simulations.
01:45:11.000 Like simulations where you just set up the initial conditions and let it run to see what happens.
01:45:15.000 Right.
01:45:15.000 And others where maybe you want to just simulate this particular historical counterfactual.
01:45:23.000 Right.
01:45:23.000 What would have happened if Napoleon hadn't been defeated?
01:45:28.000 Maybe that's our simulation.
01:45:30.000 You put in some specific thing there.
01:45:32.000 You could imagine either or both of those types of ways of doing it.
01:45:37.000 But your simulation hypothesis, if we're in it, it's running.
01:45:46.000 Now, is it running and we independently interact with the simulation?
01:45:53.000 Or is the simulation introducing ideas into our minds that then come to fruition inside the simulation?
01:46:02.000 Is that how things get done?
01:46:05.000 Like, if we are in a simulation, right?
01:46:07.000 And if during the simulation someone has created a new iPhone, why are they doing that?
01:46:12.000 Are there other people in the simulation?
01:46:15.000 Or is this simulation entirely unique to the individual?
01:46:19.000 Is each individual involved in a different...
01:46:23.000 Co-existing simulation?
01:46:25.000 Right.
01:46:30.000 I think the kind of simulation where it would be the clearest case for why that would be possible would be one where all the people you perceive are simulated, each with their own brain.
01:46:43.000 Because then you could get the realistic behavior out of the brain if you simulated the whole brain at a sufficient level of detail.
01:46:52.000 So everyone you interact with is also a simulation?
01:46:55.000 Well, that type of simulation should certainly be possible.
01:46:58.000 Then it's more of an open question whether it would also be possible to create simulations where there was, say, only one person conscious and the others were just like simulacra.
01:47:12.000 They acted like humans, but there's nothing inside.
01:47:16.000 So these would be, in philosophers' parlance, zombies, that is...
01:47:22.000 It's a technical term, but when philosophers discuss it, it means somebody who acts exactly like a human but with no conscious experience.
01:47:28.000 Now, whether those things are possible or not is an open question.
01:47:33.000 Do you consider that ever when you're communicating with people?
01:47:36.000 Do you ever stop and think?
01:47:37.000 Not really.
01:47:38.000 I mean, it has occurred to me, but not regularly, no.
01:47:42.000 Yeah, but does it ever get to your head where you're like, this might not be real.
01:47:48.000 Like, this person...
01:47:51.000 Might not be a real person.
01:47:53.000 This might be a simulation.
01:47:55.000 Right.
01:47:55.000 I mean, I guess there are two things.
01:47:57.000 One is that you'd probably have some probability distribution over all these different kinds of situations that you could be in.
01:48:05.000 Maybe all of those situations are simulated in different frequencies and stuff.
01:48:13.000 Different numbers of times, that is.
01:48:15.000 So there would be some probability distribution there.
01:48:17.000 That would be the first thought.
01:48:19.000 That in reality you're always kind of uncertain.
01:48:22.000 The second would be that even if you were in that kind of simulation, it might still be that behaviorally what you should do is exactly the same as if you were in the other simulation.
01:48:34.000 So it might not have that much day-to-day implications.
01:48:40.000 Do you think there's psychological benefits for interacting with life as if it's a simulation?
01:48:46.000 No, I don't think that would be an advantage.
01:48:48.000 Maybe a disadvantage in some cases.
01:48:51.000 What, alleviation of existential angst?
01:48:53.000 Yeah, maybe, but who knows?
01:48:56.000 It could also, I guess, if you sort of interpret it in the wrong way, maybe lead you to feel more alienated or something like that.
01:49:07.000 I don't know.
01:49:09.000 But I think, to a first approximation, the same things that work well and make a lot of sense to do in physical reality would also be our best bets in a simulated reality.
01:49:24.000 That's where it gets really weird.
01:49:27.000 Like, if it's a simulation, but you must behave in each and every instance as if it's not.
01:49:36.000 If you know, if you had a test you could take, like a pregnancy test, when you went to the CVS and you pee on a strip and it tells you, guess what, Nick?
01:49:49.000 This shit isn't real.
01:49:51.000 You're in a simulation.
01:49:53.000 100% proven, absolutely positive.
01:49:55.000 You know from now on, from this moment on, that everything you interact with is some sort of a creation.
01:50:03.000 It's not real.
01:50:05.000 But it is real, because you're having the same exact experience as if it was real.
01:50:11.000 Right.
01:50:12.000 How do you proceed?
01:50:14.000 Yeah, I think there might be very subtle reprioritizations that would happen.
01:50:20.000 What would you do, personally?
01:50:22.000 Well...
01:50:25.000 I don't know the full answer to that.
01:50:27.000 I think there are certain possibilities that look kind of far-fetched if we're not in a simulation that become, like, more realistic if we are.
01:50:38.000 So one obvious one is, like, that a simulation could be shut off, like if the plug is pulled on the computer where the simulation is running, right?
01:50:47.000 So we think the physical universe, as we normally understand it, can't just suddenly pop out of existence.
01:50:52.000 There's a conservation of energy and momentum and so forth.
01:50:55.000 But a simulated universe, that seems like something that could happen.
01:50:59.000 It doesn't mean it is likely to happen, and it doesn't say anything about the time frame, but at least it enters as a possibility where it was not there before.
01:51:07.000 Other things as well become maybe more similar to various theological possibilities that exist.
01:51:14.000 Like afterlife and stuff like that.
01:51:16.000 And in fact, maybe through a very different path, it kind of leads to some similar destinations that people have arrived at through thinking about theology and stuff.
01:51:35.000 I mean, it's kind of different.
01:51:37.000 I think there is no logically necessary connection either way.
01:51:41.000 But there are some kind of structural parallels, analogs, between the situation of a simulated creature to their simulators and a created entity to their creator.
01:51:55.000 That are interesting, although kind of different.
01:51:59.000 So there might be comparisons there that you could make that would give you some possible ways of proceeding.
01:52:08.000 It seems like paralysis by analysis.
01:52:10.000 You just sit there and think about it, at least I would.
01:52:14.000 I would almost wind up not being able to do anything or not being able to act or move or think.
01:52:19.000 That seems kind of likely to be suboptimal, right?
01:52:23.000 Suboptimal for sure.
01:52:26.000 But the concept is so prevalent and it's so common and it's so often discussed.
01:52:32.000 It's interesting how much it has spread just over the last 10-15 years. It's interesting how ideas can migrate from some kind of extreme radical fringe, and some decade or two later,
01:52:58.000 they're just kind of almost common sense.
01:53:00.000 Why do you think that is?
01:53:02.000 Well, we have a great ability to get used to things.
01:53:05.000 I mean, this comes back to our discussion about the pace of technological progress.
01:53:09.000 It seems like the normal way for things to be.
01:53:12.000 We are very adaptable creatures, right?
01:53:15.000 You can adjust to almost everything, and we have no kind of external reference point, really, and mostly these judgments...
01:53:25.000 Are based on what we think other people think.
01:53:28.000 So if it looks like some high-status individual, Elon Musk or whatever, seems to take the simulation argument seriously, then people think, oh, it's a sensible idea.
01:53:38.000 And it only takes like one or two or three of those people that are highly regarded and suddenly it becomes normalized.
01:53:47.000 Is there anyone highly regarded that openly dismisses this possibility?
01:53:52.000 There must be, but I'm not sure they would have bothered to go on the record specifically.
01:53:58.000 I guess the people who are dismissive of it wouldn't maybe even bother to address it or something.
01:54:07.000 I'm trying to think, yeah, and I'm drawing a blank on whether there's a particular person I could name.
01:54:11.000 I would love to hear the argument against it.
01:54:13.000 I would love to hear someone like you or Elon interact with them and try to volley back and forth these ideas.
01:54:24.000 That could be interesting.
01:54:26.000 Yeah.
01:54:26.000 So you've never had some sort of a debate with someone who openly dismisses it?
01:54:31.000 Well, like a big public debate?
01:54:33.000 I don't know.
01:54:33.000 Or even private.
01:54:34.000 Yeah.
01:54:34.000 I don't know.
01:54:36.000 It's been kind of a long time since I first put this article out.
01:54:43.000 I guess I had more conversations about the argument itself.
01:54:47.000 What was the reaction when you first put it out?
01:54:49.000 There was a lot of attention, right?
01:54:51.000 I mean, pretty much right off the bat, including public...
01:54:54.000 I mean, it was published in some academic journal, Philosophical Quarterly.
01:54:58.000 But yeah, it quickly...
01:55:03.000 Drew a lot of it.
01:55:04.000 And then it's kind of come in waves, like every year or so.
01:55:08.000 There would be some new group, either a new generation or some new community, that hears about it for the first time, and it kind of gets a new wave of attention.
01:55:19.000 But in parallel to these waves, there's also this chronic...
01:55:31.000 Yeah.
01:55:40.000 Maybe if there were some big flaw in the idea, it would have been discovered by now.
01:55:43.000 So if it's been around for a while, it makes it a little bit more credible.
01:55:46.000 It might also be slightly assisted by just technological progress.
01:55:51.000 If you see virtual reality getting better and stuff, it becomes maybe easier to imagine how it could become so good one day that you could create perfectly flawless simulations.
01:55:59.000 I was going to introduce that as option four.
01:56:03.000 Is option four the possibility that one day we could conceivably create some sort of an amazing simulation, but it hasn't been done yet?
01:56:13.000 And this is why it's become this topic of conversation: there's some need for concern, because as you extrapolate technology and you think about where it's going now and where it's headed, there could conceivably be one day where this exists.
01:56:26.000 Should we consider this and deal with it now?
01:56:29.000 Well, so I'd say that would be highly unlikely, in that if the first two are wrong, right, then there are many, many more simulated ones than non-simulated ones, or will be over the course of all of history.
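A minimal sketch of the counting behind that claim, assuming for illustration that one original history is eventually followed by a million simulated re-runs; the million is an assumed number, not a figure from the conversation:

```python
# Toy count of observer-moments, assuming (purely for illustration)
# that one original history is eventually followed by N simulated
# re-runs of it. N is an assumed number, not a figure from the episode.
N_SIMULATED = 1_000_000
N_ORIGINAL = 1

total = N_ORIGINAL + N_SIMULATED
print(f"Fraction simulated: {N_SIMULATED / total:.6f}")  # ~0.999999
print(f"Fraction original:  {N_ORIGINAL / total:.7f}")   # ~0.0000010
```

On that counting, a randomly sampled experience is the original one only about one time in a million.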
01:56:42.000 Over the course of all of history, but what if it hasn't yet happened?
01:56:45.000 Right, but so then the question is, given that – Sure.
01:57:05.000 Right.
01:57:10.000 Right.
01:57:18.000 Or should you think you're one amongst the larger set, the simulated ones?
01:57:22.000 Or should you think that it just has not happened yet?
01:57:26.000 But that would be equivalent to saying that you would be one of the non-simulated ones.
01:57:30.000 You're talking about in the universe.
01:57:33.000 Yeah, but you could make it even just, you could look at the narrow case of just the Earth.
01:57:39.000 Let's just look in the narrow case of just the Earth.
01:57:41.000 In the narrow case of just the Earth, if the historical record is accurate, if it's not a simulation, then it seems very reasonable that we're just dealing with incremental increases in technology, pretty stunning and pretty profound currently, but that we haven't...
01:58:00.000 Well, that's how it looks, right?
01:58:01.000 Sure.
01:58:02.000 Yeah, but that's also how it would look if you were in a simulation.
01:58:05.000 Yes, but it's also how it would look if you're not in a simulation yet.
01:58:09.000 That's also a possibility too, no?
01:58:11.000 Right, yeah.
01:58:12.000 But for most people for whom it looks like that, it would be the case that they would be simulated.
01:58:20.000 Why?
01:58:22.000 Well, by assumption, if there are all these simulations created...
01:58:27.000 Well, not yet.
01:58:28.000 Well, right.
01:58:28.000 But you don't know what time it is in external reality.
01:58:33.000 Right, but why would we assume something so unbelievably fantastic when just life itself is preposterous?
01:58:39.000 Because life itself, just being a human being on a planet...
01:58:42.000 You know, this planet spinning a thousand miles an hour, hurtling through infinity.
01:58:46.000 That, in itself, would sound fairly preposterous if it didn't exist.
01:58:50.000 But it does exist.
01:58:52.000 And we know that we, at least, we're all agreeing upon a certain historical record.
01:58:58.000 We're agreeing upon Oppenheimer, the Manhattan Project, World War I, World War II. We're agreeing on Korea and Vietnam.
01:59:04.000 We're agreeing on Reagan and Kennedy.
01:59:06.000 We're agreeing on all these things, historically.
01:59:08.000 Right.
01:59:09.000 If we are all agreeing that there's a sort of historical process and we're all agreeing, I remember when the first iPhone was invented.
01:59:17.000 I remember when the first computer.
01:59:19.000 I remember when this.
01:59:20.000 I remember the internet.
01:59:22.000 Why would we assume that there's a simulation?
01:59:28.000 We could assume that there's a possibility of a simulation, but why would we assume the simulation has occurred?
01:59:33.000 Why wouldn't we assume the simulation hasn't occurred yet?
01:59:37.000 Right.
01:59:37.000 I mean, so it is a possibility that we would be in the first time segment of all of these.
01:59:44.000 Wouldn't that be more likely?
01:59:46.000 Well, I'd say no.
01:59:48.000 I mean, so it comes down then to this field, which is tricky and problematic called anthropics.
01:59:53.000 So this is about how to assign probabilities in situations where you have uncertainty about who you are, what time it is, where you are.
02:00:03.000 Right.
02:00:05.000 So if you imagine, for example, all of these people who would exist in this scenario having to place bets on whether they're simulated or not.
02:00:17.000 And you think about two possible different ways of reasoning about this.
02:00:21.000 So one is you assume you're a randomly selected individual from all these individuals and you bet accordingly.
02:00:29.000 Randomly selected individuals.
02:00:30.000 Yeah, so then you would bet you're one of the simulated ones, because if most are simulated, a randomly selected one probably is too, just like most lottery tickets are losers...
02:00:37.000 But why are we assuming that most are simulated?
02:00:39.000 This is where I'm getting confused.
02:00:40.000 Well, most will have been simulated by the end of time.
02:00:43.000 By the end of time.
02:00:43.000 This is like a timeless claim.
02:00:45.000 But why already when it hasn't existed yet?
02:00:50.000 Let's say, for the sake of argument, because I don't really have an opinion on this, pro or con, it's up in the air.
02:00:55.000 But if I was going to argue about pragmatic reality, the practicality of biological existence as a person that has a finite lifespan, you're born, you die, you're here right now, and we're a part of this just long line of humanity that's created all these incredible things that's led up to civilization.
02:01:15.000 That's led up to this moment right now where you and I are talking into these microphones.
02:01:18.000 It's being broadcast everywhere.
02:01:21.000 Why isn't it likely that a simulation hasn't occurred yet?
02:01:25.000 That we are in the process of innovating and one day could potentially experience a simulation.
02:01:31.000 But why are you not factoring in the possibility or the probability that that hasn't taken place yet?
02:01:36.000 Yeah, I mean, so it's in there.
02:01:38.000 But if you imagine that people...
02:01:42.000 All followed this general principle of assuming that they would be the ones in the original history, before the simulations had happened.
02:01:52.000 Right.
02:01:53.000 Then almost all of them would turn out to be wrong and they would lose their bets.
02:01:57.000 Once a simulation has actually...
02:01:59.000 Right.
02:01:59.000 I mean, if you kind of integrate over the universe...
02:02:03.000 But there's no evidence that a simulation has taken place.
02:02:06.000 But there is evidence that you're alive.
02:02:07.000 You have a mother, you have a father.
02:02:09.000 Those things could be true in the simulation as well.
02:02:12.000 Could be, but isn't that a pipe dream?
02:02:15.000 Well, it depends on what simulation, right?
02:02:17.000 I mean, a lot of simulations might run for a long time and have… Might.
02:02:21.000 Yeah, yeah.
02:02:21.000 But we know that if someone shoots you, you'll die.
02:02:25.000 We know if you eat food, you get full.
02:02:27.000 We know these things.
02:02:28.000 These things could be objective facts.
02:02:32.000 These could be… Yeah, I think they are true, yeah.
02:02:34.000 Yes, right?
02:02:35.000 Now, why would we assume… Why would a simulation be the most likely scenario when we've experienced, at least we believe we've experienced, all this innovation in our lifetime?
02:02:48.000 We see it moving towards a certain direction.
02:02:50.000 Why wouldn't we assume that that hasn't taken place yet?
02:02:55.000 Yeah, I think, to try to argue for the premise that, conditional on there being first an initial segment of non-simulated Joe Rogan experiences and then a lot of other segments of simulated ones, conditional on that being the way the world in totality looks,
02:03:14.000 you should think you're one of the simulated ones.
02:03:16.000 Why?
02:03:17.000 Well, to argue for that, I think then you need to roll in this piece of probability theory called anthropics, which I alluded to.
02:03:25.000 And just to pull one little element out of there to kind of create some initial plausibility for this.
02:03:31.000 If you think in terms of rational betting strategies for this population of Joe Rogan experiences, the ones that...
02:03:40.000 Would lead to the overall maximal amount of winning would be if you all thought you're probably one of the simulated segments.
02:03:48.000 If you had the general reasoning rule that in this kind of situation you should think that you're the initial segment of non-simulated Rogan, then the great preponderance of these simulated experiences would lose their bets.
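A hedged sketch of that betting argument: one original segment plus N simulated ones, all with identical evidence, all forced to follow the same rule. N is an illustrative assumption, not a figure from the episode:

```python
# Score the two betting rules being contrasted, over the whole
# population of segments: one original plus N simulated, all with
# the same evidence. N is an illustrative assumption.
N_SIMULATED = 1_000_000

def winners(rule_bets_simulated: bool) -> int:
    # The original segment wins only under the "original" rule;
    # each simulated segment wins only under the "simulated" rule.
    if rule_bets_simulated:
        return N_SIMULATED
    return 1

total = N_SIMULATED + 1
print(f"Rule 'I am simulated': {winners(True)} of {total} segments win")
print(f"Rule 'I am original':  {winners(False)} of {total} segments win")
```

Under the "original" rule, the great preponderance of segments lose their bets, which is the point being made here.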
02:04:04.000 But there's no evidence of a simulation.
02:04:08.000 Well, I'd say that there is indirect evidence insofar as there is evidence against these two alternatives.
02:04:16.000 Well, the two alternatives being that intelligent life goes extinct before they create any sort of simulation or that they agree to not create a simulation.
02:04:28.000 But what about if they're going to create a simulation?
02:04:30.000 There has to be a time before the simulation is created.
02:04:34.000 Why wouldn't you assume that that time is now currently happening when you've got a historical record of all the innovation that's leading up to today?
02:04:43.000 I think the historical record would be there in the simulation.
02:04:47.000 But why would it have to be there in a simulation and not be there in reality?
02:04:52.000 Well, I mean, it could be there in the simulation if it's a kind of simulation that tracks the original, yeah.
02:04:58.000 If it's a fantasy simulation, then, you know, maybe it wouldn't be there.
02:05:02.000 Right, but it could just be reality.
02:05:04.000 It doesn't have to be a simulation.
02:05:05.000 I mean, in some sense, it would be both, right?
02:05:08.000 I mean, there would be one Joe Rogan experience in the real original history, and then, like, maybe a million, let's just say.
02:05:15.000 In simulated realities later.
02:05:17.000 But if you think about your actions, which kind of can't distinguish between these different possible locations in space-time where you could be, most of the impact of your decisions will come from impacting all of these million Joe Rogan instances.
02:05:33.000 Yeah, but this is once a simulation has been proven to exist, which it hasn't been.
02:05:38.000 We have, at least in terms of what we all agree, we're proven to have biological lives.
02:05:46.000 We breed, we sleep, we eat, we travel on planes.
02:05:51.000 All these things are very tangible and real.
02:05:52.000 I'd say those are true, probably even if we're in a simulation.
02:05:57.000 But why would you assume we're in a simulation?
02:05:59.000 This is where I'm stuck.
02:06:01.000 Because why wouldn't you assume that a simulation is one day possible?
02:06:04.000 There's no proof or no evidence that makes any sense to me That there is currently any simulation.
02:06:12.000 Right, I mean, so it's a matter of probabilities and number schemes, right?
02:06:17.000 Is it?
02:06:19.000 That's what I would assert, yes.
02:06:22.000 But what would point to the possibility that it's more probable that we're in a simulation?
02:06:27.000 This is what escapes me.
02:06:28.000 Okay, so I could mention some possibilities that would...
02:06:33.000 Okay.
02:06:33.000 So the most obvious, like a big window pops up in front of you saying, you're in a simulation.
02:06:38.000 Click here for more information.
02:06:40.000 That would be pretty conclusive.
02:06:41.000 Right, yes.
02:06:42.000 Right.
02:06:42.000 So short of that...
02:06:44.000 You would have weaker probabilistic evidence insofar as you had evidence against the two alternatives.
02:06:52.000 So, for example, if you got some evidence that suggested it was less likely that all civilizations at our stage go extinct before maturity.
02:07:01.000 Let's say we get our act together, we eliminate nuclear weapons, we become prudent and...
02:07:08.000 We check all the asteroids, nothing is on collision course with Earth.
02:07:11.000 That would kind of tend to lower the probability of the first, right?
02:07:14.000 Okay.
02:07:16.000 So that would tend to shift probability over on the remaining alternatives.
02:07:20.000 Let's suppose that we moved closer ourselves.
02:07:23.000 To becoming post-human.
02:07:25.000 We develop more advanced computers and VR, and we're getting close to this point ourselves, and we still remain really interested in running ancestor simulations.
02:07:35.000 We think this is what we really want to spend our resources on as soon as we can make it work.
02:07:40.000 That would move probability over from the second alternative.
02:07:45.000 It's less likely that there is this strong convergence among all post-human technologically mature civilizations if we ourselves are almost post-human and we still have this interest in creating ancestor simulations.
02:07:57.000 So that would shove probability over to the remaining alternative.
02:08:01.000 Take the extreme case of this.
02:08:02.000 Imagine if we...
02:08:04.000 A thousand years from now have built our own planetary-sized computer that can run these simulations, and we are just about to switch it on, and it will create the simulation of precisely people like ourselves.
02:08:18.000 And as we move towards the big button to sort of initiate this, then the probability of the first two hypotheses basically goes to zero, and we would have to conclude with near certainty that we are ourselves in a simulation as we push this button to create a million simulations.
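One way to picture the probability shifting described here is as a Bayesian update: start with some spread of credence over the three alternatives, multiply down the ones the evidence counts against, and renormalize. The priors and the likelihood factors below are invented purely for illustration:

```python
# Hypothetical Bayesian update over the three alternatives.
# Priors and likelihood factors are made up for illustration only.
credences = {
    "go extinct before maturity":   1 / 3,
    "converge on never simulating": 1 / 3,
    "we are in a simulation":       1 / 3,
}

credences["go extinct before maturity"] *= 0.1    # e.g. asteroids checked, nuclear risk reduced
credences["converge on never simulating"] *= 0.1  # e.g. we near post-humanity, still want ancestor simulations

total = sum(credences.values())
for hypothesis, credence in credences.items():
    print(f"{hypothesis}: {credence / total:.3f}")
# the probability mass flows to the remaining alternative
```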
02:08:37.000 Once we achieve that state, but we have not achieved that state, why would we not assume that we are in the actual state that we currently experience?
02:08:44.000 Well, I said yes.
02:08:45.000 We shouldn't assume.
02:08:47.000 We should assume that we are ignorant as to which of these different time slices we are, which of these different Joe Rogan experiences is the present one.
02:09:00.000 We just can't tell from the inside which one it is.
02:09:06.000 If you could see some objective clock and say that, well, as yet the clock is so early that no simulations have happened, then obviously you could conclude that you're in the original history.
02:09:19.000 But if we can't see that clock outside the window, if there is no window in the simulation to look out, then it would look the same.
02:09:26.000 And then I'd say we have no way of telling which of these different instances we are.
02:09:31.000 One of them might be that there is no simulation and that we're moving towards that simulation.
02:09:37.000 That one day it could be technologically possible.
02:09:40.000 It could be a one in a million.
02:09:40.000 Really?
02:09:41.000 So one in a million is that life is what you experience right now.
02:09:44.000 One in a million.
02:09:45.000 No, no, no.
02:09:45.000 Conditional on the other...
02:09:48.000 But not even conditional on those other alternatives being wrong.
02:09:51.000 Let's say that human beings haven't blown themselves up yet.
02:09:55.000 Let's say that human beings haven't come up with – there is no need to make the decision to not activate the simulation because the simulation hasn't been invented yet.
02:10:05.000 Isn't that also a possibility?
02:10:07.000 Isn't it also a possibility that the actual timeline of technological innovation that we all agree on is real, and that we're experiencing this as real, live human beings, not in a simulation, and that one day the simulation could potentially take place but has not yet?
02:10:23.000 Isn't that also a possibility?
02:10:25.000 Yeah, I mean sure.
02:10:26.000 It's just a question of how probable that is given the… Why isn't it super probable?
02:10:30.000 Because we're experiencing it.
02:10:31.000 Well, I mean, it would be a very unusual situation for somebody with your experiences to be in.
02:10:37.000 What about your experiences?
02:10:38.000 For my experiences, the same there, yeah.
02:10:41.000 It would be extremely unusual.
02:10:42.000 But there's 7 billion unusual experiences taking place simultaneously.
02:10:46.000 Why would you assume that's...
02:10:47.000 Well, if there were, like, say, a million simulations, then, you know, that would be a million times more.
02:10:55.000 But why would there be any simulations?
02:10:57.000 Why would there not just be 7 billion people experiencing life?
02:11:01.000 Right, yeah.
02:11:01.000 I mean, there would have to be something that prevents these simulations from being created.
02:11:07.000 This is where you lose me.
02:11:08.000 Yeah.
02:11:08.000 So I think maybe the difference is I tend to think in terms of the world as a four-dimensional structure with time being one dimension, right?
02:11:18.000 Okay.
02:11:19.000 So you think in the totality of existence...
02:11:24.000 That will have happened by the end of time.
02:11:27.000 You look at all the different experiences that match your current experience.
02:11:36.000 Given these various assumptions, the vast majority of those would be simulated.
02:11:43.000 Why?
02:11:47.000 The various assumptions being that option one and two are false, basically.
02:11:51.000 What about option, my option?
02:11:52.000 Yeah, so in your option, the vast majority of all these experiences that will ever have existed will also be simulated, if I understand your option correctly.
02:12:01.000 No, no, no.
02:12:01.000 My option is that nothing's happened yet.
02:12:03.000 Yeah, but there will have been.
02:12:05.000 Maybe.
02:12:06.000 But not yet.
02:12:07.000 Right.
02:12:08.000 But as I understand your option is that if we look at the universe at the end of time and we look back, there will be a lot of simulated versions of you and then one original one.
02:12:17.000 But I'm not even considering that.
02:12:19.000 And you think you might be the original one.
02:12:20.000 No, I'm not even considering that.
02:12:21.000 What I'm saying is we may just be here.
02:12:27.000 That there is no simulation.
02:12:29.000 And that maybe it will take place someday, but maybe it will not.
02:12:33.000 Right.
02:12:34.000 But you have to pick which of those...
02:12:38.000 Which of those scenarios you're considering.
02:12:40.000 That is the scenario I'm considering.
02:12:42.000 The scenario I'm considering is we are just here.
02:12:45.000 We are actually live.
02:12:46.000 But what happens after?
02:12:47.000 So I want the scenario to say what's happened in the past, what happens now, and what will happen in the future.
02:12:53.000 Well, we don't know what's going to happen in the future.
02:12:55.000 That's right.
02:12:56.000 So we can consider both options, right?
02:12:58.000 Yes.
02:12:58.000 One option where there are no simulations created later.
02:13:02.000 Right.
02:13:03.000 Then I would say that means one of the first two alternatives.
02:13:08.000 But another option is there could be a simulation created later, but it has not taken place yet.
02:13:13.000 And that there will be simulations later.
02:13:16.000 That it's a possibility, but it has not happened yet.
02:13:19.000 Right, but that there will be later.
02:13:20.000 That's one possibility.
02:13:21.000 And so then I say, if that's the world that we are looking at, then most experiences of your kind exist inside the simulation.
02:13:35.000 I still don't understand that.
02:13:37.000 Why can it not have happened yet?
02:13:42.000 Well, it kind of depends on which of these experiences is your present moment in that scenario, right?
02:13:49.000 So there's going to be a million of them plus an initial one.
02:13:54.000 You can't tell from the inside.
02:13:56.000 Maybe there will be a million of them, but there's right now no evidence that there's going to be.
02:14:03.000 No evidence that there is.
02:14:05.000 No evidence that it's ever even going to be possible technologically.
02:14:09.000 We think there could be, but it hasn't happened yet.
02:14:12.000 So why would you assume that we are in a simulation currently when there's no evidence whatsoever that it's even possible to create a simulation?
02:14:22.000 Maybe there is some alternative way of trying to explain how I'm thinking.
02:14:29.000 I understand what you're saying.
02:14:30.000 I'm thinking like suppose...
02:14:32.000 I understand you're saying that there's...
02:14:33.000 I'm sorry to interrupt you.
02:14:34.000 Sorry.
02:14:35.000 I'm just thinking maybe we could think of some simpler thought experiment which has nothing to do with simulations and stuff, but...
02:14:44.000 Imagine if...
02:14:45.000 So I'm making this up as I go along, so we'll see if it actually works.
02:14:52.000 You're taken into a room, and then you're awake there for one hour, and then a coin is tossed.
02:15:04.000 And if it lands heads, then the experiment ends and you exit the room and everything is normal again.
02:15:10.000 But if it lands tails, then you're given an amnesia drug, and then you're woken up in the room again.
02:15:16.000 You think you're there for the first time because you don't remember having been there before.
02:15:20.000 And then this is repeated 10 times.
02:15:23.000 So we have a world where either there is one hour's experience of you in the room, or else it's a world with 10 Joe Rogan experiences in the room, with episodes of amnesia in between.
02:15:39.000 But when you're in the room now, you find yourself in this room, you're wondering, hmm...
02:15:45.000 Is this the first time I'm in this room?
02:15:47.000 It could be.
02:15:49.000 But it could also be that I'm later on and I was just given an amnesia drug.
02:15:55.000 So the question now is, when you wake up in this room, you have to assign probabilities to these different places you could be in time.
02:16:04.000 And maybe you have to bet or make some decision that depends on where you are.
02:16:11.000 So...
02:16:15.000 I guess I could ask you, like, if you wake up in this room, what do you think the probability should be that you're, like, at time one versus at some later time?
02:16:28.000 Well, what is the probability that I'm actually here versus what is the probability of this highly unlikely scenario that I keep getting drugged over and over again every hour?
02:16:37.000 Well, we assume that, like, you're certain that the setup is such that there was this mad scientist who had the means to do this and he was going to flip this coin.
02:16:46.000 So we're assuming that you're sure about that either way.
02:16:49.000 The only thing you're unsure about is how the coin landed.
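For what it's worth, a quick Monte Carlo sketch of the room, under one contested way of doing the anthropics: treat your current awakening as randomly sampled from all awakenings the experiment produces. The run count is arbitrary:

```python
import random

# Monte Carlo of the amnesia room: heads -> one awakening, tails -> ten
# awakenings with amnesia in between. We count what fraction of all
# awakenings are a first awakening, and what fraction follow tails.
RUNS = 100_000
first = 0
after_tails = 0
total = 0

for _ in range(RUNS):
    tails = random.random() < 0.5   # fair coin
    awakenings = 10 if tails else 1
    total += awakenings
    first += 1                      # each run has exactly one first awakening
    if tails:
        after_tails += awakenings

print(f"P(this awakening is the first): {first / total:.3f}")       # ~1/5.5 = 0.182
print(f"P(coin landed tails):           {after_tails / total:.3f}") # ~0.909
```

On that sampling rule, waking up in the room should make you bet heavily that you are not at time one, even though the coin is fair; whether that is the right rule is exactly the unsettled part of anthropics acknowledged later in the conversation.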
02:16:52.000 Okay.
02:16:53.000 Well, if that was a scenario where I knew that there was a possibility of a mad scientist and I could wake up over and over again, that seems like a recipe for insanity.
02:17:03.000 Yeah.
02:17:04.000 Well, it's a philosophical thought experiment, so we can abstract away from the possibility of it.
02:17:08.000 My point initially, and I'll get back to it, is there's no evidence at all that we're in a simulation.
02:17:13.000 So why wouldn't we assume that the most likely scenario is taking place, which is we are just existing, and life is as it seems, but strange.
02:17:23.000 Okay, so if you don't want to do this thought experiment...
02:17:27.000 No, I do want to do a thought experiment, but it seems incredibly limited.
02:17:30.000 Right.
02:17:31.000 Well, I'm trying to distill the probability theory part from the wider simulation argument.
02:17:39.000 But I guess I could also ask you: if we were to move closer to this point where we ourselves can create simulations, if we survive, we become multi-planetary, we build planetary-sized computers...
02:17:51.000 Yeah.
02:17:53.000 How would your probability in the simulation hypothesis change as we kind of develop?
02:17:59.000 Well, it would change based on the evidence of some profound technological innovation that actually would allow...
02:18:28.000 Yeah, I think... It's not a likely outcome, in that it would require you to postulate that you are this very unusual and special observer amongst all the observers that will exist.
02:18:38.000 But everyone is unusual in their own way.
02:18:42.000 That's true.
02:18:43.000 Because there's no clones.
02:18:45.000 There's no one person that's a version that's living the same exact life in a million different scenarios.
02:18:52.000 But in this respect, if there are all these simulations, then most of these people are not special in this way.
02:19:00.000 Most of them are simulated.
02:19:03.000 And only a tiny minority.
02:19:04.000 If there's a simulation.
02:19:05.000 There are many simulations.
02:19:06.000 Or if there's no simulations.
02:19:08.000 If there are no simulations and there will never be any simulations, then...
02:19:13.000 Well, who's saying there never will be?
02:19:14.000 Well, so this...
02:19:16.000 Since we don't know what time it is now in external reality, and we therefore can't tell from looking at our evidence where we are in a world where either there is just an original history and then it ends,
02:19:37.000 or there is a world with an original history and then a lot of simulations.
02:19:41.000 We need to think about how to assign probabilities given each of these two scenarios.
02:19:46.000 And so then we have a situation that is somewhat analogous to this one with the amnesia room, where you have some number of episodes.
02:19:54.000 And so the question is, in those types of situations, how do you allocate probability over the different hypotheses about how the world is structured?
02:20:05.000 And this kind of betting argument is one type of argument that you can use to try to get some grip on that.
02:20:15.000 And another is by looking at various applications in cosmology and stuff where you have multiverse theories.
02:20:25.000 Which say the universe is very big, maybe there are many other universes, maybe there are a lot of observers, maybe all possible observers exist out there in different configurations.
02:20:34.000 How do you drive probabilistic predictions from that?
02:20:36.000 It seems like whatever you observe would be observed by somebody, so how could you test that kind of theory?
02:20:43.000 And this same kind of anthropic reasoning that I want to use in the context of the simulation argument also plays a role, I think, in deriving observational predictions from these kinds of cosmological theories,
02:21:00.000 where you need to assume something like you're most likely a typical observer from amongst the observers that will ever have existed, or so I would suggest.
02:21:12.000 Now, I should...
02:21:15.000 Admit as an asterisk that this field of anthropic reasoning is tricky and not fully settled yet.
02:21:22.000 And there are things there that we don't yet fully understand.
02:21:27.000 But still, the particular application of anthropic reasoning that is relevant for the simulation argument, I think, is one of the relatively less problematic ones.
02:21:36.000 So that, conditional on there being, by the end of time, a large number of simulated Joe Rogans and only one original one, I think it would seem that most of your probability should be on being one of the simulated ones.
02:21:53.000 But I'm not sure I have any other ways of making it more vivid or plausible.
02:21:58.000 No, I completely understand what you're saying.
02:21:59.000 I completely understand what you're saying.
02:22:01.000 But I don't know why you're not willing to take into account the possibility that it hasn't occurred yet.
02:22:07.000 The way I see it is that I have taken that into account and it receives the same probability that I'm that initial segment as I would give to any of the other Nick Bostrom segments that all have the same evidence.
02:22:20.000 See, that's where we differ because I would give much more probability to the fact that we are existing right now in the current state as we experience it in real life, carbon life, no simulation, but that potentially one day there could be a simulation which leads us to look at the possibilities and look at the probabilities that it's already occurred.
02:22:42.000 All right, so what about this?
02:22:45.000 Suppose it is the case that...
02:22:48.000 All right, so what we think happened is there was a big bang, planets formed, and then some billions of years later, we evolved, and here we are now, right?
02:22:55.000 Suppose some physicists told you that, well, the universe is very big, and early on in the universe, on very, very rare occasions, there was some big gas cloud.
02:23:04.000 In an infinite universe, this will happen somewhere, right?
02:23:07.000 Where, just by chance...
02:23:09.000 There was a kind of Joe Rogan-like brain coming together for a minute and then dissolving in the gas.
02:23:16.000 And yeah, if you have an infinite universe, it's going to happen somewhere.
02:23:19.000 But there have got to be many, many fewer Joe Rogan brains in such situations than will exist later on planets, because evolution helps funnel probability into these kinds of organized structures, right?
02:23:34.000 So, if some physicists told you that, well, this is the structure of our part space-time.
02:23:42.000 Like, there are a few very, very rare spontaneously materialized brains from gas clouds early in the universe, and then there are the normal Rogans much later.
02:23:50.000 And there are, of course, many, many more normal ones.
02:23:53.000 The normal ones happen in one out of every, you know, 10 to the power of 50 planets, whereas the weird ones happen in one out of 10 to the power of 100. Normal versus weird, how so?
02:24:03.000 How are you defining it?
02:24:03.000 Well, the normal ones are ones that have evolved on planets and had the mother and...
02:24:08.000 Different planets.
02:24:09.000 Is that what you're talking about?
02:24:09.000 Yeah, different planets.
02:24:10.000 Okay, but we only have one planet, right?
02:24:12.000 Right, but this again is like a...
02:24:14.000 Well, I mean, actually, there are a lot of planets in the universe, and if it's infinite, there's got to be a lot of copies of you, right?
02:24:19.000 Right, but one planet that you're aware of that has life.
02:24:21.000 This is pure speculation, right?
02:24:23.000 But this is a thought experiment, which in fact actually probably matches reality in this respect.
02:24:30.000 Most likely there's some other planets out there.
02:24:33.000 I think the fact that it matches reality is, I think, irrelevant to the point I want to make.
02:24:38.000 So if this turned out to be the way the world works, a few weird ones happening from gas clouds and then the vast majority are just normal people living on a planet.
02:24:48.000 Would you similarly say, given that model, that you should think, oh, I might just as well be one of these gas cloud ones?
02:24:57.000 Because after all, the other ones might not have happened yet.
02:25:02.000 Or have I lost you?
02:25:03.000 You lost me.
02:25:04.000 Sorry.
02:25:05.000 Yeah.
02:25:09.000 Anyway, I think that this would be a structurally similar situation where there would be a few exceptional early living versions that would be very small in numbers compared to the later ones.
02:25:22.000 And if they allow themselves the same kind of reasoning where they would say, well, the other ones may or may not come to exist later on planets.
02:25:30.000 I have no reason to believe I'm one of the planet living ones.
02:25:33.000 Then it seems that in this model of the universe, you should think you're one of these early gas cloud ones.
02:25:40.000 And as I said, I mean, this looks like it probably actually is the world we're living in, in that it looks like it's infinitely big, and there would have been a few Joe Rogans spontaneously generated very early from random processes.
02:25:58.000 They are going to be very few in number compared to ones that have, you know, arisen on planets.
02:26:05.000 So by taking the path you want to take in relation to the simulation argument, I wonder if you would not then be committed to thinking that you are, in effect, a Boltzmann brain in a gas cloud super early in the universe.
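The arithmetic behind that comparison, using the illustrative rates from a moment ago (one evolved brain per 10^50 planets, one gas-cloud brain per 10^100):

```python
from fractions import Fraction

# Illustrative rates from the thought experiment above.
normal_rate = Fraction(1, 10**50)   # evolved brains per planet
freak_rate = Fraction(1, 10**100)   # spontaneous gas-cloud brains per planet

# Conditional on being one or the other, credence of being the freak kind:
p_freak = freak_rate / (freak_rate + normal_rate)
print(f"Normal brains outnumber gas-cloud ones by {float(normal_rate / freak_rate):.0e}")  # 1e+50
print(f"P(I am a gas-cloud brain): {float(p_freak):.0e}")                                   # ~1e-50
```

So on the same sampling reasoning, the credence of being the early gas-cloud version comes out to roughly one in 10^50.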
02:26:20.000 I still don't understand what you're saying.
02:26:21.000 What I'm saying is what scientists agree on.
02:26:26.000 If you believe in science and if you believe in the discoveries that so far people have all currently agreed to, we've agreed that clouds are formed and that planets are created and that all the matter comes from inside of the explosions of a star, and that it takes multiple times for this to coalesce before we can develop carbon-based life forms.
02:26:49.000 All that stuff science currently agrees on, right?
02:26:52.000 And then we believe in single-celled organisms, become multi-celled organisms through random mutation and natural selection.
02:26:58.000 We get evolution, and then we agree that we have...
02:27:02.000 We've come to a point now where technology has hit this gigantic spike that you described earlier.
02:27:08.000 So human beings have created all this new innovation.
02:27:10.000 Why wouldn't we assume that all this is actually taking place right now with no simulation?
02:27:17.000 Yeah, I mean, the simulation argument is the answer to that, but with a qualification that A, the simulation argument doesn't even purport to prove the simulation hypothesis, because there are these two alternatives.
02:27:30.000 B, that even if the simulation hypothesis is true, in many versions of it, it would actually be the case that in the simulation, all of these things have taken place.
02:27:44.000 And the simulation might go back a long time, and it might be a reality tracking simulation.
02:27:51.000 Maybe these same things also happened before or outside the simulation.
02:27:55.000 I understand that.
02:27:56.000 But, or, all these things have actually happened, and there is no simulation yet.
02:28:02.000 That's possible, too.
02:28:04.000 Doesn't that seem really probable?
02:28:07.000 Well, to me it seems probable only if at least one of the other alternatives is true.
02:28:13.000 Or, I admit that there is also this general possibility, which is always there, that I'm confused about some big thing, like maybe the simulation argument is wrong in some way.
02:28:26.000 I'm just looking at the track record of science and philosophy, we find we're sometimes wrong.
02:28:31.000 So I attach some probability to that.
02:28:34.000 But if we're working within the parameters of what currently seems to me to be the case, that we would be the first civilization in a universe where there will later be many, many simulations seems unlikely for those exact reasons.
02:28:54.000 And that if we are the first, it's probably because one of the alternatives is true.
02:29:00.000 It's a mind blower, Nick.
02:29:03.000 The more you sit and think about it, the more you ponder these concepts, and I'm not on one side or the other.
02:29:11.000 It's scary, but it's also amazing.
02:29:16.000 And what else is there that we haven't figured out yet?
02:29:19.000 If we come back in 50 years... even just with human beings thinking about stuff.
02:29:27.000 And I think I have this concept of a crucial consideration.
02:29:31.000 I alluded to it a little bit earlier.
02:29:33.000 But the idea of some argument or data or insight that, if only we got it, would radically change our mind about our overall scheme of priorities.
02:29:48.000 Not just change the precise way in which we go about something, but kind of totally reorient ourselves.
02:29:54.000 An example would be if you are an atheist and you have some big conversion experience and suddenly your life feels very different.
02:30:02.000 What were you doing before?
02:30:03.000 You were basically wasting your time and now you found what it's all about.
02:30:07.000 But there could be sort of slightly smaller versions of this.
02:30:12.000 I wonder what the chances are that we have discovered all crucial considerations now.
02:30:18.000 Because it looks like...
02:30:21.000 At least up until very recently, we hadn't, in that there are these important considerations that we keep finding. Whether it's AI: if this stuff about AI is true, maybe that's the one most important thing that we should be focusing on, and the rest is kind of frittering away our time as a civilization.
02:30:37.000 We should be focused on AI alignment.
02:30:39.000 So we can see that it looks like all earlier ages, up until very recently, were oblivious to at least one crucial consideration, insofar as they wanted to have maximum positive impact on the world.
02:30:52.000 They just didn't know what the thing was to focus on.
02:30:54.000 And it also seems kind of unlikely that we just now have found the last one.
02:31:01.000 That just seems kind of...
02:31:04.000 Given that we keep discovering these up until quite recently, we are probably missing out on one or, more likely, several more crucial considerations.
02:31:12.000 And if that's the case, then it means that we are fundamentally in the dark.
02:31:17.000 We are basically clueless.
02:31:20.000 We might try to improve the world, but we are...
02:31:27.000 Overlooking maybe several factors, each one of which would make us totally change our mind about how to go about this.
02:31:35.000 And so it's less of a problem, I think, if your goal is just to lead your normal life and be happy and have a happy family.
02:31:44.000 Because there we have a lot more evidence, and it doesn't seem to keep changing every few years.
02:31:50.000 Like we still know, yeah, have good relationships, you know, don't ruin your body, don't jump in front of trains, like these are tried and tested, right?
02:31:58.000 But if your goal is to somehow steer humanity's future in such a way that you maximize expected utility, there it seems our best guess is keep jumping around every few years and we haven't kind of settled down into some stable conception of that.
02:32:15.000 Nick, I'm going to have to process the conversation for a long time.
02:32:18.000 But I appreciate it.
02:32:20.000 And thank you for being here, man.
02:32:21.000 It was really cool, very fascinating discussion.
02:32:24.000 Good to meet you, yeah.
02:32:25.000 Thank you.
02:32:25.000 Thank you very much.
02:32:26.000 Thank you.
02:32:26.000 If people would like to read any of your stuff, where can they get it?
02:32:31.000 NickBostrom.com.
02:32:33.000 Probably the best starting point.
02:32:34.000 Okay.
02:32:35.000 Thank you.
02:32:36.000 My brain's broken.
02:32:38.000 Bye, everybody.