In this episode, we talk about artificial intelligence and what it means for the future of humanity, and why we should be worried about it. We also talk about the implications of super-intelligent machines replacing us, and how we can prepare for them.
00:00:36.000Well, there are a lot of things wrong with the world as it is now.
00:00:39.000Pull this up to your face if you would.
00:00:43.000All the problems we have, most of them could be solved if we were smarter, or if we had somebody on our side who was a lot smarter, with better technology and so forth.
00:00:58.000Also, I think if we want to imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's going to happen at all, after we have super intelligence that then develops the technology to make that possible.
00:01:18.000The real question is whether or not we would be able to harness this intelligence or whether it would dominate.
00:01:29.000You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history.
00:01:36.000So I think there are really two challenges we need to meet.
00:01:40.000One is to make sure we can align it with human values and then make sure that we together do something better with it than fighting wars or oppressing one another.
00:01:50.000I think, well, what I'm worried about more than anything is that human beings are going to become obsolete.
00:01:54.000That we're going to invent something that's the next stage of evolution.
00:02:20.000We don't necessarily want, or at least I wouldn't be totally thrilled with a future where humanity as it is now was the last and final word.
00:03:12.000Yeah, the idea that we're in a state of evolution, that we are just like we look at ancient hominids, that we are eventually going to become something more advanced or at least more complicated than we are now.
00:03:24.000But what I'm worried about is that biological life itself has so many limitations.
00:03:28.000When we look at the evolution of technology, if you look at Moore's Law or if you just look at new cell phones, like they just released a new iPhone yesterday and they're talking about all these incremental increases in the ability to take photographs and wide-angle lenses and night mode and a new chip that works even faster.
00:03:45.000These things, the word evolution is maybe incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine biologically.
00:03:55.000Like if, instead of artificial intelligence in terms of something in a chip or computer, we had created a life form, a biological life form, but this biological life form was improving radically every year.
00:04:13.000The iPhone existed in 2007. That's when it was invented.
00:04:16.000If we had something that was 12 years old, but all of a sudden was infinitely faster and better and smarter and wiser than it was 12 years ago, the newest version of it, version X1, we would start going, whoa, whoa, whoa, hit the brakes on this thing, man.
00:04:32.000How many more generations before this thing's way smarter than us?
00:04:36.000How many more generations before this thing thinks that human beings are obsolete?
00:04:40.000It's coming at us fast, it feels like.
00:04:43.000But some people think, oh, it's slowing down now.
00:04:50.000Well, you have people like Tyler Cowen, and even Peter Thiel sometimes goes on about the pace of innovation not really being what it needs to be.
00:05:02.000I mean, maybe it was faster in like 1890s, but still compared to almost all of human history, it seems like a period of unprecedented rapid progress right now.
00:05:47.000We really don't know what form it's going to take.
00:05:49.000And we really don't know what it's going to do to us.
00:05:54.000Yeah, so I see it as not something that should be avoided, neither something that we should just be completely gung-ho about, but more like a kind of gate through which we will have to pass at some point.
00:06:07.000All paths that are both plausible and lead to really great futures, I think, at some point involve the development of greater-than-human intelligence, machine intelligence.
00:06:17.000And so that our focus should be on getting our act together as much as we can in whatever period of time we have before that occurs.
00:06:27.000Well, I mean, that might involve doing some research into various technical questions as how you build these systems so that we actually understand what they are doing.
00:07:02.000We're certainly capable of screwing it all up.
00:07:05.000Where is the current state of technology now in regards to artificial intelligence and how far away do you think we are from AGI? Well, different people have different views on that.
00:07:17.000I think the truth of the matter is that it's very hard to have accurate views about the timelines for these things that still depend on breakthroughs that have yet to happen.
00:07:32.000Certainly, I mean, over the last eight or ten years, there has been a lot of excitement with the deep learning revolution.
00:07:40.000I mean, it used to be that people thought of AI as this kind of autistic savant, really good at logic and counting and memorizing facts, but...
00:07:53.000And this deep learning revolution, when you began to do these deep neural networks, kind of solved perception in some sense.
00:08:01.000You have computers that can see, that can hear, and that have visual intuition.
00:08:08.000So that has enabled a whole wide suite of applications, which makes it commercially valuable, which then drives a lot of investment in it.
00:08:18.000So there's now quite a lot of momentum in machine learning and trying to kind of stay ahead of that.
00:08:26.000It's interesting that when we think about artificial intelligence and whatever potential form that it's going to take, if you look at films like 2001, like Hal, like, open the door, Hal, you know?
00:08:38.000We think of something that's communicating to us, like a person would, and maybe is a little bit colder and doesn't share our values and has a more pragmatic view of life and death and things.
00:08:54.000When we think of intelligence, though, I think intelligence in our mind is almost inexorably connected to all the things that make us human, like emotions and ambition and all these things, like the reason why we innovate.
00:09:09.000We innovate because we enjoy innovation and because we want to make the world a better place and because we want to fix some problems that we've created and we want to solve some limitations of the human body and the environment that we live in.
00:09:21.000But we sort of assume that intelligence that we create will also have some motivations.
00:09:28.000Well, there is a fairly large class of possible structures you could do.
00:09:34.000If you want to do anything that has any kind of cognitive or intellectual capacity at all, a large class of those would be what we might call agents.
00:09:42.000So these would be systems that interact with the world in pursuit of some goal.
00:09:49.000And a more sophisticated class of agents can plan ahead a sequence of actions.
00:09:55.000Like more primitive agents might just have reflexes.
00:09:59.000But the sophisticated agent might have a model of the world where it can kind of think ahead before it starts doing stuff.
00:10:06.000It can kind of think, what would I need to do in order to reach this desired state?
00:10:14.000It's not the only possible cognitive system you could build, but it's also not some weird, bizarre special case; it's a fairly natural thing to aim for.
00:10:23.000If you're able to specify the goal, something you want to achieve, but you don't know how to achieve it, a natural way of trying to go about that is by building this system that has this goal and is an agent and then moves around and tries different things and eventually perhaps learns to solve that task.
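The goal-directed agent described here can be sketched in a few lines of code. This is a toy illustration, not anything from the conversation: the world, the `plan` function, and the `neighbors` helper are all made up for the example. The agent holds a model of a tiny world and thinks ahead, searching for a sequence of actions that reaches its goal state, rather than reacting by reflex.

```python
from collections import deque

def plan(start, goal, neighbors):
    """Breadth-first search over the agent's world model:
    look ahead for a sequence of states leading to the goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # a sequence of actions reaching the goal
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable in this world model

# Toy world: positions 0..9, actions move one step left or right.
neighbors = lambda s: [s2 for s2 in (s - 1, s + 1) if 0 <= s2 <= 9]
print(plan(2, 7, neighbors))  # [2, 3, 4, 5, 6, 7]
```

A reflex agent would just react to its current state; the point of the world model is that the whole action sequence is computed before the agent starts doing anything.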
00:10:39.000Do you anticipate different types of artificial intelligence?
00:10:44.000Like artificial intelligence that mimics the human emotions?
00:10:49.000Do you think that people will construct something that's very similar to us in a way that we can interact with it in common terms?
00:10:57.000Or do you think it will be almost like communicating with an alien?
00:11:04.000So there are different scenarios here.
00:11:07.000My guess is that the first thing that actually achieves superintelligence would not be very human-like.
00:11:16.000There are different possible ways you could try to get to this level of technology.
00:11:21.000One would be by trying to reverse engineer the human brain.
00:11:23.000We have an existence proof in the limiting case.
00:11:27.000Imagine if you just made an exact duplicate in silicon of the human brain, like every neuron had some counterpart.
00:11:34.000So that seems technologically very difficult to do, but it wouldn't require a big theoretical breakthrough to do it.
00:11:41.000You could just, if you had sufficiently good microscopy and large enough computers and enough elbow grease, you could kind of...
00:11:48.000But it seems to me plausible that what will work before we are able to do it that way will be some more synthetic approach.
00:11:56.000That would only be a very rough resemblance, maybe with the neocortex.
00:12:01.000Yeah, that's one of the big questions, right?
00:12:03.000Whether or not we can replicate all the functions of the human brain in the way it functions and mimic it exactly, or whether we could have some sort of superior method that achieves the same results that the human brain does in terms of its ability to calculate and reason and do multiple tasks at the same time.
00:12:21.000Yeah, and I also think that maybe once you have a sufficiently high level of this general form of intelligence, then you could use that maybe to emulate or mimic things that we do differently.
00:12:36.000The cortex is quite limited, so we rely a lot on earlier neurological structures that we have.
00:12:42.000We have to be guided by emotion because we can't just calculate everything out.
00:12:47.000And instinct, and if we lost all of that, we would be helpless.
00:12:53.000But maybe some system that had a sufficiently high level of this more abstract reasoning capability could maybe use that to substitute for things that weren't built in in the same way that we do.
00:13:03.000Have you ever talked to Sam Harris about this?
00:13:34.000Well, I mean, I do think that there are these significant risks that will be associated with this transition to the machine intelligence era, including existential risks, threats to the very survival of humanity or what we care about.
00:14:53.000You could remove some of these reasons and there would still be enough reasons for why people would be pushing forward with this.
00:14:59.000One of the things that scares me the most is the idea that if we do create artificial intelligence, then it will improve upon our design and create far more sophisticated versions of itself.
00:15:09.000And that it will continue to do that until it's unrecognizable, until it reaches literally a godlike potential.
00:15:18.000I mean, I forget what the real numbers were, maybe you could tell us, but some reputable source had calculated the amount of improvement that sentient artificial intelligence would be able to create inside of a small window of time.
00:15:33.000Like if it was allowed to innovate and then make better versions of itself and those better versions of itself were allowed to innovate and make better versions of itself.
00:15:40.000You're talking about not an exponential increase of intelligence but an explosion.
00:15:47.000So it's hard enough to forecast the pace at which we will make advances in AI. Because we just don't know how hard the problems are that we haven't yet solved.
00:15:58.000And, you know, once you get to human level or a little bit above, I mean, who knows?
00:16:03.000It could be that there is some level where to get further, you would need to put in a lot of...
00:16:11.000Now, what is easier to estimate is if you just look at the speed, because that's just a function of the hardware that you're running it on, right?
00:16:19.000So there we know that there is a lot of room in principle.
00:16:23.000If you look at the physics of computation and you look at what an optimally arranged physical system optimized for computation would be, that would be many, many orders of magnitude above what we can do now.
00:16:36.000And then you could have arbitrarily large systems like that.
00:16:39.000So, from that point of view, we know that that could be things that would be like a million times faster than the human brain and with a lot more memory and stuff like that.
00:16:48.000And then something, if it did have a million times more power than the human brain, it could create something with a million times more computational power than itself.
00:17:32.000So if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do.
00:17:39.000But then that might still be quite a lot of orders of magnitude until it would be equivalent of the whole human species.
00:17:47.000And maybe during that time other things happen, maybe we upgrade our own abilities in some way.
00:17:54.000So there are some scenarios where it's so hard to get even to one human baseline that we kind of use this massive amount of resources just to barely create kind of a village idiot using billions of dollars of compute.
00:18:10.000So if that's the way we get there, then, I mean, it might take quite a while, because you can't easily scale something that you've already spent billions of dollars building.
00:18:18.000Yeah, some people think the whole thing is blown out of proportion, that we're so far away from creating artificial general intelligence that resembles human beings, that it's all just vaporware.
00:18:30.000Well, I mean, one would be that I would want to be more precise about just how far away does it have to be in order for us to be rational to ignore it.
00:18:41.000It might be that if something is sufficiently important and high stakes, then even if it's not going to happen in the next 5, 10, 20, 30 years, it might still be wise for our pool of 7 billion plus people to have some people actually thinking about this ahead of time.
00:18:59.000So some of these disagreements, I guess this is my point, are more apparent than real.
00:19:04.000Like, some people say it's going to happen soon, and some other people say, no, it's not going to happen for a long time.
00:19:08.000And then, you know, one person means by soon, five years, and another person means by a long time, five years.
00:19:15.000And, you know, it's more of different attitudes rather than different specific beliefs.
00:19:19.000So I would first want to make sure that there actually is a disagreement.
00:19:25.000Now, if there is, if somebody is very confident that it's not going to happen in hundreds and hundreds of years, then I guess I would want to know their reasons for that level of confidence.
00:19:35.000What's the evidence they're looking at?
00:19:37.000Do they have some ground for being very sure about this?
00:19:41.000Certainly, the history of technology prediction is not that great.
00:19:46.000You can find a lot of other examples where even very eminent technologists and scientists were quite sure it's not going to happen in our lifetime.
00:19:55.000In some cases, it actually already just happened in some other part of the world, or it happened a year or two later.
00:20:02.000So I think some epistemic humility with these things would be wise.
00:20:09.000I was watching a talk that you were giving and you were talking about the growth of innovation technology and GDP over the last 100 years, and you were talking about the entire history of life on earth and what a short period of time humans have been here, and then during what a short period of time what a stunning amount of innovation and how much change we've enacted on the earth in just a blink of an eye, and you had the scale of GDP over the course...
00:20:40.000It's crazy, because it's so difficult for us with our current perspective, just being a person, living, going about the day-to-day life that seems so normal, to put it in perspective time-wise and see what an enormous amount of change has taken place in relatively an incredibly short amount of time.
00:21:03.000We think of this as sort of the normal way for things to be.
00:21:06.000The idea that the alarm wakes you up in the morning and then you commute in and sit in front of a computer all day and you try not to eat too much.
00:21:13.000And that if you sort of imagine that, you know, maybe in 50 years or 100 years or at some point in the future, it's going to be very different.
00:21:22.000But, of course, this quote-unquote normal condition is a huge anomaly any which way you look at it.
00:21:29.000I mean, if you look at it on a geological timescale, the human species is very young.
00:21:34.000If you look at it historically, you know, for more than 90%, we were just hunter-gatherers running around and agriculturalists for...
00:21:47.000It's only in the last couple of hundred years that some parts of the world have escaped the Malthusian condition, where you basically only have as much income as you need to be able to produce two children.
00:22:01.000All of this is very, very, very recent.
00:22:04.000And in space as well, of course, almost everything is ultra-high vacuum, and we live on the surface of this little special crumb.
00:22:13.000And yet we think this is normal and everything else is weird, but I think that's a complete inversion.
00:22:19.000And so when you do plot, for example, world GDP, which is a kind of rough measure of the total amount of productive capability that we have, right?
00:23:08.000And oddly enough, everyone involved in the explosion, everyone that's innovating, everyone that's creating all this new technology, they're all a part
00:23:18.000of this momentum that was created before they were even born.
00:23:23.000They're just a part of this whole spinning machine and they jump in, they're born, they go to college, next thing you know they have a job and they're contributing to making new technology and then more people jump in and add on to it and there's very little perspective in terms of like the historical significance of this incredible explosion technologically.
00:23:43.000When you look at what you're talking about, that gigantic spike.
00:23:46.000No one feels it, which is one of the weirdest things about it.
00:23:50.000I mean, you kind of expect every year there will be a better iPhone or whatever, right?
00:23:56.000People lived and died and saw absolutely no technological change.
00:24:00.000And in fact, you could have many, many generations.
00:24:04.000The very idea that there was some trajectory...
00:24:09.000in the material conditions is a relatively new idea.
00:24:14.000I mean, people thought of history either as, you know, some kind of descent from a golden age, or some people had a cyclical view.
00:24:22.000But it was all in terms of political organization, that would be a great kingdom, and then a wise ruler would rule for a while.
00:24:29.000And then like a few hundred years later, you know, their great-great-grandchildren would be too greedy, and it would come into anarchy, and then a few hundred years later it would come back together again.
00:24:40.000So it would be all these pieces moving around, but no new pieces really entering.
00:24:44.000Or if they did, it was at such a slow rate that you didn't notice.
00:24:49.000But over the eons, the wheel slowly turns, and somebody makes a slightly better wheel, somebody figures out how to irrigate a lot better.
00:25:02.000And eventually there is enough that you could have enough of a population, enough brains that then create more ideas at a quick enough rate that you get this industrial revolution.
00:25:37.000There's like objectively and there's personally.
00:25:39.000Like objectively, if you were outside of the human race and you were looking at all these various life forms competing on this planet for resources and for survival, you would look at humanity and you go, well, you know, clearly it's not finished.
00:25:54.000So there's going to be another version of it.
00:25:56.000It's like, when is this version going to take place?
00:25:59.000Over millions and millions of years like it has historically when it comes to biological organisms or is it going to invent something?
00:26:08.000That takes over from there, and then that's the new thing.
00:26:11.000Something that's not based on tissue, something that's not based on cells, that doesn't have the biological limitations that we have, nor does it have all the emotional attachments to things like breeding, social dominance, hierarchies; all those things are of no consequence to it.
00:26:27.000It doesn't mean anything, because it's not biological.
00:26:30.000Yeah, I mean, I don't think millions of years, I mean, a number of decades or whatever.
00:26:37.000But it's interesting that even if we set that aside, say machine intelligence is impossible for some reason,
00:26:45.000I still think there would be very rapid change, including biological change.
00:26:50.000I mean, we are making great advances in biotech as well, and we'll increasingly be able to control what our own organisms are doing through different means and enhance human capacities through biotechnology.
00:27:09.000So even there, it's not going to happen overnight, but over an historically very short period of time, I think you would still see quite profound change just from applying bioscience to change human capacities.
00:27:25.000Yeah, one of the technologies or one of the things that's been discussed to sort of mitigate the dangers of artificial intelligence is a potential merge.
00:27:35.000Some sort of symbiotic relationship with technology that you hear discussed, like...
00:27:41.000I don't know exactly how Elon's Neuralink works, but it seems like a step in that direction.
00:27:49.000There's some sort of a brain implant that interacts with an external device, and all of this increases the bandwidth for available intelligence and knowledge.
00:28:01.000Yeah, I'm sort of skeptical that that will work.
00:28:04.000I mean, good that somebody tries it, you know, but I think it's quite technically hard to improve a normal, healthy human being's, say, cognitive capacity or other capacities by implanting things in them.
00:28:22.000And get benefits that you couldn't equally well get by having the gadget outside of the body.
00:28:27.000So I don't need to have an implant to be able to use Google, right?
00:28:40.000Well, hopefully you could do that even with an implant.
00:28:43.000And once you start to look into the details, there's sort of these kind of demos, but then if you actually look at the papers, often you find, well, then there were these side effects, and the person had headaches, or they had some deficit in their speech, you know, or an infection.
00:29:34.000Well, so this would just be in the context of, say, in vitro fertilization.
00:29:36.000You have usually some half dozen or dozen embryos created during this fertility procedure, which is standardly used.
00:29:45.000So rather than just a doctor kind of looking at these embryos and saying, well, that one looks healthy, I'm going to implant that, you could run some genetic test and then use that as a predictor and select the one you think has the most desirable attributes.
00:30:01.000And so this could be a trend in terms of how human beings reproduce, that we...
00:30:06.000Instead of just randomly having sex, woman gets pregnant, gives birth to a child, we don't know what it's going to be, what's going to happen.
00:30:25.000And so, I mean, to some extent, we already do this.
00:30:28.000There is a lot of testing done for various chromosomal abnormalities that you can already check for.
00:30:37.000But our ability to look beyond clear, stark diseases, where this one gene is wrong,
00:30:44.000to look at more complex traits, is increasing rapidly.
00:30:49.000So obviously there are a lot of ethical issues and different views that come into that.
00:30:53.000But if we're just talking about what is technologically feasible, I think that already you could do a very limited amount of that today.
00:31:00.000And maybe you would get two or three IQ points in expectation more if you selected using current technology based on 10 embryos, let's say.
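A rough back-of-envelope simulation of that kind of embryo selection can make the numbers concrete. All parameters here are assumptions chosen to roughly reproduce the "two or three IQ points" figure, not real genomic data: score 10 embryos with a noisy polygenic predictor, pick the highest scorer, and see what gain in the underlying trait that buys on average.

```python
import random
import statistics

random.seed(0)
R = 0.13        # assumed predictor-trait correlation (current-tech guess)
SD = 15.0       # IQ-like scale: 1 standard deviation = 15 points
N = 10          # embryos available per cycle
TRIALS = 20_000

gains = []
for _ in range(TRIALS):
    embryos = []
    for _ in range(N):
        g = random.gauss(0, 1)  # true (unobserved) genetic effect, SD units
        # noisy polygenic predictor, correlated with g at R:
        p = R * g + (1 - R**2) ** 0.5 * random.gauss(0, 1)
        embryos.append((p, g))
    best = max(embryos)          # select the highest-scoring embryo
    gains.append(best[1] * SD)   # realized gain in IQ-like points

print(f"mean gain ~ {statistics.mean(gains):.1f} points in expectation")
```

The expected maximum of 10 standard normal draws is about 1.54 SD, so the expected trait gain works out to roughly 1.54 × R × 15 ≈ 3 points with these assumed numbers; a stronger predictor or more embryos would raise it.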
00:31:11.000But as genomics gets better at deciphering the genetic architecture of, whether it's intelligence or personality attributes, then you would have more selection power and you could do more.
00:31:25.000And then there is a number of other technologies we don't yet have, but which if you did, would then kind of stack with that and enable much more powerful forms of enhancement.
00:31:35.000So there, yeah, I don't think there are any major technological hurdles, really, in the way.
00:31:43.000Just some small amount of incremental further improvement.
00:31:47.000That's when you talk about doing something with genetics and human beings and selecting.
00:32:10.000And you start thinking about all the imperfect people that have actually contributed in some pretty spectacular ways to what our culture is.
00:32:17.000And like, well, if everybody has perfect genes, would all these things even take place?
00:32:21.000Like, what are we doing, really, if we're bypassing nature and we're choosing to select for the traits and the attributes that we find to be the most positive and attractive?
00:32:33.000Like, what are, like, that gets slippery.
00:32:35.000And you think what would have happened if, say, some earlier age...
00:32:40.000had had this ability to kind of lock in their, you know, their prejudices, or if the Victorians had had this, maybe we would all be, whatever, pious and patriotic now or something.
00:32:57.000So, in general, with all of these powerful technologies we are developing, I think the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools.
00:33:15.000But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom.
00:33:33.000How many pieces of technology do you use in a day and how much do you actually understand any of those?
00:33:38.000Most people have very little understanding of how any of the things they use work.
00:33:42.000They put no effort at all into creating those things, but yet they've inherited the responsibility of the power that those things possess.
00:33:50.000Yeah, I mean, that's the only way we can do it.
00:33:53.000It's just way too complex for any person.
00:33:56.000If you had to sort of learn how to build every tool you use, you wouldn't get very far.
00:34:01.000Isn't that fascinating, though, when you think about human beings and all the different things we do?
00:34:06.000We have very little understanding of the mechanisms behind most of what we need for day-to-day life, yet we just use them because there's so many of us and so many people are understanding various parts of all these different things that together, collectively,
00:34:21.000we can utilize the intelligence of all these millions of people that have innovated and we, with no work whatsoever, just go into the Verizon store and pick up the new phone.
00:34:30.000I mean, and not just technology, but worldviews and political ideas as well.
00:34:36.000It's not as if most people sit down with an empty table, try to think from the basic principles of what would be the ideal configuration of the state or something like that.
00:34:47.000You just kind of absorb it and go with it.
00:34:53.000And it's amazing just how little of that actually at any point channels through your sort of conscious attention, where you make some rational or otherwise deliberate decision.
00:35:14.000There's no other way, and there's no way, even like you and I discussing this, like discussing the history of this incredible spike of evolution, or innovation rather, in technology.
00:35:33.000So even though we can intellectualize it, even though we can have this conversation, talk about what an incredible time we're in and how terrifying it is that things are moving at such an incredibly rapid rate.
00:35:44.000And no one's putting the brakes on it.
00:35:47.000No one's thinking about the potential pros and cons.
00:36:36.000Well, actually, the field of artificial intelligence sometimes is kind of dated to 1956. That was a conference, but I mean, it's somewhat arbitrary, but roughly that's when it got started.
00:36:49.000But the pioneers, even right back at the beginning, they thought that they were going to be able to do all the things that the human brain does.
00:37:42.000There's a number of, like, the line from having some external tool like a notepad, with which you can calculate bigger numbers, right, if you can scribble on a piece of paper, to a modern-day supercomputer; you can break it down into small steps and they happen gradually.
00:37:57.000But, yeah, roughly since the 40s or so.
00:38:26.000There was some summer project where they were going to have a few students or whatever work over the summer, and they thought, oh, maybe we can solve vision over the summer.
00:38:37.000And now we've kind of solved vision, but that's like 50 years later.
00:38:43.000It can be hard to know how hard the problem is until you've actually solved it.
00:38:46.000But the really interesting thing to me is that even though I can understand why they were wrong about how difficult it is, because how would you know, right, if it's 10 years of work or 100 years of work?
00:38:57.000Kind of hard to estimate at the outset.
00:38:59.000But what is striking is that even the ones who thought it was 10 years away, they didn't think of what the obvious next step would be after that.
00:39:07.000Like if you actually succeeded at mechanizing all the functions of the human mind,
00:39:12.000they couldn't think, well, it's obviously not going to stop there once you get human equivalence.
00:39:17.000You're going to get superintelligence.
00:39:21.000But it was as if the imagination muscle had so exhausted itself thinking of this radical possibility,
00:39:25.000that you could have a machine that does everything that a human does,
00:39:28.000that you couldn't take the next step.
00:39:31.000Or for that matter, the immense ethical and social implications.
00:39:36.000Even if all you could do was replicate a human mind in a machine.
00:39:39.000If you actually thought you were building that and you were 10 years away, it'd be crazy not to spend a lot of time thinking about how this is going to impact the world.
00:39:47.000But that didn't really seem to have occurred much to them at all.
00:39:51.000Well, sometimes it seems that people just want to do it.
00:39:55.000Like, even with the creation of the atomic bomb, I mean, they felt like they had to do it because we had to develop it before the Germans did.
00:40:10.000And so with the Manhattan Project, obviously, it was during wartime and maybe Hitler had a program.
00:40:16.000You could easily see why that would motivate a lot of people.
00:40:22.000But even before they actually started the Manhattan Project, the guy who first conceived of the idea that you could make a nuclear explosion, Leo Szilard, was a kind of eccentric physicist who conceived of the idea of a chain reaction.
00:40:39.000So it's been known before that that you could split the atom and a little bit of energy came out.
00:40:43.000But if you're going to split one atom at a time, you're never going to get anything because it's too little.
00:40:49.000So the idea of a chain reaction was that if you split an atom and it releases two neutrons, then each of those can split another two atoms that then release four neutrons and you get an exponential blow-up.
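The exponential blow-up described here is just repeated doubling, and a minimal sketch makes the arithmetic concrete. The per-fission energy constant below is a rough textbook figure used only for scale, not a claim from the conversation.

```python
# Sketch of the chain-reaction arithmetic described above: each fission
# releases two neutrons, each neutron splits another atom, so the number
# of fissions doubles every generation. The energy constant is a rough
# figure (~200 MeV per U-235 fission, ~3.2e-11 J), used only for scale.
ENERGY_PER_FISSION_J = 3.2e-11

def chain_reaction(generations, neutrons_per_fission=2):
    """Return (total fissions, total energy in joules) after the given
    number of generations, starting from a single initial fission."""
    total = 0
    active = 1  # one fission to start the chain
    for _ in range(generations):
        total += active
        active *= neutrons_per_fission  # the exponential blow-up
    return total, total * ENERGY_PER_FISSION_J

fissions, energy = chain_reaction(80)
# After ~80 doubling generations, total fissions are ~1.2e24 and the
# energy is on the order of tens of terajoules -- bomb-scale release,
# whereas splitting one atom at a time would indeed yield almost nothing.
```

This is why splitting "one atom at a time" gets you nowhere while a chain reaction gets you an explosion: the total grows as 2^n, so almost all the energy arrives in the last few generations.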
00:41:24.000And so he then went to try to persuade some of his other colleagues who were also working in nuclear physics not to pursue this, not to publish related ideas, and he had some partial success.
00:41:38.000So there was some partial success where his colleagues agreed.
00:41:41.000Some things were not published immediately.
00:41:43.000Not all of his colleagues listened to him.
00:41:48.000Some people are always going to want to be the ones that sort of innovate.
00:41:52.000That is the problem in those cases where you would actually prefer the innovation not to happen.
00:41:57.000Historically, of course, we now look back and think there were a lot of dissenters that we are now glad could have their way, because a lot of cultures were quite resistant to innovation and wanted to do things the way they had always been done,
00:42:17.000whether it's like social innovation or technological innovation.
00:42:21.000The Chinese were at one point ahead in seafaring, exploring, and then they shut all of that down because the emperor at the time, I guess, didn't like it.
00:42:33.000So there are many examples of kind of stasis, but as long as there were a lot of different places, a lot of different countries, a lot of different mavericks, then somebody would always do it.
00:42:41.000And then once the others could see that it worked, they could kind of copy and...
00:42:47.000But of course if there is a technology you actually want not to be developed, then this multipolar situation makes it very, very hard to coordinate, to refrain from doing that.
00:43:01.000Yeah, this I think is a kind of structural problem in the current human condition that is ultimately responsible for a lot of the existential risks that we will face in this century.
00:43:14.000There's this kind of failure of ability to solve global coordination problems.
00:43:19.000Yeah, and when you think about the people that did it, Oppenheimer and the people behind the Manhattan Project, they were inventing this to deal with this existential threat, this horrific threat from Nazi Germany and the Japanese in World War II,
00:43:36.000you know, this idea that this evil empire is going to try to take over the world, and this created the momentum and the motivation to develop this incredible technology that wound up making a great amount of our electricity and wound up creating enough nuclear weapons to destroy the entire world many times over.
00:43:57.000And we're in this strange state now where it was motivated by this horrific moment in history, this evil empire that tries to take over the world, and we come up with this incredible technological solution, the ultimate weapon, that we detonated a couple of times on some cities, and then now we're in this weird state where,
00:44:32.000But it's incredible that the motivation for this incredible technology, this amazing technology, was actually to deal with something that was awful.
00:44:43.000Yeah, I mean, war has had a way of focusing minds and stuff.
00:44:49.000No, I think that nuclear energy we would have had anyway.
00:44:51.000Maybe it would have been developed like five years or ten years later.
00:44:56.000Reactors are not that difficult to do.
00:45:01.000So I think we could have gotten to all the good uses of nuclear technology that we have today without having to have had kind of the nuclear bomb developed.
00:45:10.000Now, you pay attention to Boston Dynamics and all these different robotic creations that they've made?
00:45:17.000They seem to have a penchant for doing really sinister-looking bots.
00:45:22.000I think all robots that are, you know, anything that looks autonomous is kind of sinister-looking.
00:45:28.000Well, I mean, you see the Japanese have these big-eyed, sort of rounded, so it's a different...
00:45:46.000If we do eventually come to a time where those things are going to war for us instead of us, like if we get involved in robot wars, our robots versus their robots,
00:46:01.000and this becomes the next motivation for increased technological innovation to try to deal with superior robots by the Soviet Union or by China, right?
00:46:10.000These are more things that could be threats that could push people to some crazy level of technological innovation.
00:46:20.000I mean, I think there are other drivers for technological innovation as well that seem plenty strong commercial drivers, let us say, that we wouldn't have to rely on war or the threat of war to kind of stay innovative.
00:46:41.000I mean, there has been this effort to try to see if it would be possible to have some kind of ban on lethal autonomous weapons.
00:46:52.000There are a few technologies that we have.
00:46:54.000There has been a relatively successful ban on chemical and biological weapons, which have by and large been honored and upheld.
00:47:08.000There are kind of treaties on nuclear weapons, which has limited proliferation.
00:47:12.000Yes, there are now maybe, I don't know, a dozen.
00:47:22.000And some other weapons as well, blinding lasers, landmines, cluster munitions.
00:47:29.000So some people think maybe we could do something like this with lethal autonomous weapons, killer bots. Is that really what humanity needs most now, like another arms race to develop killer bots?
00:47:41.000It seems arguably the answer to that is no.
00:47:48.000A lot of my friends are supportive, but I've kind of stood a little bit on the sidelines on that particular campaign, being a little unsure exactly what it is.
00:48:00.000I mean, certainly I think it'd be better if we refrained from having some arms race to develop these than not.
00:48:07.000But if you start to look in more detail, what precisely is the thing that you're hoping to ban?
00:48:12.000So if the idea is the autonomous bit, like the robot should not be able to make its own firing decision.
00:48:17.000Well, if the alternative to that is...
00:48:22.000There's some 19-year-old guy sitting in some office building, and his job is, whenever the screen flashes "fire now," he has to press a red button.
00:48:31.000And then exactly the same thing happens.
00:48:33.000I mean, I'm not sure how much is gained by having that extra step.
00:48:37.000But it is something, it feels better for us.
00:48:40.000For some reason, someone is pushing the button.
00:48:49.000Well, you've got to attack this group of surface ships here, and here are the general parameters, and you're not allowed to fire outside these coordinates.
00:48:59.000I mean, another question is, it would be better if we had no wars, but if there is going to be a war, maybe it is better if it's robots versus robots.
00:49:08.000Or if there's going to be bombing,
00:49:11.000maybe you want the bombs to have high precision rather than low precision, to get fewer civilian casualties.
00:49:18.000And operating under artificial intelligence so it makes better decisions.
00:49:34.000Or if it proliferates and you have these kinds of mosquito-sized killer bots that terrorists have... It doesn't seem like a good thing to have a society where you have a facial recognition thing and then the bot flies out, and you just have a kind of dystopia.
00:49:55.000We're thinking rationally given the overall view of the human race that we want peace and everything to be well.
00:50:03.000Realistically, if you were someone who is trying to attack someone militarily, you'd want the best possible weapons that give you the best possible advantage.
00:50:12.000And that's why we had to develop the atomic bomb first.
00:50:17.000It's probably why we'll try to develop the killer autonomous robot first.
00:50:53.000There were cheaters even on the biological weapons ban; the Soviet Union had massive efforts there, but still probably less use and less development than if there had been no such treaty.
00:51:08.000Or just look at the amount of money being wasted every year to maintain these large arsenals so that we can kill one another if one day we decide to do it.
00:51:21.000We would hope that we would get to some point where all this would be irrelevant because there's no more war.
00:51:26.000Yeah, and so if you look at the biggest efforts so far to make that happen, so after the First World War, people were really aware of this.
00:52:00.000The United Nations, and in Europe the European Union, are both kind of designed as ways to try to prevent this.
00:52:07.000But again, with kind of maybe in the case of the United Nations, quite limited powers to actually enforce the agreements.
00:52:13.000And there's a veto, which makes it hard if it's two of the major powers that are at loggerheads.
00:52:19.000So it might be that if there were a third big conflagration, that then people would say, well, this time, you know, we've got to really put some kind of institutional solution in place that has enough enforcement power that we don't try this yet again.
00:52:52.000And we were taught in schools about nuclear fallout and stuff.
00:52:55.000It was like a very palpable sense that at any given point in time, there could be some miscalculation or crisis or something.
00:53:06.000And all the way up to senior statesmen at the time, these were very real and very serious concerns.
00:53:13.000And I feel that memory of just how bad it is to live in that kind of hair-trigger nuclear arms race Cold War situation has kind of faded, and now we think, wow, maybe the world didn't blow up, so maybe it wasn't so bad after all.
00:53:27.000Well, I think that would be the wrong lesson to learn.
00:53:39.000You've got to realize, well, maybe that was a 10% chance or a 30% chance that the world would blow up during the Cold War and we were lucky, but it doesn't mean we want to have another one.
00:53:48.000When I was in high school, it was a real threat.
00:53:50.000When I was in high school, everyone was terrified that we were going to go to war with Russia.
00:54:23.000And then a number of maneuvers are made and then you find yourself in a kind of situation where there's like honor at stake and reputation and you feel you can't back down and then another thing happens and you get into this place where if you even say something kind about the other side,
00:54:40.000You seem to be, you know, soft, a pinko, a lightweight.
00:54:44.000And on both sides, on the other side as well, obviously, they're going to have the same internal dynamic.
00:54:48.000And each side says bad things about the other.
00:54:50.000It makes the other side hate them even more.
00:54:52.000And these things are then hard to reverse.
00:54:54.000Like, once you find this dynamic happening, it's almost... well, it's not too late,
00:54:58.000you can still try, but it can be very hard to back out of that.
00:55:01.000And so if you can prevent yourself from going down that path to begin with, that's much preferable.
00:55:07.000When you see Boston Dynamics and you see those robots, is there something comparable that's being developed either in the Soviet Union or in China or somewhere else in the world where there's similar type robots?
00:55:19.000Well, I think a lot of the Boston Dynamics thing seems more showy than actually useful.
00:55:39.000But I think a lot of action would be more in terms of flying drones, maybe submarine stuff, missiles, that kind of stuff.
00:55:51.000But when you see these robots and you see the ones that look like dogs or insects, Couldn't you imagine those things being armed with guns?
00:56:11.000You can't even kick those things over.
00:56:13.000Yeah, well, I mean, I think if it has a gun, it really doesn't matter whether it looks like a dog or if it's just a small flying platform.
00:56:22.000I mean, in general, I think with AI and robotics, the cooler something looks, usually the less technically impressive it is.
00:56:32.000As you see, the extreme case of this is these robots that look exactly like a human, maybe shaped like a beautiful woman or something like that.
00:57:10.000Do you anticipate, like when you see Ex Machina, do you think that that's something that could realistically be implemented in a hundred years or so?
00:57:22.000Like we really could have some form of artificial human that's indistinguishable?
00:57:29.000Well, I think the action is not going to lie in the robotic part so much as in the brain part.
00:57:41.000And robotics only insofar as it becomes enabled by having, say, much better learning algorithms.
00:57:47.000So right now, if you have a robot in any one of these big factories, for the most part it's like a blind, dumb thing that executes a pre-programmed set of motions over and over again.
00:57:58.000And if you want to change over the production, you need to get in some engineers to reprogram it.
00:58:02.000But with a human, you could kind of show them how to do something once or twice, and then they can do it.
00:58:09.000So it will be interesting to see over the next few years whether we can see some kind of progress in robotics that enables this kind of imitation learning
00:58:19.000to work well enough that you could actually start doing it.
00:58:22.000There are demonstrations already, but not robustly enough that it would be useful and you could replace a lot of these kinds of industrial robotics experts by having this.
00:58:36.000So I think in terms of making things look like human, I think that's more for Hollywood and for press releases than the actual driver of progress.
00:58:47.000Not so much the actual driver of progress, but someone is probably going to try to replicate a human being once the technology becomes viable.
00:59:03.000I've seen some of these and not others.
00:59:06.000Ex Machina was the one where the guy lives in a very remote location.
00:59:10.000Yeah, like a beautiful place in Norway.
00:59:13.000He created this beautiful girl robot that seduces this man.
00:59:18.000At the end of it, she leaves him locked up in this thing and just takes off and gets on the helicopter and flies away.
00:59:25.000The thing that's disturbing is that she knew how to manipulate his emotions to achieve a desired result, which was him helping her escape.
00:59:34.000But then once she did, she had no real emotions.
00:59:37.000So he was screaming and she had no compassion and no empathy.
00:59:40.000She just hopped on the helicopter and left him there to starve to death inside that locked box.
00:59:47.000This idea that we're going to create something that's intelligent, it has intelligence like us, but it doesn't have all the things that we have.
01:00:10.000The same would hold even if it were not a robot, but just a program inside a computer.
01:00:16.000But yeah, the idea that you could have something that is strategic and deceptive and so forth.
01:00:22.000But then other elements of the movie, of course, and in general, a reason why it's bad to get your kind of map of the future from Hollywood itself.
01:00:32.000So if you think it's this one guy, presumably some genius, living out in the nowhere and kind of inventing this whole system, like in reality, it's like anything else.
01:00:42.000There are hundreds of people programming away on their computers, writing on whiteboards, and sharing ideas with other people across the world.
01:00:56.000And that would often be some economic reason for doing it in the first place, like not just, oh, we have this Promethean attitude that we want to kind of bring.
01:01:07.000So all of those things don't make for such good plot lines, so they just get removed.
01:01:14.000But then I wonder if people actually think of the future in terms of some kind of
01:01:19.000supervillain and some hero, and it's going to come down to these two people and they're going to wrestle.
01:01:26.000And it's going to be very personalized and concrete and localized.
01:01:30.000Whereas a lot of things that determine what happens in the world are very spread out and bureaucracies churning away.
01:02:38.000Like in our ideas, we understand the biological limitations of the body when it comes to traveling through space: dealing with radiation, death, the need for food, things along those lines. So what we would do is create some artificial thing to travel for us, like we've already done on Mars,
01:02:59.000The next step would be an artificial, autonomous, intelligent creature that has no biological limitations like we do in terms of its ability to absorb radiation from space.
01:03:10.000And we create one of those little guys, just like that, with an enormous head, to
01:03:20.000pilot these ships that can defy our own physical limitations, in terms of what would happen to us if we had to deal with a million Gs of force because it's moving at some preposterous rate through space.
01:03:34.000When we think of these things coming from another planet, if we think of life on another planet, If they can innovate in a similar fashion the way we do, we would imagine they would create an artificial creature to do all their dirty work.
01:03:49.000Like, why would they want to, like, risk their body?
01:04:07.000A civilization that is spacefaring in a serious way would have nanotechnology.
01:04:11.000So they'd have basically the ability to arbitrarily configure matter in whatever structure they wanted.
01:04:18.000They would have like nanoscale probes and things that could shapeshift.
01:04:23.000It would not be that there would be this person sitting in a seat behind the steering wheel.
01:04:28.000If they wanted to, it could be invisible, I think, like nanoscale things hiding in a rock somewhere, just connecting with an information link up to some planetary-sized computer somewhere.
01:04:44.000I think that's the way that space is most likely to get colonized.
01:04:50.000It's not going to be like with meat sacks kind of driving spaceships around and having Star Trek adventures.
01:04:55.000It's going to be some spherical frontier emanating from whatever the home planet was, moving at some significant fraction of the speed of light and converting everything in its path into infrastructure.
01:05:10.000Of whatever type is maximally valuable for that civilization.
01:05:14.000Maybe computers and launchers to launch more of these space probes so that the whole wavefront can continue to propagate.
01:05:25.000I mean, one of the things you brought up earlier is that if human beings are going to continue and we're going to propagate through the universe, we're going to try to go to other places, we're going to try to populate other planets. And are we going to do that with just robots?
01:05:42.000Or are we going to try to do that biologically?
01:05:44.000We're probably going to try to do it biologically.
01:05:46.000One of the things you were saying earlier is one of the things that artificial intelligence could possibly do is accelerate our ability to travel to other lands or other planets.
01:05:55.000I mean, in fact, some people are, right?
01:05:57.000I just think that's not going to lead to anything important until those efforts become obsoleted
01:06:06.000by some radical new technology wave, probably triggered by machine superintelligence, that then rapidly leads to something approximating technological maturity.
01:06:19.000Once innovation happens at digital timescales rather than human timescales, then all these things that you could imagine we're doing, if we had 40,000 years to work on it, we would have space colonies and cures for aging and all of these things, right?
01:06:32.000But if that thinking time happens in, you know, digital space, then that long future gets telescoped, and I think you fairly quickly reach a condition where you have close to optimal technology.
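The "telescoped" point above is simple arithmetic. A back-of-the-envelope sketch, where the millionfold speedup is purely a hypothetical number, not a figure from the conversation:

```python
# Back-of-the-envelope version of the telescoping argument: if digital
# minds think some large factor faster than biological ones, subjective
# millennia of research fit into short calendar time. The millionfold
# speedup here is a hypothetical illustration, not a prediction.
SPEEDUP = 1_000_000

def wall_clock_years(subjective_years, speedup=SPEEDUP):
    """Calendar years needed for the given subjective thinking time."""
    return subjective_years / speedup

# The 40,000 subjective years of R&D mentioned above, at that speedup:
print(wall_clock_years(40_000))  # 0.04 years, i.e. about two weeks
```

So the "40,000 years of work" scenario collapses into weeks of wall-clock time once the thinking happens at digital timescales, which is what makes the transition so abrupt.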
01:06:47.000And then you can colonize the space cost-effectively.
01:06:50.000You just need to send out one little probe that then can land on some resource and set up a production facility to make more probes, and then it spreads exponentially everywhere.
01:07:01.000And then if you want to, you could then, like, after that initial infrastructuring has happened, you could transport biological human beings to other planets if you wanted to.
01:07:11.000But it's not really where the action is going to be.
01:07:13.000But what if we were concerned there's some sort of a threat to the Earth?
01:08:03.000Yeah, so my guess would be after technological maturity, like after superintelligence.
01:08:08.000Now, with Mars, it's possible that there would be like a little kind of prototype colonization thing because people are really excited about that.
01:08:17.000So you could imagine some little demo projects.
01:08:22.000But if we're talking about something, say, that would survive long term, even if the Earth disappeared, like some kind of self-sustaining civilization, I think that's going to be very difficult to do until you have superintelligence, and then it's going to be trivial.
01:08:37.000So you think superintelligence could potentially be what, I mean, one of the applications would be to terraform Mars, to change the atmosphere, to make it sustainable for biological life.
01:08:54.000Now, this is a very radical condition, technological maturity, because maybe there are additional technologies we can't even think of yet, but even just from what we already know about physics, etc.,
01:09:07.000we can sort of see possible technologies that we're not yet able to build, but we can see that they would be consistent with physics, that they would be stable structures.
01:09:16.000And already that creates a vast space of things you could do.
01:09:21.000And so, for example, I think it would be possible at technological maturity to upload human minds into computers, for example.
01:09:32.000You think that's going to happen, like Ray Kurzweil stuff?
01:09:34.000Well, I think, again, it would be technologically possible at technological maturity to do it.
01:09:40.000Now, whether it's actually going to happen then depends, A, do we reach technological maturity?
01:09:45.000And B, are we interested in using our technology for that purpose at that time?
01:09:52.000But both of those seem kind of reasonably...
01:10:44.000There are people I care about here and projects and maybe even opportunities to try to make some difference.
01:10:50.000If we actually are in this weird time right now, different from all of earlier human history, when nothing really much was happening, and we're not yet...
01:11:23.000And, you know, if you have some ambition to try to do some good in the world, then that kind of can be a very exciting prospect as well.
01:11:32.000Like, there might be no other better time to exist if your goal is to do good.
01:11:37.000Yeah, we might be in the golden years.
01:12:09.000So it's an exciting, crazy time where all these changes are taking place really rapidly.
01:12:14.000Like, if you were from the future, this might be the place where you would travel to, to experience what it was like to see this immense change take place almost instantaneously.
01:12:25.000Like, if you could go back in time to a specific time in history and experience what life was like, to me, I think I'd probably pick ancient Egypt, like, during the days of the pharaohs.
01:14:30.000We think of the pyramids and the slave gangs.
01:14:33.000But of course, for most Egyptians, most of the time, they would be picking weeds from their field or putting their baby to sleep or stuff like that.
01:14:41.000So kind of the typical moment of human existence.
01:14:45.000They don't even think it's slaves anymore, I don't think.
01:14:47.000I think they think it's skilled labor based on their diet.
01:14:50.000Based on the diet, the utensils that they found in these camps, these workers' camps...
01:14:54.000They think that these were highly skilled craftspeople, that it wasn't necessarily slaves.
01:15:01.000They used to think it was slaves, but now, because of the bones and the food, they know they were eating really well, and also because of the level of sophistication involved.
01:15:12.000This is not something you just get kind of slaves to do.
01:15:15.000This seems to be that there was a population of structural engineers, that there was a population of skilled construction people, and that they tried to, you know, utilize all of these great minds that they had back then and put this thing together.
01:15:32.000I think that's the spot that I would go to because I think it would be amazing to see so many different innovative times.
01:15:39.000I mean, it would be amazing to be alive during the time of Genghis Khan or to be alive during some of the wars of 1,000, 2,000 years ago just to see what it was like.
01:15:56.000But I think if I was in the future, some weird dystopian future where artificial intelligence runs everything and human beings are linked to some sort of neurological implant that connects us all together, and we long for the days of biological independence, we would like to see: what was it like when they first started inventing phones?
01:16:18.000What was it like when the internet was first opened up for people?
01:16:22.000What was it like when someone had someone like you on a podcast and was talking about
01:16:41.000this really Goldilocks period of great change, where we're still human, but we're worried about privacy.
01:16:47.000We're concerned our phones are listening to us.
01:16:50.000We're concerned about surveillance states, and people put little stickers over the laptop camera.
01:16:55.000We see it coming, but it hasn't quite hit us yet.
01:16:59.000We're just seeing the problems that are associated with this increased level of technology in our lives.
01:17:09.000Which is, yeah, that is a strange thing.
01:17:12.000If we add up all these pieces, it does put us in this very weirdly special position.
01:17:18.000And you wonder, hmm, it's a little bit too much of a coincidence.
01:17:25.000It might be the case, but yeah, it does put some strain on it.
01:17:28.000When you say a little too much of a coincidence, how so?
01:17:32.000I mean, I guess the intuitive way of thinking about it, like what are the chances that just by chance you would happen to be living in the most interesting time in history, being like a celebrity, like whatever, like that's pretty low prior probability.
01:17:54.000And so that could just be, I mean, if there's a lottery, somebody's got to have the ticket, right?
01:18:03.000Or, yeah, or we are wrong about this whole picture, and there is some very different structure in place, which would make our experiences more typical.
01:19:13.000Now, I'm in this weird position where my work is actually to think about big-picture questions.
01:19:18.000So it kind of comes in through my work as well.
01:19:23.000When you're trying to make sense of our position, our possible future prospects, the levers which we might have available to affect the world, what would be a good and bad way of pulling those levers, then you have to try to put all of these constraints and considerations together.
01:19:40.000And in that context, I think it's important.
01:19:44.000I think if you are just going about your daily existence, then it might not really be very useful or relevant to constantly try to bring in hypotheses about the nature of our reality and stuff like that.
01:20:02.000Because for most of the things you're doing on a day-to-day basis, they work the same, whether it's inside a simulation or in basement-level physical reality.
01:20:11.000You still need to get your car keys out.
01:20:14.000So in some sense, it kind of factors out and is irrelevant for many practical intents and purposes.
01:20:20.000Do you remember when you started to contemplate the possibility of a simulation?
01:20:27.000No, I mean, I remember when the simulation argument occurred to me, which is different. For as long as I can remember, it's been a possibility, like, oh, it could all be a dream, it could be a simulation. But there is this specific argument that narrows down the range of possibilities, where the simulation hypothesis is then one of only three. What are the three options?
01:20:54.000Well, one is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity.
01:21:06.000Could you define technological maturity?
01:21:09.000Well, say having developed at least all those technologies that we already have good reason to think are physically possible.
01:21:17.000So that would include the technology to build extremely large and powerful computers on which you could run detailed computer simulations of conscious individuals.
01:21:32.000So that kind of would be a pessimistic, like if almost all civilizations at our stage failed to get there, that's bad news, right?
01:21:41.000Because then we'll fail as well, almost certainly.
01:21:49.000Option two is that there is a very strong convergence among all technologically mature civilizations in that they all lose interest in creating ancestor simulations or these kinds of detailed computer simulations of conscious people like their historical predecessors or variations.
01:22:07.000So maybe they have all of these computers that could do it, but for whatever reason, they all decide not to do it.
01:22:13.000Maybe there's an ethical imperative not to do it or some other...
01:22:16.000I mean, we don't really know much about these post-human creatures and what they want to do and don't want to do.
01:22:42.000Downloading consciousness into a computer, it almost ensures that there's going to be some type of simulation.
01:22:48.000If you have the ability to download consciousness into a computer, once it's contained into this computer, what's to stop it from existing there?
01:22:58.000As long as there's power and as long as these chips are firing and electricity is being transferred and data is being moved back and forth, you would essentially be in some sort of a simulation.
01:23:13.000Well, I mean, if you have the capability to do that and also the motive...
01:23:16.000It would have to simulate something that resembles some sort of a biological interface.
01:23:23.000Otherwise, it's not going to know what to do, right?
01:23:25.000So we have these kind of virtual reality environments now that are imperfect but improving.
01:23:33.000And you could kind of imagine that they get better and better and then you have a perfect virtual reality environment.
01:23:39.000But imagine also that your brain, instead of sitting in a box with big headphones and some glasses on, the brain itself also could be part of the simulation.
01:24:08.000Here is one assumption coming in from outside the simulation argument, and one can talk about it separately, but it's the idea that I call it the substrate independence thesis, that you could in principle have conscious experiences implemented on different substrates.
01:24:26.000It doesn't have to be carbon atoms, as is the case with the human brain.
01:24:31.000What creates conscious experiences is some kind of structural feature of the computation that is being performed.
01:24:38.000Rather than the material that is used to underpin it.
01:24:42.000So in that case, you could have a simulation with detailed simulations of brains in it, where maybe every neuron and synapse is simulated, and then those brains would be conscious.
01:24:55.000Well, no, so the possibility number two is that these post-humans just are not at all interested in doing it.
01:25:00.000And not just that some of them don't, but like of all these civilizations that reach technological maturity, that's kind of pretty uniformly, just don't do that.
01:25:25.000But yeah, I've refrained from giving a very precise probability.
01:25:32.000Partly because, I mean, if I said some particular number, it would get quoted, and it would create this maybe sense of false precision.
01:25:40.000The argument doesn't allow you to derive that the probability is X, Y, or Z. It's just that at least one of these three has to obtain.
01:25:56.000You could just make up any story and we have no evidence for it.
01:25:59.000But it seems that there are actually, if you start to think everything through, quite tight constraints on what probabilistically coherent views you could have.
01:26:08.000And it's kind of hard even to find one overall hypothesis that fits this and various other considerations that we think we know.
01:26:17.000The idea would be that if there is one day the ability to create a simulation, that it would be indiscernible from reality itself.
01:26:30.000If this is just biological life, we're just extremely fortunate to be in this Goldilocks period.
01:26:35.000But we're working on virtual reality in terms of like Oculus and all these companies are creating these consumer-based virtual reality things that are getting better and better and really kind of interesting.
01:26:47.000You've got to imagine that 20 years ago there was nothing like that.
01:26:50.00020 years from now, it might be indiscernible.
01:26:53.000You might be able to create a virtual reality that's impossible to...
01:27:40.000Now, as I said, I think if you simulate the brain also, you have a cheaper overall system than if you have a biological component in the center surrounded by virtual reality gear.
01:27:55.000So you could, for a given cost, I think, create many more ancestor simulations with simulated brains in them rather than biological brains with VR gear.
01:28:07.000So in these scenarios where there would be a lot of simulations, most of them would be the kind where everything is digital.
01:28:15.000Because it's just cheaper with mature technology to do it that way.
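[Editor's note: the cost comparison Bostrom sketches here can be put in toy numbers. The unit costs below are made up purely for illustration; only the ratio matters.]

```python
# Toy version of "fully digital is cheaper than biological-plus-VR".
# All unit costs are invented for illustration.

budget = 1_000_000         # fixed compute budget at technological maturity
cost_digital = 10          # simulate the brain and its environment together
cost_bio_plus_vr = 10_000  # sustain a biological brain and render VR for it

n_digital = budget // cost_digital     # fully digital simulations affordable
n_bio_vr = budget // cost_bio_plus_vr  # biological-brain-with-VR-gear setups

# For the same budget, the digital kind vastly outnumbers the hybrid kind,
# which is why most simulations would be the fully digital sort.
```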
01:28:20.000This is one of the biggest, for lack of a better term, mindfucks.
01:28:27.000When you really stop and think about reality itself.
01:28:30.000That if we are living in a simulation, like, what is it?
01:28:53.000And aren't there people that have done some strange, impossible to understand calculations that are designed to determine whether or not there's a likelihood of us being involved in a simulation currently?
01:29:06.000Yeah, I think it slightly misses the point.
01:29:12.000So there are these attempts to figure out the computational resources that would be required if you wanted to simulate some physical system with perfect precision.
01:29:26.000So if we have some human, a brain, a room, let's say, and we wanted to simulate every little part, every atom, every subatomic particle, the whole quantum wave function: what would be the computational load of that?
01:29:45.000And would it be possible to build a computer powerful enough that you could actually do this?
01:29:51.000Now, I think the way that this misses the point is that it's not necessary to simulate all the details of the environment that you want to create in an ancestor simulation.
01:30:04.000You would only have to simulate it insofar as it is perceptible to the observer inside the simulation.
01:30:11.000So, if some post-human civilization wanted to create a Joe Rogan doing a podcast simulation, they'd need to simulate...
01:30:21.000Joe Rogan's brain, because that's where the experiences happen.
01:30:24.000And then whatever parts of the environment that you are able to perceive.
01:30:28.000So surface appearances, maybe of the table and walls.
01:30:32.000Maybe they would need to simulate me as well, or at least a good enough simulacrum that I could sort of spit out words that would sound like they came from a real human, right?
01:31:01.000Take a big electron microscope and look at finer structure and then you could take an atomic force microscope and you could see individual atoms even and you could perform all kinds of measurements.
01:31:13.000And it might be important that if you did that you wouldn't see anything weird because physicists do these experiments and they don't see anything weird.
01:31:20.000But then you could kind of fill in those details like if and when somebody were performing those experiments.
01:31:25.000That would be vastly cheaper than continuously running all of this.
01:31:29.000And so this is the way a lot of computer games are designed today, that they have a certain rendering distance.
01:31:35.000You only actually simulate the virtual world when the character goes close enough that you could see it.
01:31:42.000And so I imagine these kind of super-intelligent post-humans doing this.
01:31:45.000Obviously, they would have figured that out and a lot of other optimizations.
01:31:50.000So in other words, these calculations or experiments, I think, don't really tell on the hypothesis.
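[Editor's note: the "rendering distance" idea, filling in microscopic detail only if and when somebody looks, is essentially lazy evaluation with caching. A minimal sketch; the class, names, and costs are all invented for illustration.]

```python
# Toy "render on observation" world: detail is only computed when an observer
# actually looks, the way games only render within draw distance.

class LazyWorld:
    def __init__(self):
        self._detail_cache = {}   # regions already rendered, keyed by detail level
        self.compute_spent = 0    # stand-in for total simulation cost

    def _render(self, region, level):
        # Stand-in for expensive fine-grained physics (atoms, wave functions):
        # cost grows steeply with the level of detail requested.
        self.compute_spent += 10 ** level
        return f"{region}@detail{level}"

    def observe(self, region, level=1):
        # Fill in detail only if and when someone performs the observation,
        # and reuse it if the same observation is repeated.
        key = (region, level)
        if key not in self._detail_cache:
            self._detail_cache[key] = self._render(region, level)
        return self._detail_cache[key]

world = LazyWorld()
world.observe("table", level=1)             # ordinary perception: cheap
cost_everyday = world.compute_spent
world.observe("table", level=6)             # atomic-force-microscope look:
cost_with_microscope = world.compute_spent  # expensive, but paid only on demand
```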
01:31:58.000Without assigning a probability to either one of those three scenarios, what makes you think?
01:32:06.000If you do stop and think, I think we're in a simulation, what are the things that are convincing to you?
01:32:13.000Well, it would mainly go through the simulation argument.
01:32:17.000To the extent that I think the two alternative hypotheses are improbable, that would kind of shift the probability mass onto the third remaining option.
01:32:51.000I can kind of unfold the argument a little bit more and look at it in a more granular way.
01:32:55.000So suppose that the first two options are false.
01:32:59.000So some non-trivial fraction of civilizations at our stage do get through.
01:33:03.000And some non-trivial fraction of those are still interested.
01:33:09.000Then I think you can convincingly show that by using just a small portion of their resources they could create very, very many simulations.
01:33:20.000And you can show that or argue for that by comparing the computational power of systems that we know are physically possible to build.
01:33:31.000We can't currently build them, but we could see that you could build them with nanotech and if you have planetary-sized resources on the one hand.
01:33:38.000And on the other hand, estimates of how much compute power it would take to simulate a human brain.
01:33:45.000And you find that a mature civilization would have many, many orders of magnitude more.
01:33:50.000So that even if they just used 1% of their compute power of one planet for one minute, they could still run thousands and thousands and thousands of these simulations.
01:34:00.000And they might have billions of planets and they might last for billions of years.
01:34:04.000So the numbers are quite extreme, it seems.
01:34:07.000So then what you get is this implication that if the first two options are false, it would follow that there would be many, many more simulated experiences of our kind than there would be original experiences of our kind.
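[Editor's note: the counting step above can be made concrete with placeholder numbers. These are not the estimates from Bostrom's paper; only the orders-of-magnitude logic matters.]

```python
# Back-of-the-envelope version of "many more simulated than original".
# Every number below is an illustrative placeholder.

real_histories = 1               # one original run of our kind of experience
sims_per_civilization = 10 ** 6  # even a tiny slice of mature compute suffices
civilizations = 10 ** 3          # civilizations that get there and stay interested

simulated = sims_per_civilization * civilizations

# If you reason as a random sample from all such experiences, the chance of
# being in the original history is vanishingly small: roughly one in a billion
# with these made-up inputs.
p_original = real_histories / (real_histories + simulated)
```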
01:34:25.000So the idea is that if we continue to innovate, if human beings or intelligent life in the cosmos continues to innovate, that creating a simulation is almost inevitable?
01:34:48.000The first option, if human beings do figure out a way to not die and stay innovative and we don't have any sort of natural disasters or man-made created disasters, then step two,
01:35:03.000if we don't decide to not pursue this.
01:35:08.000If we continue to pursue all various forms of technological innovation, including simulations, that it becomes inevitable.
01:35:18.000If we get past those two first options, it becomes inevitable that we pursue it.
01:35:39.000Because it's so cheap at technological maturity, if you have a cosmic empire of resources, they don't have to have a very big desire to do this.
01:35:48.000They might just think, well, you know...
01:35:51.000Well, that was the big question that Elon said he would ask artificial intelligence.
01:35:55.000He said, what's beyond the simulation?
01:36:36.000I mean, I don't think it's ridiculous to consider.
01:36:38.000I think it might be beyond us, but maybe we would be able to form some abstract conception of what it is.
01:36:44.000I mean, in fact, if the path to believing the simulation hypothesis is the simulation argument, then we have a bunch of structure there that gives us some idea.
01:36:54.000Like, there would be some advanced civilization that would have developed a lot of technology over time, including compute technology.
01:38:35.000Yeah, I mean, that looks kind of plausible, but maybe there are further big discoveries or revelations that would maybe not falsify the simulation but change the interpretation. It's hard to know in advance what that would be.
01:38:49.000Now, is the concept that if there is a simulation that all the historical record is simulated as well?
01:38:57.000Well, there are different options there, and there might be many different simulations that are configured differently.
01:39:03.000There could be ones that run for a very long time, ones that run for a short period of time, ones that simulate everything and everybody, others that just focus on some particular scene or person.
01:39:15.000It's just a vast space of possibilities there.
01:39:19.000And which ones of those would be most likely is really hard to say much about because it would depend on the reasons for creating these simulations, like what would the interests of these hypothetical post-humans be.
01:39:30.000Have you ever had a conversation with a pragmatic, capable person who really understands what you're saying, but they disagree about even the possibility of a simulation?
01:39:45.000It must have occurred, but it doesn't tend to be the place where the conversation usually goes.
01:39:52.000Where does the conversation usually go?
01:39:55.000Well, I mean, I move in kind of unrepresentative circles.
01:40:00.000So I think amongst the folk I interact with a lot, I think a common reaction is that it's plausible and still there is some uncertainty because these things are always hard to figure out.
01:40:16.000But we should assign it some probability.
01:40:21.000But I'm not saying that would be the typical reaction if you kind of did a Gallup survey or something like that.
01:40:28.000I mean, another common thing is, I guess, to misinterpret it in some way or another.
01:40:39.000And there are different versions of that.
01:40:41.000So one would be this idea that in order for the simulation hypothesis to be true, it has to be possible to simulate everything around us to perfect microscopic detail, which we discussed earlier.
01:40:58.000Then some people might not immediately get this idea that the brain itself could be part of the simulation.
01:41:03.000So they imagine it would be plugged in with a big cable that you could feel if you somehow reached behind you. That would be another possible common misconception, I guess.
01:41:21.000Then I think a common thing is to conflate the simulation hypothesis with the simulation argument.
01:41:27.000The simulation hypothesis is we are in a simulation.
01:41:30.000The argument is that one of these three options is true, only one of which is the simulation hypothesis.
01:41:40.000How do you factor dreams into the simulation hypothesis?
01:41:43.000Well, I think they are irrelevant to it.
01:41:46.000That is that whether or not we are in a simulation, people presumably still have dreams and there are other reasons and explanations for why that would happen.
01:41:56.000So you have dreams even if you're in the simulation?
01:42:59.000So, yeah, I would not be inclined to think that this would be an explanation.
01:43:05.000If somebody has those kind of experiences, it's probably not because we are...
01:43:10.000Even if the simulation hypothesis is true, it's probably not the explanation.
01:43:14.000The concept of creativity, how does that play into a simulation?
01:43:20.000If during the simulation you're coming up with these unique creative thoughts, are these unique creative thoughts your own or are these unique creative thoughts stimulated by the simulation?
01:43:34.000They would be your own in the sense that it would be your brain that was producing them.
01:43:38.000Something else would have produced your brain.
01:43:40.000But obviously there's some incredible influences on your brain if you're involved in some sort of an external stimulation.
01:43:46.000That's true in physical reality as well.
01:43:54.000I think it would be potentially as much your own in the simulation as it would be outside the simulation.
01:44:01.000I mean, unless the simulators had, for whatever reason, set it up with the view that they just wanted to have, oh, this is Rogan coming up with this particular idea, and configured the initial conditions in just the right way to achieve that.
01:44:17.000Maybe then, when you come up with it, maybe it's less your achievement than the people who set up the initial conditions.
01:44:28.000Because the reason I ask that is all ideas, everything that gets created, all innovation, initially comes from some sort of a point of someone figuring something out or coming up with a creative idea.
01:44:41.000Like everything that you see in the external world, like everything from televisions to automobiles, was an idea.
01:44:47.000And then somebody implemented that idea or groups of people implemented the technology involved in that idea and then eventually it came to fruition.
01:44:54.000If you're in a simulation, How much of that is being externally introduced into your consciousness by the simulation?
01:45:05.000And is it pushing the simulation in a certain direction?
01:46:30.000I think the kind of simulation for which there would be the clearest case of that being possible would be one where all the people you perceive are simulated, each with their own brain.
01:46:43.000Because then you could get the realistic behavior out of the brain if you simulated the whole brain at a sufficient level of detail.
01:46:52.000So everyone you interact with is also a simulation?
01:46:55.000Well, that type of simulation should certainly be possible.
01:46:58.000Then it's more of an open question whether it would also be possible to create simulations where there was, say, only one person conscious and the others were just like simulacra.
01:47:12.000They acted like humans, but there's nothing inside.
01:47:16.000So these would be in philosopher's parlance zombies, that is...
01:47:22.000It's like a technical term, but it means when philosophers discuss it, somebody who acts exactly like a human but with no conscious experience.
01:47:28.000Now, whether those things are possible or not is an open question.
01:47:33.000Do you consider that ever when you're communicating with people?
01:48:19.000That in reality you're always kind of uncertain.
01:48:22.000The second would be that even if you were in that kind of simulation, it might still be that behaviorally what you should do is exactly the same as if you were in the other simulation.
01:48:34.000So it might not have that much day-to-day implications.
01:48:40.000Do you think there's psychological benefits for interacting with life as if it's a simulation?
01:48:46.000No, I don't think that would be an advantage.
01:49:09.000But I think, to a first approximation, the same things that would work well and make a lot of sense to do in physical reality would also be our best bets in a simulated reality.
01:49:27.000Like, even if it's a simulation, you must behave in each and every instance as if it's not.
01:49:36.000If you know, if you had a test you could take, like a pregnancy test, when you went to the CVS and you pee on a strip and it tells you, guess what, Nick?
01:50:27.000I think there are certain possibilities that look kind of far-fetched if we're not in a simulation that become, like, more realistic if we are.
01:50:38.000So one obvious one is, like, if a simulation could be shut off, like if the computer where the simulation is running is if the plug is pulled, right?
01:50:47.000So we think the physical universe, as we normally understand it, can't just suddenly pop out of existence.
01:50:52.000There's a conservation of energy and momentum and so forth.
01:50:55.000But a simulated universe, that seems like something that could happen.
01:50:59.000It doesn't mean it is likely to happen, and it doesn't say anything about what time frame, but it at least enters as a possibility where it was not there before.
01:51:07.000Other things as well become maybe more similar to various theological possibilities that exist.
01:51:16.000And in fact, through a very different path, it maybe leads to some of the same destinations that people have arrived at by thinking about theology and such.
01:51:37.000I think there is no logically necessary connection either way.
01:51:41.000But there are some kind of structural parallels, analogs, between the situation of a simulated creature to their simulators and a created entity to their creator.
01:51:55.000That are interesting, although kind of different.
01:51:59.000So that might be kind of comparisons there that you could make that would give you some possible ways of proceeding.
01:52:26.000But the concept is so prevalent and it's so common and it's so often discussed.
01:52:32.000It's interesting how much it has spread over just the last 10 to 15 years. It's interesting how ideas can migrate from some kind of extreme radical fringe, and some decade or two later,
01:52:58.000they're just kind of almost common sense.
01:53:02.000Well, we have a great ability to get used to things.
01:53:05.000I mean, this comes back to our discussion about the pace of technological progress.
01:53:09.000It seems like the normal way for things to be.
01:53:12.000We are very adaptable creatures, right?
01:53:15.000You can adjust to almost everything, and we have no kind of external reference point, really, and mostly these judgments...
01:53:25.000are based on what we think other people think.
01:53:28.000So if it looks like some high-status individual, Elon Musk or whatever, seems to take the simulation argument seriously, then people think, oh, it's a sensible idea.
01:53:38.000And it only takes like one or two or three of those people that are highly regarded and suddenly it becomes normalized.
01:53:47.000Is there anyone highly regarded that openly dismisses this possibility?
01:53:52.000There must be, but I'm not sure they would have bothered to go on the record specifically.
01:53:58.000I guess the people who are dismissive of it wouldn't maybe even bother to address it or something.
01:54:07.000I'm trying to think, yeah, and I'm drawing a blank on whether there's a particular person I could name.
01:54:11.000I would love to hear the argument against it.
01:54:13.000I would love to hear someone like you or Elon interact with them and try to volley back and forth these ideas.
01:55:04.000And then it's kind of come in waves, like every year or so.
01:55:08.000There should be like some new group of, either a new generation or some new community that hears about it for the first time, and it kind of gets a new wave of attention.
01:55:19.000But in parallel to these waves, there's also this chronic...
01:55:40.000Maybe if there were some big flaw in the idea, it would have been discovered by now.
01:55:43.000So if it's been around for a while, it makes it a little bit more credible.
01:55:46.000It might also be slightly assisted by just technological progress.
01:55:51.000If you see virtual reality getting better and such, it becomes maybe easier to imagine how it could become so good one day that you could create a perfectly flawless one.
01:55:59.000I was going to introduce that as option four.
01:56:03.000Is option four the possibility that one day we could conceivably create some sort of an amazing simulation, but it hasn't been done yet.
01:56:13.000And this is why it's become this topic of conversation is that there's some need for concern because as you extrapolate technology and you think about where it's going now and where it's headed, There could conceivably be one day where this exists.
01:56:26.000Should we consider this and deal with it now?
01:56:29.000Well, so I'd say that that would be highly unlikely, in that if the first two are wrong, then there are many, many more simulated ones than non-simulated ones over the course of all of history.
01:56:42.000Over the course of all of history, but what if it hasn't yet happened?
01:56:45.000Right, but so then the question is, given that – Sure.
01:57:33.000Yeah, but you could make it even just, you could look at the narrow case of just the Earth.
01:57:39.000Let's just look in the narrow case of just the Earth.
01:57:41.000In the narrow case of just the Earth, if the historical record is accurate, if it's not a simulation, then it seems very reasonable that we're just dealing with incremental increases in technology that's pretty stunning and pretty profound currently, but that we haven't Well,
02:00:05.000So if you imagine, for example, all of these people who would exist in this scenario having to place bets on whether they're simulated or not.
02:00:17.000And you think about two possible different ways of reasoning about this.
02:00:21.000So one is you assume you're a randomly selected individual from all these individuals and you bet accordingly.
02:00:30.000Yeah, so then you would bet you're one of the simulated ones, because for a randomly selected one, if most are simulated, most lottery tickets are...
02:00:37.000But why are we assuming that most are simulated?
02:00:45.000But why already when it hasn't existed yet?
02:00:50.000Let's say for the sake of argument, because I don't really have an opinion on this, pro or con; it's up in the air.
02:00:55.000But if I was going to argue about pragmatic reality, the practicality of biological existence as a person that has a finite lifespan, you're born, you die, you're here right now, and we're a part of this just long line of humanity that's created all these incredible things that's led up to civilization.
02:01:15.000That's led up to this moment right now where you and I are talking into these microphones.
02:02:35.000Now, why would we assume… Why would a simulation be the most likely scenario when we've experienced, at least we believe we've experienced, all this innovation in our lifetime?
02:02:48.000We see it moving towards a certain direction.
02:02:50.000Why wouldn't we assume that that hasn't taken place yet?
02:02:55.000Yeah, I think to try to argue for the premise that conditional on there being first an initial segment of non-simulated Joe Rogan experiences and then a lot of other segments of simulated ones, that conditional on that being the way the world in totality looks,
02:03:14.000you should think you're one of the simulated ones.
02:03:17.000Well, to argue for that, I think then you need to roll in this piece of probability theory called anthropics, which I alluded to.
02:03:25.000And just to pull one little element out of there to kind of create some initial plausibility for this.
02:03:31.000If you think in terms of rational betting strategies for this population of Joe Rogan experiences, the strategy that
02:03:40.000would lead to the overall maximal amount of winning would be for all of you to think you're probably one of the simulated segments.
02:03:48.000If you had the general reasoning rule that in this kind of situation you should think you're the initial segment of non-simulated Rogan, then the great preponderance of these simulated experiences would lose their bets.
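[Editor's note: the betting argument can be sketched directly. One original segment plus many simulated ones each place the same bet, since from the inside they cannot tell which kind they are. The population size is an arbitrary illustration.]

```python
# Toy version of the anthropic betting argument: which uniform rule wins
# the most bets across the whole population of indistinguishable segments?

n_simulated = 999
population = ["original"] + ["simulated"] * n_simulated

def wins(bet):
    # Every instance follows the same rule; a bet wins when it matches
    # what that instance actually is.
    return sum(1 for actual in population if bet == actual)

wins_if_all_bet_simulated = wins("simulated")  # almost everyone wins
wins_if_all_bet_original = wins("original")    # only the one original wins
```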
02:04:04.000But there's no evidence of a simulation.
02:04:08.000Well, I'd say that there is indirect evidence insofar as there is evidence against these two alternatives.
02:04:16.000Well, the two alternatives being that intelligent life goes extinct before they create any sort of simulation or that they agree to not create a simulation.
02:04:28.000But what about if they're going to create a simulation?
02:04:30.000There has to be a time before the simulation is created.
02:04:34.000Why wouldn't you assume that that time is now currently happening when you've got a historical record of all the innovation that's leading up to today?
02:04:43.000I think the historical record would be there in the simulation.
02:04:47.000But why would it have to be there in a simulation and not be there in reality?
02:04:52.000Well, I mean, it could be there in the simulation if it's a kind of simulation that tracks the original, yeah.
02:04:58.000If it's a fantasy simulation, then, you know, maybe it wouldn't be there.
02:05:17.000But if you think about your actions, which can't distinguish between these different possible locations in space-time where you could be, most of the impact of your decisions will come from impacting all of these million Joe Rogan instances.
02:05:33.000Yeah, but this is once a simulation has been proven to exist, which it hasn't been.
02:05:38.000We have, at least in terms of what we all agree, we're proven to have biological lives.
02:05:46.000We breed, we sleep, we eat, we travel on planes.
02:05:51.000All these things are very tangible and real.
02:05:52.000I'd say those are true, probably even if we're in a simulation.
02:05:57.000But why would you assume we're in a simulation?
02:06:44.000You would have weaker probabilistic evidence insofar as you had evidence against the two alternatives.
02:06:52.000So, for example, if you got some evidence that suggested it was less likely that all civilizations at our stage go extinct before maturity.
02:07:01.000Let's say we get our act together, we eliminate nuclear weapons, we become prudent and...
02:07:08.000We check all the asteroids, nothing is on collision course with Earth.
02:07:11.000That would kind of tend to lower the probability of the first, right?
02:07:25.000We develop more advanced computers and VR, and we're getting close to this point ourselves, and we still remain really interested in running ancestor simulations.
02:07:35.000We think this is what we really want to spend our resources on as soon as we can make it work.
02:07:40.000That would move probability over from the second alternative.
02:07:45.000It's less likely that there is this strong convergence among all post-human technologically mature civilizations if we ourselves are almost post-human and we still have this interest in creating ancestor simulations.
02:07:57.000So that would shove probability over to the remaining alternative.
02:08:04.000Suppose that a thousand years from now we have built our own planetary-sized computer that can run these simulations, and we are just about to switch it on, and it will create simulations of precisely people like ourselves.
02:08:18.000And as we move towards the big button to initiate this, the probability of the first two hypotheses basically goes to zero, and then we would have to conclude with near certainty that we are ourselves in a simulation as we push this button to create a million simulations.
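[Editor's note: the way evidence against the first two options "shoves" probability onto the third is an ordinary Bayesian update. The priors and update factors below are made up for illustration.]

```python
# Sketch: three mutually exclusive options; evidence scales down the first
# two, and renormalizing pushes the mass onto the remaining one.

def renormalize(p):
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

# Start from an even split across the three options of the argument.
options = renormalize({"extinction": 1.0, "loss_of_interest": 1.0, "simulation": 1.0})

# Watching ourselves approach maturity while staying interested in ancestor
# simulations makes the first two hypotheses much less likely (factors invented).
evidence = {"extinction": 0.01, "loss_of_interest": 0.01, "simulation": 1.0}
posterior = renormalize({k: options[k] * evidence[k] for k in options})
# posterior["simulation"] is now close to 1: near certainty at the button.
```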
02:08:37.000Once we achieve that state, but we have not achieved that state, why would we not assume that we are in the actual state that we currently experience?
02:08:47.000We should assume that we are ignorant as to which of these different time slices we are, which of these different Joe Rogan experiences is the present one.
02:09:00.000We just can't tell from the inside which one it is.
02:09:06.000If you could see some objective clock and say that, well, as yet the clock is so early that no simulations have happened, then obviously you could conclude that you're in the original history.
02:09:19.000But if we can't see that clock outside the window, if there is no window in the simulation to look out, then it would look the same.
02:09:26.000And then I'd say we have no way of telling which of these different instances we are.
02:09:31.000One of them might be that there is no simulation and that we're moving towards that simulation.
02:09:37.000That one day it could be technologically possible.
02:09:48.000But not even conditioned on those other alternatives being wrong.
02:09:51.000Let's say that human beings haven't blown themselves up yet.
02:09:55.000Let's say that human beings haven't come up with – there is no need to make the decision to not activate the simulation because the simulation hasn't been invented yet.
02:10:07.000Isn't it also a possibility that the actual timeline of technological innovation that we all agree on is real, that we're experiencing this as real, live human beings not in a simulation, and that one day the simulation could potentially take place but has not yet?
02:11:08.000So I think maybe the difference is I tend to think in terms of the world as a four-dimensional structure with time being one dimension, right?
02:11:52.000Yeah, so in your option, the vast majority of all these experiences that will ever have existed will also be simulated, if I understand your option correctly.
02:12:08.000But as I understand your option is that if we look at the universe at the end of time and we look back, there will be a lot of simulated versions of you and then one original one.
02:14:05.000No evidence that it's ever even going to be possible technologically.
02:14:09.000We think there could be, but it hasn't happened yet.
02:14:12.000So why would you assume that we are in a simulation currently when there's no evidence whatsoever that it's even possible to create a simulation?
02:14:22.000Maybe there is some alternative way of trying to explain how I'm thinking.
02:15:23.000So we have a world where either there is one hour-long experience of you in the room, or else it's a world with ten Joe Rogan experiences in the room with an episode of amnesia in between.
02:15:39.000But when you're in the room now, you find yourself in this room, you're wondering, hmm...
02:15:45.000Is this the first time I'm in this room?
02:16:15.000I guess I could ask you, like, if you wake up in this room, what do you think the probability should be that you're, like, at time one versus at some later time?
02:16:28.000Well, what is the probability that I'm actually here versus what is the probability of this highly unlikely scenario that I keep getting drugged over and over again every hour?
02:16:37.000Well, we assume that, like, you're certain that the setup is such that there was this mad scientist who had the means to do this and he was going to flip this coin.
02:16:46.000So we're assuming that you're sure about that either way.
02:16:49.000The only thing you're unsure about is how the coin landed.
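The betting structure of this amnesia-room setup can be made concrete with a small simulation (an illustrative sketch only; the function name, trial count, and seed are mine, not from the conversation): a fair coin decides between one awakening and ten indistinguishable ones, and we count what fraction of all awakenings fall in the ten-awakening branch.

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the mad-scientist room: heads -> 1 awakening, tails -> 10.

    Returns the fraction of all awakenings that belong to the
    10-awakening (amnesia) branch.
    """
    rng = random.Random(seed)
    amnesia_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: a single one-hour experience
            total_awakenings += 1
        else:                    # tails: ten indistinguishable experiences
            amnesia_awakenings += 10
            total_awakenings += 10
    return amnesia_awakenings / total_awakenings

print(round(simulate(), 2))  # close to 10/11, about 0.91
```

So although the coin is fair, most awakenings-with-this-evidence sit in the amnesia branch – which is the intuition Bostrom is appealing to when he asks how you should bet on waking up.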
02:16:53.000Well, if that was a scenario where I knew that there was a possibility of a mad scientist and I could wake up over and over again, that seems like a recipe for insanity.
02:17:04.000Well, it's a philosophical thought experiment, so we can abstract away from the possibility of it.
02:17:08.000My point initially, and I'll get back to it, is there's no evidence at all that we're in a simulation.
02:17:13.000So why wouldn't we assume that the most likely scenario is taking place, which is we are just existing, and life is as it seems, but strange.
02:17:23.000Okay, so if you don't want to do this thought experiment...
02:17:27.000No, I do want to do a thought experiment, but it seems incredibly limited.
02:17:31.000Well, I'm trying to distill the probability theory part from the wider simulation.
02:17:39.000But I guess I could also ask you: if we were to move closer to this point where we ourselves can create simulations – if we survive, we become multi-planetary, we build planetary-sized computers –
02:17:53.000How would your probability in the simulation hypothesis change as we kind of develop?
02:17:59.000Well, it would change based on the evidence of some profound technological innovation that actually would allow– Yeah, I think...
02:18:28.000It's not a likely outcome, in that it would require you to postulate that you are this very unusual and special observer amongst all the observers that will exist.
02:18:38.000But everyone is unusual in their own way.
02:19:16.000Since we don't know what time it is now in external reality, and we therefore can't tell from looking at our evidence where we are – in a world where either there is just an original history and then it ends,
02:19:37.000or there is a world with an original history and then a lot of simulations.
02:19:41.000We need to think about how to assign probabilities given each of these two scenarios.
02:19:46.000And so then we have a situation that is somewhat analogous to this one with the amnesia room, where you have some number of episodes.
02:19:54.000And so the question is, in those types of situations, how do you allocate probability over the different hypotheses about how the world is structured?
02:20:05.000And this kind of betting argument is one type of argument that you can try to use to kind of get some grip on that.
02:20:15.000And another is by looking at various applications in cosmology and stuff where you have multiverse theories.
02:20:25.000Which say the universe is very big, maybe there are many other universes, maybe there are a lot of observers, maybe all possible observers exist out there in different configurations.
02:20:34.000How do you drive probabilistic predictions from that?
02:20:36.000It seems like whatever you observe would be observed by somebody, so how could you test that kind of theory?
02:20:43.000And this same kind of anthropic reasoning that I want to use in the context of the simulation argument also plays a role, I think, in deriving observational predictions from these kinds of cosmological theories,
02:21:00.000where you need to assume something like you're most likely a typical observer from amongst the observers that will ever have existed, or so I would suggest.
02:21:15.000I admit, as an asterisk, that this field of anthropic reasoning is tricky and not fully settled yet.
02:21:22.000And there are things there that we don't yet fully understand.
02:21:27.000But still, the particular application of anthropic reasoning that is relevant for the simulation argument, I think, is one of the relatively less problematic ones.
02:21:36.000So that, conditional on there being, by the end of time, a large number of simulated Joe Rogans and only one original one – conditional on that hypothesis – it would seem that most of your probability should be on being one of the simulated ones.
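The conditional claim here is just arithmetic over observer counts. A minimal sketch (the function name and example counts are mine, purely illustrative): with n simulated copies sharing your evidence and one original, the indifference step assigns probability n / (n + 1) to being one of the simulated ones.

```python
# Indifference over observers with identical evidence: if by the end of time
# there are n_simulated copies and n_original originals, credence in being
# simulated is the simulated share of all such observers.
def p_simulated(n_simulated: int, n_original: int = 1) -> float:
    return n_simulated / (n_simulated + n_original)

print(p_simulated(1))     # 0.5
print(p_simulated(1000))  # about 0.999
```

Nothing in the argument fixes n; the point is only that the probability approaches 1 as the number of simulated copies grows.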
02:21:53.000But I'm not sure I have any other ways of making it more vivid or plausible.
02:21:58.000No, I completely understand what you're saying.
02:22:01.000But I don't know why you're not willing to take into account the possibility that it hasn't occurred yet.
02:22:07.000The way I see it is that I have taken that into account and it receives the same probability that I'm that initial segment as I would give to any of the other Nick Bostrom segments that all have the same evidence.
02:22:20.000See, that's where we differ because I would give much more probability to the fact that we are existing right now in the current state as we experience it in real life, carbon life, no simulation, but that potentially one day there could be a simulation which leads us to look at the possibilities and look at the probabilities that it's already occurred.
02:22:48.000All right, so what we think happened is there was a big bang, planets formed, and then some billions of years later, we evolved, and here we are now, right?
02:22:55.000Suppose some physicists told you that, well, the universe is very big, and early on in the universe, on very, very rare occasions, there was some big gas cloud.
02:23:04.000In an infinite universe, this will happen somewhere, right?
02:23:09.000That is, a kind of Joe Rogan-like brain coming together for a minute and then dissolving in the gas.
02:23:16.000And yeah, if you have an infinite universe, it's going to happen somewhere.
02:23:19.000But there have got to be many, many fewer Joe Rogan brains in such situations than will exist later on planets, because evolution helps funnel probability into these kinds of organized structures, right?
02:23:34.000So, if some physicists told you that, well, this is the structure of our part of space-time.
02:23:42.000Like, there are a few very, very rare spontaneously materialized brains from gas clouds early in the universe, and then there are the normal Rogans much later.
02:23:50.000And there are, of course, many, many more normal ones.
02:23:53.000The normal ones happen in one out of every, you know, 10 to the power of 50 planets, whereas the weird ones happen in one out of 10 to the power of 100. – Normal versus weird, how so?
02:24:23.000But this is a thought experiment, which in fact actually probably matches reality in this respect.
02:24:30.000Most likely there's some other planets out there.
02:24:33.000I think the fact that it matches reality is, I think, irrelevant to the point I want to make.
02:24:38.000So if this turned out to be the way the world works, a few weird ones happening from gas clouds and then the vast majority are just normal people living on a planet.
02:24:48.000Would you similarly say, given that model, that you should think, oh, it might just as well be one of these gas cloud ones?
02:24:57.000Because after all, the other ones might not have happened yet.
02:25:09.000Anyway, I think that this would be a structurally similar situation where there would be a few exceptional early living versions that would be very small in numbers compared to the later ones.
02:25:22.000And if they allow themselves the same kind of reasoning where they would say, well, the other ones may or may not come to exist later on planets.
02:25:30.000I have no reason to believe I'm one of the planet living ones.
02:25:33.000Then it seems that in this model of the universe, you should think you're one of these early gas cloud ones.
02:25:40.000And as I said, I mean, this looks like it probably actually is the world we're living in, in that it looks like it's infinitely big and there would have been a few Joe Rogans spontaneously generated very early from random processes.
02:25:58.000They are going to be very few in number compared to ones that have, you know, arisen on planets.
02:26:05.000So that by taking the path you want to take with relation to the simulation argument, I wonder if you would not then be committed to thinking that you would be like, in effect, a Boltzmann brain in a gas cloud super early in the universe.
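The arithmetic behind the gas-cloud comparison can be made explicit, using the illustrative rates from the exchange above (1 in 10^50 planets for ordinary observers, 1 in 10^100 for spontaneously assembled ones; the variable names are mine):

```python
from fractions import Fraction

# Illustrative per-planet rates from the thought experiment above.
normal_rate = Fraction(1, 10**50)   # evolved, planet-living observers
weird_rate = Fraction(1, 10**100)   # spontaneous gas-cloud (Boltzmann) brains

# In a large enough universe both kinds exist, but normal observers
# outnumber weird ones by an enormous factor.
ratio = normal_rate / weird_rate
print(ratio)  # 10**50

# Under typicality reasoning, credence in being a gas-cloud brain is the
# weird share of all observers with your evidence.
p_weird = weird_rate / (weird_rate + normal_rate)
print(float(p_weird))  # about 1e-50
```

This is the structural point: refusing the typicality step for the simulation case would seem to license the same move here, i.e. taking seriously that you are one of the vanishingly rare early gas-cloud brains.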
02:26:20.000I still don't understand what you're saying.
02:26:21.000What I'm saying is what scientists agree on.
02:26:26.000If you believe in science and in the discoveries that people have so far agreed on, we've agreed that clouds form, that planets are created, that all the matter comes from inside the explosions of stars, and that it takes multiple rounds of this coalescing before we can develop carbon-based life forms.
02:26:49.000All that stuff science currently agrees on, right?
02:26:52.000And then we believe that single-celled organisms become multi-celled organisms through random mutation and natural selection.
02:26:58.000We get evolution, and then we agree that we have...
02:27:02.000We've come to a point now where technology has hit this gigantic spike that you described earlier.
02:27:08.000So human beings have created all this new innovation.
02:27:10.000Why wouldn't we assume that all this is actually taking place right now with no simulation?
02:27:17.000Yeah, I mean, the simulation argument is the answer to that, but with a qualification that A, the simulation argument doesn't even purport to prove the simulation hypothesis, because there are these two alternatives.
02:27:30.000B, that even if the simulation hypothesis is true, in many versions of it, it would actually be the case that, in the simulation, all of these things have taken place.
02:27:44.000And the simulation might go back a long time, and it might be a reality tracking simulation.
02:27:51.000Maybe these same things also happened before or outside the simulation.
02:28:07.000Well, to me it seems probable only if at least one of the other alternatives is true.
02:28:13.000Or, I admit that there is also this general possibility, which is always there, that I'm confused about some big thing, like maybe the simulation argument is wrong in some way.
02:28:26.000Just looking at the track record of science and philosophy, we find we're sometimes wrong.
02:28:34.000But if we're working within the parameters of what currently seems to me to be the case, that we would be the first civilization in a universe where there will later be many, many simulations seems unlikely, for those exact reasons.
02:28:54.000And that if we are the first, it's probably because one of the alternatives is true.
02:29:33.000But the idea of some argument or data or insight that, if only we got it, would radically change our mind about our overall scheme of priorities.
02:29:48.000Not just change the precise way in which we go about something, but kind of totally reorient ourselves.
02:29:54.000An example would be if you are an atheist and you have some big conversion experience and suddenly your life feels very different.
02:30:21.000At least up until very recently, we hadn't, in that there are these important considerations – whether it's AI; if this stuff about AI is true, maybe that's the one most important thing that we should be focusing on, and the rest is kind of frittering away our time as a civilization.
02:30:39.000So we can see that it looks like all earlier ages, up until very recently, were oblivious to at least one crucial consideration, insofar as they wanted to have maximum positive impact on the world.
02:30:52.000They just didn't know what the thing was to focus on.
02:30:54.000And it also seems kind of unlikely that we just now have found the last one.
02:31:04.000Given that we kept discovering these up until quite recently, we are probably missing out on one or, more likely, several more crucial considerations.
02:31:12.000And if that's the case, then it means that we are fundamentally in the dark.
02:31:20.000We might try to improve the world, but we are...
02:31:27.000overlooking maybe several factors, each one of which would make us totally change our mind about how to go about this.
02:31:35.000And so it's less of a problem, I think, if your goal is just to lead your normal life and be happy and have a happy family.
02:31:44.000Because there we have a lot more evidence, and it doesn't seem to keep changing every few years.
02:31:50.000Like we still know, yeah, have good relationships, you know, don't ruin your body, don't jump in front of trains, like these are tried and tested, right?
02:31:58.000But if your goal is to somehow steer humanity's future in such a way that you maximize expected utility, there it seems our best guess is keep jumping around every few years and we haven't kind of settled down into some stable conception of that.
02:32:15.000Nick, I'm going to have to process the conversation for a long time.