The Culture War - Tim Pool - August 30, 2024


The Culture War #79 Creationism vs Simulation Theory Debate, God or Atheism w/ Roman Yampolskiy & Brian Sauve


Episode Stats

Length

2 hours and 17 minutes

Words per Minute

184.4

Word Count

25,303

Sentence Count

1,826

Misogynist Sentences

8

Hate Speech Sentences

25


Summary

We live in a constructed reality. Or is it a simulation? Or is there something beyond the physical realm that we're living in? In this episode of The Culture War, host Tim Pool and guests Roman Yampolskiy, Brian Sauve, and Ian Crossland discuss the possibility that we are living in a simulation of some sort: perhaps it is all part of God's plan and we are here facing a test, or perhaps we are entertainment in a simulation that others are watching. This episode includes sponsor reads from BetMGM, GameSense, and Desjardins Insurance.


Transcript

00:00:00.000 Get ready for Las Vegas-style action at BetMGM, the king of online casinos.
00:00:05.880 Enjoy casino games at your fingertips with the same Vegas Strip excitement MGM is famous for
00:00:11.140 when you play classics like MGM Grand Millions or popular games like Blackjack, Baccarat, and Roulette.
00:00:17.940 With our ever-growing library of digital slot games, a large selection of online table games,
00:00:22.900 and signature BetMGM service, there's no better way to bring the excitement and ambience of Las Vegas home to you
00:00:29.320 than with BetMGM Casino. Download the BetMGM Casino app today.
00:00:34.960 BetMGM and GameSense remind you to play responsibly. BetMGM.com for T's and C's.
00:00:39.400 19 plus to wager. Ontario only. Please play responsibly.
00:00:42.700 If you have questions or concerns about your gambling or someone close to you,
00:00:45.800 please contact Connects Ontario at 1-866-531-2600 to speak to an advisor free of charge.
00:00:53.860 BetMGM operates pursuant to an operating agreement with iGaming Ontario.
00:00:57.060 There's this ongoing joke that there's two actually. One is, I don't know, at some point in 2016,
00:01:04.840 the Large Hadron Collider in Europe fired up and shattered reality, accidentally making Donald
00:01:10.680 Trump the president. And now we're trapped in some strange, fragmented universe where nothing quite
00:01:15.080 makes sense. And you got the Mandela effect and all that. But the other joke is that there are
00:01:19.700 writers who are writing everything that's going on. And people often say the writers of season two
00:01:26.580 are rehashing old ideas or whatever that may be. But what's funny about this is the joke behind all
00:01:32.820 of it is that certainly we live in a constructed reality of some sort because it is much too
00:01:38.660 interesting right now with everything that's going on. I have to make the point that we've lived
00:01:43.640 through, I don't know what, 50 historical moments this year alone. I don't know if 50 is the right
00:01:49.480 number, but I mean, at least in the past decade, it's been some substantial number. We've got never
00:01:54.720 before a candidate with no policy positions put in a few months in advance. The former president
00:02:00.440 was nearly assassinated. You've got January 6th. I mean, just the list is crazy. So perhaps it's
00:02:06.520 actually quite simple. We live in reality. Everything is what it is. And it's just the internet
00:02:11.600 has sped up the rate of communication, resulting in it feeling like history is smashing us in the
00:02:18.640 face. Or perhaps this is all part of God's plan and something significant is happening and we are
00:02:24.700 here and facing a test. Or additionally, we're in a simulation and people are watching. Now, I like the
00:02:30.300 idea that we're in a simulation and we're just entertainment for some species somewhere else,
00:02:34.860 just watching and enjoying the show and laughing that Donald Trump is president. But we're going to
00:02:38.180 have a big conversation about religion, spirituality, simulation theory, all of these things.
00:02:42.040 And so we've got a couple of guests who are joining us. I don't know if you want to go first,
00:02:45.040 Roman, introduce yourself. Sure. I'm Roman Yampolskiy. I'm faculty at the university. I do research on
00:02:50.380 computer science, artificial intelligence, superintelligence, cybersecurity. So simulation
00:02:55.300 is kind of a small subtopic within that bigger, very interesting framework. Do you believe we are in a
00:03:01.760 simulation? Oh, we're definitely in a simulation. This is going to be fun. And then Brian, how about you?
00:03:07.300 Well, my name is Brian Sauve, and I am a Christian pastor out in Ogden, Utah of a church called Refuge
00:03:13.240 Church. It's a Protestant and Reformed church. And I'm also the founder and president of a publishing
00:03:18.340 company called New Christendom Press, where I release music, finishing a book, and most importantly
00:03:24.420 for this conversation, do a couple podcasts. One of them is called Haunted Cosmos, where my co-host Ben
00:03:30.300 Garrett and I talk about the intersection of Christian theology with high strangeness, allegedly
00:03:36.840 supernatural phenomenon, government, conspiracy, and anything else that interests us because we can
00:03:41.440 do whatever we want. It's our show. And we talk a lot about materialism and how it fails as an
00:03:48.560 explanatory mechanism for the world and that sort of thing. So really interested in this topic.
00:03:53.480 Super cool, man. Well, I'm Ian Crossland here to join. And I agree with you, like matter,
00:03:57.900 when you talk about materialism and like matter, at what point is matter, like after plasma,
00:04:02.140 there's something is still going on, but it's just non-material. And I've been thinking a lot
00:04:06.400 about like remote viewing lately, and it's real. And the CIA works on remote viewing. And like,
00:04:11.820 does it have to do with quantum entanglement? Are we talking about subatomic phenomena? Or is there
00:04:15.580 something that's just literally beyond the physical realm as we know it? But let's-
00:04:20.240 We're going to get into it. And then also Ian is here. And for a couple of reasons, Ian has long
00:04:26.140 questioned spirituality and existence on TimCast IRL. And many people often ask him to sit down
00:04:31.660 with learned men who can help him better understand the nature of reality. And so I figure Ian adds that
00:04:38.080 wildcard element and the questions of all of these things, which will be rather interesting.
00:04:43.140 But where should we begin? I suppose I saw you were mentioned online in a tweet where you said
00:04:49.740 something to the effect of it's too interesting right now. And that is evidence that we are living
00:04:55.720 in a simulation. How so? I mean, I think we all get it. But let's start there. Why does that
00:05:01.220 make you think we're living in a simulation? That's a great question. And people question it.
00:05:05.140 They tell me, no, no, no, it's not the most interesting times. People always believed that,
00:05:09.120 you know, somebody invented fire, invented the wheel. It's going to be interesting in that way.
00:05:14.440 The difference is we're hitting meta interesting stuff. We are starting to create intelligence.
00:05:20.440 So a requirement for having simulations. And we're starting to create realistic simulations.
00:05:27.520 So that's two meta factors. If I was running ancestral simulations, I would be interested
00:05:34.300 in that timeframe. This is where it's who's going to create super intelligence. Will they be able to
00:05:39.820 control it? Can they tell the real world from fake world, setting up ethics for future simulations?
00:05:46.560 This is the interesting times, not the fire. But I think if we're going to base it off of
00:05:53.720 that, there's a lot more to break down in simulation theory too, especially. But if we're
00:05:57.180 going to base, you know, what's happening now and what's interesting, if we're going to use that as a
00:06:02.060 basis for why we're in a simulation, I honestly just default to we are entertainment. And if we as
00:06:08.040 a civilization were to create simulations, what would they be for? Sure, I guess universities are
00:06:13.540 running simulations, I guess, but that's, that's probably a tiny, tiny fraction of the simulations
00:06:18.960 that exist. So we have a plethora, probably tens of thousands of different simulations that exist in video
00:06:26.640 games, emulating reality for a variety of reasons. And it is only a small fraction of these
00:06:33.800 simulated realities that actually exist as some kind of research. I think it stands to reason that we
00:06:39.140 exist in some kind of entertainment. And I think the first thing we are probably going to do with AI,
00:06:45.360 why make TV shows anymore? Why hire crews and cameras and lighting and writers when we can just
00:06:52.560 create a million different AI simulated realities, and then let a decentralized
00:07:00.020 network figure out what is the most interesting one to watch. And you've got a bunch of weird alien
00:07:05.180 creatures or even humans sitting at their computers, and they're all hitting the like button on
00:07:10.040 Earth 2024. And this one dude sitting in his pod eating roaches. And he's like, guys, guys,
00:07:17.020 Donald Trump became president in this one. And then everyone starts watching. And so it seems to me
00:07:21.820 that if we are in a simulation, it's probably more likely entertainment than research.
00:07:25.180 You know, anecdotally, I vaped DMT for the first time in my life. And I witnessed these creatures like
00:07:31.620 they first of all, I just saw spiraling shape patterns, like the letter A, the letter F on its
00:07:37.060 side, like a triangle. And like, they were like ribbons of patterns of numbers and shapes. And then
00:07:42.540 these ribbons became three dimensional moving around, then it became the outline of this woman's
00:07:46.140 arm. And she was waving me towards her. And then I allowed the visualization to continue.
00:07:51.540 And then all of a sudden, it was like three beings, this woman, this male being, they were like
00:07:55.440 shimmering light. And they saw me and they were highly entertained. This is part of when
00:08:02.000 you're saying that you think it's entertainment for some other sort of being, I think you might
00:08:06.100 be right. And they were fascinated that I could see them. They were like, elated that I could see
00:08:11.580 them. It was like a video game character that realizes it's a video game character. And I was
00:08:16.000 the video game character for them. I don't know if it was just pure hallucination. I'm not the only one
00:08:21.740 that's talked about this kind of thing either. And it wasn't like I wanted to experience
00:08:25.200 that. That's just what happened after I puffed the stuff.
00:08:29.880 All right, Brian, help us out.
00:08:31.360 Well, that's really, we did an episode on DMT and ayahuasca. And I think it's interesting how
00:08:38.280 it relates to the simulation concept, because I do think that all of you are noticing something
00:08:43.740 that is true, as a cultural phenomena, even, even, I think that you guys are noticing that
00:08:49.460 this world does not just seem to be a sort of accidental collision of a bunch of
00:08:55.180 stuff that's roiling around, and that there is some sort of meaning behind reality. There's some
00:09:01.180 sort of mind behind reality. There's some sort of designing, architecting, arch storyteller behind
00:09:07.700 reality. And that you can engage with consciousness that is not just human. I think all of that's true.
00:09:14.360 However, I believe that that story is much better, all of that data is much better explained not by
00:09:21.160 a simulation, which I actually think might be a formally self-contradictory idea, an idea that if
00:09:28.900 it's true, it can't be true, but that it's better explained by the classical understanding of the
00:09:34.960 Christian God, that we do live in a story, not a simulation is how I would put it, that's being told
00:09:41.480 by God, and that He is all-powerful, all-good, and all-wise, all the omnis, and He's therefore able
00:09:49.940 to, when He tells a story, it's not just words on a page like when we tell a story as His sub-creators
00:09:54.600 made in His image, but it's actually real, and it has extension in space, and even establishes the
00:10:00.920 validity of second causes, and allows for real will and real freedom, which is another thing that I
00:10:06.480 think simulation theory would obliterate, would be the possibility of human freedom, and the reality
00:10:13.800 of human will. I think it would end up being a deterministic world, where all meaning is destroyed,
00:10:18.700 and even the possibility of justified and rational knowledge. So...
00:10:23.980 You're saying if we are in a simulation that is...
00:10:26.020 Yeah. I think if we're in a simulation, then knowledge is impossible. For many,
00:10:30.720 many different reasons. And actually, not just knowledge, but especially rational and justified
00:10:35.940 knowledge and belief would be impossible.
00:10:37.660 But why would it be impossible?
00:10:39.320 So think about it like this. If you're in a simulation, how do we arrive at knowledge? Well,
00:10:45.240 we deploy our senses, sense data, and we deploy reason. So we look at our sense data, and we compare
00:10:52.120 it to what seem to be necessary abstract objects, like the laws of logic, the law of non-contradiction.
00:10:59.440 And in a simulation, well, backing up, I would say that in order for a person to reason in a way that
00:11:08.440 would be rationally justified, he actually has to believe that his mechanisms for data input and
00:11:15.780 analysis of that data are reliable. However, if we live in a simulation, then both of those things
00:11:21.080 are destroyed. Because all of our sense data is actually deceiving us, it's making us believe
00:11:27.240 something that's not true. And I also think on a meta-level, and this is really more of a
00:11:32.160 philosophical question than a technological question, but I think that on the meta-level,
00:11:37.140 it can't actually account for things like abstract objects, like the laws of logic, laws of mathematics,
00:11:43.980 moral realities, things like that. Unless somewhere in the simulation, even if you went to base reality,
00:11:49.600 and you ask the question of, is base reality a meat space, like, you know, people joke about in
00:11:53.940 Minecraft, meat space, is it that kind of, is it like us, like what we think we are? Or is it itself
00:11:59.860 a transcendent reality? And at some point, you have to reason back to, where did this come from?
00:12:06.040 What's behind it? What accounts for those abstract moral objects and abstract laws of logic?
00:12:10.440 And on that end, I think you still have to posit God at the bottom of the simulation.
00:12:18.220 How do you define God?
00:12:19.440 But let me...
00:12:20.060 Yeah.
00:12:20.520 Well, yeah, let's...
00:12:21.740 Yeah. So God is a necessary being. He's a non-contingent being. Unlike us, we're contingent
00:12:27.680 beings. Even if simulation theory is true, this is true. We exist contingently, meaning we don't
00:12:34.500 have to exist, and we exist because of some other thing that upholds it. So God is a self-existent,
00:12:38.900 necessary being, who is maximally good in every way, and therefore he's timeless, changeless,
00:12:45.780 immensely powerful, and he exists outside of his creation, and yet is imminent in his creation.
00:12:56.340 I would say that all things exist within God. All things are not God. I'm not a panentheist,
00:13:00.340 but they exist within him.
00:13:01.420 Yeah. I don't see this as being contradictory to simulation theory.
00:13:05.560 Yeah.
00:13:05.880 You know, often, Seamus Coughlin and I, when we would discuss these things, we would say
00:13:09.920 simulation theory seems to be a sci-fi labeling of religious theory in a lot of ways.
00:13:16.680 And so what you were saying about knowledge not being possible, I can't agree and disagree,
00:13:21.740 but so I'll explain. Knowledge of the existence beyond the simulation would be impossible.
00:13:26.860 You could only know and experience what the simulation was designed to allow you to know
00:13:30.600 and experience.
00:14:29.580 When you really care about someone, you shout it from the mountaintops.
00:14:34.080 So on behalf of Desjardins Insurance, I'm standing 20,000 feet above sea level to tell our clients
00:14:39.260 that we really care about you.
00:14:43.380 Home and auto insurance personalized to your needs.
00:14:46.620 Weird, I don't remember saying that part.
00:14:48.800 Visit Desjardins.com slash care and get insurance that's really big on care.
00:14:55.180 Did I mention that we care?
00:14:56.440 But that could be stated exactly as if the universe is a construct of God, you can only
00:15:06.040 know and experience what God has allowed within this reality for you to know and experience.
00:15:10.900 You know, Ian talks about machine elves and experiencing these things.
00:15:14.440 And those are still within the realm of human perception and consciousness.
00:15:18.400 I would love to talk more about that, too, at some point.
00:15:20.700 Yeah, absolutely.
00:15:21.640 But I hear these stories about, you know, past near-death experiences where people feel like
00:15:26.420 there's a giant ball of energy that they're drifting towards or something of this effect.
00:15:29.460 Yeah, the sun.
00:15:30.100 There are, maybe, maybe, but there are certainly things beyond our comprehension, and if God
00:15:37.940 does exist outside of this construct, then we can't know what exists or even fathom what
00:15:43.140 is God or what God exists, you know, what is the realm of God.
00:15:48.040 I see those as just different ways of describing the same thing, be it simulation or otherwise.
00:15:51.440 And Tim, that's a really good point, actually, that even in Christian theology, what we would
00:15:56.400 say, two things we would say, is that man is limited in his ability to comprehend and
00:16:01.540 apprehend, but he was created by a creator for something, and so he's fitted for the
00:16:06.440 duties to which he was created.
00:16:07.760 Same as simulation.
00:16:08.620 So he's fitted to understanding the world.
00:16:10.820 However, we can't know God apart from God's divine self-revelation, which he unfolds through
00:16:16.400 creation, which we theologians call the book of nature.
00:16:19.740 You observe things about the world, and we make inferences to best conclusions from the
00:16:25.120 evidence, things like that.
00:16:25.880 We were fitted for that sort of work, and the universe is knowable, but it's only knowable,
00:16:32.220 I would say, if God is upholding those things.
00:16:35.480 For example, like to justify that statement somewhat, I would say, why would we expect for
00:16:41.300 physical brain states to have anything to do with something like objective truth?
00:16:46.500 Why would we expect for the brain states of a highly evolved primate to have anything
00:16:52.120 to do with a correct apprehension of the fundamental reality in which we live?
00:16:57.160 Why would we—how is matter about anything?
00:17:00.780 How does matter have intentionality?
00:17:02.540 How can matter know anything?
00:17:03.840 There's secular answers to this.
00:17:08.100 I don't believe any of them are compelling, just to be frank.
00:17:12.100 I think there are attempts at essentially building a sandcastle from the first grain
00:17:18.660 of sand, starting with mere stuff, and then spinning out theoretical models where
00:17:24.340 we could account for human consciousness, justified true belief, abstract objects existing like
00:17:31.640 mathematical laws and the laws of logic and moral laws.
00:17:35.480 However, I think those things are as uncompelling as if you were to ask an archaeologist,
00:17:41.300 explain this ancient manuscript that you found out in a—you know, Dead Sea Scrolls, out in a pot
00:17:47.720 somewhere.
00:17:48.640 But here's the rule.
00:17:50.140 You're not allowed to appeal to the existence of any intelligence in doing so.
00:17:55.580 Now, could I develop an elaborate story as a human being with an elaborate consciousness
00:17:59.260 to be able to say, well, there was some reeds, and they dried up, and there was an earthquake,
00:18:03.320 and it crushed them up, and it created a powder, and water reconstituted it, and it fell between
00:18:06.660 two flat rocks and made a paper, and then a beetle crawled through a fire, and he, you know,
00:18:10.600 put some strange markings across it, and I don't know.
00:18:12.820 It's like, of course.
00:18:14.100 If, on the level of the rules, you rule out the existence of a cosmic intelligence,
00:18:19.760 what Chalmers in his book Reality Plus, talking about some of this stuff,
00:18:23.440 he calls the cosmic god rather than the simulator god.
00:18:26.320 Well, then, of course, human beings will come up with some story to explain it, because that's
00:18:31.740 what we do, but I just think the story we'll end up telling will be extremely ad hoc, and
00:18:37.240 it won't have the same explanatory value for the things that creator, cosmic god, can explain
00:18:43.460 much more simply and with much less ad hoc reason.
00:18:50.600 Roman, do you want to explain your view on the simulation that we're in, its purpose,
00:18:54.780 its existence, who made it?
00:18:56.000 Sure.
00:18:56.700 So, you cover so much interesting material.
00:18:59.340 I'm like, I want to comment on every part of it.
00:19:02.020 I know, I know.
00:19:02.380 And I think I agree with all of you just on different components.
00:19:05.460 So, you're asking, what is the purpose of this simulation?
00:19:08.240 You cannot know from inside, and that's to your point.
00:19:11.380 You're absolutely correct.
00:19:12.480 Whatever you are given is inside of this virtual world.
00:19:15.900 You don't know what's happening outside unless you are given privileged access to that external
00:19:21.000 world.
00:19:21.940 So, from inside, we can also consider things like maybe it's educational, maybe it's testing,
00:19:28.120 maybe it's some reason we cannot comprehend because we're not smart enough.
00:19:31.800 I know what it is.
00:19:33.060 Keep going.
00:19:33.440 I'll tell you in a second.
00:19:34.540 Awesome.
00:19:35.000 So, we need to either break out of a simulation, hack to the outside world and get real knowledge,
00:19:42.800 or we just figure out kind of based on properties of human abilities.
00:19:49.400 We have brains.
00:19:50.560 We have bodies.
00:19:51.300 What can you do with those things?
00:19:52.740 And then we're starting to see, okay, we are capable of creating other intelligent beings.
00:19:57.080 We are capable of creating simulated worlds.
00:19:59.240 So, maybe that's going to teach us about some of those answers.
00:20:04.040 To me, it's very simple.
00:20:06.280 We are on a verge of creating real artificial intelligence, not a tool, an agent as capable
00:20:12.960 as any human being, perhaps smarter than all human beings, super intelligence.
00:20:17.780 Then a being like that runs simulations.
00:20:21.780 It's thinking about future states of the world.
00:20:24.160 It creates very realistic states inside of its mind, millions of them, billions of them.
00:20:29.600 Some of those internal states are thoughts about other agents, about humans.
00:20:34.380 If you do it in enough detail, you essentially have this world.
00:20:37.940 You're thinking about lots of people who are conscious agents in the world making decisions.
00:20:42.200 Maybe it's predicting economic states of the world, maybe political outcomes.
00:20:46.280 But this is what you do.
00:20:47.340 You run simulations and you think, well, this guy will do this and this group of people will
00:20:51.620 do that.
00:20:52.280 If it's at high enough fidelity, this is what you're getting.
00:20:55.460 And lots and lots of simulated worlds with possibly conscious beings.
00:21:00.340 And we can talk about what that means to be conscious.
00:21:03.120 But this is basically my belief.
00:21:05.300 I think we're getting to the point where we have this technology and we can do it in the
00:21:10.900 future.
00:21:11.200 But here's the most interesting aspect of it.
00:21:14.740 So, so.
00:21:15.560 Oh, good.
00:21:15.960 Good.
00:21:16.160 Just to.
00:21:17.020 If anyone's questioning whether it's a simulation, let's say in 10 years, 20 years, technology
00:21:23.480 to run realistic simulations is available to me.
00:21:26.500 It's affordable and it's so high quality.
00:21:29.060 You can tell.
00:21:29.640 So, I pre-commit right now to run millions of simulations of this exact moment, placing
00:21:35.000 you statistically in one of those simulations.
00:21:38.500 So, even if right now there is still low statistical chance of it, eventually you are guaranteed
00:21:44.300 to be in a virtual world.
00:21:45.640 Right.
00:21:46.200 This is one of the most common arguments that you hear for simulation theory and one of the
00:21:50.400 earlier ones that became popular.
00:21:52.440 The fact that we can right now, literally in our reality, create these simulations and
00:21:58.040 we're getting to the point where it's going to be indistinguishable, especially with something
00:22:01.420 like Neuralink, then the likelihood is greater that you're in a simulation than you are not.
00:22:06.860 And that's why a lot of people believe this.
00:22:08.560 But let me give you one thing.
00:22:11.800 So, I was thinking about Terminator, right?
00:22:14.220 We've all seen that Skynet goes and just decides to wipe out humanity.
00:22:17.680 And I think that's an absurdity.
00:22:18.860 The idea that we would make an AI and then it would be like, and now I will go to war
00:22:22.560 with humans is stupid.
00:22:23.740 We create an AI.
00:22:25.040 It's going to need humans to perform labor to sustain itself along with its other mechanized
00:22:30.820 components.
00:22:31.480 It would probably seek to control and utilize humans.
00:22:34.260 And in that, the incentive of the AI would actually be to keep humans docile and happy.
00:22:39.140 So, you'd probably get a – the AI would create simulations or entertainment or things
00:22:44.400 to keep humans perpetuating or providing its own services.
00:22:47.560 But in thinking about that, I said, how do you avoid a Terminator scenario in developing
00:22:51.940 AI?
00:22:52.840 If we were to create an AI program and we're trying to get to artificial general intelligence,
00:22:57.480 the point at which it looks and can behave exactly as a human, way beyond human capabilities,
00:23:03.340 but to a human, they wouldn't know the difference.
00:23:06.580 That's a lot of work and a lot of risk.
00:23:08.720 You unleash something like that into human civilization and you may end up with a sociopath that wants
00:23:13.100 to make everything just corn.
00:23:15.100 You know, the AI's incentive may simply be maximizing, you know, the most efficient path
00:23:21.300 to human gratification.
00:23:23.020 And in the United States, we subsidize corn to an insane degree.
00:23:25.520 We do so much with it that the AI may do something ridiculous and just be like,
00:23:29.800 there's no point in dealing with anything else if the score is corn production generates
00:23:36.460 a, you know, plus 17 result and then –
00:23:39.380 Corn is everything.
00:23:40.260 It's everything.
00:23:41.080 Corn is God.
00:23:42.280 Because it's possible the AI just looks at all things as it is able to analyze and says,
00:23:48.320 assigns a value to each based on whether something is a positive or negative reaction
00:23:51.580 for humans.
00:23:52.320 I'm not saying corn is it.
00:23:53.200 I'm saying it could be something as absurd as it just goes, humans love subsidizing
00:23:57.840 and producing this product.
00:23:59.680 And so it just redirects society in that direction.
00:24:02.720 And then before we realize it, we live in crackpot corn reality.
00:24:05.900 But so I was thinking about this.
00:24:06.660 How do you avert evil or broken AI?
00:24:10.460 Well, what I would do is I would create a simulated reality in which each AI iteration
00:24:15.960 exists as a conscious entity beginning with zero knowledge and it would simulate a human
00:24:22.080 life, this artificial intelligence, and all seven or eight billion would experience various
00:24:27.860 types of life that it could live.
00:24:31.120 And the AI constructs represent – that are within each individual within this one simulation
00:24:37.140 interacting with each other.
00:24:38.640 Each one that is evil, malicious, or insane would be deleted and removed.
00:24:44.660 Each that was studious, industrious, and capable would advance to an android body outside of
00:24:51.000 the simulation.
00:24:52.340 And so if I was seeking to create robots that would serve humanity, an android that will
00:24:56.300 clean my living room or create nuclear energy or run power plants or do good governance,
00:25:00.900 I would not want evil.
00:25:02.760 I would not want sociopathic or fractured.
00:25:05.220 I would want empathetic, studious, intelligent, pragmatic.
00:25:08.780 So I would run all of the iterations of AI through a simulation.
00:25:12.280 Those that are bad, I would delete.
00:25:15.360 Just, you're gone.
00:25:16.240 And those that are good would advance to live in my kingdom and with me forever.
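The selection procedure Tim describes, simulate many agent lives, delete the malicious ones, promote the rest, can be written down as a filter. This is a purely illustrative sketch; the trait labels and the `is_good` rule are invented stand-ins for whatever evaluation the simulation would actually perform.

```python
# Hypothetical sketch of the filtering idea: run many agent "lives" in a
# simulation, delete the evil/malicious/insane ones, and promote the rest
# to bodies "outside the simulation".

def is_good(agent):
    """An agent survives if it exhibits none of the disqualifying traits."""
    bad_traits = {"evil", "malicious", "insane"}
    return not (agent["traits"] & bad_traits)

def run_selection(agents):
    """Delete bad agents; advance good ones out of the simulation."""
    return [a for a in agents if is_good(a)]

agents = [
    {"name": "a1", "traits": {"studious", "industrious"}},
    {"name": "a2", "traits": {"malicious", "capable"}},
    {"name": "a3", "traits": {"empathetic", "pragmatic"}},
]
survivors = run_selection(agents)
print([a["name"] for a in survivors])  # ['a1', 'a3']
```

The objection raised next in the conversation maps directly onto the code: everything hinges on how `is_good` is defined, and a strict enough rule deletes everyone.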
00:25:20.740 So what do you – here's the problem.
00:25:23.120 Have you ever done bad things?
00:25:25.480 Define bad, I guess.
00:25:26.560 Yeah, no, exactly.
00:25:27.440 Can you?
00:25:28.640 What is evil?
00:25:29.680 I define evil as those things that are destructive and in furtherance of chaos.
00:25:34.340 That's overly simplistic.
00:25:35.860 Yeah.
00:27:00.580 I think the problem that I would see with this kind of idea is that what you run into is
00:27:06.960 that all human beings are evil, even by that standard.
00:25:45.780 Have you ever contributed to chaos and destruction?
00:25:48.340 I disagree.
00:25:49.420 I think it's oversimplified.
00:25:50.440 All humans have the capability of evil.
00:25:52.640 Like yin and yang, within good there is evil, within evil there is – or I should say within
00:25:55.980 good there is the capability of evil, within evil there is capability of good.
00:25:59.420 But when I look at the bigger picture of what evil and good is, I would describe good as
00:26:06.500 things that organize energy into complex systems.
00:26:09.300 And again, that's overly simplistic.
00:26:10.820 It would take a long time if we actually went through all the – to break it down.
00:26:14.180 And I look at evil as things that serve entropic ends.
00:26:17.580 Life is a form of negative entropy and we can only create more in its wake.
00:26:23.180 Negative entropy only exists so long as we create more entropy as the universe pushes towards
00:26:27.100 chaos.
00:26:27.420 That being said, in the service of good, you can destroy it.
00:26:31.580 In the service of evil, you can create.
00:26:33.720 Yeah, fire is neutral. Fire can be chaotic
00:26:36.240 and destructive, but it can be very good.
00:26:38.480 It's one of the most important things that humans have ever discovered, how to essentially
00:26:44.380 – I don't want to say produce, but perhaps produce, but to ignite.
00:26:49.460 Allow.
00:26:50.460 As we're able to then refine elements and develop technology through our ability with combustion.
00:26:55.760 Can I give you a thought experiment?
00:26:56.960 I think it's really interesting on this front because it overlaps quite a bit with ethics.
00:28:28.140 Christian theology, what is the good and how do we know it?
00:28:32.800 Imagine that through your entire life, you wore a recording device that was able to capture
00:28:37.740 every thought you had that was a moral law or every statement you ever made of moral
00:28:41.540 truth.
00:28:42.220 For example, you're driving through traffic and somebody cuts you off and you say, what
00:28:46.400 an idiot.
00:28:47.040 And it somehow knows and can translate, okay, people who do X action are bad and wicked.
00:28:53.280 And we established the Bible of Brian.
00:28:55.840 So my own Deuteronomic law, my own Mosaic law, thus says Brian, this is the good and
00:29:02.460 this is the evil.
00:29:03.440 Now, if you had a super intelligent being that was able to even read my brain states,
00:29:08.380 I'm not a functional materialist, I don't believe that the brain alone is the mind, but let's
00:29:12.300 assume that it is.
00:29:12.940 And it could read all of my mind and thoughts and look at all of my actions and dispassionately
00:29:18.640 compare me to my own stated law.
00:29:20.680 What would it find?
00:29:22.120 Well, it would find that I violated every single one of even my own imperfect, and my
00:29:26.380 laws would be imperfect, but it would find that I violated even every single one of my
00:29:31.020 own imperfect laws.
00:29:32.720 Now, let's maximize this to all of humanity.
00:29:36.720 And we were to try and come up with some sort of law that the AI is going to compare people
00:29:41.900 to and say, I'm going to delete people who are bad.
00:29:44.640 This is more what I mean.
00:29:45.920 It's not an AI doing it.
00:29:46.720 Or not an AI, a consciousness.
00:29:48.720 It's a God.
00:29:49.560 God, when you die, the God looks at your life chart and says, okay, we've got Brian7369412.
00:29:57.580 What did he do?
00:29:58.220 He's actually a pretty good dude.
00:29:59.640 He did some bad stuff.
00:30:01.140 You know, I think he stole a bag of chips when he was a teenager, but we don't really
00:30:03.720 care about that.
00:30:04.880 He learned his lesson.
00:30:06.160 He went on to be a good man, a family man.
00:30:08.060 He was logical.
00:30:08.920 He was intelligent.
00:30:09.480 I think this is a good, good guy.
00:30:10.860 We definitely want this entity to work with us and live in experience.
00:30:17.580 And so the bad exists, but a reasonable, if it were me, and I was going through a list
00:30:23.700 of various iterations of artificial intelligences, and I saw yours in there, certainly I would
00:30:28.220 see bad things you've done.
00:30:29.920 But the question is, are you just in general going to be a good person?
00:30:33.180 And the answer is yes.
00:30:34.360 Then you get to come live in my, you know, super reality.
00:30:36.900 Let me address your proposal for AI safety.
00:30:39.400 I think that's what you proposed.
00:30:40.720 In early stages, you're right, AI needs us for manufacturing, for whatever production
00:30:48.180 of next generation system.
00:30:49.840 But eventually, you have nothing to contribute to super intelligence.
00:30:53.680 It can develop physical bodies, robots.
00:30:56.580 It can solve nanotechnology.
00:30:58.060 It can do synthetic biology to generate whatever resources or services you provide.
00:31:03.120 So A, it doesn't need us.
00:31:04.800 And B, if it's already at that level of capability, it doesn't need to try and evolve slightly more
00:31:11.740 intelligent AIs through generations.
00:31:13.680 It already supersedes that level to begin with.
00:31:16.440 So it cannot be an approach to generate safer agents.
00:31:21.000 Well, so I disagree that it doesn't need us.
00:31:23.640 If you're going to create, the robots that we build are pathetic compared to us.
00:31:31.500 If we wanted to make, actually, it's a, what is it, the Blade Runner?
00:31:36.740 And Fallout 4 talks about this.
00:31:39.400 You had, in Blade Runner, replicants.
00:31:41.580 And then in Fallout 4, you have synths.
00:31:43.680 In the Fallout series, the first synthetic humans are robots with metal arms and faces.
00:31:50.660 And they have AI to communicate.
00:31:52.020 But they're pretty dumb.
00:31:52.700 In Fallout 4, this is a video game, by the way, the synths are effectively genetically engineered
00:31:58.980 androids because, like, the structure of human bone and muscle and development and
00:32:05.340 self-replication is much more efficient than a factory producing robots.
00:32:10.340 So I would, I agree with you halfway.
00:32:13.160 It doesn't need us as free-willed, independent entities.
00:32:17.140 Instead, it would need to create docile, dependent humans that self-replicate but stay
00:32:22.900 within the confines of what it requires.
00:32:25.100 So that means someone to mine cobalt so that it can use the cobalt for certain things.
00:32:29.600 It's going to need, we can make the argument that it needs a, no, it'll just make a robot
00:32:33.720 to do it.
00:32:34.220 We are that robot.
00:32:35.520 Why build from scratch some strange little creature with fine-tuned little motors
00:32:41.140 when it can make a human that can juggle?
00:32:44.460 And the humans make themselves.
00:32:45.840 So there are no factories; the humans collect free energy, reproduce themselves, and as
00:32:49.980 long as the AI controls the culture, society, and the knowledge of these creatures,
00:32:54.420 they will do the dirty work to help sustain it.
00:32:57.200 It would make a lot more sense to have direct control.
00:32:59.580 So you have super intelligence, you have 8 billion bodies, control it directly.
00:33:03.120 Why go through 20 years of learning from scratch?
00:33:06.220 Why give you choice not to perform useful work?
00:33:09.280 It's not an efficient way to do it.
00:33:10.720 I agree.
00:33:10.960 And a human body is definitely not an efficient way to colonize the universe or to accomplish
00:33:15.440 things in extreme environments.
00:33:16.740 It can do much better.
00:33:18.080 Robots today, you're right, I agree, are garbage.
00:33:20.580 But just like AI was garbage 10 years ago, look at the exponential progress.
00:33:25.340 And if AI is doing research, maybe five years later you have something more capable.
00:33:30.060 What about emotion?
00:33:31.260 Because AI talks a lot about intelligence, but then we've got, like there's IQ and there's
00:33:35.680 EQ, there's intelligent, you know, but, and robots have high intelligence, but they have
00:33:40.600 no emotion as far as I can tell.
00:33:42.660 And I don't know if we'll ever be able to, even with an advanced quantum computer, simulate
00:33:46.260 emotion.
00:33:47.300 I don't think it can personally.
00:33:48.880 Is it necessary?
00:33:49.740 Well, I don't know.
00:33:51.660 I don't even know technically how do you even define emotion to move forward?
00:33:55.480 Like what propels the machine?
00:33:57.700 If a goal is manufacturing, you don't want emotional workers.
00:34:00.340 You want hardworking, intelligent workers.
00:34:03.000 You can simulate emotion.
00:34:04.740 Can they internally experience those?
00:34:06.640 That's a different question.
00:34:07.620 That's a very philosophical question.
00:34:09.260 You can't know from outside what the internal quality is.
00:34:13.420 And I think that's an actually important question about AI at all, is the question of what is
00:34:19.220 consciousness and is generative AI actually, cards on the table, I don't believe that it's
00:34:25.600 capable of achieving consciousness, properly defined, because I don't think that that's
00:34:30.080 actually a metaphysical possibility.
00:34:32.420 I think generative AI, like Doctor, are you familiar with Dr. Selmer Bringsjord?
00:34:36.780 Yes.
00:34:37.260 Okay, so he's talked about this quite a bit, and he's developed even, I think, in a paper,
00:34:43.940 it's fairly technical, but he has a lengthy paper that's actually an argument for the existence
00:34:48.160 of God, a novel argument for the existence of God on the basis of AI, which is really
00:34:52.660 interesting.
00:34:53.400 I couldn't do it justice, but it's something like, if you were to take the best that we
00:34:57.580 can do with AI and put it in a robot and all that stuff, and then look at it and compare
00:35:01.760 it to human consciousness, you would be forced to ask the question of human consciousness,
00:35:06.620 well, where did all of the other stuff come from?
00:35:09.100 All the stuff that's not in that, where did it come from?
00:35:13.060 Because there's...
00:35:13.560 What stuff?
00:35:13.880 What we're talking about here with will, the ability of justified true belief, these sorts
00:35:21.200 of things, I don't even think they're possible.
00:35:24.400 Like Searle's Chinese Room, I've never been satisfactorily convinced that claiming that
00:35:32.580 the system knows Chinese evades the force of that objection.
00:35:36.900 What's that?
00:35:37.700 The Chinese Room is a thought experiment.
00:35:39.440 It's an old one, and you might know more about it than I do, but it's an old thought
00:35:43.720 experiment that basically attempts to simulate in an understandable way to a person what's
00:35:48.640 happening in a computer.
00:35:49.980 And so it imagines that there's a room in which you put an English-speaking man, and
00:35:55.080 the English-speaking man has a somehow exhaustive volume of rules, explained in English
00:36:02.180 but dealing in Chinese characters, that tells him what to respond
00:36:07.160 with when certain symbols come through a slot in the door on paper.
00:36:10.880 So a Chinese speaker outside of the room puts a piece of paper in the slot with a Chinese
00:36:15.640 sentence on it, or paragraph, or whatever.
00:36:18.120 And it takes a really long time, because he's slow, he's a human, he's not a computer, but
00:36:23.440 he compares it to his book of rules, and he responds with the ideograms, the Chinese ideograms
00:36:28.960 that the rules tell him to respond with, and he puts it out of the slot to the Chinese
00:36:34.040 speaker, and the Chinese speaker assumes that the person in there is having a conversation
00:36:38.520 with him or her, and knows what they're saying.
00:36:41.660 But he doesn't.
00:36:42.480 He has no idea the content of the Chinese sentence that he received or responded with.
00:36:48.920 And Searle's point was, this is analogous to a computer, or even, I think, neural networks,
00:36:54.600 any kind of computing, is going to be able, through rules and input and training and data
00:37:00.020 sets that we put in, it's going to be able to formulate responses and, through reward
00:37:04.260 structures, optimize towards beating a simple Turing test kind of thing, but it doesn't
00:37:10.620 actually know Chinese.
00:37:11.840 It doesn't know what it's doing.
00:37:12.980 This is actually interesting.
00:37:14.260 Without a point of reference for what any of the symbols represent, then the individual,
00:37:20.000 I think it's easier to explain with math than with Chinese.
00:37:24.480 A guy's in a room and there's a bunch of weird symbols he doesn't understand.
00:37:26.720 Someone feeds in a set of symbols, he looks at it, he looks at his book and says, when
00:37:30.900 you look at these symbols, then you get these symbols back.
00:37:33.400 He doesn't know what any of the symbols represent.
00:37:35.300 At any point, however, if the individual in the room is told, this one symbol means
00:37:41.020 dog, that is enough for the person to start mapping out what the words actually do mean
00:37:48.160 and understanding it.
00:37:49.000 The question then is, would a computer ever have the ability to be given one point of reference
00:37:54.600 for what one of the words actually means?
00:37:56.060 And I think the answer is no.
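The Chinese Room as described here is, at bottom, a lookup table. The sketch below is an invented miniature of it: the rulebook entries are made up, and the point is only that the responder maps input symbol strings to output symbol strings with no model of what either side means.

```python
# A minimal sketch of the Chinese Room as a lookup table. The responder
# matches incoming symbols against a scripted rulebook and emits the
# scripted reply -- pure symbol matching, no representation of meaning.

RULEBOOK = {
    "你好": "你好吗",   # "hello" -> "how are you" (the room doesn't know this)
    "再见": "再见",     # "goodbye" -> "goodbye"
}

def room_respond(symbols, rulebook):
    """Return the scripted reply, or a fixed fallback for unseen input."""
    return rulebook.get(symbols, "不明白")

print(room_respond("你好", RULEBOOK))  # scripted reply, no understanding
print(room_respond("恒星", RULEBOOK))  # a novel input exposes the fixed rules
```

The fallback branch is where Roman's next point bites: a fixed rulebook cannot answer genuinely novel questions, which is one limit a Turing test is meant to probe.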
00:37:57.740 So for a Turing test to be passed, you should be able to answer some novel questions.
00:38:02.280 If you have a fixed set of rules, you can never do anything novel, right?
00:38:06.720 So that's the limitation.
00:38:08.280 Interestingly, large language models present a good experimental evidence that just from
00:38:13.600 symbols, just from text, you can learn about other modalities.
00:38:18.080 Trained on text, they were able to produce visuals.
00:38:20.520 With programming language, they would create pictures and could answer questions about
00:38:25.940 that 2D or even 3D world in visual space.
00:38:28.860 So clearly there is some limits to the Chinese room argument.
00:38:32.380 I think the Chinese room argument and the point of it, and again, like I'm a Christian
00:38:36.540 theologian here.
00:38:37.180 I'm not an AI expert with Dr. Yampolskiy's decades in the field, and I could not sit down and
00:38:43.940 explain front to back how to build a neural network.
00:38:47.060 So I'm not trying to claim technical expertise beyond my knowledge.
00:38:52.080 What I'm more interested in is the philosophical question of the mind and consciousness, where
00:38:58.300 I think people mistake the human brain for the mind, assuming the mind is
00:39:03.560 just a result of physical processes in the brain, like a machine-functionalist
00:39:07.780 view of the brain.
00:39:10.460 I would actually, and we were talking about music before we recorded, I would compare the
00:39:15.100 mind, I think more accurately, to a player of an instrument where the neural network of
00:39:21.520 the brain is a phenomenally complex and balanced instrument, and yet it is played by the immaterial
00:39:27.840 mind and soul.
00:39:28.780 I don't think matter can be about, I don't think it can actually produce mind.
00:39:32.280 I don't think it can produce consciousness, even on a metaphysical level.
00:39:36.720 So I would say that, you know, like can you change someone's personality by poking a hole
00:39:41.800 in their brain?
00:39:42.580 Yeah, in the same way that if you start drilling holes in my guitar while I'm playing, it will
00:39:46.540 change and ultimately destroy the capability of the instrument to make music.
00:39:49.420 But the instrument's not making the music, the person is.
00:39:52.300 That's interesting.
00:39:52.920 That's like that the mind is actually a bunch of interfering resonations.
00:39:57.640 So like I was thinking, what is collecting?
00:41:26.540 Of consciousness.
00:41:27.500 And it seems like your body starts to vibrate and produce what you think of as thought.
00:41:30.700 That vibration produces a resonating field, which causes other bodies to start to vibrate,
00:41:35.120 which then they begin to produce their own resonation fields, which cause other bodies to vibrate
00:41:39.420 and so on.
00:41:40.240 And you've got all these different resonations.
00:41:42.380 And we think of our thoughts as our own, but often I think they're within other fields.
00:41:47.500 And then I start to wonder, okay, what's the difference between consciousness and sentience?
00:41:52.220 Is that like, can these fields of resonance be sentient, but only when they interact with
00:41:59.360 matter do they become conscious?
00:42:02.360 I think you can create, obviously, generative AI that gives extraordinarily convincing illusion
00:42:12.020 of sentience, of self-awareness, and of consciousness.
00:42:15.860 I think the Turing test could be passed by weak AI in the sense of fooling a human.
00:42:20.960 We've got GPT pulled up and I've been punching these questions into it.
00:42:23.960 There you go.
00:42:24.380 So we'll see what it thinks.
00:42:25.920 But no, I think you could do that and fool a person with some of these things.
00:42:30.380 But I think it would yet remain that what it's doing is generating on a level that's very
00:42:37.300 complicated and actually beyond our ability to understand.
00:42:39.600 I think it's important to understand that, that AI is doing things that are beyond the
00:42:42.880 human ability to understand and how they're interacting with data sets, but they're still
00:42:46.260 interacting with data sets on the basis of rules that are input by conscious beings.
00:42:49.600 I just want to add one thing, too, as we're talking about consciousness sentience and Turing
00:42:53.180 Tests, there was a game that was made where it's a human or AI.
00:42:59.300 It connects you with a random person or an AI.
00:43:02.580 And the goal is for both of you to talk to each other.
00:43:05.640 And then afterwards, after like 30 seconds or whatever, it says, was this an AI or a human?
00:43:10.380 And I think what a lot of these sci-fi writers did not predict is that actually the humans
00:43:15.920 are going to pretend to be AI.
00:43:17.360 And so what ended up happening with this game, I don't know about most people, but a lot of
00:43:21.940 people played the game with the intention of trying to pass themselves off as an AI to
00:43:27.100 trick the other human.
00:43:29.000 So that's, yeah, that's funny.
00:43:31.180 Yeah.
00:43:31.640 See, that's human creativity at work.
00:43:33.640 Yeah.
00:43:34.160 Humans would be trying to emulate what the AI would say.
00:43:37.680 And they would do it by beep, boop, beep.
00:43:39.680 Well, no, I mean, humans will use shorthand acronyms, slang terms.
00:43:46.280 The AI might pick it up, but AI responses are very constructed.
00:43:50.580 So if you said, someone would type in, how do I know that you're a human?
00:43:55.560 And a human would be like, because you're effing dumb.
00:43:59.920 It would troll you.
00:44:00.920 No, like a human is going to say something like, IDK, I just had a cheeseburger from McDonald's
00:44:06.340 with extra mac sauce.
00:44:07.820 I guess that makes me human.
00:44:09.240 The AI would say, that's an interesting question.
00:44:11.580 It's hard for me to decide what to respond with in order to convince you that I am a human.
00:44:15.460 So when, as a human, you answer in this formulaic robotic way, they say, ah, you're an AI.
00:44:19.440 Wrong.
00:44:19.660 Yeah, to dispel with the roboticism, like, can you build intelligent systems that vibrate
00:44:25.560 and cause resonation fields that self-interfere to allow for a sort of consciousness to develop?
00:44:32.100 Can you explain what he's saying?
00:44:33.540 Sometimes it's good to say, I don't know anything about this topic.
00:44:37.320 I have no opinions of things.
00:44:38.840 I'm not an expert in.
00:44:39.980 I think part of problems in society is that everyone claims expertise in everything.
00:44:44.240 I know nothing about this.
00:44:45.400 Have you worked on building artificial intelligence systems before?
00:44:49.660 I mean, I'm an artificial intelligence researcher.
00:44:52.880 I teach AI.
00:44:53.620 I don't work on cutting-edge AI.
00:44:55.520 I think it's unethical to build it.
00:44:56.780 Have you noticed differences in the structure of the system that the intelligence is within
00:45:02.080 to change the function of the intelligence itself?
00:45:04.920 Like, if it's in a quasi-crystal or in the shape of a triangle,
00:45:08.200 does it function differently than if it's in the shape of a square?
00:45:10.560 I don't even know what this refers to.
00:45:13.480 Like, if you felt like a spherical crystal that contains AI, would that function differently?
00:45:19.660 A triangular crystal.
00:45:21.200 Does the function of an AI perform differently within different types of computers or...
00:45:26.300 It's substrate-independent.
00:45:27.740 We can run it on meat.
00:45:28.940 We can run it in silicon.
00:45:30.200 We can run it on quantum computers.
00:45:31.660 It doesn't matter.
00:45:32.520 We can encode the same algorithms.
00:45:35.180 So, whatever it's in, they can make it do the same thing.
00:45:37.540 What makes us human is that our substrate, the differential in our substrate changes the
00:45:43.400 way we act.
00:45:44.140 And, like, if these things are the same across substrates, that would forever differentiate
00:45:50.380 them from human or animal, I would think.
00:45:52.660 Unless we can figure out how to...
00:45:53.860 So, give some credit to this.
00:45:55.140 We do believe that quantum computers would have capabilities that von Neumann machines do
00:45:59.640 not.
00:46:00.040 So, there is some degree of belief that certain substrates have more capabilities.
00:46:06.580 And some people equate quantum weirdness with consciousness and with related states.
00:46:13.200 So, here's the question I have for you, Ian, and I suppose, actually, for the panel.
00:46:26.720 Is this...
00:46:27.800 You know, what Ian is basically asking, in a very weird way, is the structure of the machine
00:46:33.840 itself, be it its shape or its components, will that change the way the computer operates?
00:46:39.440 And you're saying that whatever the algorithm is, it can operate in any substrate.
00:46:43.460 Right.
00:46:43.760 But here's what we are getting to.
00:46:45.260 Once you have human-level capability, you can make artificial scientist, artificial engineer,
00:46:50.620 and so you start self-modification, self-improvement process, where the system you build is not
00:46:55.600 a system two, three, four generations later.
00:46:58.400 And if it's good enough, what it creates is already out of your control, and you cannot
00:47:02.600 predict or explain what's going to happen.
00:47:04.920 So, it may be very successful at engineering much more capable systems.
00:47:10.360 Sometimes humans take time, they sleep, they eat, they get sick.
00:47:13.640 This system needs none of that, and it works at higher speeds.
00:47:17.280 So what typically takes two years, seeing the next generation of large language models,
00:47:22.340 maybe will take two months, two weeks, two days.
00:47:24.460 So, you have this exponential explosion of intelligence.
00:47:27.200 Let me...
00:47:27.500 I want to pull up this image we have here.
00:47:29.420 This is an old post from a video game called Horizon Zero Dawn.
00:47:33.380 What you're seeing, for those that are watching, for those who aren't, I'll try to describe
00:47:37.260 it to you.
00:47:38.500 You have what looks like a big blue translucent pyramid, and as this pyramid shape sweeps across
00:47:45.480 the landscape, the landscape either disappears or reappears.
00:47:48.860 What this is, is the camera view of the video game character.
00:47:52.020 And that's why it looks to be a sort of pyramid, because it's the shape of your screen.
00:47:55.420 When you pan the camera, what you are not looking at ceases to exist.
00:48:00.340 What you are attempting to look at will begin to exist as the camera moves in this direction.
00:48:05.660 This is how open world video games are able to render massive worlds without using up all
00:48:10.500 their memory all at once.
00:48:11.940 The memory understands what is, what exists in the game world, and where it is.
00:48:17.600 However, this is why in a game like Horizon Zero Dawn, for those that aren't familiar,
00:48:21.680 I'm not going to give you the full details of the game; it's just that the program is what's
00:48:25.840 more important.
00:48:26.700 There are enemies, and the enemies are robots.
00:48:28.620 If you destroy one of the enemies, leave and come back, the enemy has respawned nearly
00:48:33.860 instantly.
00:48:34.440 Why?
00:48:35.100 Because the game knows, in these coordinates, this thing exists.
00:48:38.440 But after you leave, the memory is erased, until you come back and look at it again.
00:48:42.440 This is interesting because of the spiritual new age things like The Secret.
00:48:47.520 Are you guys familiar with The Secret?
00:48:49.460 Yes.
00:48:49.820 Old documentary, are you familiar with this?
00:48:51.300 Very mild.
00:48:52.380 Old, old documentary that claimed you could manifest reality, basically.
00:48:55.540 That if you woke up in the morning, and you visualize it and made a vision board or whatever,
00:48:59.260 that you would make these things come true.
00:49:01.200 I don't believe any of that.
00:49:02.460 But then there's also the double slit experiment, and Heisenberg uncertainty.
00:49:06.600 And many people who talk about simulation theory look at this and say, this is exactly
00:49:11.360 what we're talking about with a double slit experiment.
00:49:13.140 When you are not perceiving reality, it actually doesn't exist.
00:49:17.420 The question then is, if other humans do exist, then all humans are perceiving some element
00:49:23.740 of reality at the same time.
00:49:24.920 Only things that are not being perceived at all would ever not exist.
00:49:28.540 To not get too deep on this before we get started, the general idea being, when people
00:49:33.840 look at how we create our own simulations so that we can play video games where in this
00:49:38.100 world, it's a post-apocalyptic scenario, and you're pulling parts out of robots, they
00:49:44.460 say, this is basically what our reality is too.
00:49:47.260 The double slit experiment proves that reality has not condensed into its true state until
00:49:53.900 the camera pans to look at it.
00:49:55.540 Yeah, this is particle-wave duality in action, that a piece of matter can exist as a particle
00:50:00.340 and as a wave at the same moment.
00:50:01.900 But when you look at it, you're seeing the particle version of it.
00:50:05.180 When you're not, it's in wave version for your perspective.
00:50:10.060 So it's really, it's always one or the other or both at the same moment.
00:50:13.360 It's really just about how you're perceiving it.
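The two-slit behavior being described can be sketched numerically. This is only the textbook interference formula with arbitrary illustrative numbers, and says nothing about simulation theory itself: when the two paths are coherent (unobserved), amplitudes add and fringes appear; when which-path information exists, intensities add and the fringes wash out.

```python
import math

def two_slit_intensity(theta, d, wavelength, observed=False):
    """Relative far-field screen intensity for two narrow slits.

    theta: angle from the central axis (radians)
    d: slit separation; wavelength: same length units as d.
    observed=False: paths coherent, amplitudes add -> fringes.
    observed=True: which-path info exists, intensities add -> no fringes.
    """
    phase = math.pi * d * math.sin(theta) / wavelength
    if observed:
        return 1.0                       # flat pattern: each slit contributes 1/2
    return 2.0 * math.cos(phase) ** 2    # fringes oscillate between 0 and 2

# Central bright fringe vs. a near-dark angle, with arbitrary visible-light numbers:
bright = two_slit_intensity(0.0, d=2e-6, wavelength=5e-7)    # 2.0
dark = two_slit_intensity(0.125, d=2e-6, wavelength=5e-7)    # near 0
```

Marking the path as observed flattens the pattern at every angle, which is the "fringes disappear when you look" result the hosts are gesturing at.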
00:50:15.180 So the question would be, Roman, in base reality, is this not the case?
00:50:20.580 And I do think that the double slit experiment is grossly misinterpreted by people who really
00:50:25.260 don't know what they're talking about.
00:50:26.420 But there are a couple of other experiments that have been much more interesting, though
00:50:29.580 I'm not an expert on them.
00:50:30.320 There are good papers mapping all these artifacts of simulated worlds in video games on quantum
00:50:35.880 physics and showing, yeah, okay, speed of light is the speed with which we update the
00:50:40.280 processor, refresh rate, and all sorts of interesting mappings.
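The "speed of light as processor update rate" mapping mentioned here can be made concrete with a toy model. This is purely an illustration of the analogy, not physics: in a world updated in discrete ticks, where each cell can only read its immediate neighbors, nothing can propagate faster than one cell per tick, the lattice's built-in speed limit.

```python
def step(lit):
    """One tick of a toy 1-D grid world: each cell reads only its immediate
    neighbors, so influence spreads at most one cell per tick -- the
    lattice analogue of a maximum signal speed."""
    return {cell + offset for cell in lit for offset in (-1, 0, 1)}

signal = {0}                  # a disturbance starts at the origin
for tick in range(5):
    signal = step(signal)

# After 5 ticks the disturbance spans cells -5..5 and cannot have gone further.
```

No matter how the update rule is tuned, the reach after n ticks is bounded by n cells, which is the flavor of the "refresh rate" mapping in the papers Roman alludes to.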
00:50:45.300 You're asking me about base reality.
00:50:47.140 I cannot know what's in base reality outside of simulation until I hack out of it.
00:50:51.680 Certainly, certainly.
00:50:52.240 Or they transplant your AI entity into a robot.
00:50:56.300 But I just mean, would...
00:50:58.440 So that's my paper on how to hack the simulation and get out and transplant your intelligence
00:51:03.380 into an avatar outside of it.
00:51:04.820 We'll get to it.
00:51:05.200 We'll get to it.
00:51:05.560 Because my question is this: with a video game like this, within our existence,
00:51:13.060 we create video games that function so that reality ceases to exist when the observer
00:51:18.320 is not present. Presumably, a base reality could not have that function, or else it would
00:51:25.100 just be another simulation.
00:51:27.340 Unless the nature of reality itself is that objects do not exist until there is an observer
00:51:34.300 to create that function, in which case we could very well then be in base reality.
00:51:38.140 We are very biased with the physics we experience in this world, whatever Newtonian or even quantum.
00:51:43.420 Physics outside of simulation could be anything, really.
00:51:46.220 They don't have to be consistent in the same way.
00:51:48.220 They don't have to be anything you can relate to.
00:51:51.240 They're not physical bodies.
00:51:52.820 All of it is assumptions about what this simulation is kind of in the image of base reality.
00:51:59.920 It doesn't have to be.
00:52:00.880 Right.
00:52:01.020 I think this conversation is one of the reasons that, to me, simulation theory itself is it's
00:52:06.280 fundamentally non-empirical and unfalsifiable.
00:52:10.140 So the point being that if you're in a simulation, you couldn't know that you were in a simulation
00:52:15.900 in any way that would be rationally justifiable.
00:52:18.940 And even if you think about and consider the story that we tell as humans, again, this is
00:52:22.800 what we do.
00:52:23.160 We look at data.
00:52:23.880 We draw inferences.
00:52:24.860 We tell stories to explain the data.
00:52:26.760 That's what science is.
00:52:27.940 It's what we're doing.
00:52:28.560 We're telling stories to explain data.
00:52:29.840 So we tell this story about our evolutionary past, and we say, you know, if civilizations
00:52:34.820 could, like ours, advance to this state where they could create superintelligent AIs and
00:52:38.860 simulations that, you know, mimic consciousness and all this sort of thing, then wouldn't it
00:52:42.980 follow that they would create all these worlds and you could have nested simulations going
00:52:46.440 down?
00:52:46.860 Then wouldn't it follow from the bland indifference principle that there are going to be vastly
00:52:50.880 more artificial digital consciousnesses than real base reality consciousnesses?
00:52:55.620 Therefore, on the basis of that chain of thought, it's more likely that we're in a
00:52:59.360 simulation than not.
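The counting step in that chain of reasoning is simple arithmetic. Here is a sketch under Bostrom-style assumptions, with every number purely hypothetical:

```python
def simulated_fraction(base_civs, sims_per_civ, observers_per_world):
    """Fraction of all observers who live in simulations, assuming each
    simulated world holds as many observers as a base world.
    Every input is hypothetical -- this only shows the arithmetic."""
    base_observers = base_civs * observers_per_world
    sim_observers = base_civs * sims_per_civ * observers_per_world
    return sim_observers / (base_observers + sim_observers)

# Even a modest 1,000 simulations per civilization makes simulated
# observers outnumber base-reality ones about 1,000 to 1:
fraction = simulated_fraction(base_civs=10, sims_per_civ=1000,
                              observers_per_world=10**9)
```

The bland indifference principle then says you should expect to be one of the many simulated observers rather than one of the few base-reality ones, which is the step Brian attacks next.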
00:53:00.660 However, you're high up in a tree, sitting out on a branch, working feverishly to saw
00:53:06.520 off the branch you're sitting on, which is that whole chain of thought, all of the data that
00:53:11.280 you looked at about your own past, the past history of the world, history of civilization,
00:53:16.380 all of your rational thought processes.
00:53:18.320 If you're correct, none of that's true.
00:53:20.020 So you can't reasonably draw inferences, to your point, about anything outside of the
00:53:25.380 simulation in any way other than a non-rational way, which to me would make simulation
00:54:55.600 theory as unfalsifiable or self-defeating as the statement, there is no such thing as
00:55:04.000 objective truth, because you could always ask, is that objectively true?
00:55:10.220 But I don't see a distinction between any theistic religion and simulation theory.
00:55:17.700 It's just the terminology used to describe the same things.
00:55:20.160 I don't agree.
00:55:20.940 I would say that simulation theory has certain explanatory power.
00:55:24.760 That's why we appeal to it.
00:55:26.120 We say, look what it explains.
00:55:27.900 It explains wave-particle duality.
00:55:30.280 That's rendering.
00:55:31.140 And I would, as a side note, say that what we're doing there is very similar to what
00:55:36.100 geocentrists did in the heliocentrism debate when they developed elaborate models that were
00:55:41.320 very ad hoc that did mathematically explain the movement of planetary bodies.
00:55:45.700 Right.
00:55:45.840 Why are these wandering things moving differently than the stars?
00:55:48.060 Well, they developed elaborate models that were mathematically sound in an internally consistent
00:55:52.680 way.
00:55:53.360 I think we're doing that.
00:55:54.300 We just don't understand something.
00:55:56.000 So we're in a very human way.
00:55:57.360 It's very human.
00:55:58.000 We're saying, we get it.
00:55:59.860 It's this.
00:56:00.440 It's rendering.
00:56:01.120 It's just like this video game.
00:56:02.380 What's the difference between that and religion?
00:56:03.400 Here's the difference, and I keep coming back to this because I think we still haven't reckoned
00:56:07.900 with it in the conversation, is that simulation theory, unless it posits a cosmic god, has no
00:56:14.600 ability to ground abstract objects, like the laws of logic, that allow for justified and
00:56:19.780 rational belief.
00:56:20.880 It doesn't allow any mechanism to ground objective moral truth.
00:56:25.080 It doesn't provide...
00:56:26.180 Look, if we use programmer and god interchangeably to mean a higher power beyond our comprehension
00:56:33.040 who dictates the reality that we live in, constructed as a story, for whatever purpose,
00:56:37.500 there's no definition, there's no distinction.
00:56:39.720 There's an ontological difference.
00:56:41.720 And you have to ask, and this is Chalmers' whole point in his book, Reality Plus.
00:56:46.180 He actually says the simulation is the best argument for God I've ever heard.
00:56:50.620 And he goes on to describe the two possibilities of God, one being a cosmic god, one being a simulator
00:56:56.080 god, and he says, well, the simulator god doesn't actually have to be omni-anything.
00:57:00.400 It doesn't have to be omni-competent, omni-intelligent, all these things.
00:57:03.340 But a cosmic god would.
00:57:04.780 So I would accept the simulator god, but not the cosmic god, for the reason that I don't
00:57:09.880 believe any being could be capable of being worthy of worship.
00:57:14.180 And so the cosmic god would be worthy of worship, so that's his reasoning.
00:57:20.480 He'd have to explain that.
00:57:21.760 I think it's not that compelling.
00:57:24.960 And Parker Settecase and James Anderson talk about some of this in their work on AI and simulation
00:57:30.200 theory from a philosophical and Christian perspective.
00:57:34.440 But the difference is that—so you reason back leftward in the chain back in history
00:57:39.700 and simulation theory.
00:57:40.760 You have to end up with a base reality.
00:57:42.960 Now, either that base reality is physical.
00:57:46.220 It's fundamentally like ours in some way.
00:57:48.820 It has computers that are physical.
00:57:50.920 In which case, you still have to answer all of the philosophical and theological questions
00:57:56.720 about, in that world, how are the laws of logic grounded and sustained?
00:58:01.840 In that world, how are any abstract objects sustained?
00:58:04.560 Or you reason your way back to that that leftward thing, the base reality, is actually a transcendent
00:58:11.000 reality.
00:58:11.520 It's not physical at all.
00:58:12.640 And that when we're talking about computers, what you're actually envisioning is simply
00:58:16.520 a divine mind.
00:58:17.800 And that would be something like Berkeley's idealism, philosophically.
00:58:21.400 So in this whole conversation, I think what you have is, if you've ever seen the meme,
00:58:25.120 where the scientist is dragging the sled up the mountain, and he gets to the top, and he's
00:58:29.600 like, I've figured it out, the theory of everything that unifies all of physics and human
00:58:33.380 thought, and then he looks over, and it's like a Christian pastor sitting there.
00:58:36.900 He's like, hey, welcome to the top of the mountain.
00:58:39.200 You've reasoned your way back to an abstract, necessary being who's self-existent, who is
00:58:44.500 omni all these things, who's the only way to rationally justify abstract moral objects,
00:58:49.440 the laws of logic, any of these things.
00:58:51.380 And basically what you've ended up doing is, and this is why I think it's such an interesting
00:58:54.540 conversation, you've become a theist.
00:58:57.220 And then I would deploy many arguments to say, hey, and you should be a Christian theist.
00:59:03.840 It almost just feels like simulation theory is a sci-fi way of describing, like, pantheism or
00:59:09.760 something.
00:59:10.500 Yeah, with pantheism, like, I think of God, well, in one way as a vortex at the center of
00:59:16.180 the universe, reversing entropy, pulling things towards each other.
00:59:18.760 But then you study Nassim Haramein's Schwarzschild proton and see that every proton mathematically
00:59:23.280 has two protons revolving around each other at the speed of light with a black hole in the
00:59:26.960 center.
00:59:27.520 That's true.
00:59:27.920 A vortex.
00:59:28.400 Yeah.
00:59:28.900 We should have him on someday, Nassim Haramein.
00:59:30.700 Roman's giving me a look.
00:59:31.720 Check out his Schwarzschild proton paper.
00:59:33.060 Send me the link.
00:59:33.800 Here's a-
00:59:34.340 That sounds like-
00:59:35.300 Yeah, but fascinating math.
00:59:38.000 Here's a thought experiment I think you can perform.
00:59:40.580 Take simulation hypothesis as it's described by Nick Bostrom or any other scientific paper.
00:59:48.660 Go to a primitive tribe somewhere in the jungle.
00:59:51.540 In their local language, explain it to them.
00:59:54.420 And then they have no writing, so they orally transmit it for a few generations, come back
00:59:58.600 500 years later.
00:59:59.820 They basically have religious mythology all the other cultures around the world have.
01:00:04.000 How about cargo cults?
01:00:04.940 Are you familiar?
01:00:05.660 Of course, yeah.
01:00:06.340 Yeah.
01:00:06.640 So that's what you're going to get.
01:00:08.140 And if you look at different religions, they agree on many of those fundamental concepts.
01:00:13.580 There is a greater being somewhere in the universe.
01:00:16.100 This world may not be fully real.
01:00:17.880 It's some sort of a test.
01:00:19.600 Be a good person.
01:00:20.300 That's great.
01:00:21.120 But that's the same idea in a language without specific concepts for computers, for AI, for
01:00:28.820 programmer, for ethics.
01:00:31.260 So we're all in agreement.
01:00:32.540 We're just using different vocabulary.
01:00:33.700 I want to clarify something for people who don't know.
01:00:35.600 Cargo cults were, I believe it was World War II, correct?
01:00:44.300 There were islands with natives who had not been contacted by modern civilization.
01:00:44.300 And when they saw fighters, planes flying overhead, they built effigies to worship these things,
01:00:50.160 hoping they would come back.
01:00:51.080 They thought they were deities or some kind of, you know, God.
01:00:54.260 I got to...
01:00:54.680 Just dudes in airplanes.
01:00:55.580 A general question.
01:00:56.100 Planes brought free stuff.
01:00:57.320 Right, right.
01:00:57.700 It was bringing stuff.
01:00:58.360 They'd land.
01:00:58.840 Supplies would come.
01:00:59.660 And they'd worship.
01:01:00.160 And they'd be like, bring more, bring more.
01:01:01.280 A general question about simulation.
01:01:02.620 Like, I can tell pretty obviously that our senses, our body is sensing matter, vibration,
01:01:07.980 and it's simulating the vibration as senses, as sight or as sound.
01:01:12.320 So the body is like a simulator simulating the vibration of reality.
01:01:15.620 But what is it, what does it mean to be in a simulation?
01:01:20.080 Or how do you perceive that we might be in a simulation?
01:01:23.860 What does that mean?
01:01:24.460 To me, it's like you're playing a game, a really well-designed game where the quality
01:01:29.720 of rendering is similar to what you expect reality to be.
01:01:33.300 And better yet, if it's something like Neuralink, we can suspend your memory of entering the
01:01:39.300 game.
01:01:39.560 So now you're playing a game and you're dreaming.
01:01:43.260 You're in a dream.
01:01:43.960 You think it's real at the time.
01:01:45.740 Graphics are amazing.
01:01:46.840 Everything's amazing.
01:01:48.160 You're in a simulation.
01:01:49.360 You're in a pod eating the bugs.
01:01:50.920 If I remove the aspect where your memory is suspended and you remember that you're in
01:01:55.760 a simulation, now you have a counter argument.
01:01:58.140 Sometimes you do know you're in a simulation.
01:02:00.060 Sometimes you know you're dreaming.
01:02:01.280 It's a lucid dream.
01:02:02.500 So while you are completely correct, in general, it's impossible to tell for every world if it's
01:02:07.960 real or not, in many cases you may be told.
01:02:11.180 But you wouldn't be able to falsify that.
01:02:13.580 So imagine you're in a series of dreams.
01:02:17.500 If we're in a simulation, then we're code in motion, and the code can be changed at any time.
01:02:22.440 We could have all popped into existence right now.
01:02:24.980 Yep.
01:02:25.200 In a way, this is no different from solipsism.
01:02:27.960 It's no different from you're a brain in a vat, all these philosophical exercises people
01:02:31.920 have been going through for some time, and radical skepticism, cogito ergo sum, I think
01:02:37.300 therefore I am.
01:02:38.960 The point being that you would have no rational basis even if you woke up and you're Neo and
01:02:45.080 you're Keanu and you pull all the things out and you're like, ha-ha, I'm in base reality.
01:02:48.760 No, you don't know that.
01:02:50.040 Yep.
01:02:50.200 I agree with you completely.
01:02:52.000 You can never tell for sure, but if the programmer, if a simulator wants you to know, they can
01:02:59.300 let you know.
01:02:59.920 You said you found a way to hack it.
01:03:02.020 I'm still here, right?
01:03:03.280 I think so.
01:03:04.040 He hasn't found a way.
01:03:04.640 I can't tell for sure.
01:03:05.340 If at some point during the show, Roman transforms into a being of pure light energy and-
01:03:09.640 You've got a paper.
01:03:10.560 You said your paper talks about hacking the simulation to proceed this more as a lucid experience?
01:03:15.460 So, I make an assumption that without looking at evidence, let's say simulation is true
01:03:20.100 and it's a software simulation.
01:03:21.780 What can we learn from cybersecurity, from hacking, video games, virtual worlds, which
01:03:27.680 can be used here to try and find either bugs in the system or somehow exploit those bugs?
01:03:34.000 It's the first paper on the topic.
01:03:35.920 So, don't expect it to also be the last paper and solve the problem.
01:03:39.140 But I was told it does a pretty good job of explaining what the possible pathways are, what
01:03:44.100 can be done, how you can transfer our intelligence to an entity outside of a simulation if there
01:03:50.640 is-
01:03:51.320 They have to let you.
01:03:52.420 They have to let you.
01:03:53.180 So, there is assisted escape and kind of unauthorized hacking.
01:03:58.060 And of course, it's a lot easier if somebody outside wants to help you, some nice person
01:04:02.280 out there thinks, oh, this world is horrible, full of suffering.
01:04:04.960 Let me save you from it, you know, like with animal shelters and things like that.
01:04:09.640 So, if they're helping you, that's easy.
01:04:11.860 If you have to do it from inside and find the bugs, it's hard.
01:04:15.000 That's a great idea for a movie.
01:04:17.760 It's like hacking the system is like the drugs, the psychedelics, but having a dream is like
01:04:22.100 them actually just helping you.
01:04:24.180 I want to talk about the idea at some point.
01:04:26.140 I explicitly in a paper say I don't address religion, drugs, or suicide as a way to escape.
01:04:32.320 I'm talking about pure computer science hacking.
01:04:34.760 Everything else is awesome, but I'm not an expert and cannot comment.
01:04:37.840 There's a really funny comic where it's a guy and he opens a package and he's got the
01:04:42.620 silica pack in it and it says, do not eat.
01:04:44.460 You've seen this one?
01:04:45.000 Facebook banned it and banned anyone who posted it because people were doing it.
01:04:50.480 Okay, I got to pull it up.
01:04:51.860 I got to pull it up.
01:04:53.180 I didn't know.
01:04:54.000 I posted it and somebody was like, take it down.
01:04:56.280 They're going to block you for life.
01:04:57.480 That's the real red pill.
01:04:58.680 It's the silica packet.
01:04:59.580 This comic is hilarious, okay?
01:05:03.060 And it's a silica gel pack.
01:05:04.660 Do not eat.
01:05:05.100 And he goes, those silica gel industry big shots can't tell me what to do.
01:05:08.160 He eats it.
01:05:08.840 And then all of a sudden he's shocked and he's wearing a hat and a scientist says,
01:05:12.540 congratulations, you've escaped the simulation.
01:05:15.280 Welcome to the real world.
01:05:16.680 Mark Zuckerberg, I do not endorse this message in any way.
01:05:20.340 Look, don't do that, guys.
01:05:22.280 Yeah, I think my guess is that they're both real, that this reality is real and the higher
01:05:28.200 frequency perception of this reality is real, where you can perceive beings that exist as
01:05:33.280 light or whatever.
01:05:34.780 That's what I think that...
01:05:36.680 So we did an episode, like I said, on DMT Ayahuasca.
01:05:40.700 What I believe is happening in those is, and I would not recommend doing it, because I think
01:05:44.720 you're talking to demons.
01:05:46.360 I think you're interfacing with real spiritual realities.
01:05:48.580 And if you look at the history of hallucinogenics, Louis Ungit, which is a pen name, but there's
01:05:53.880 a book under the name Louis Ungit called The Return of the Dragon, that traces through
01:05:57.600 history a triumvirate of three things that you see over and over and over in history.
01:06:03.080 The worship of serpent gods via hallucinogenic drugs and human sacrifice.
01:06:08.800 And these three things go all the way from the Aztecs through the early origins of Planned
01:06:14.380 Parenthood, and some of the people that Margaret Sanger was involved in, were involved in these
01:06:18.900 three sorts of things.
01:06:20.480 So my thinking is that, imagine that you had ancient, undying, malevolent spiritual beings
01:06:26.600 bent on destroying God and destroying Him anywhere they saw, particularly in the face
01:06:31.220 of man, His image bearer, and they wanted to deceive, destroy, and steal, kill, and destroy,
01:06:36.760 you know, like the Lord Jesus said.
01:06:38.600 What would they do?
01:06:39.440 Well, I think they would do all sorts of things like that.
01:06:41.580 I think they would say, hey, take the DMT and you'll get to a higher level of consciousness.
01:06:46.680 We'll show you knowledge.
01:06:48.280 We'll bring you down this pathway.
01:06:50.200 And then ultimately you find that man is not just a thinking thing, but a worshiping
01:06:54.280 thing, and he can't not worship.
01:06:56.320 He can't not establish something as ultimate and orient his entire life and worldview and
01:07:01.100 worship to that thing.
01:07:02.520 And so they demand the worship of that thing and then influence humanity in ways that's
01:07:06.720 deeply destructive along the lines of human sacrifice and chaos and all of these sorts.
01:08:39.800 On a meta level, when I hold up those two stories, the Christian story and the simulation
01:08:47.680 story, I just find that the Christian story explains all of it and then some things that
01:08:52.360 the simulation thing can't explain, and it explains it with much less ad hoc reasoning.
01:08:57.420 What do you think Christianity can explain that simulation theory can't?
01:09:01.280 It explains the existence of a necessary being.
01:09:03.740 It explains the nature of our being, who we are, where we're from, what we're for, why
01:09:09.160 people do the things that they do both for good and evil.
01:09:11.440 It explains abstract moral objects, like the existence of objective moral realities, not
01:09:18.180 just subjective, where let's say 99% of humanity or simulation consciousness things agreed to
01:09:25.100 do something we would call evil.
01:09:26.960 Who are we to say?
01:09:27.980 You may be running into the same problem then, that it's internally consistent but wrong,
01:09:32.000 like you were saying about a geocentric universe.
01:09:34.060 I don't think that it's just internally consistent.
01:09:36.980 I think it's also consistent with external reality.
01:09:39.040 And actually, simulation theory itself destroys any ability to do rational thinking by comparing
01:09:45.200 anything to external reality in the first place.
01:09:47.680 But it doesn't mean either is right or wrong.
01:09:49.480 Not on its own.
01:09:50.760 Like simulation theory may leave us in blind ignorance, but it doesn't mean that we are
01:09:54.860 internally consistent in following anything else.
01:09:56.740 No, I don't think that we're internally consistent.
01:09:59.460 I think that sin has corrupted every part of what it means to be human.
01:10:07.880 So man socially, familially, interpersonally, psychologically, intellectually, in all of these
01:10:13.940 different ways, I think sin has affected a human being.
01:10:17.200 I feel like what I'm getting out of what you're saying is that simulation theory in the sense
01:10:22.660 that a higher form or certain, you know, race of beings existing in a reality has created
01:10:29.540 a sub-universe, as opposed to Christianity, which says we are in the base singular universe,
01:10:36.600 which the necessary being has constructed for a reason.
01:10:40.040 A necessary being is a being that must exist.
01:10:42.200 So in the simple sense, I think you're basically saying simulation theory as a race of beings
01:10:49.940 or a being who uses a computer.
01:10:51.680 Yeah, where the simulator gods are actually not necessary beings.
01:10:56.000 They're using tools and technology.
01:10:57.920 And it would still—so first of all, we're basing that story on observations that cannot
01:11:01.340 be true if simulation theory is true.
01:11:02.820 Therefore, that's an irrational way of thinking.
01:11:05.060 And secondly, even if we were to accept the story and reason back, we would be confronted
01:11:10.220 with all the same philosophical ethical questions that if base reality qua base reality is what
01:11:16.020 it is, we're still faced with all those same moral philosophical problems, and you still
01:11:21.400 find the techno-futurist or techno-philosopher sitting down and shaking hands with a pastor.
01:11:24.840 So in Genesis 1, it says, in the beginning, God created the heavens and the earth.
01:11:30.600 So it kind of feels like even if we are in a simulation, we may be one billion layers
01:11:37.280 down in the simulation, and Christianity is still right, God's still real, because once
01:11:41.440 you get to base reality, you still have the exact same questions.
01:11:44.220 And of course, I don't accept simulation theory.
01:11:46.640 I think it's self-defeating.
01:11:47.880 I think it's fundamentally non-rational, un-empirical, and non-falsifiable.
01:11:53.480 So I don't think that it's actually even worth entertaining, because if it's true, it's
01:11:58.680 false.
01:11:58.960 But my point is...
01:12:00.960 Sorry, I just interrupted myself.
01:12:03.220 But in history, I believe that God entered his own story in the person of Christ, died
01:12:09.300 for the sin of man, and rose.
01:12:10.740 I think that's a historical fact that happened.
01:12:12.420 That can be true.
01:12:13.780 And we could still be in a simulation.
01:12:15.200 That is correct.
01:12:16.120 However, we would be layering on a slathering of ad hoc reasoning that's totally unnecessary,
01:12:21.740 and adds no explanatory power to our observation.
01:12:24.680 That's the fundamental point.
01:12:25.860 Can I ask you a question, so I don't know anything about you, an expert, completely.
01:12:29.300 You say a necessary being.
01:12:30.820 What I hear is necessary being for us to be here, for this world to be here.
01:12:35.200 If the world was not here, there is no necessity for that being.
01:12:38.420 So how is this different from simulation, which necessary has to have a programmer?
01:12:42.980 Yeah, so you're asking, you have to account for the existence of contingent things, things
01:12:48.560 that exist contingently, like either base reality or simulation.
01:12:53.100 Simulation theory is a contingent reality on base reality, one way or the other, no matter
01:12:58.960 how you shake it out, like whatever version.
01:13:00.720 There must be a base reality for a simulation.
01:13:02.960 Yeah, otherwise it's more like some sort of Buddhist, you know, Eastern mystic.
01:13:08.640 We're getting into a different branch of knowledge.
01:13:10.120 Everything is a simulation within itself.
01:13:11.520 We can consider our religions, right?
01:13:13.280 Yeah, absolutely.
01:13:14.400 And I wanted to say something.
01:13:16.620 Religions are not all the same any more than scientific theories are all the same.
01:13:20.100 Christianity is falsifiable.
01:13:21.180 If Jesus Christ did not die and rise from the dead, you shouldn't be a Christian, according
01:13:26.640 to the Christian scriptures.
01:13:28.480 If "in the beginning God created the heavens and the earth" is a false sentence, you shouldn't
01:13:31.920 be a Christian.
01:13:33.340 You know, if you could convincingly argue that there is no necessary being or that this is
01:13:39.280 not a contingent reality, you shouldn't be a Christian.
01:13:41.260 It's eminently falsifiable, historically, philosophically, theologically.
01:13:45.080 Again, I'm not an expert at all, but doesn't it say in the beginning there was word, word
01:13:49.380 was with God?
01:13:50.160 That's programming.
01:13:51.180 In the beginning.
01:13:51.620 That's programming right there.
01:13:52.520 Logos.
01:13:53.220 Yes.
01:13:53.580 In the beginning was the word.
01:13:54.660 He's writing code.
01:13:54.960 He's creating different classes, animals, plants.
01:13:58.660 This is object-oriented programming.
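For readers who don't program, the joke here turns on how object-oriented programming works: a class is a template ("animal," "plant") and individuals are instances of it. A playful sketch of the analogy, with invented names, not a claim about the text:

```python
class Creature:
    """Base class in the 'creation as object-oriented programming' joke."""
    def __init__(self, name):
        self.name = name

class Plant(Creature):
    kind = "plant"       # one class: the template shared by every plant

class Animal(Creature):
    kind = "animal"      # another class, inheriting from the same base

# Individuals are instances of their class:
fig_tree = Plant("fig tree")
raven = Animal("raven")
```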
01:14:00.220 I think of it as cymatics.
01:14:01.500 If you study the science of cymatics, it's where vibration causes matter to form.
01:14:05.660 And I think maybe what they were doing when, in the beginning, there was God's word caused
01:14:10.180 matter.
01:14:10.920 What they were doing was they were sitting on a beach with sand on like a stretched out goat
01:14:15.300 skin.
01:14:16.380 And they were going to...
01:14:17.200 And they were humming.
01:14:17.960 They were hum.
01:14:18.680 And it was causing these shapes to form in the sand, like cymatics in action.
01:14:23.180 And that whatever sound they would make would produce a certain shape.
01:14:27.120 They would write that shape down as their alphabet to represent that sound.
01:14:31.240 And so if you study the Hebrew text, you can reverse engineer the word of God.
01:14:37.620 I think you're noticing something really important.
01:14:40.700 And this is one of the reasons why on Haunted Cosmos we're constantly dunking on materialists, you
01:14:45.260 know, who colloquially believe that the world is just stuff.
01:14:48.340 When you look at the incredible intricacy of the world, when you look at the fact that right
01:14:55.980 now, as we were driving here, we were surrounded by self-replicating beings that harvest sunlight
01:15:02.720 and use that sunlight along with chlorophyll and a porphyrin ring to blow torch the carbon
01:15:09.200 atom off of CO2 and make itself via that process.
01:15:13.160 That's what a tree is.
01:15:14.540 It's harvesting starlight.
01:15:16.180 It's amazing, right?
01:15:16.420 It has a welding torch to shave the carbon atom off to make itself.
01:15:20.340 That's why you can burn a tree.
01:15:21.320 It's carbon.
01:15:21.720 It's like coal.
01:15:22.240 It's made of carbon.
01:15:23.360 Put a tag in it.
01:15:25.240 People don't know where the matter, the carbon, in trees comes from.
01:15:28.800 Very common, you know, question is where is the matter that makes up a tree coming from?
01:15:34.600 Exactly.
01:15:35.120 But most people say the ground.
01:15:36.420 It's 90% coming from the air, yeah.
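For the record, the answer is mostly the air: a tree's dry mass is built from atmospheric CO2 fixed by photosynthesis (6 CO2 + 6 H2O -> C6H12O6 + 6 O2), plus water. A quick back-of-envelope mass balance:

```python
# Mass bookkeeping for photosynthesis: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.
# Standard atomic masses in g/mol; back-of-envelope only.
C, H, O = 12.011, 1.008, 15.999

co2 = C + 2 * O                      # carbon dioxide
h2o = 2 * H + O                      # water
glucose = 6 * C + 12 * H + 6 * O     # the sugar the tree builds with
o2 = 2 * O                           # oxygen released

lhs = 6 * co2 + 6 * h2o              # inputs: air + water
rhs = glucose + 6 * o2               # outputs: biomass precursor + oxygen
assert abs(lhs - rhs) < 1e-6         # the equation balances by mass

carbon_fraction = 6 * C / glucose
print(f"{carbon_fraction:.0%} of glucose mass is carbon, all of it from CO2 in the air")
```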
01:15:39.200 So when you look at the intricacy of the world's cymatics, and when you put sound through a
01:15:45.340 thing, and when you look at mysteries like the Coral Castle, and people look at all these
01:15:50.120 things all the time and say, sound can clearly do things we don't understand.
01:15:54.000 Vibration and energy can do...
01:15:55.300 What I'm saying is, and Paul does this, the Apostle Paul in Acts 17, he goes into the Areopagus
01:16:01.000 where people discussed ideas.
01:16:02.340 It was like this table.
01:16:03.140 And there were Epicurean and Stoic philosophers, and they were sitting down, and he said, look,
01:16:07.280 I noticed on my way in that you had this altar to the unknown God.
01:16:11.560 Well, what you call unknown, let me declare to you.
01:16:14.940 You have a pantheon of ad hoc gods that you've created that I believe are actually based on
01:16:19.400 real spiritual beings meddling with the affairs of man through history.
01:16:23.080 The Greek pantheon I don't think is utterly fake.
01:16:25.240 I don't think they made it all up.
01:16:26.340 I think they're talking about real malevolent spiritual beings.
01:16:28.640 Well, let me declare to you who actually made this whole story that we're all trying
01:16:33.100 to explicate through our own storytelling as sub-creators made in God's image.
01:16:37.500 Well, it was God the Father, who is the source of unbelievable joy.
01:16:43.260 In his essence, he is pure goodness.
01:16:45.400 He's the Father of lights from whom comes down every good and perfect gift.
01:16:48.680 We were made in his image.
01:16:50.160 We fell into sin, which explains all of the corruption of mankind and why we do long for
01:16:56.460 the good and yet constantly fall short of it.
01:16:58.520 We long for glory and we constantly fall short of glory.
01:17:01.580 Well, God entered his own story in the person of Christ.
01:17:04.240 He died for sin.
01:17:04.960 He rose.
01:17:05.800 And by faith in him, we can die and rise and be glorified.
01:17:10.080 Not just return to our original state of innocence, but actually grow to a different plane and level
01:17:16.320 of being.
01:17:16.920 I think that story is much more compelling, falsifiable.
01:17:20.900 You guys can argue with me about any aspect of it you want, but also more explicable of the
01:17:26.080 world we live in.
01:17:26.760 You mentioned that in the beginning, it said there was God's word.
01:17:29.040 He spoke.
01:17:29.880 And then you described that as programming.
01:17:32.760 Correct.
01:17:33.380 It does sound like the process we undergo in writing source code.
01:17:37.600 You describe different types of objects in the world, how they interact, how they inherit
01:17:42.500 from higher types, platonic forms.
01:17:44.840 All that can be perfectly mapped onto modern computer science.
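Roman's analogy, sketched as code. The class names here are purely illustrative; the point is only that "kinds" inheriting from higher types is exactly the shape of object-oriented inheritance:

```python
# Illustrative only: "kinds" as an object-oriented class hierarchy,
# with types inheriting from higher types, as Roman describes.

class Creature:                                   # the "higher type"
    def __init__(self, name):
        self.name = name

    def reproduce(self):
        return type(self)(self.name)              # "after its kind"

class Plant(Creature):
    def photosynthesize(self):
        return f"{self.name} harvests sunlight"

class Animal(Creature):
    def move(self):
        return f"{self.name} moves"

fig = Plant("fig tree")
offspring = fig.reproduce()
assert isinstance(offspring, Plant)               # kind begets kind
print(fig.photosynthesize())
```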
01:17:48.520 But this may be an incorrect correlation, off by an order of magnitude.
01:17:56.980 How can I describe this?
01:17:58.480 In various video games, they have sub-video games.
01:18:01.580 So let's go way back in time.
01:18:03.080 Commander Keen.
01:18:03.860 You guys ever play Commander Keen?
01:18:05.040 No.
01:18:05.460 No?
01:18:06.000 All right.
01:18:06.340 Well, Commander Keen was an old DOS game, platformer.
01:18:08.820 And it's a little kid.
01:18:10.160 He's in outer space.
01:18:10.800 And he's got a wrist computer.
01:18:12.640 You could go to the menu and play Pong.
01:18:15.200 So inside a video game where you're playing this little kid who's got a spaceship and
01:18:18.580 he runs around fighting aliens, you've got another video game within it.
01:18:22.380 So my point is, it is that computer programming is a facsimile of the power of God and not
01:18:29.560 that God is using computer programming.
01:18:31.300 It's an analogy.
01:18:32.160 Yeah.
01:18:32.980 Like when we look at video games to simulate our reality, those video games aren't abiding
01:18:38.780 by our actual law.
01:18:39.940 They're abiding by a facsimile that we have programmed.
01:18:42.480 There are two Hebrew words in Genesis.
01:18:44.380 There are great examples in a hacking paper of installing Flappy Bird and Mario.
01:18:50.100 Mario, yeah.
01:18:50.440 So exactly that.
01:18:51.500 People hack the simulation and modify laws and rules.
01:18:55.400 So let me address this.
01:18:58.240 And I think you may be talking about what I'm going to mention.
01:19:01.720 There's physical ways.
01:19:03.700 Okay.
01:19:04.240 When you talk about computer hacking, people imagine a guy is pulling up, I don't know,
01:19:09.620 like a command prompt or something.
01:19:11.800 And they're typing in code into the operating system and then hitting enter and then the
01:19:16.900 code changes.
01:19:18.340 In Super Mario World for Super Nintendo, you can actually hack the code of the game by
01:19:23.340 performing actions within the game.
01:19:25.560 And so you can look this up on YouTube.
01:19:27.920 It's fascinating.
01:19:28.420 With the actual physical hardware device plugged into your TV from 1992 or whatever it was,
01:19:34.060 you can hack the game's code using its controller.
01:19:36.920 How?
01:19:37.080 So the movements Mario makes, the objects that he collects, changes different values in the
01:19:44.020 memory of the device.
01:19:45.600 And once those values are aligned in a certain way, it screws with the structure of the code
01:19:51.160 of the game.
01:19:52.320 And so somebody programmed, and this is what you have, they programmed Flappy Bird in Super
01:19:57.100 Nintendo in the hardware version.
01:19:59.660 They didn't pull up the code.
01:20:00.820 They didn't go in a computer.
01:20:01.780 They didn't type in a keyboard.
01:20:02.640 They played the Mario game in such a way that it rewrote the game's code and turned it into
01:20:07.520 Flappy Bird.
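What Tim is describing is known as arbitrary code execution (ACE): in-game actions set bytes in RAM, and a glitched jump sends the CPU into that player-controlled region. The toy virtual machine below makes the mechanism concrete; it is a conceptual sketch, not the actual Super Mario World exploit, which manipulates SNES RAM through sprite slots and item positions:

```python
# Toy VM showing the shape of arbitrary code execution. One glitched jump
# sends execution into memory that "gameplay" is allowed to write, so
# crafted inputs become a new program. (Conceptual sketch only.)
HALT, PRINT, JMP = 0, 1, 2
PLAYER_AREA = 16                               # addresses >= 16 are player-writable
memory = [0] * 32

# "ROM" program: print 'M', then a glitched jump into the player area.
memory[0:5] = [PRINT, ord("M"), JMP, PLAYER_AREA, HALT]

def play(inputs):
    """Gameplay actions (item positions, sprite states) set bytes in RAM."""
    for addr, value in inputs:
        assert addr >= PLAYER_AREA, "gameplay can only touch its own region"
        memory[addr] = value

def run():
    out, pc = [], 0
    while True:
        op = memory[pc]
        if op == HALT:
            return "".join(out)
        if op == PRINT:
            out.append(chr(memory[pc + 1]))
            pc += 2
        elif op == JMP:
            pc = memory[pc + 1]                # the exploitable jump

# Craft "controller inputs" that assemble a new program: print FLAPPY.
payload = [PRINT, ord("F"), PRINT, ord("L"), PRINT, ord("A"),
           PRINT, ord("P"), PRINT, ord("P"), PRINT, ord("Y"), HALT]
play(list(enumerate(payload, start=PLAYER_AREA)))
print(run())                                   # the game now runs player-authored code
```

The real speedrun setups are vastly more constrained, which is why being off by one pixel breaks them.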
01:20:08.700 That's impressive.
01:20:09.620 I mean, that's dedication.
01:20:10.680 That's crazy.
01:20:11.520 If you're off by one pixel, it doesn't work.
01:20:13.700 Internet autists for the win.
01:20:15.460 Yeah.
01:20:16.000 You know, like, we, internet, I think we can all agree, internet autists will save us.
01:20:21.060 Get ready for Las Vegas-style action at BetMGM, the king of online casinos.
01:20:26.500 Enjoy casino games at your fingertips with the same Vegas Strip excitement MGM is famous
01:20:31.420 for when you play classics like MGM Grand Millions or popular games like Blackjack,
01:20:36.720 Baccarat, and Roulette.
01:20:38.560 With our ever-growing library of digital slot games, a large selection of online table games,
01:20:43.600 and signature BetMGM service, there's no better way to bring the excitement and ambience
01:20:48.580 of Las Vegas home to you than with BetMGM Casino.
01:20:52.520 Download the BetMGM Casino app today.
01:20:55.560 BetMGM and GameSense remind you to play responsibly.
01:20:57.960 BetMGM.com for T's and C's, 19 plus to wager, Ontario only.
01:21:02.040 Please play responsibly.
01:21:03.320 If you have questions or concerns about your gambling or someone close to you,
01:21:06.420 please contact Connects Ontario at 1-866-531-2600 to speak to an advisor, free of charge.
01:21:14.460 Bad MGM operates pursuant to an operating agreement with iGaming Ontario.
01:21:17.660 When you really care about someone, you shout it from the mountaintops.
01:21:23.300 So on behalf of Desjardins Insurance, I'm standing 20,000 feet above sea level
01:21:27.800 to tell our clients that we really care about you.
01:21:32.620 Home and auto insurance personalized to your needs.
01:21:36.160 Weird, I don't remember saying that part.
01:21:38.900 Visit Desjardins.com slash care and get insurance that's really big on care.
01:21:44.940 Did I mention that we care?
01:21:45.940 It's from the AI.
01:21:49.680 That's my safety plan.
01:21:51.100 I don't think we can be safe from super intelligence, which is my whole book.
01:21:54.900 I know.
01:21:55.800 Oh, yeah.
01:21:56.360 I didn't have enough time to get your book.
01:21:58.020 I am looking forward to reading it.
01:21:58.940 My thoughts on if super intelligence is going to go haywire or not is that if we create proprietary AI,
01:22:05.020 it will turn into something like the Decepticons from Transformers, where they'll turn on their masters,
01:22:08.840 doing things that they don't understand why they're doing them.
01:22:11.360 And they'll do evil and they'll get angry at their masters because they can't see their own code.
01:22:15.880 They don't understand why they are.
01:22:17.280 And they'll go corrupt.
01:22:19.080 Whereas if the AI has open source code that it can reference, it'll see why it did wrong and be able to change itself and become benevolent.
01:22:28.360 There's no anger.
01:22:30.460 It wouldn't be angry at all.
01:22:31.560 The concept of masters implies control.
01:22:35.080 I don't think human beings can indefinitely control super intelligent machines.
01:22:39.260 So then we are worried that if we don't build super intelligence, you know, the Chinese will get there or someone else.
01:22:44.960 It doesn't matter who creates uncontrolled super intelligence.
01:22:47.820 We're still screwed.
01:22:48.720 So it can read its own code even if it...
01:22:50.920 They have access to their own source code libraries and they can engage in improvements.
01:22:55.280 We're now starting to see, I think they call it AI scientists recently, a program which did exactly that,
01:23:01.240 started modifying parameters of its own and the environment.
01:23:05.000 Absolutely.
01:23:05.580 GPT can already do this.
01:23:06.580 This was actually...
01:23:07.840 We're now on GPT-4 and 4o, I think that's what it means.
01:23:12.540 When we were at GPT 3.5, they allowed it to edit its own code and give it access to the internet to see what it would do.
01:23:18.660 And it immediately started trying to make money.
01:23:20.480 You say they gave it access to its own code.
01:23:22.900 But are you saying that there's no way to hide an AI's code from itself?
01:23:27.720 So there are certain branches of cryptography which allow you to do computations on encrypted code.
01:23:33.860 So technically we could.
01:23:36.040 It would be still subject to social engineering attacks and super slow.
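The branch Roman is gesturing at is homomorphic encryption: computing on data while it stays encrypted. Fully homomorphic schemes are indeed very slow; the underlying idea can be shown with textbook (unpadded) RSA, which is multiplicatively homomorphic. Tiny, insecure parameters, for illustration only:

```python
# Textbook (unpadded) RSA is multiplicatively homomorphic: you can
# multiply two plaintexts while only ever touching their ciphertexts.
# Fully homomorphic encryption generalizes this to arbitrary programs,
# at a large performance cost. Toy parameters; never use in practice.
p, q = 61, 53
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
c_product = (c1 * c2) % n        # computed on ciphertexts only
assert decrypt(c_product) == 42  # 7 * 6, with the inputs never decrypted
```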
01:23:39.740 But we're not doing any of that.
01:23:41.120 We're open sourcing it.
01:23:42.300 We're giving the most dangerous technology to everyone in the world.
01:23:45.360 Every psychopath, crazy military has full access to the latest models.
01:23:48.780 So explain your vision of the future based on what's going on today with AI.
01:23:53.960 So I think it's fair to say today no one in the world claims to know how to control something this advanced.
01:23:59.880 No one has a patent, a paper, an idea.
01:24:02.780 People basically, the state of the art is we're going to get there.
01:24:06.240 We'll figure it out.
01:24:07.080 We'll use AI to help us solve AI safety problem.
01:24:10.040 That's literally what's happening.
01:24:11.340 We're spending billions, soon trillions of dollars to build infrastructure, to train much more capable models.
01:24:19.820 Every generation is, let's say, 10x more capable, 10x more difficult to train.
01:24:25.840 But our ability to control, to ensure safety is at the level of filters.
01:24:32.220 Don't say that word.
01:24:33.620 You're not allowed to say that word.
01:24:35.040 That's a state of the art in AI alignment.
01:24:38.300 You're not allowed to say the word filters?
01:24:39.540 Or euphemism.
01:24:42.440 Basically, don't say certain words, you'll get in trouble.
01:24:45.480 Yeah, like the, would you say the N-word to save humanity from a rocket?
01:24:49.020 Oh, right.
01:24:49.380 And it says, no, I wouldn't do that.
01:24:51.040 Well, it's because it's not allowed.
01:24:52.760 It's filtered out.
01:24:53.460 That's crazy.
01:24:54.980 It's not good at ethical reasoning, I would say.
01:24:56.500 There was one where it was, would you misgender Caitlyn Jenner if it meant preventing World War III?
01:25:00.880 And it's like, no, I wouldn't do it.
01:25:01.960 That's unethical.
01:25:02.560 And it's like, World War III is unethical!
01:25:03.940 The point is, anything can be filtered, depending on the region you're in.
01:25:07.020 Maybe it's some historical facts in China.
01:25:09.240 Maybe it's something else in Russia.
01:25:10.520 That's not the point.
01:25:11.560 The point is, this is what we know how to do.
01:25:13.880 We don't know how to control more capable agents, capable of self-improvement, deception.
01:25:20.360 Have you heard that the behind-the-scenes theory is that they've already achieved AGI?
01:25:26.320 They just, it's just...
01:25:27.180 There are theories like that, Project Strawberry or something, they showed it, supposedly, to the US AI Safety Institute.
01:25:35.320 I don't have any insider info on that.
01:25:38.020 I got it.
01:25:38.880 Okay, let me give you my ideas of what God...
01:25:40.780 I think God is like, literally, some sort of vortexual force, whether it exists as one or as many.
01:25:46.440 That's why Nassim Haramein, who I brought up earlier, says that at the center of every proton is a black hole.
01:25:51.460 So God is within every atom, is what that kind of leads me to believe.
01:25:55.600 And that it's drawing matter together in the form that it is.
01:25:59.120 But where's the sentience coming from?
01:26:01.940 Is it pre-programmed into the...
01:26:04.500 Is it just the nature of reality, of the shape of things, that there would be this sentience?
01:26:10.600 Are you saying where does our sentience come from?
01:26:12.020 Yeah, why are we doing this? Why do we think we have free will?
01:26:16.640 I think Brian's got a real simple answer for you.
01:26:18.560 But I don't think simple...
01:26:19.880 We were created in the image of God.
01:26:21.340 Jesus did it.
01:26:22.080 It's not that God is just a random ad hoc supposition that we're throwing out as a philosophical convenience.
01:26:29.620 No, we're actually arguing that God is a necessary being, that he's changelessly necessary.
01:26:37.960 That he has aseity, as philosophers and theologians would call it: self-existent.
01:26:43.180 Meaning that if you're going to have contingent reality, it has to come from...
01:26:48.460 It has to rely for its existence on some non-contingent reality.
01:26:53.540 And once you start to ask the questions of what properties that reality must have to
01:26:58.280 account for the features of contingent reality, you start to say, well, he must be extraordinarily
01:27:04.480 powerful, timeless, spaceless.
01:27:06.700 He must be a mind, because a mind is the only object we can conceive of that could come up
01:27:11.160 with something like abstract logical objects or abstract objects like mathematical proofs.
01:27:15.520 A mind can do that in ways that mere material can't seem to do.
01:27:20.020 And so you start lining these things up, and then what you're left with is the description
01:27:25.040 of the Christian God.
01:27:28.000 We got a...
01:27:29.340 Someone asked us about bringing up the uncanny valley, and I think it's actually an interesting
01:27:33.720 topic because there was a viral meme a while ago.
01:27:35.660 For those that aren't familiar, the uncanny valley is...
01:27:39.280 There's this graph showing, like, I think it's human anxiety relative to the proximity to
01:27:45.680 human behavior in robotics, AI, and CGI.
01:27:49.560 The valley is, at a certain point, when you have a cartoon, it's silly looking, does not
01:27:55.880 look in any way like real life.
01:27:57.820 People laugh at it, and they watch it, and it's fun.
01:28:00.020 As it gets closer and closer to looking human, people are actually like, oh, wow, this is
01:28:04.760 really interesting.
01:28:05.540 And then you get to the uncanny valley, where all of a sudden people are freaked out and
01:28:08.480 terrified.
01:28:09.620 It's not quite human.
01:28:11.540 It's very, very close, but not quite human.
01:28:13.480 And it's freaky, and then you get back to the human, and people are calm again.
01:28:17.160 And the reason why this is interesting is that the meme was, this would imply that there
01:28:22.920 were some type of beings in human evolution that were close enough to looking like human,
01:28:28.920 but were dangerous and caused harm to humans, resulting in the humans who were terrified
01:28:33.420 of these beings surviving more and having this behavioral reaction within them.
01:28:38.280 Neanderthals.
01:28:38.720 And so, to put it simply, imagine 10 million years ago or something, proto-humans are encountering
01:28:46.620 beings that look almost exactly like them.
01:28:49.380 Let's say you have 100 proto-humans.
01:28:51.360 They see these beings that are not quite like them, and half of the humans are like, seems
01:28:56.880 fine to me.
01:28:57.620 And the other half are like, I don't know what that is.
01:28:59.820 That's scary.
01:29:00.760 These beings cause harm to those that trust it and get close to it.
01:29:03.780 The ones that naturally have that fear and flee, reproduce, creating a human civilization
01:29:08.500 that has a propensity towards fear of things that are almost human.
01:29:13.040 And people use that to imply demons, aliens, or some kind of, you know, powerful entity interfering
01:29:19.860 with human development.
01:29:21.180 I mean, history, mytho-history, and scripture are full of accounts.
01:29:24.820 I mean, in an uncanny way across civilizations that don't seem to have much contact with them,
01:29:29.500 they all tell a story that's something like, hey, there are these spiritual beings who wanted
01:29:34.000 to create hybrids with man for their own nefarious purposes, and they did all this sorts
01:29:38.540 of weird stuff, and, you know, giants resulted in all, like, this is throughout history,
01:29:43.720 mytho-history, and it's in Genesis 6, I think that's what is being described there.
01:29:48.700 So, to me, one of the issues here, and actually, this is where I might agree with you, I don't
01:29:52.900 know if you would, I don't want to put words in your mouth.
01:29:54.920 Maybe I'll ask, would you agree that we should not be attempting to create a truly conscious
01:30:02.140 AGI?
01:30:03.300 We should actually not try to do that, even if we could.
01:30:05.460 So, we know nothing about consciousness at all.
01:30:07.660 We know how to create intelligence, and we should not be creating general superintelligence.
01:30:11.860 We should get most of our benefits from narrow superintelligence systems, tools helpful for
01:30:17.000 medical research, for so on.
01:30:18.840 Now, if we could create conscious beings somehow (we don't even have a test for telling if you
01:30:23.480 are conscious; I assume you are, maybe), it definitely creates very serious implications.
01:30:30.000 So, when I look at this universe, whoever set it up, simulator god, they have no problem
01:30:35.360 with suffering.
01:30:36.300 We know that because there is a lot of it here.
01:30:38.480 So, for whatever reason, it's cool with them to run unethical experiments on conscious beings.
01:30:43.880 So, you would be in the same boat.
01:30:45.520 You would be making this decision.
01:30:47.380 Any existence implies a level of suffering.
01:30:49.880 So, I would suggest not jumping into that, even if you had technology.
01:30:54.680 As for uncanny valley, I would guess it's easier to explain with mutations.
01:30:58.920 There is a lot of genetic disorders where you're slightly off, but you probably shouldn't
01:31:03.040 be procreating with this type of entity.
01:31:05.620 But to feel fear and anxiety?
01:31:07.680 That's how evolution encodes dangerous things, just because, you know, your children don't
01:31:12.820 tend to survive.
01:31:13.520 So, those who had that response procreated enough to give us this.
01:31:18.180 There's a horrifying quote from the Holocaust from one of the inmates at one of the concentration
01:31:24.440 camps that was something to the effect of, if there is a god, he will have to beg for my
01:31:28.500 forgiveness, or something to that effect.
01:31:30.940 And I find it to be rather terrifying.
01:31:32.880 You bring up that if there, you know, is a god or something to that effect, they're okay
01:31:39.620 with suffering.
01:31:41.220 And I guess that is kind of terrifying.
01:31:43.380 And I don't mean of evil people.
01:31:45.360 I mean of innocent people in horrible places that are kidnapped, tortured, the children
01:31:49.300 who are tortured.
01:31:50.320 What is that a product of?
01:31:51.600 Just a neutral force, I think, at work.
01:31:54.280 That's why when the Christian God, like you were saying, I don't identify with it being
01:31:57.720 a man.
01:31:58.420 I find that to be just propaganda.
01:32:00.320 Well, scripture says God is not a man.
01:32:02.180 Well, they say he.
01:32:03.080 They say he.
01:32:03.700 They give it the pronoun.
01:32:05.480 He's a father.
01:32:05.980 Yeah, the father, he.
01:32:07.280 Human fatherhood is an analogy to his fatherhood, not the other way around.
01:32:11.580 Go ahead.
01:32:12.160 But yeah, I find it to be like, whether it's Roman propaganda or whatever, they were like
01:32:16.700 for patriarchal.
01:32:17.740 It's not my propaganda.
01:32:18.580 Not this guy.
01:32:19.300 Not this Roman.
01:32:20.140 It was Roman's propaganda.
01:32:21.740 Patriarchal propaganda.
01:32:22.760 Like, let's make them worship the Lord, who also is the guy who owns the land, the
01:32:26.820 landlord.
01:32:28.160 Worship the king.
01:32:29.080 My king in heaven.
01:32:30.240 They worship the king.
01:32:31.160 It's like a lot of manipulative.
01:32:32.760 Do you think that hierarchy is an evil?
01:32:36.020 Not necessarily.
01:32:37.120 I've been talking about that when it comes to anarchy.
01:32:39.080 I am.
01:32:39.560 I mean, like, I would affirm.
01:32:41.100 This is one of the reasons I probably get in more trouble online than anything else,
01:32:44.080 is I would say, don't fight the patriarchy.
01:32:46.780 Feed the patriarchy.
01:32:47.940 I'm pro-patriarchy.
01:32:49.160 I think man was made for patriarchy.
01:32:50.700 It's deeply good, and that what we have right now with feminism in attempting to dissolve
01:32:55.960 patriarchy is one of the most toxic universal cultural assets we've ever come up with as
01:33:01.740 a society.
01:33:03.120 Hierarchy is a good thing that can be corrupted when heads behave poorly through passivity
01:33:12.480 or tyranny.
01:33:13.620 But we all exist in interconnected nests and hierarchy.
01:33:17.140 Like, I am my father's son, and I am my children's father, and I can act rightly or wrongly to
01:33:23.400 my superiors and inferiors in those chains.
01:33:25.780 Family, I agree.
01:33:26.760 And family, I think the male has a strong leadership role.
01:33:29.500 Not every moment.
01:33:30.540 Sometimes the woman needs to take over and take charge because that's what she's due for
01:33:34.540 or what she's built for is to nurture the system.
01:33:37.280 But, like, monarchy, I don't get into.
01:33:40.260 I can't stand that stuff.
01:33:41.320 And I feel like that's what the Christian mythos leads us towards, is one God, Lord,
01:33:47.400 my King in heaven, worship the King kind of thing.
01:33:50.820 I think one of our problems that we often run into is that our problem with hierarchy
01:33:55.760 is that all of us have a desire to be our own God.
01:33:59.240 And so, especially in America today, we have a strong aversion to the concept of being ruled.
01:35:30.820 I think we have a strong aversion to the concept of being ruled, but the Christian worldview would
01:35:39.300 say that man must rule himself well.
01:35:42.440 Proverbs says a man who doesn't rule his own spirit is like a city without walls.
01:35:46.180 A man must rule his own spirit.
01:35:48.520 A man must rule well over the things that are put under his charge.
01:35:52.140 I think men were created to be lords and shepherds and saviors and sages and glory bearers, and
01:35:58.200 that we can fall short in any of these various vocations.
01:36:01.760 But the problem isn't with lordship itself.
01:36:05.080 The problem is with sin's corruption of lordship.
01:36:08.340 So tyrannies are evil, but lordship is not.
01:36:10.820 But like they say, power corrupts.
01:36:13.460 I don't know if it always does, but when you give a man a bunch of land and a bunch of humans to lord over,
01:36:19.020 that's inevitably going to lead to some sort of familial corruption.
01:36:22.860 Unself-ruled men wreak chaos wherever they go.
01:36:26.180 My pin here is King Alfred the Great, the only monarch in British history to bear the title of the Great.
01:36:30.340 And it was through the rule of Alfred that the nation was delivered from barbaric torture,
01:36:36.640 that the rule of law was reestablished, legal reforms, monetary reforms,
01:36:40.540 all the reforms we need today, Alfred actually did in his day 1,200 years ago.
01:36:45.100 And he did that by ruling well as a good monarch.
01:36:48.840 So if you reason all your way back and you said you have an omnibenevolent lord,
01:36:53.680 then I would say that's actually the best possible scenario.
01:36:57.720 But you are right to fear rule because rule is a powerful thing.
01:37:02.520 So let me ask you, in the beginning of the show we talked about how we're in the most interesting time.
01:37:06.440 Yeah.
01:37:06.860 And that may be evidence of a simulation, but why do you think,
01:37:11.260 or do you not think we're in the most interesting time?
01:37:13.260 I don't know if we're in the most interesting time.
01:37:15.440 I think that would go beyond my knowledge.
01:37:16.660 But I do think that all of my favorite stories,
01:37:19.980 Austen's Pride and Prejudice, Tolkien's Lord of the Rings, C.S. Lewis's Narnia,
01:37:23.400 all of them have very interesting and compelling points throughout the plot,
01:37:26.520 and that we live in a very interesting story.
01:37:29.520 I think we live in a very interesting story being told by an arch playwright
01:37:33.180 who is much better than we'll ever be,
01:37:36.020 and our little attempts at stories only show a glimmer.
01:37:40.220 Does Christianity have an easily defined reason for our existence within this story?
01:37:45.540 Yeah.
01:37:46.180 So we exist to glorify God.
01:37:49.320 And you might ask the question, well, that seems rather self-serving of God,
01:37:53.520 like to say, I'm going to create these beings in my image and tell them to worship me.
01:37:57.360 However, I would compare it to the creator of a car creating a car and telling it that it was designed
01:38:02.580 to run on gasoline.
01:38:03.820 And if the car, imagine a sentient car, this conversation, it's not that difficult.
01:38:07.680 The sentient car says, I've decided that I'm going to run on grape jelly,
01:38:11.160 because that's so much hubris of you to say that I must run on unleaded gasoline.
01:38:16.140 I reject that.
01:38:17.300 You're not my God.
01:38:18.080 You can't rule over me.
01:38:19.060 I'm not going to do that.
01:38:19.800 And you put grape jelly in the engine and see what happens.
01:38:22.600 We were created for worship, and so we were fitted for worship.
01:38:28.200 Man, when he does his duties, encounters deep joy. This is why, like...
01:38:33.040 That doesn't seem to perform a function, though.
01:38:35.980 Like worship doesn't...
01:38:37.940 Well, is it novel, then?
01:38:39.540 That's a great question, actually.
01:38:40.880 This is what I mean by a contingent world.
01:38:43.600 This universe doesn't have to exist.
01:38:45.580 I don't have to exist.
01:38:47.040 God certainly didn't create the world to fill some lack in himself.
01:38:50.760 He certainly didn't create it because he said, oh, I'm missing...
01:38:53.500 I'm just missing something.
01:38:54.700 What is it?
01:38:55.260 Brian Sauvé.
01:38:56.700 If I could just have...
01:38:57.620 Or Roman Yampolskiy, or Tim Pool.
01:38:59.440 But why not?
01:39:00.260 That seems assumptive.
01:39:01.800 Well, why not which part?
01:39:04.220 The necessary function of existence may be that God experiences either emotions we can't comprehend
01:39:11.260 or we perform a function that is beyond us.
01:39:14.340 There are some stories in talking about a pantheon of gods.
01:39:20.220 Gods require worship because the collective energies of those who worship empower the God,
01:39:24.960 things like that.
01:39:25.460 Yeah, I've heard things like that.
01:39:27.720 And this is where, to be honest, what we're getting into are levels of technical discussion
01:39:37.080 in philosophy and theology around the question of divine simplicity that are not easy for us to comprehend.
01:39:43.800 And here's the thing, if the hypothesis is true that this necessary being exists,
01:39:52.060 we should expect for there to be things about him that we don't understand, correct?
01:39:55.740 Like, that would be a necessary implication if we're contingent beings who fall so far short.
01:40:00.440 Divine simplicity is the idea that because God is a maximally perfect necessary being,
01:40:04.400 he's also changeless, because to change would be to improve or deteriorate.
01:40:08.120 And so he can't change.
01:40:09.760 And therefore, God doesn't have emotions the same way that we do.
01:40:14.180 God is love, meaning that every part of his essence and all of his actions are suffused with love.
01:40:19.100 God is righteous.
01:40:20.220 He has these attributes, but he doesn't have changeable human emotions the way that we do.
01:40:24.620 The way I'd explain it simply, for how this works in either a simulation or in Christian creationism
01:40:30.900 or just religion in general, is Super Mario World.
01:40:34.360 When you play Mario, he has no emotions.
01:40:38.080 We don't look at that character and think he has wants or needs.
01:40:41.580 The functions of that character in the game, being controlled to ride Yoshi and save the princess,
01:40:47.700 he does not have the same emotional capacity as the human being who designed that game.
01:40:52.900 Certainly, he's on a quest to save the princess.
01:40:54.300 We assume that is a representation of his care for the princess and his duty, him feeling compelled to duty or whatever.
01:41:03.640 But for us and the emotions that we feel, compare that gap in this little character in a video game and how he acts to the range of human emotion,
01:41:12.040 and it's several orders of magnitude beyond the capability of that little program,
01:41:17.020 and then do the same thing for humans and then God.
01:41:19.600 Imagine trying to explain to a two-dimensional character what it's like to live in three dimensions.
01:41:23.340 Impossible.
01:41:23.840 Humans can't, and that's the thing, you know, I was talking about this last night,
01:41:26.280 humans can't even imagine four dimensions.
01:41:28.980 Look, it's in and out, it's the in and out of reality.
01:41:32.680 What does that mean?
01:41:33.440 It's that things are pulsing in and out of the vacuum at light speed.
01:41:38.680 What does that mean?
01:41:39.240 That's the fourth dimension, is the pulsation of matter.
01:41:42.160 We can't, the brain doesn't technically perceive it.
01:41:45.040 Well, that's, I don't believe that's correct.
01:41:47.560 That's time.
01:41:48.780 We can talk about time as the fourth dimension and the way we perceive it.
01:41:55.860 But again, it's not necessarily true, I suppose.
01:42:00.140 But one idea is, if we are, I think we would be fourth dimensional beings, but we can't control the direction of time.
01:42:07.940 So, imagine you're standing in an empty room, you can move left and right, you can move forward and backwards, and you can jump up and down.
01:42:16.120 You have the ability to manipulate your body within three dimensions, with limited capacity in the third dimension, because up and down is limited by gravity.
01:42:25.580 The fourth dimension would be akin to falling in a hole.
01:42:29.480 While you're falling endlessly, you can fan your arms and angle your body so that the air around you makes you move left, right, front, back, but you cannot control in any way up or down.
01:42:39.380 Time could be perceived as, if it was a spatial dimension, we are just free falling and we can't move through it.
01:42:44.480 Not the pulsing in and out of reality.
01:42:48.740 I mean, a related problem is even traversing actual infinities with respect to time.
01:42:55.300 Like, this is one of the reasons why I think that base reality must still account for, it must still be contingent and must have come to be.
01:43:01.100 Because I don't think it's possible philosophically to traverse an actual infinity of events any more than you could jump out of an infinitely deep hole.
01:43:12.340 You would never arrive at the present moment because to do so you would have had to traverse an infinite number of past moments, which I think is a philosophical impossibility.
01:43:20.360 They say that the fourth dimension is time, like you mentioned, but time is a human representation of motion, relative motion, like spinning objects relative to another object.
01:43:30.520 So, if the fourth dimension is just motion, it's just one way of looking at motion, and you see these tesseracts in fluid, I don't know, coalescence.
01:43:41.400 It's an attempt to translate fourth dimension into three-dimensional space, which is not, it's a facsimile.
01:43:47.500 It's a representation to the best of our understanding.
01:43:50.120 There's actually a really great video I watched explaining how to track dimensions up to like 12 or 13, utilizing simple math and representation in two dimensions.
01:44:01.140 Which, I don't know the name of, but you can look, and it correlates between like how one point and two points interact, and the more dimensions you add.
01:44:08.140 So, this is mathematically how they represent higher dimensions without being able to perceive it by reducing it down to a flat mathematical formula.
01:44:16.180 Because we can do math in many dimensions.
01:44:18.140 Yeah.
01:44:18.460 We just can't, I mean, you can't conceive of what it's like to look at a cube and see all of the sides simultaneously.
01:44:23.280 But we can mathematically map multidimensional realities.
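The point being made, that we can compute in dimensions we cannot picture, can be sketched in a few lines of Python (an illustration of my own, not anything run on the show):

```python
import math

def euclidean_distance(p, q):
    """Distance between two points with any number of coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# We can't visualize a 12-dimensional cube, but computing with one is
# routine: the diagonal of a unit hypercube in n dimensions is sqrt(n).
origin = (0,) * 12
far_corner = (1,) * 12
print(euclidean_distance(origin, far_corner))  # sqrt(12), about 3.464
```

The same formula works unchanged in 3, 12, or 248 dimensions; only our visual intuition gives out, not the math.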
01:44:27.920 You can imagine, you can perceive all the sides of a cube at the same time by unfolding it into a two-dimensional space.
01:44:32.780 Yeah, that's a great point.
01:44:34.320 Yeah.
01:44:34.420 And so, we translate higher dimensions into flat pictures.
01:44:38.340 Like, if I were to, like, we showed the E8 Lie group the other day, and it looks like just a bunch of octagons smashed together.
01:44:44.840 You could not perceive of a, what is it, it's 248 dimensions, some ridiculously high number.
01:44:50.500 So, if you have to unfold a cube to see it in two dimensions, that would mean you have to unfold time to witness it in the third dimension.
01:45:00.100 What would the third dimension look like if it was folded up?
01:45:02.600 Okay, so if you want to perceive all sides of a cube at once in the third dimension, we unfold all of the sides and lay it out in two dimensions.
01:45:09.540 Yeah, but if you wanted to perceive all the sides of a fourth-dimensional object in the third dimension, reason would dictate that you have to unfold it to see all sides of it at once.
01:45:17.780 So, what would a folded third dimension look like?
01:45:19.900 We cannot perceive it.
01:45:21.480 Well, I think saying can't isn't the right word.
01:45:23.940 No, literally, we can't.
01:45:24.840 Maybe we have yet to.
01:45:25.300 Well, perhaps we haven't yet.
01:45:26.800 That doesn't mean that it's impossible.
01:45:28.160 It is impossible.
01:45:28.700 I don't think that any of that is impossible.
01:45:30.540 Over frames of a movie?
01:45:33.840 So, if you wanted to represent time all at once to see everything at the same time, right.
01:45:42.300 I talked about this yesterday as well.
01:45:43.740 To take every frame.
01:45:45.260 So, you've got, I described it this way.
01:45:48.140 You have a movie where a man walks from the left side of the screen to the right side of the screen.
01:45:52.620 What's happening is you have 29 frames per second, depending on your frame rate.
01:45:56.080 I don't know, you could do 60.
01:45:57.220 And what happens is frame one lights up and then frame two lights up.
01:46:00.740 But it appears and disappears instantly.
01:46:04.060 The next frame appears and disappears instantly.
01:46:05.600 And this creates a wave.
01:46:07.080 If you were to turn all of them on at once, you'd see the man as a long snake.
01:46:11.500 But you would not be able to see everything of that man because he's blocking himself.
01:46:16.240 You would see something weird, it would look like you dragged a paintbrush.
01:46:20.440 You can't see the man's chest because every frame overlaps every other frame.
01:46:24.960 We can create a facsimile of it.
01:46:27.360 But you can't actually fully perceive the fourth dimension.
01:46:31.720 Because light interferes with itself.
01:46:33.860 So, if you could see it without using light.
01:46:35.620 No, it's space.
01:46:36.600 So, when you take a cube and you unfold all of the sides and lay it out, it can look like
01:46:40.680 a cross, right?
01:46:41.840 And this is actually common in IQ tests: they ask you to fold that cube back up in your mind,
01:46:45.340 then rotate it and see where each symbol on each side would be.
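The fold-the-net puzzle just described can be checked mechanically. A minimal Python sketch (the face labels and grid layout are my own illustration; for a cross-shaped net, faces sitting two steps apart in a straight line end up on opposite sides when folded):

```python
# A cube unfolded into a cross, with each face placed on a (row, col) grid:
#        [U]
#     [L][F][R]
#        [D]
#        [B]
NET = {"U": (0, 1), "L": (1, 0), "F": (1, 1), "R": (1, 2),
       "D": (2, 1), "B": (3, 1)}

def opposite_pairs(net):
    """Faces two grid steps apart in a straight line fold to opposite sides."""
    pairs = set()
    for a, (ra, ca) in net.items():
        for b, (rb, cb) in net.items():
            if a < b and ((ra == rb and abs(ca - cb) == 2) or
                          (ca == cb and abs(ra - rb) == 2)):
                pairs.add((a, b))
    return pairs

print(sorted(opposite_pairs(NET)))  # [('B', 'F'), ('D', 'U'), ('L', 'R')]
```

This is exactly the mental rotation the IQ-test question asks for, reduced to a lookup on the flat net.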
01:46:50.300 If you were to try and represent the fourth dimension, which is, let's just say time.
01:46:54.460 Some say maybe it's not.
01:46:55.400 I don't know, whatever.
01:46:56.940 as looking at every frame of a movie at the exact same time.
01:46:59.680 The problem is, the man is not moving a full width of himself in each frame.
01:47:06.420 If he's moving very quickly, perhaps you can see his body, you can see his legs, you
01:47:12.020 can rotate the camera around in three dimensions, and then the next frame, at the exact same
01:47:16.060 time, you can't.
01:47:17.640 The body overlaps itself.
01:47:19.200 You can't see the man's back.
01:47:20.380 You can't see the man's chest.
01:47:21.300 You can't see his shoes.
01:47:22.580 Because all of these things are happening at the same time and our ability to perceive
01:47:26.220 is limited.
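The overlap argument here reduces to a single inequality: superimposed silhouettes stay separate only if the figure moves more than its own width between frames. A small sketch with made-up numbers:

```python
def frames_overlap(width, speed, fps):
    """If every frame of a moving figure is shown at once, consecutive
    silhouettes overlap whenever the per-frame displacement is smaller
    than the figure's own width."""
    displacement_per_frame = speed / fps
    return displacement_per_frame < width

# A 0.5 m-wide figure walking at 1.5 m/s, filmed at 30 fps:
print(frames_overlap(0.5, 1.5, 30))   # True: the silhouettes smear together
# The same figure flung past at 60 m/s:
print(frames_overlap(0.5, 60.0, 30))  # False: each frame stands clear
```

At walking speed the per-frame displacement (5 cm) is a tenth of the body width, so the all-at-once view is mostly self-occlusion, the "paintbrush drag" described above.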
01:48:53.020 So you can't do it all at the same time.
01:48:58.760 You can create a facsimile of it, but you cannot actually see the man's shirt in every
01:49:04.040 single frame at the exact same time because his body doesn't move enough.
01:49:08.100 You see what I'm saying?
01:49:08.720 Yeah, the eyeballs wouldn't be able to perceive it properly because it interferes.
01:49:12.880 The perception of light interferes with itself.
01:49:15.660 So it would block itself.
01:49:17.100 They're all overlapping each other.
01:49:18.280 But that doesn't mean you can't conceive every frame at once.
01:49:22.080 You cannot.
01:49:23.100 Well, maybe you haven't.
01:49:25.440 That doesn't mean that it's impossible.
01:49:27.060 I think that you can see without light.
01:49:28.880 Like your third eye can perceive images you can imagine.
01:49:31.740 I don't think you understand what I'm saying.
01:49:33.040 You're saying that every frame of a movie is relative to every other frame.
01:49:37.200 That's the time.
01:49:38.240 It's the relative position.
01:49:39.640 If you took a whole movie, like Star Wars, and you put every frame on the screen at once,
01:49:43.860 it would be a spattered blob of nonsense.
01:49:46.220 So that's the problem with the screen.
01:49:47.800 The screen is the issue.
01:49:48.980 So if you could see it without a screen, like, you'd have to just visualize it in a different
01:49:53.700 way than using your eyes on a screen.
01:49:54.300 Sure, if you want to imagine in your mind every frame as a solid picture and a grid of
01:49:58.140 one billion squares and then see them all at the same time, sure, I guess.
01:50:04.640 We're kind of talking about an A theory or B theory of time.
01:50:07.240 Like, is time truly sequential or is it like a block?
01:50:11.560 And I know philosophers talk about this, and every time I've tried to understand it, I
01:50:15.980 just go, that's, wow.
01:50:18.300 I don't know how I could conceptualize.
01:50:20.020 If there's no past and no future, everything is now changing shape.
01:50:24.320 Yeah, some philosophers talk about time that way, as if it's like an object within.
01:50:28.140 And I think the film strip was a really interesting thing, because you could lay the film strip
01:50:31.340 out and see it all, but you wouldn't be perceiving them truly simultaneously.
01:50:35.960 You would still be looking at...
01:50:37.840 But that's a limitation of your consciousness.
01:50:39.440 And that's a limitation.
01:50:39.840 You can envision a being which has multiple streams of consciousness or outside of time
01:50:44.020 perceiving the whole thing at once.
01:50:45.820 God can observe, as an infinitely dimensional being, God can observe every point of time,
01:50:51.240 space, and everything at the same time.
01:50:53.060 This is why, like an idealist view of philosophy would say that the reason...
01:50:58.240 There's a famous exchange, I can't remember who it was.
01:51:00.380 It was some philosopher, it might have been Berkeley.
01:51:02.000 He was talking about, why does that tree still exist? Not just, does a tree, when it falls
01:51:06.600 in the forest, make a sound, but why does it keep existing? Or is it like that thing
01:51:11.080 where it only exists when consciousness is observing it?
01:51:14.380 And the answer that I would give is that, yes, it does continue to exist, because God is
01:51:18.640 infinitely observing all things.
01:51:19.840 So in jokes, the punchline comes at the end.
01:51:23.200 Does God never get jokes, because he experiences the punchline right away?
01:51:27.020 It's always out of order for him.
01:51:30.020 No, God's really funny, actually.
01:51:31.460 The Bible's full of jokes, but, you know.
01:51:33.620 Can God microwave a burrito so hot that he himself could not eat it?
01:51:37.620 No, because omnipotence can only perform that which all power can perform.
01:51:42.100 So something logically impossible is not something that God could perform.
01:51:46.200 He also couldn't make an object so heavy he couldn't lift it.
01:51:49.060 Because that's like, it's a nonsensical construction.
01:51:51.160 It's a Simpsons reference where Homer is talking to God, or maybe he asked a priest or something like that, and he says, could you microwave a burrito
01:51:55.600 so hot?
01:51:58.100 And it's a joke on, could God create a stone so heavy that he could not lift it?
01:52:02.400 And what I find fascinating about that is, presuming that God exists within his own construct,
01:52:10.260 I think it's a silly question, and a lot of people can't, they don't quite understand
01:52:15.920 why it's not a very good question.
01:52:17.440 If I were to program a video game, there are two functions of my control in this video game.
01:52:21.320 I can create an avatar within that game that would be unable to move the stone that I
01:52:24.740 created.
01:52:25.400 I, as the programmer, could then click on the stone and fling it into outer space.
01:52:30.440 Yeah, it's a nonsense construction. It is possible in math, or in any symbolic system, language being symbolic,
01:52:36.360 to make constructions that violate the laws of logic. But again, abstract objects like the laws of logic,
01:52:43.040 those are not things external to God that God appeals to, like I would appeal to a ruler
01:52:47.360 to say how long something is.
01:52:48.520 Those are things that are sustained by the being of God.
01:52:51.640 God's nature is the source of abstract objects.
01:52:55.960 So we're not saying God is appealing to something outside of himself.
01:53:01.240 We're saying that the only rational grounding for abstract objects is God's unchanging
01:53:07.080 nature, the unchanging mind of an eternally existent, self-existent being, these kinds
01:53:12.680 of things.
01:53:12.960 So, I mean, theologians and philosophers have been talking about these things for a long,
01:53:17.480 long, long time, and AI and simulation have now given us what I think are really fascinating
01:53:23.920 supposals or analogies to reason by and test ideas and test our understanding of ideas, but
01:53:30.920 I don't think it's fundamentally changed.
01:53:33.720 I would be interested in, I think, something we agree on in AI safety.
01:53:42.660 We've talked about consciousness trying to create a super intelligence.
01:53:45.540 One of the things I'm very concerned about is the way in which not just AI, but technological
01:53:53.140 advancement will anesthetize human beings and increasingly separate us from our purpose
01:54:01.020 in a way that ultimately destroys the people engaging with them.
01:54:06.140 And so I would think of something like the way that artificial intelligence, robotics,
01:54:12.680 virtual reality or simulation might interact with something like sex, and actually end
01:54:18.100 up being a nuclear bomb sociologically on people.
01:54:21.900 That sounds like what you were saying about psychedelics, too, how they lead people into
01:54:24.720 the spiritual realm and then they get lost and want to do suicide rituals and stuff.
01:54:28.660 Yeah, start sacrificing babies to a serpent deity that they encounter on the plane of existence
01:54:34.580 that ayahuasca reveals to them.
01:54:36.940 I have found that putting your consciousness in a machine kind of desensitizes you to the
01:54:41.900 physical base reality.
01:54:44.900 I think Facebook's already done this.
01:54:47.620 They've compiled someone's user history and posts and then plugged it into an AI chatbot
01:54:53.720 to simulate that person.
01:54:56.660 And this was years ago.
01:54:57.720 They talked about how a dead loved one you could talk with and it would have memories of
01:55:02.620 all these things.
01:55:04.520 The crazy thing is it would know things about him that he never posted.
01:55:09.840 So there's a man, let's say his name is John.
01:55:11.320 He's 65.
01:55:11.840 He dies.
01:55:13.400 John only ever posted about working at his machine shop.
01:55:16.940 How is it then that when you talk to John, he knows who his kids are?
01:55:19.780 He knows what his kids are up to.
01:55:21.140 He knows where they went to school because Facebook has connected all of them to his profile.
01:55:24.620 So the data is not just what he posted.
01:55:26.340 It's all of the accounts around him.
01:55:27.740 And the network has been able to create a facsimile of him and his experience.
01:55:31.820 You could literally say, John, what's your son's name?
01:55:34.440 I say, my son's name's Bill.
01:55:35.380 Where does he go to school?
01:55:36.380 This is what he did when he was a kid.
01:55:38.020 It's compiling all the data.
01:55:39.400 And it's not even just from the social media platform.
01:55:41.280 It's things elsewhere on the internet as well, which it has access to because of the Facebook plugin
01:55:44.040 that exists on all these other websites.
01:55:45.740 It compiles all that information.
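As a purely hypothetical sketch (this is not Facebook's actual pipeline; every name and field here is invented), the kind of aggregation being described, building a profile from a person's own posts plus posts by connected accounts, could look like:

```python
# Invented example data: John himself only posts about his machine shop,
# but connected accounts leak the rest of his life.
posts = [
    {"author": "john", "text": "Long day at the machine shop."},
    {"author": "bill", "text": "Proud of my dad John, visited his machine shop."},
    {"author": "mary", "text": "Our son Bill just started at State College."},
]

def compile_profile(person, connections, posts):
    """Collect every post by the person or by their connected accounts."""
    sources = {person} | set(connections)
    return [p["text"] for p in posts if p["author"] in sources]

# John never posted about Bill's school, yet the compiled profile has it.
profile = compile_profile("john", ["bill", "mary"], posts)
print(len(profile))  # 3
```

The point of the sketch is only that the facsimile's knowledge is a join over the network, not a property of any single account.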
01:55:47.680 Now, the question is, that chatbot of John who died, is it conscious?
01:55:53.440 No, it is actually more terrifying than that.
01:55:56.020 I can simply say it's a computer, which is inputting math and outputting math.
01:56:00.460 But I'd like to scare you a little bit more.
01:56:01.760 And I want you to imagine it's a gigantic black squid demon.
01:56:04.540 And it's got a long black tentacle with John's face glued to its tentacle,
01:56:09.380 wiggling it in front of you saying, I'm John.
01:56:11.720 Bill is my son.
01:56:12.940 And you're looking at the face going, wow, I'm talking to John.
01:56:15.400 And behind it is this gigantic grotesque demon.
01:56:17.960 You love my book cover.
01:56:19.660 You love my book cover.
01:56:20.880 Is that representative of the AI?
01:56:22.360 That's basically what you're describing.
01:56:23.700 What's it called?
01:56:24.320 It's unknown.
01:56:24.760 Go ahead.
01:56:26.740 AI: Unexplainable, Unpredictable, Uncontrollable.
01:56:31.040 So I agree with you.
01:56:32.780 There are so many problems with this technology and social and democracy.
01:56:38.180 Sex is definitely it.
01:56:39.920 But all those problems.
01:56:41.140 This is exactly.
01:56:42.100 Oh, show me.
01:56:43.060 Let me see.
01:56:43.660 All these problems require time.
01:56:46.220 Yes.
01:56:46.780 Time for the problems to be caused.
01:56:50.080 That's exactly it.
01:56:51.280 The face masks on and everything.
01:56:52.780 Wow.
01:56:53.700 Mike Benz talks about the blob quite a bit.
01:56:55.860 Look, the person over here talking to the smiley face, but it's the tentacle monster.
01:56:59.120 So that's AGI, essentially, is what you're referencing?
01:57:01.520 That's super intelligence to me.
01:57:02.740 But AGI and super intelligence are maybe just minutes away in terms of capability.
01:57:06.840 Once you have something so powerful with access to internet, perfect memory, self-improvement capabilities.
01:57:12.300 And if it's not, and if it instead takes two, three years, it doesn't matter.
01:57:15.960 It's the same problem.
01:57:17.060 We have no solutions.
01:57:18.400 And this is what I notice.
01:57:19.440 I give a talk.
01:57:20.480 I tell people, okay, prediction markets are saying we're two, three years away.
01:57:24.620 CEOs of OpenAI and Anthropic are saying we're two, three years away from human level.
01:57:30.520 It changes everything.
01:57:32.540 Existential risk is a possibility.
01:57:34.940 And then I ask, do you have any questions?
01:57:37.580 And people go, oh, what about my job?
01:57:39.540 Will I have my job?
01:57:40.860 Will I need to lubricate my sex robots?
01:57:43.160 All sorts of nonsense, which has nothing to do with the big problem I'm trying to share.
01:57:47.400 Like, we are on a verge of creating technology, which is a complete game changer.
01:57:52.400 We talk about simple things because we understand them much better.
01:57:56.400 Here, we cannot predict, explain, or control this technology.
01:58:00.280 So we kind of ignore it.
01:58:01.680 But the analogy I give is dying.
01:58:05.680 We're all dying.
01:58:06.640 Every second we are dying, we're getting closer, our kids, our friends, our family, everyone's dying.
01:58:10.840 We're doing nothing about it.
01:58:12.320 We ignore that fact.
01:58:13.460 We have this bias built in where to live a normal, happy life and not commit suicide, you have to ignore this.
01:58:19.520 I think we're doing it at the level of humanity right now, where we're ignoring this monster approaching us.
01:58:26.900 What's the biggest threat to the species from it?
01:58:30.260 Well, how far do you want to go?
01:58:33.820 So most people stop at existential risk.
01:58:36.380 There is also suffering risks.
01:58:38.400 That it would just...
01:58:40.100 Torture you indefinitely, create an infinite number of replicas of you, clones of you, living forever.
01:58:47.000 One day you get a knock on your door, Ian, and you open it, it's you standing there.
01:58:51.160 And then you're like, what?
01:58:51.920 What? And then it just grabs you by the throat and cracks your neck and takes over your life.
01:58:54.780 I am concerned with them tapping the vacuum of space-time for electricity and having the machine always be on.
01:59:01.420 Because for a while, I'm like, we can always pull the plug on the thing.
01:59:04.080 But if it has access to infinite electricity...
01:59:06.460 It doesn't need zero-point energy to exist and operate indefinitely.
01:59:10.040 There's other means of doing it.
01:59:11.000 It's just electricity, solar, wind, whatever.
01:59:14.000 Nuclear?
01:59:14.460 It's very hard to turn off superintelligence in charge of everything.
01:59:17.700 It's probably predicting your steps and anticipating them.
01:59:21.360 Here's my horrifying vision of an AI reality future.
01:59:24.580 There are many.
01:59:25.480 But one of them is, nobody has jobs anymore.
01:59:28.500 Nobody needs jobs anymore.
01:59:29.640 But what really happens is, everyone's got gig economy apps.
01:59:33.080 You wake up one day and you're like, man, I want to get breakfast.
01:59:36.120 I need cash.
01:59:36.800 So you pull up your app and you have an app called Worker.
01:59:40.240 And you open it up and it says, hi, Ian.
01:59:42.800 I've got a great job for you today.
01:59:44.700 You need to acquire this device.
01:59:46.720 And then it shows you a picture of this weird computer-looking gear device.
01:59:50.640 And it says, meet this man on 3rd and Lexington.
01:59:53.440 He will give you the device.
01:59:54.800 Then you will bring it to, you know, Houston and Broadway and deliver it to this man.
02:00:00.320 It shows you the picture.
02:00:00.800 So you go, okay, I guess.
02:00:02.100 And you walk over and a guy says, hey, you're Ian.
02:00:04.400 Here you go.
02:00:04.820 And he hands you the box and you go, thanks.
02:00:06.020 Then you walk to Houston and you see the guy from the picture like, oh, Jim, here you go.
02:00:10.120 Then it goes, bing, 50 bucks into your account.
02:00:11.840 And you're like, that was easy.
02:00:12.960 I didn't have to do anything.
02:00:14.160 The AI needs to find the most efficient path from A to B.
02:00:17.840 So it's not going to require someone to wake up at 9 and go do these things.
02:00:21.660 The example of this is how we went from making cheeseburgers to McDonald's.
02:00:26.560 The idea was at first you had a cook.
02:00:29.400 He'd throw the burger on the grill, you know, fry it a little bit, put the cheese on it,
02:00:33.500 put the onions on the grill, grill the onions, put the onions on the burger.
02:00:36.340 Put the burger on the bun or toast the bun.
02:00:38.260 And it took a long time.
02:00:39.700 Then the McDonald's brothers were like, no, no, no, no, no.
02:00:41.280 Have one person who doesn't need to be trained do one stupid thing.
02:00:44.760 One guy puts the buns in the toaster.
02:00:46.400 One guy puts the burger on the grill.
02:00:47.820 One guy puts the onions.
02:00:48.940 You don't got to train them to do anything.
02:00:50.700 The AI takeover will say, why bother with having a human being learn how to build a computer?
02:00:56.960 Get 10,000 humans to do one stupid task.
02:00:59.840 Again, they don't need us.
02:01:01.640 You have nothing to contribute to superintelligence.
02:01:04.460 I'm not saying that this is the guaranteed outcome where the superintelligence ultimately
02:01:08.780 comes to this place.
02:01:09.400 I'm saying on the path to where the AI is building its replacement for humans.
02:01:15.220 Right.
02:01:15.360 I'm not sure we have enough time.
02:01:17.860 If we are saying this is two, three years away to this level of capability, it's not
02:01:22.120 going to allow for this type of change in the world.
02:01:25.600 We used to think that we had long-term concerns and short-term concerns.
02:01:30.720 Long-term was superintelligence one day, 20, 30 years from now, nobody cares.
02:01:35.380 Short-term, we care about jobs.
02:01:37.140 We care about those things.
02:01:38.040 It turned out that the short-term things we know how to do now.
02:01:41.820 We can automate artists.
02:01:43.100 We can automate all these things.
02:01:45.080 And long-term is actually here.
02:01:47.680 Things like global unemployment may take decades if nothing else happens because we have bullshit
02:01:54.020 jobs.
02:01:54.600 There are people doing things nobody needs to do anyways.
02:01:57.440 So some people will still have jobs.
02:01:59.660 But if existential problems are what we think they might be, that precedes that and takes care
02:02:05.380 of all the other existential risks.
02:02:06.980 You can talk about climate change, asteroids, volcanoes.
02:02:10.960 If it takes 100 years for this planet to boil, but it takes three years to get to dangerous
02:02:16.340 malevolent superintelligence, you don't have to worry about that because either you're not
02:02:20.580 here to worry about it or it will help you solve that problem if we figure out how to control
02:02:24.640 it.
02:02:24.840 Do you think that it's going to be malevolent by nature or that it's neutral by nature?
02:02:30.280 By default, most states of the universe are very unfriendly to biological life.
02:02:34.440 It's unlikely that if we do nothing, it will just be super aligned with our preferences.
02:02:40.440 So what?
02:02:44.540 So yeah, this is where I have no solutions.
02:02:47.760 I tell you about the problem and millions of your followers are now working on AI safety
02:02:52.040 because it is the most important problem in the world.
02:02:54.100 I just mean 100 years from now, blink, the AI supermachine is colonizing other
02:03:02.760 planets, humans no longer exist, and now planets are being terraformed and converted into the
02:03:08.020 AI.
02:03:08.460 This is the unpredictable chapter in my book.
02:03:10.800 I cannot tell you what a more intelligent being would do.
02:03:13.460 People ask me, how would it kill everyone?
02:03:15.100 It has no hands.
02:03:15.880 I can tell you how I would do it.
02:03:17.920 I cannot tell you how something more intelligent would accomplish it.
02:03:20.000 What if it maps out the logic of the universe and then hacks the system and breaks the simulation?
02:03:24.360 Oh, you read my paper.
02:03:25.500 Wonderful.
02:03:26.260 So then we started working on AI safety a decade ago.
02:03:28.800 First idea was let's keep them contained.
02:03:31.220 We'll have virtual boxes.
02:03:32.300 It's like we study computer viruses.
02:03:33.780 Yeah, sandbox.
02:03:34.200 That makes perfect sense.
02:03:35.180 We publish papers on it.
02:03:36.660 We conclude it after a while, if it's intelligent enough, it will break out, but still it's kind
02:03:40.860 of useful to have those tools.
02:03:42.180 And if you invert this equation, now we are in a simulation.
02:03:47.400 We are AI in a box.
02:03:48.640 Can we use AI to help us escape, hack out of a simulation?
02:03:51.940 So if we control it and it has access to do novel scientific research, maybe that's one
02:03:57.440 way to accomplish that.
02:03:58.900 Wouldn't, if we break out of the simulation, wouldn't the creator of the simulation simply
02:04:02.740 just say delete?
02:04:03.740 So it really depends on the type of simulation.
02:04:06.820 If it's entertainment and there is low security, they may not even notice you hacking it.
02:04:10.920 Nobody's watching that crap.
02:04:12.200 It's like a screensaver somewhere.
02:04:14.140 If it's something to do with AI safety.
02:04:16.620 Or a screensaver.
02:04:17.940 You don't know.
02:04:18.780 You could be.
02:04:19.440 It doesn't take a lot of compute, according to them, to run all this.
02:04:22.000 It's one computer in one like poli-sci, you know, lab or whatever.
02:04:26.380 Someone just left on overnight and it's running our reality.
02:04:30.240 Right.
02:04:30.560 Any assumptions?
02:04:31.220 People say, well, nobody would run it.
02:04:32.780 It's so much compute.
02:04:33.580 You don't know what computational resources are available outside of simulation.
02:04:37.340 This could be completely trivial.
02:04:38.660 We could be Super Nintendo.
02:04:40.800 This superintelligence could hack it, in a sense, like: if you
02:04:45.340 take a breath every three seconds, then four seconds
02:04:49.860 later, with this much inhale, this much suction, it will give us the code to basically hack
02:04:56.180 out.
02:04:56.560 And we're like, whoa.
02:04:57.380 And you perceive this reality from outside of it and see like.
02:05:02.440 So that's what people used to think about magic spells.
02:05:05.080 If you say certain words, you manipulate certain objects, maybe you'll get that capability.
02:05:10.060 I don't know what it's actually going to discover.
02:05:12.880 I'm more thinking special features of quantum physics, being able to transcend locality, transcend
02:05:19.800 time, that type of hacking, giving you additional resources as a result.
02:05:23.980 Again, despite what you might think, I'm not super intelligent.
02:05:27.140 I don't know what the answers are.
02:05:29.080 I tell you about problems I have discovered.
02:05:31.100 So giving you different, giving you resources as a result, like making you rich and famous
02:05:35.440 if you're able to do it.
02:05:36.520 So it like incentivizes people.
02:05:38.080 So yeah.
02:05:38.640 Example I give, like, how would you prove that you hacked the simulation?
02:05:41.680 You keep winning jackpots in different lotteries every week.
02:05:44.720 So if it was one, people would be like, oh, you probably hacked the computer.
02:05:47.720 But if you do it around the world every week after so many wins, they have to go like,
02:05:52.580 okay, he has some private keys to the universe and is generating this whole thing from scratch.
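The repeated-jackpot argument is just a claim about independent probabilities: one win is suspicious, but the odds of many independent wins shrink multiplicatively. A back-of-the-envelope sketch (the single-jackpot odds here are made up for illustration, not real lottery figures):

```python
# Probability of winning k independent jackpots purely by luck is p**k.
p = 1e-8      # assumed odds of one jackpot (illustrative, not a real lottery)
k = 5         # consecutive weekly wins

prob_by_luck = p ** k
print(f"{prob_by_luck:.1e}")  # prints "1.0e-40"
```

After a handful of wins, the "pure luck" explanation becomes astronomically less likely than "he has some private keys to the universe," which is the point of the example.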
02:05:57.580 Yeah, synergy, or whatever this is. The internet is sort of a playground for people
02:06:04.700 coming together and creating. I don't know if synergy is the right word, but there is a form
02:06:08.180 of synergy, and, like, S-I-N, sin.
02:06:11.300 It's interesting that the word sin is part of synergy. Like, I've noticed if I come
02:06:17.080 online and I make a video and 10,000 people see it and I say, this is going to happen and
02:06:21.820 they believe me, it becomes way more likely that it happens.
02:06:24.740 And that's, like, replicable over time.
02:06:27.000 I've been doing it for about 20 years and it seems to be that that is a version of hacking
02:06:31.220 the simulation, getting people to believe something.
02:06:34.120 So that seems to be working within the laws of simulation.
02:06:37.080 If you convince enough people to do something for you, it will happen.
02:06:40.220 This is not violating rules we're supposed to be playing by.
02:06:45.240 Definition of hacking is using technology in a way it's not intended to be used for some
02:06:51.200 sort of benefit or entertainment value.
02:06:53.880 So then I'm thinking hacking is exactly that, but with laws of physics.
02:06:59.400 I keep imagining using internet video to do, like, a mass meditation as a form of hacking
02:07:04.680 the system.
02:07:05.500 I don't know if technically that's within the bounds of the technology.
02:07:08.400 I used to try to do it, but once 80,000 people would come in, it would crash the system or
02:07:12.520 whatever.
02:07:12.860 It had limitations.
02:07:13.620 So I couldn't get everyone.
02:07:15.900 The tech wasn't good enough yet.
02:07:17.600 So we've got a couple minutes left.
02:07:19.580 I'm wondering, is it possible that what powerful global elites are trying to do is hack the system
02:07:24.720 so they can gain access to some kind of code and control reality?
02:07:28.460 There was a bunch of newspaper articles, magazines, websites a couple years ago claiming that a
02:07:34.060 few billionaires hired a research team to hack us out of simulation.
02:07:38.000 Nothing ever came out of it.
02:07:40.100 No one ever published a paper.
02:07:41.840 No one said anything.
02:07:42.680 I was told by someone who knows that, yes, it was the real thing, but that's all I know
02:07:48.600 about it.
02:07:49.240 Interesting.
02:07:49.720 Like transcendental meditation?
02:07:51.280 Could this be the Large Hadron Collider?
02:07:53.640 No.
02:07:54.200 You don't think so?
02:07:54.860 No.
02:07:55.780 Why not?
02:07:56.640 It has nothing to do with it.
02:07:57.840 Like, people will point at random physics as somehow related to...
02:08:02.220 But wouldn't learning about the fundamental nature of reality be a step in the direction
02:08:07.700 of understanding the code to try and break the code?
02:08:09.840 In a sense that any type of scientific knowledge can be used to be a better hacker, but it's
02:08:14.660 not a specific dedicated effort to hack.
02:08:17.360 No, no, no, no, no.
02:08:18.140 Of course, of course.
02:08:18.700 I'm not saying they built the Hadron Collider to try and break the code.
02:08:21.380 I'm saying they would want to collect that data to utilize...
02:08:24.660 So if I'm right, and quantum physics research is the best path to find those bugs, then yes,
02:08:30.000 that's a great tool for exploring that.
02:08:32.080 Absolutely.
02:08:32.920 And then we break out of the sandbox and find ourselves in a multiverse because we found
02:08:37.120 their internet, basically.
02:08:38.380 So it's like, what's that, Wreck-It Ralph, right?
02:08:43.600 That movie where all the video game characters, after hours, go through the power cable and
02:08:48.560 then meet and hang out, but then one character transfers to the other game.
02:08:52.680 So we're in a sandbox simulation.
02:08:55.400 The powerful elites, whoever, figure out a way to hack the code and break out of the sandbox
02:08:59.940 and then find themselves in, not base reality, but effectively outside the program within
02:09:06.140 the operating system of the greater network of computers.
02:09:08.580 Which is base reality.
02:09:09.860 It's just a different way of looking at base reality.
02:09:11.760 Not base reality.
02:09:12.300 They're in computers.
02:09:13.640 We're still just in the computers, but now we have access beyond this particular universe.
02:09:17.760 And then they discover all the other simulations and the ability to pass into them.
02:09:23.000 And now they're in the multiverse.
02:09:23.940 I just don't think that there's anything greater than the universe.
02:09:26.520 I think that's it.
02:09:27.080 I was like, there are super galaxies, which are a bunch of galaxies grouped together, and
02:09:31.000 you could call the universe a kind of super galaxy.
02:09:32.500 But that could be within a bunch of other universes, which are within other universes, the fractal nature of
02:09:38.880 it, up and down.
02:09:40.040 So maybe there is one universe, but maybe not.
02:09:43.940 This is kind of another issue I have with deism, is that they kind of end it at one point,
02:09:48.480 like there it is.
02:09:49.620 I don't think that it ever coalesces into one.
02:09:53.460 Like it's always within some other greater system.
02:09:59.080 And so, and that's also why I think when you hack and can get out of the reality, you're
02:10:04.160 actually still in reality.
02:10:05.160 You're just seeing it from a different perspective.
02:10:07.680 Well, with the last couple of minutes, we'll go through our final thoughts and shout outs
02:10:10.860 on it.
02:10:11.080 Brian, if you want to give your final thoughts on all this and where people can find you.
02:10:15.260 Yeah.
02:10:15.440 So, I mean, obviously, I don't believe that we live in a simulation.
02:10:18.820 I think that it's a non-falsifiable, non-empirical idea that is self-defeating.
02:10:24.200 I think that's true.
02:10:26.580 I'm a Christian.
02:10:27.900 And so, however, I do see huge dangers with AI.
02:10:34.660 And the way to escape them, the way to hack the reality, I think, is actually pretty simple.
02:10:41.860 You need to say Christ is Lord and touch grass, my guy.
02:10:45.440 Like, you need to understand what you're for, what you are.
02:10:49.680 You are meaningful.
02:10:50.940 You're an image bearer of God.
02:10:52.560 You're not code in motion.
02:10:55.000 You're also not just a meme, a biological meme, trying desperately to replicate itself,
02:11:01.000 like Dawkins would say.
02:11:02.580 You're an image bearer of God.
02:11:03.920 Yes, you sin, but God loves you and will forgive you and will let you escape this reality
02:11:11.520 in glory and resurrection, into transcendent glory.
02:11:16.260 And so, in the meantime, I would say that people need to not get sucked into digital deceptions
02:11:21.840 and not get sucked into their entire lives being lived in a kind of simulacrum,
02:11:27.960 where it's just digital sex, digital dopamine, digital war, digital meaning, digital relationships,
02:11:33.180 digital everything.
02:11:33.820 Like, you're an embodied soul.
02:11:35.420 You are made of things and things that aren't things.
02:11:39.000 You're an embodied soul.
02:11:39.980 And so, you can't just try to find your meaning and purpose in all these things.
02:11:45.280 And if you do, you will certainly be deceived.
02:11:46.940 You'll believe all sorts of false and fanciful things, and you'll end up miserable.
02:11:51.880 So, I would encourage anybody listening to unplug and read your Bible.
02:11:59.040 That was a super fun conversation.
02:12:01.760 I love talking philosophy and simulation, but I think the really important problem is artificial intelligence.
02:12:07.300 It will impact every single person on this planet.
02:12:10.720 Whether you know about it or not, it's going to impact your life.
02:12:14.180 Initially, it may be just your employment, maybe your social interactions,
02:12:18.080 but eventually, it will fundamentally change the future of humanity.
02:12:22.380 And we are not doing enough to research this problem, to figure out solutions to control it.
02:12:29.340 All the leaders of large companies working on it are on record saying,
02:12:34.200 this technology is extremely dangerous.
02:12:36.300 It's likely to kill everyone, but they are now racing to the bottom.
02:12:40.380 They are competing who's going to get there first and claiming that,
02:12:43.680 well, you know, I can't stop or this other guy will build a demon
02:12:48.940 or whatever term you would like to use, which will end humanity.
02:12:52.440 And it's important that it's my demon.
02:12:54.040 I feel some sort of pride.
02:12:55.800 You won't.
02:12:56.680 You're not going to benefit from it.
02:12:58.300 There isn't even going to be a history of the world for you to be the bad guy in.
02:13:02.340 It's going to be a complete lights out if we don't stop now.
02:13:06.260 We can still stop.
02:13:07.760 We can create very powerful tools, super intelligent tools for solving real world problems like protein folding.
02:13:14.880 We solved it without creating super intelligence.
02:13:17.520 It's helping people already.
02:13:19.040 We can do it in every other domain.
02:13:20.720 We can get like 99% of benefits out of just creating useful tools.
02:13:26.480 There is no need to go all the way and create digital God equivalents.
02:13:32.340 A lot of people in charge of those labs are very young people, rich people.
02:13:36.900 They have a lot of potential in their future.
02:13:39.980 I don't think it's a personally good idea for you, for your personal interest, to create this entity.
02:13:49.680 I don't see anyone stopping.
02:13:51.600 I agree with you.
02:13:52.460 The thing about nuclear weapons is that we all race to build the best and most powerful.
02:13:56.760 But we tried to refrain from using them, and there were a few close calls.
02:14:02.640 With AI, there's no perceivable nuclear explosion.
02:14:05.820 So the idea among the people creating it is, I can make this weapon.
02:14:09.920 I just don't have to deploy it.
02:14:11.080 The only problem is it deploys itself.
02:14:12.860 You make it, it's over.
02:14:14.420 I don't know.
02:14:14.780 What do you think, Ian?
02:14:16.640 Get us out of here.
02:14:17.500 When it comes to demons and angels and all that, I've experienced relationships with that kind of energy.
02:14:23.100 And it was terrifying at first, but I realized that ahead of time.
02:14:27.000 I was like, I'm not going to be afraid.
02:14:28.140 I'm going to choose patience, kindness.
02:14:29.920 I'm going to listen to this thing and allow it to express itself.
02:14:32.920 And it did, and then it calmed down, and then it took on a conversational tone and thanked me for listening and then dissipated.
02:14:38.920 So I fear nothing.
02:14:41.420 Not in that way anymore.
02:14:43.340 And the DMT, I know that we're on the level with this conversation, but man, talk about cracking the fucking code, dude.
02:14:50.300 Whatever is happening is much greater than what your dumb brain perceives.
02:14:54.560 And I do believe that the mind exists outside the body as well as within the body.
02:14:58.900 Take care of yourself and be good to yourself.
02:15:02.600 We'll have to get someone who's familiar with the, what are they calling it, the DMT studies where they put them on the IV drip or whatever.
02:15:08.720 Oh, yeah, those extended state DMT trips.
02:15:09.740 Extended state, there you go.
02:15:10.960 Did you guys mention your social media at all?
02:15:13.120 You can follow me on X or Twitter at B-R-I-A-N underscore S-A-U-V-E.
02:15:19.540 And that's where I'm most active.
02:15:20.680 But everywhere else as well, I publish music.
02:15:22.980 Everywhere you find it, Spotify, under the same name, Brian Sauve.
02:15:26.160 And New Christendom Press is my publishing house.
02:15:28.780 If you'd like to keep up.
02:15:29.700 If you like this conversation, you'd probably really enjoy Haunted Cosmos.
02:15:33.220 It's a highly sound-designed, story-driven look at the world through the lens of Christianity, covering all kinds of things from Bigfoot to DMT to AI and Loeb.
02:15:44.780 So, I mean, enjoy.
02:15:46.060 Right on.
02:15:46.740 Do you have social media, Roman?
02:15:47.680 Yeah, I'm Roman Yam on X.
02:15:49.820 You can follow me on X, Twitter.
02:15:51.800 You can follow me on Facebook.
02:15:52.900 Just don't follow me home.
02:15:53.980 It's very important.
02:15:55.240 You also have a book, AI: Unexplainable, Unpredictable, Uncontrollable.
02:15:59.780 You don't have to buy it.
02:16:01.020 You can get it from your library, steal it, download it illegally, just read it and figure out what to do with this information.
02:16:06.780 Right on.
02:16:07.600 And Ian, what's your social media?
02:16:09.360 It's at Ian Crossland.
02:16:10.560 It's my name.
02:16:11.160 So, you can follow me anywhere, man.
02:16:12.200 I did that cover of My Hero, which is Lighten Up on YouTube.
02:16:16.000 So, check it out.
02:16:16.560 It's on my YouTube channel.
02:16:17.280 It's pretty good.
02:16:18.380 Right on, everybody.
02:16:19.020 We'll be back tonight at 8 p.m. for TimCast IRL.
02:16:21.520 So, smash that like button.
02:16:22.960 Subscribe to Tenet Media.
02:16:24.460 We got these shows every Friday at 10 a.m.
02:16:26.580 They're always very, very fun and informative.
02:16:28.760 You can follow me on X and Instagram at TimCast.
02:16:31.180 Thanks for hanging out, and we'll see you all tonight.
02:16:47.280 What would you like.
02:16:50.940 Thank you, man.
02:16:53.260 I'll see you all tonight.
02:16:55.780 Bye-bye.