The Jordan B. Peterson Podcast


308. AI: The Beast or Jerusalem? | Jonathan Pageau & Jim Keller


Summary

Jonathan Pageau is a French-Canadian artist and icon carver. Jim Keller is a microprocessor engineer known in the relevant communities and beyond for his work at Apple and AMD, among other corporations. In this episode, we discuss the perils and promise of artificial intelligence, and how they intersect with religious and cultural ideas.

Dr. Jordan B. Peterson has created a new series that could be a lifeline for those battling depression and anxiety. We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling. With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way. In his new series, he provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward. If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better. Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson's new series on depression and anxiety. Let this be the first step towards the brighter future you deserve.

Today's episode is a three-way conversation between Jordan Peterson, Jonathan Pageau, and Jim Keller about what artificial intelligence might mean for us and for the future. Thanks for listening to the podcast.


Transcript

00:00:00.960 Hey everyone, real quick before you skip, I want to talk to you about something serious and important.
00:00:06.480 Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety.
00:00:12.740 We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling.
00:00:20.100 With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way in his new series.
00:00:27.420 He provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward.
00:00:35.360 If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better.
00:00:41.780 Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson on depression and anxiety.
00:00:47.460 Let this be the first step towards the brighter future you deserve.
00:00:57.420 Hello everyone watching on YouTube or listening on Associated Platforms.
00:01:13.380 I'm very excited today to be bringing you two of the people I admire most intellectually, I would say, and morally for that matter.
00:01:22.840 Jonathan Pageau and Jim Keller, very different thinkers.
00:01:27.880 Jonathan Pageau is a French-Canadian liturgical artist and icon carver known for his work,
00:01:33.800 featured in museums across the world. He carves Eastern Orthodox images, among other traditional ones, and teaches an online carving class.
00:01:41.720 He also runs a YouTube channel, The Symbolic World, dedicated to the exploration of symbolism across history and religion.
00:01:49.440 Jonathan is one of the deepest religious thinkers I've ever met.
00:01:52.240 Jim Keller is a microprocessor engineer known very well in the relevant communities and beyond them for his work at Apple and AMD, among other corporations.
00:02:03.840 He served in the role of architect for numerous game-changing processors, has co-authored multiple instruction sets for highly complicated designs,
00:02:13.320 and is credited for being the key player behind AMD's renewed ability to compete with Intel in the high-end CPU market.
00:02:23.520 In 2016, Keller joined Tesla, becoming Vice President of Autopilot Hardware Engineering.
00:02:30.920 In 2018, he became a Senior Vice President for Intel.
00:02:36.520 In 2020, he resigned due to disagreements over outsourcing production, but quickly found a new position at Tenstorrent as Chief Technical Officer.
00:02:46.520 We're going to sit today and discuss the perils and promise of artificial intelligence, and it's a conversation I'm very much looking forward to.
00:02:54.520 So, welcome to all of you watching and listening.
00:02:58.120 I thought it would be interesting to have a three-way conversation.
00:03:02.120 Jonathan and I have been talking a lot lately, especially with John Vervaeke and some other people as well, about the fact that we seem...
00:03:10.520 It seems necessary for us to view...for human beings to view the world through a story.
00:03:15.120 In fact, that our...when we describe the structure that governs our action and our perception, that is a story.
00:03:26.120 And so, we've been trying to puzzle out, I would say to some degree on the religious front, what might be the deepest stories.
00:03:34.120 And I'm very curious about the fact that we perceive the world through a story, human beings do, and that seems to be a fundamental part of our cognitive architecture.
00:03:44.720 And of cognitive architecture in general, according to some of the world's top neuroscientists.
00:03:49.720 And I'm curious, and I know Jim is interested in cognitive processing and in building systems that, in some sense, seem to run in a manner analogous to the manner in which our brains run.
00:04:02.320 And so, I'm curious about the overlap between the notion that we have to view the world through a story and what's happening on the AI front.
00:04:08.920 There's all sorts of other places that we can take the conversation.
00:04:11.920 So, maybe I'll start with you, Jim.
00:04:13.920 Do you want to tell people what you've been working on and maybe give a bit of a background to everyone about how you conceptualize artificial intelligence?
00:04:24.920 Yeah, sure.
00:04:25.920 So, first, I'll say technically I'm not an artificial intelligence researcher.
00:04:32.520 I'm a computer architect.
00:04:34.520 And I'd say my skill set goes from, you know, somewhere around the atom up to the program.
00:04:41.520 So, we make transistors out of atoms, we make logical gates out of transistors, we make computers out of logical gates.
00:04:49.120 So, we run programs on those, and recently we've been able to run programs fast enough to do something called an artificial intelligence model or a neural network, depending on how you say it.
00:05:04.120 And then we're building chips now that run artificial intelligence models fast.
00:05:11.800 And we have a novel way to do it, the company I work at.
00:05:15.800 But lots of people are working on it, and I think we were sort of taken by surprise what's happened in the last five years, how quickly models started to do interesting and intelligent-seeming things.
00:05:32.800 There's been an estimate that human brains do about 10 to the 18th operations a second, which sounds like a lot.
00:05:41.800 It's a billion-billion operations a second.
00:05:44.800 And a little computer, you know, the processor in your phone probably does 10 billion operations a second, you know, and then if you use the GPU, maybe 100 billion, something like that.
00:05:57.800 And big modern AI computers like OpenAI uses or Google or somebody, they're doing like 10 to the 16th, maybe slightly more operations a second.
00:06:09.800 So, they're within a factor of 100 of a human brain's raw computational ability.
00:06:16.800 And by the way, that could be completely wrong, our understanding of how the human brain does computation could be wrong, but lots of people have estimated based on number of neurons, number of connections, how fast neurons fire, how many operations a neuron firing seems to involve.
00:06:31.800 I mean, the estimates range by a couple orders of magnitude, but when our computers got fast enough, we started to build things called language models and image models that do fairly remarkable things.
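A back-of-the-envelope sketch of the arithmetic behind that "factor of 100" comparison, using only the rough figures quoted above:

```python
# Rough, order-of-magnitude comparison of the compute figures quoted above.
# All numbers are the estimates mentioned in the conversation, not measurements.

brain_ops_per_sec = 1e18           # estimated human brain: ~a billion-billion operations a second
phone_cpu_ops_per_sec = 1e10       # ~10 billion ops/s for a phone processor
phone_gpu_ops_per_sec = 1e11       # ~100 billion ops/s using the phone GPU
big_ai_cluster_ops_per_sec = 1e16  # large AI training systems, per the estimate above

print(brain_ops_per_sec / big_ai_cluster_ops_per_sec)  # 100.0 -> "within a factor of 100"
print(brain_ops_per_sec / phone_gpu_ops_per_sec)       # 10,000,000x gap for a phone GPU
```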
00:06:45.800 So, what have you seen in the last few years that's been indicative of this, of the change that you describe as revolutionary?
00:06:51.800 What are computers doing now that you found surprising because of this increase in speed?
00:06:58.800 Yeah, you can have a language model read a 200,000 word book and summarize it fairly accurately.
00:07:05.800 So, it can extract out the gist?
00:07:07.800 The gist of it.
00:07:08.800 Can it do that with fiction?
00:07:10.800 Yeah.
00:07:11.800 Yeah.
00:07:12.800 And I'm going to introduce you to a friend who took a language model and changed it and fine-tuned it with Shakespeare and used it to write screenplays that are pretty good.
00:07:24.800 And these kinds of things are really interesting.
00:07:27.800 And we were talking about this a little bit earlier.
00:07:30.800 So, when computers do computations, you know, a program will say add A equal B plus C.
00:07:38.800 The computer does those operations on representations of information, ones and zeros.
00:07:43.800 It doesn't understand them at all.
00:07:46.800 The computer has no understanding of it.
00:07:49.800 But what we call a language model translates information like words and images and ideas into a space where the program, the ideas, and the operation it does on them are all essentially the same thing.
00:08:04.800 Right?
00:08:05.800 So, a language model can produce words and then use those words as inputs.
00:08:10.800 And it seems to have an understanding of what those words are, which is very different from how a computer operates on data.
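A toy sketch of what it means for words and the operations on them to live in the same space: words become vectors, and comparing meanings becomes arithmetic on those vectors. The three-dimensional vectors here are invented purely for illustration; real language models learn embeddings with thousands of dimensions.

```python
import numpy as np

# Toy illustration: words represented as vectors in a shared space, so that
# "operating on meaning" becomes ordinary arithmetic on those vectors.
# These vectors are invented for illustration; real models learn them from data.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high: related concepts
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low: unrelated concepts
```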
00:08:19.800 I'm curious about the language models.
00:08:21.800 I mean, my sense of, at least in part, how we understand a story is that maybe we're watching a movie, let's say, and we get some sense of the character's goals.
00:08:35.800 And then we see the manner in which that character perceives the world.
00:08:45.800 And we, in some sense, adopt his goals, which is to identify with the character.
00:08:45.800 And then we play out a panoply of emotions and motivations on our body because we now inhabit that goal space.
00:08:52.800 And we understand the character as a consequence of mimicking the character with our own physiology.
00:08:59.800 And you have computers that can summarize the gist of a story, but they don't have that underlying physiology.
00:09:06.800 Well, first of all, it's a theory that your physiology has anything to do with it.
00:09:12.800 You could understand the character's goals and then get involved in the details of the story.
00:09:18.800 And then you're predicting the path of the story and also having expectations and hopes for the story.
00:09:25.800 Yeah.
00:09:26.800 And a good story kind of takes you on a ride because it teases you with doing some of the things you expect, but also doing things that are unexpected.
00:09:34.800 Yeah.
00:09:35.800 And possibly that creates emotional responses.
00:09:37.800 Yeah, it does.
00:09:38.800 It does.
00:09:39.800 So in an AI model, so you can easily have a set of goals.
00:09:44.800 So you have your personal goals.
00:09:45.800 And then when you watch the story, you have those goals.
00:09:47.800 Yeah.
00:09:48.800 You put those together.
00:09:49.800 Like, how many goals is that?
00:09:51.800 Like, the story's goals and your goals?
00:09:53.800 Hundreds?
00:09:54.800 Thousands?
00:09:55.800 Those are small numbers, right?
00:09:57.800 Then you have the story.
00:09:58.800 The AI model can predict the story, too, just as well as you can.
00:10:02.800 How do you...
00:10:03.800 And...
00:10:04.800 That's the thing that I find mysterious is that...
00:10:06.800 As the story progresses, it can look at the error between what it predicted and what actually happened.
00:10:13.800 Mm-hmm.
00:10:14.800 And then iterate on that.
00:10:15.800 Right.
00:10:16.800 So you would call that emotional excitement, disappointment...
00:10:19.800 Anxiety.
00:10:20.800 Anxiety, perhaps.
00:10:21.800 Yeah, definitely.
00:10:22.800 Well, a big part of what anxiety does seem to be is discrepancy.
00:10:25.800 Right, and those states...
00:10:26.800 Like, some of those states are manifesting in your body because you trigger hormone cascades and a bunch of stuff.
00:10:30.800 But you also can just scan your brain and see that stuff move around.
00:10:33.800 Right.
00:10:34.800 Right.
00:10:35.800 And, you know, the AI model can have an error function and look at the difference between what it expected and not.
00:10:42.800 And you could call that the emotional state.
00:10:44.800 Yeah.
00:10:45.800 Yeah, well...
00:10:46.800 If you want it.
00:10:47.800 I just talked with the direct...
00:10:48.800 And that's speculation, but...
00:10:49.800 No, no, I think that's accurate.
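A minimal sketch of the analogy being drawn here, with "surprise" treated as nothing more than the gap between what a model predicted and what actually happened; the numbers are invented purely for illustration.

```python
# Minimal sketch of the "error as emotion-analog" idea being discussed:
# a scalar surprise signal is just the gap between prediction and outcome.
predicted = [0.2, 0.5, 0.9, 0.4]   # what the model expected at each step (illustrative)
observed  = [0.2, 0.6, 0.1, 0.4]   # what actually happened (illustrative)

surprise = [abs(p - o) for p, o in zip(predicted, observed)]
print(surprise)  # roughly [0.0, 0.1, 0.8, 0.0] -- the spike marks the "unexpected" moment
```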
00:10:50.800 But, you know, we can make an AI model that could predict the result of a story probably better than the average person.
00:10:56.800 So, one of the things...
00:10:57.800 So, one of the things...
00:10:58.800 Some people are really good at...
00:10:59.800 You know, they're really well-educated about stories or they know the genre or something, but...
00:11:03.800 Yeah.
00:11:04.800 But, you know, these things...
00:11:06.800 And what they see today is the capacity of the models is...
00:11:09.800 If you say, start describing a lot, it'll make sense for a while, but it'll slowly stop making sense.
00:11:15.800 But that's possible.
00:11:16.800 That's simply the capacity of the model right now.
00:11:19.800 And the model is not well-grounded enough in a set of, let's say, goals and reality or something to make sense for a while.
00:11:26.800 So, what do you think would happen, Jonathan?
00:11:28.800 This is, I think, associated with the kind of things that we've talked through to some degree.
00:11:34.800 So, one of my hypotheses, let's say, about deep stories is that they're meta-gists in some sense.
00:11:46.800 So, you could imagine a hundred people telling you a tragic story and then you could reduce each of those tragic stories to the gist of the tragic story.
00:11:55.800 And then you could aggregate the gists and then you'd have something like a meta-tragedy.
00:11:59.800 And I would say, the deeper the gist, the more religious-like the story gets.
00:12:05.800 And that's part of...
00:12:06.800 It's that idea is part of the reason that I wanted to bring you guys together.
00:12:10.800 I mean, one of the things that what you just said makes me wonder is...
00:12:13.800 Imagine that you took Shakespeare and you took Dante and you took, like, the canonical Western writers and you trained an AI system to understand the structure of each of them.
00:12:27.800 And then, and now you have, you could pull out the summaries of those structures, the gists, and then couldn't you pull out another gist out of that?
00:12:37.800 So, it would be like the essential element of Dante and Shakespeare.
00:12:41.800 And I wonder when that would get biblical.
00:12:43.800 I want to hear what Jonathan said so far.
00:12:45.800 So, here's one funny thing to think about.
00:12:48.800 You used the word pull out.
00:12:50.800 So, when you train a model to know something, you can't just look in it and say, what does it know?
00:12:56.800 You have to query it.
00:12:58.800 Right.
00:12:59.800 You have to ask.
00:13:00.800 Right.
00:13:01.800 Right.
00:13:02.800 What's the next sentence in this paragraph?
00:13:04.800 What's the answer to this question?
00:13:06.800 There's a thing on the internet now called prompt engineering.
00:13:09.800 And it's the same way, I can't look in your brain to see what you think.
00:13:13.800 Yeah.
00:13:14.800 I have to ask you what you think.
00:13:15.800 Because if I killed you and scanned your brain and got the current state of all the synapses and stuff, A, you'd be dead, which would be sad.
00:13:23.800 And B, I wouldn't know anything about your thoughts.
00:13:26.800 Your thoughts are embedded in this model that your brain carries around.
00:13:31.800 And you can express it in a lot of ways.
00:13:34.800 And so, how do you train?
00:13:37.800 So, this is my big question.
00:13:39.800 I mean, because the way that I've been seeing it until now is that artificial intelligence is based on us.
00:13:46.800 It doesn't exist independently from humans and it doesn't have care.
00:13:51.800 The question would be, why does the computer care?
00:13:54.800 Yeah, that's not true.
00:13:57.800 Why does the computer care to get the gist of the story?
00:14:00.800 Well, yeah.
00:14:01.800 So, I think you're asking kind of the wrong question.
00:14:03.800 So, you can train an AI model on like the physics and reality and images in the world just with images.
00:14:10.800 And there are people who are figuring out how to train a model with just images.
00:14:16.800 But the model itself still conceptualizes things like tree and dog and action and run because those all exist in the world.
00:14:27.800 Right?
00:14:28.800 So, and you can actually train.
00:14:31.800 So, and when you train a model with all the language and words.
00:14:35.800 So, all information has structure.
00:14:38.800 And I know you're a structure guy from your video.
00:14:40.800 So, if you look around you at any image, every single point you see makes sense.
00:14:45.800 Yeah.
00:14:46.800 Right?
00:14:47.800 It's a teleological structure.
00:14:49.800 It's like a, it's a purpose-laden structure.
00:14:52.800 Right?
00:14:53.800 So, this is something.
00:14:54.800 So, it turns out all the words that have ever been spoken by human beings also have structure.
00:15:00.800 Right.
00:15:01.800 Right.
00:15:02.800 Right.
00:15:03.800 And so, physics has structure.
00:15:05.800 And then it turns out that some of the deep structure of images and actions and words and sentences are related.
00:15:13.800 Mm-hmm.
00:15:14.800 Like, there's actually a common core of, like, imagine there's like a knowledge space.
00:15:21.800 And, and sure, there's details of humanity where, you know, they prefer this, this accent versus that.
00:15:29.800 Those are kind of details.
00:15:30.800 But they're coherent in the language model.
00:15:32.800 But the language models themselves are coherent with our world ideas.
00:15:36.800 Mm-hmm.
00:15:37.800 And humans are trained in the world just the way the AI models are trained in the world.
00:15:41.800 Like a little baby, as it's learning, looking around, it's training on everything it sees when it's very young.
00:15:48.800 And then its training rate goes down and it starts interacting with what it's learning and interacting with the people around it.
00:15:54.800 But it's trying to survive.
00:15:55.800 It's trying to live.
00:15:57.800 It has, like, it has a, the infant or the child has.
00:16:01.800 Neurons aren't trying.
00:16:02.800 The weights in the neurons aren't trying to live.
00:16:05.800 What they're trying to do is reduce the error.
00:16:07.800 So, neural networks generally are predictive things.
00:16:11.800 Like, what's coming next?
00:16:13.800 What makes sense?
00:16:15.800 You know, how does this work?
00:16:17.800 And when you train, when you train an AI model, you're training it to reduce the error in the model.
00:16:25.800 And if your model is big.
00:16:27.800 Okay.
00:16:28.800 Let me ask you about that.
00:16:29.800 So, well, first of all.
00:16:31.800 So, babies are doing the same thing.
00:16:33.800 Like, they're looking at stuff going around.
00:16:35.800 And in the beginning, their neurons are just randomly firing.
00:16:38.800 But as it starts to get object permanence and look at stuff, it starts predicting what will make sense for that thing to do.
00:16:43.800 And when it doesn't make sense, it'll update its model.
00:16:47.800 Basically, it compares its prediction to the events.
00:16:51.800 It will adjust its prediction.
00:16:53.800 So, in a story prediction model, the AI would predict the story, then compare it to its prediction, and then fine tune itself slowly as it trains itself.
00:17:03.800 Okay.
00:17:04.800 Or in reverse, you could ask it to say, given the set of things, tell the rest of the story, and it could do that.
00:17:09.800 Mm-hmm.
00:17:10.800 Right.
00:17:11.800 And the state of it right now is there are people having conversations with AIs that are pretty good.
00:17:17.800 Mm-hmm.
00:17:18.800 So, I talked to Carl Friston about this prediction idea in some detail.
00:17:22.800 And so, Friston, for those of you who are watching and listening, is one of the world's top neuroscientists.
00:17:27.800 And he's developed an entropy enclosure model of conceptualization, which is analogous to one that I was working on, I suppose, across approximately the same timeframe.
00:17:38.800 So, the first issue, and this has been well established in the neuropsychological literature for quite a long time, is that anxiety is an indicator of discrepancy between prediction and actuality.
00:17:51.800 And then, positive emotion also looks like a discrepancy reduction indicator.
00:17:57.800 So, imagine that you're moving towards a goal, and then you evaluate what happens as you move towards the goal.
00:18:04.800 And if you're moving in the right direction, what happens is what, you might say, what you expect to happen.
00:18:10.800 And that produces positive emotion, and it's actually an indicator of reduction in entropy.
00:18:14.800 That's one way of looking at it.
00:18:17.800 Then, the point is...
00:18:19.800 So, yeah, you have a bunch of words in there that are psychological definitions of states.
00:18:24.800 But you could say there's a prediction and error is a prediction.
00:18:27.800 Yes.
00:18:28.800 And you're reducing error.
00:18:29.800 Yes.
00:18:30.800 But what I'm trying to make a case for is that your emotions directly map that: negative and positive emotion look like signifiers of discrepancy and discrepancy reduction, respectively.
00:18:43.800 But then there's a complexity that I think is germane to part of Jonathan's query, which is that...
00:18:50.800 So, the neuropsychologists and the cognitive scientists have talked a long time about expectation, prediction, and discrepancy reduction.
00:18:58.800 But one of the things they haven't talked about is it isn't exactly that you expect things.
00:19:04.800 It's that you desire them.
00:19:06.800 You want them to happen.
00:19:07.800 Like, because you could imagine that there's, in some sense, a literally infinite number of things you could expect.
00:19:14.800 And we don't strive only to match prediction.
00:19:18.800 We strive to bring about what it is that we want.
00:19:21.800 And so we have these preset systems that are teleological, that are motivational systems.
00:19:26.800 Well, I mean, it depends.
00:19:28.800 Like, if you're sitting idly on the beach, like, and a bird flies by, you expect it to fly along in a regular path.
00:19:35.800 Right.
00:19:36.800 But you also...
00:19:37.800 You don't really want that to happen.
00:19:38.800 Yeah, but you don't want it to turn into something that could peck out your eyes either.
00:19:41.800 Sure.
00:19:42.800 So that's a want.
00:19:43.800 Yeah.
00:19:44.800 But you're kind of following it with your expectation to look for a discrepancy.
00:19:48.800 Yes.
00:19:49.800 Now, you'll also have a, you know, depends on the person, somewhere between 10 and a million desires, right?
00:19:57.800 And then you also have fears and avoidance.
00:20:00.800 And those are context.
00:20:02.800 So if you're sitting on the beach with some anxiety that the birds are going to swerve at you and pick your eyes out.
00:20:08.800 Yeah.
00:20:09.800 So then you might be watching it much more attentively than somebody who doesn't have that worry, for example.
00:20:14.800 But both of you can predict where it's going to fly, and you will both notice a discrepancy, right?
00:20:19.800 The motivations, one way of conceptualizing fundamental motivation is they're like a priori prediction domains, right?
00:20:28.800 And so it helps us narrow our attentional focus because I know when you're sitting and you're not motivated in any sense, you can be doing just, in some sense, trivial expectation computations.
00:20:41.800 But often we're in a highly motivated state.
00:20:44.800 Sure.
00:20:45.800 And what we're expecting is bounded by what we desire and what we desire is oriented, as Jonathan pointed out, towards the fact that we want to exist.
00:20:52.800 One of the things I don't understand and wanted to talk about today is how the computer models, the AI models, can generate intelligible sense without mimicking that sense of motivation.
00:21:09.800 Because you've said, for example, they can just derive the patterns from observations of the objective world.
00:21:14.800 So again, I don't want to do all the talking, but so AI, generally speaking, like when I first learned about it, had two behaviors.
00:21:24.800 They call it inference and training.
00:21:26.800 So inference is you have a trained model.
00:21:28.800 So say you give it a picture and say, is there a cat in it?
00:21:31.800 And it tells you where the cat is.
00:21:33.800 That's inference.
00:21:34.800 The model has been trained to know where a cat is.
00:21:36.800 And training is the process of giving it an input and an expected output.
00:21:41.800 And when you first start training the model, it gives you garbage out, like an untrained brain would.
00:21:47.800 And then you take the difference between the garbage output and the expected output and call that the error.
00:21:53.800 And then the big revelation they invented was something called backpropagation with gradient descent.
00:21:58.800 But that means take the error and divide it up across the layers and correct those calculations so that when you put a new thing in, it gives you a better answer.
00:22:11.800 And then, to somewhat my astonishment, if you have a model of sufficient capacity and you train it with 100 million images, if you give it a novel image and say, tell me where the cat is, it can do it.
00:22:25.800 So training is the process of doing a pass with an expected output and propagating an error back through the network.
00:22:35.800 And inference is the behavior of putting something in and getting an output.
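A minimal sketch of that inference/training split, assuming nothing more than a toy single-layer linear model: the forward pass is the inference, and training is the loop that measures the error against the expected output and uses gradient descent to nudge the weights. Production networks are vastly bigger, but the loop has the same shape.

```python
import numpy as np

# Toy illustration of "inference" vs "training" as described above.
# Inference: run an input through the model to get an output.
# Training: compare the output to the expected output, and push the error
# back into the weights with gradient descent so the next answer is better.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))               # 100 training inputs, 3 features each
true_w = np.array([1.5, -2.0, 0.5])         # the relationship we want the model to learn
y = X @ true_w                              # the expected outputs

w = np.zeros(3)                             # untrained model: produces "garbage" at first

def inference(inputs, weights):
    return inputs @ weights                 # forward pass: input -> output

for step in range(500):                     # training loop
    predictions = inference(X, w)           # forward pass on the training inputs
    error = predictions - y                 # difference from the expected output
    gradient = X.T @ error / len(X)         # how the error changes with each weight
    w -= 0.1 * gradient                     # gradient descent: step the weights downhill

print(w)                                    # converges toward [1.5, -2.0, 0.5]
```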
00:22:40.800 I think I'm really pulling.
00:22:43.800 But there's a third piece, which is what the new models do, which is called generative.
00:22:49.800 It's called a generative model.
00:22:51.800 So, for example, say you put in a sentence and you say, predict the next word.
00:22:56.800 This is the simplest thing.
00:22:58.800 So it predicts the next word.
00:23:00.800 So you add that word to the input.
00:23:02.800 And now say, predict the next word.
00:23:04.800 So it contains the original sentence and the word you generated.
00:23:08.800 And it keeps generating words that make sense in the context of the original sentence and the additional words.
00:23:14.800 Right, right, right.
00:23:15.800 This is the simplest basis.
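A sketch of that generate-and-feed-back loop. The predict_next_word function is a stand-in for a trained model, implemented here as a toy lookup table purely so the loop runs.

```python
# Sketch of the generation loop described above. `predict_next_word` is a
# placeholder for the trained model; here it is a toy lookup, not a real LM.
toy_model = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "down",
}

def predict_next_word(words):
    # A real model would score every word in its vocabulary; this toy just
    # looks up a canned continuation.
    return toy_model.get(tuple(words), "<end>")

words = ["the"]                      # the prompt
while True:
    next_word = predict_next_word(words)
    if next_word == "<end>":
        break
    words.append(next_word)          # feed the generated word back in as input

print(" ".join(words))               # "the cat sat down"
```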
00:23:18.800 And then it turns out you can train this to do lots of things.
00:23:20.800 You can train it to summarize a sentence.
00:23:23.800 You can train it to answer a question.
00:23:27.800 There's a big thing about, you know, like Google every day has hundreds of millions of people asking it questions and giving answers and then rating the results.
00:23:36.800 You can train a model with that information.
00:23:38.800 So you can ask it a question and it gives you a sensible answer.
00:23:41.800 But I think in what you said, I actually have the issue that has been going through my mind so much is when you said, you know, people put in the question and then they rate the answer.
00:23:53.800 But my intuition is that the intelligence still comes from humans in the sense that it seems like in order to train whatever AI, you have to be able to give it a lot of power and then say at the beginning, this is good.
00:24:08.800 This is bad.
00:24:09.800 This is good.
00:24:10.800 This is bad.
00:24:11.800 Like reject certain things, accept certain things in order to then reach a point when then you train the AI.
00:24:16.800 And so that's what I mean about the care.
00:24:18.800 So the care will, will come from humans because the care is the one giving it the value saying this is the, this is what is valuable.
00:24:25.800 This is what is not valuable in your calculation.
00:24:29.800 So, so when they first, so there's a program called AlphaGo that learned how to play Go better than a human.
00:24:35.800 So there's two ways to train the model.
00:24:37.800 One is they have a huge database of lots of Go games with good winning moves.
00:24:43.800 So they trained the model with that.
00:24:45.800 And that worked pretty good.
00:24:46.800 And they also took two simulations of Go and they did random moves.
00:25:01.800 And all that happened was, these two simulators played one another at Go and they just recorded whichever moves happened to win.
00:25:01.800 And it started out really horrible.
00:25:03.800 And they just started training the model and this is called adversarial learning.
00:25:07.800 It's not particular adversarial.
00:25:08.800 It's like, you know, you make your moves randomly and you train a model.
00:25:13.800 And so they train multiple models.
00:25:15.800 And over time, those models got very good and they actually got better than human players.
00:25:20.800 Because the humans have limitations about what they know, whereas the models could experiment in a really random space and go very far.
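A heavily simplified sketch of that self-play idea: two copies of the same policy start out playing essentially at random, and the moves that show up in winning games get reinforced. The toy game and the counting-based policy are stand-ins; AlphaGo itself used neural networks and tree search on top of this loop.

```python
import random

# Heavily simplified sketch of the self-play loop described above: two copies of
# the same policy play a toy game, starting with essentially random moves, and
# the moves that appeared in winning games are reinforced a little each time.
# AlphaGo used neural networks and tree search; this only shows the shape of the loop.

MOVES = [1, 2, 3]                        # toy game: players alternate adding 1-3; first to reach 10 or more wins
move_scores = {m: 1.0 for m in MOVES}    # the "policy": a preference weight per move

def pick_move():
    weights = [move_scores[m] for m in MOVES]
    return random.choices(MOVES, weights=weights)[0]

for game in range(5000):
    total, player = 0, 0
    history = {0: [], 1: []}             # moves made by each player this game
    winner = None
    while total < 10:
        move = pick_move()
        history[player].append(move)
        total += move
        if total >= 10:
            winner = player              # the player who reached the target wins
        player = 1 - player
    for move in history[winner]:
        move_scores[move] += 0.01        # reinforce moves from winning games

print(move_scores)                       # see how the move preferences shifted after self-play
```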
00:25:28.800 Yeah.
00:25:29.800 But experiment towards the purpose of winning the game.
00:25:31.800 Yes.
00:25:32.800 Well, but you can experiment towards all kinds of things, it turns out.
00:25:37.800 And humans are also trained that way.
00:25:40.800 Like when you were learning, you were reading, you were saying, this is a good book.
00:25:43.800 This is a bad book.
00:25:44.800 This is good sentence construction.
00:25:46.800 It's good spelling.
00:25:47.800 So you've gotten so many error signals over your life.
00:25:51.800 Well, that's what culture does in large parts.
00:25:53.800 And culture does that.
00:25:54.800 Religion does that.
00:25:55.800 Yeah.
00:25:56.800 Your everyday experience does that.
00:25:58.800 Your family.
00:25:59.800 So we embody that.
00:26:01.800 Yeah.
00:26:02.800 Right.
00:26:03.800 And we're all, and everything that happens to us, we process it on the inference pass, which generates outputs.
00:26:09.800 And then sometimes we look at that and say, hey, that's unexpected or that got a bad result or that got bad feedback.
00:26:15.800 And then we back propagate that and update our models.
00:26:19.800 So really well trained models can then train other models.
00:26:24.800 So humans right now are the smartest people in the world.
00:26:28.800 So the biggest question, the biggest question that I, that, that comes now based on what you said is, because my, my main point is to try to show how it seems like artificial intelligence is always an extension of human intelligence.
00:26:43.800 Like it remains an extension of human intelligence.
00:26:46.800 And maybe the way to.
00:26:47.800 That won't be true at all.
00:26:48.800 So do you think that, do you think that at some point the artificial intelligence will be able to, because the goals recognizing cats, you know, writing plays, all these goals are goals which are, which are based on embodied human existence.
00:27:04.800 Could you train, could you train, could an AI at some point develop a goal which would be incomprehensible to humans because of its own existence?
00:27:13.800 Yeah.
00:27:14.800 I mean, like, for example, there's a small population of humans that enjoy math, right?
00:27:19.800 And they are pursuing, you know, adventures in math space that are incomprehensible to 99.99% of humans, but that's, but they're interested in it.
00:27:30.800 And you could imagine like an AI program working with those mathematicians and coming up with very novel math ideas and then interacting with them.
00:27:41.460 But they could also, you know, if some AIs were, were elaborating out really interesting and detailed stories, they could come up with stories that are really interesting.
00:27:51.460 We're going to see it pretty soon, like all of art, movie making and everything.
00:27:56.160 Could there be a story that is interesting only to the AI and not interesting to us?
00:27:59.820 Yeah, it's possible.
00:28:02.180 So stories are like, I think, some high level information space.
00:28:07.180 So, so the, the, the, the computing age of big data, there's all this data running on computers where nobody, only humans understood it, right?
00:28:15.100 The computers don't.
00:28:16.280 So AI programs are now at the state where the information, the processing and the feedback loops are all kind of in the same space.
00:28:25.580 They're still, you know, relatively rudimentary to humans.
00:28:29.820 Like some AI programs and certain things are better than humans already, but for the most part, they're not, but it's moving really fast.
00:28:36.620 So, and so you can imagine, you know, I think in five or 10 years, most people's best friends will be AIs.
00:28:43.860 And, you know, they'll know you really well and they'll be interested in you and, you know.
00:28:48.840 Unlike your real friends.
00:28:51.100 Yeah, real friends are problematic.
00:28:52.560 They're only interested in you when you're interesting.
00:28:54.720 Yeah, yeah, real friends are.
00:28:55.860 The AI systems will love you even when you're dull and miserable.
00:28:59.080 Well, there's, there's, and there's so much idea space to explore and humans have a wide range.
00:29:05.880 Some humans like to go through their everyday life doing their everyday things.
00:29:09.240 And some people spend a lot of time, like you, a lot of time reading and thinking and talking and arguing and debating, you know.
00:29:17.000 And, you know, there's going to be, like I'd say, a diversity of possibilities with what's, what a thinking thing can do when the thinking is fairly unlimited.
00:29:29.500 So, I'm curious about, I'm still, I'm curious in pursuing this, this issue that Jonathan has been developing.
00:29:39.100 So, there's a, there's a literally infinite number of ways, virtually infinite number of ways that we could take images of this room.
00:29:48.580 Right now, if a human being is taking images of this room, they're going to be, they're going to sample a very small space of that infinite range of possibilities.
00:29:56.380 Because if I was taking pictures in this room, in all likelihood, I would take pictures of identities, objects that are identifiable to human beings, that are functional to human beings, at a level of focus that makes those objects clear.
00:30:11.520 And so, then you could imagine that the set of all images on the internet has that implicit structure of perception built into it.
00:30:19.320 And that's a function of what human beings find useful.
00:30:22.460 You know, I mean, I could take a photo of you that was, the focal depth was here, and here, and here, and here, and here, and two inches past you.
00:30:29.220 And now, I suppose you could.
00:30:30.980 There's a technology for that called light fields.
00:30:33.260 Okay.
00:30:33.740 So, then you could, if you had that picture properly done, then you could move around in an image and see.
00:30:40.580 But yeah, fair enough.
00:30:41.680 I get your point.
00:30:42.620 Like the human recorded data has.
00:30:46.500 Has our biology built into it.
00:30:48.360 Has our biology built into it, but also unbelievably detailed encoding of how physical reality works.
00:30:57.440 Right.
00:30:58.160 So, every single pixel in those pictures, even though you kind of selected the view, the focus, the frame.
00:31:03.040 Right.
00:31:04.000 It still encoded a lot more information than you're processing.
00:31:08.820 Right.
00:31:09.040 And if you take a large, it turns out if you take a large number of images of things in general, so you've seen these things where you take a 2D image and turn it into a 3D image.
00:31:18.800 Yeah.
00:31:19.220 Right.
00:31:19.680 The reason that works is even in the 2D image.
00:31:22.620 The 3D is embedded.
00:31:23.700 The 3D image in the room actually got embedded in that picture in a way.
00:31:28.660 Yeah, yeah, yeah.
00:31:29.200 Then if you have the right understanding of how physics and reality works, you can reconstruct the 3D model.
00:31:34.960 Okay, so this reminds me.
00:31:37.240 But you could, you know, an AI scientist may cruise around the world with infrared and radio wave cameras, and they might take pictures of all different kinds of things.
00:31:48.020 And every once in a while, they'd show up and go, hey, the sun, you know, I've been staring at the sun and the ultraviolet and radio waves for the last month.
00:31:55.620 And it's way different than anybody thought, because humans tend to look at light in the visible spectrum.
00:32:02.800 And, you know, there could be some really novel things coming out of that.
00:32:07.080 Well, so...
00:32:07.760 But humans also, we live in the spectrum we live in, because it's a pretty good one for planet Earth.
00:32:13.320 Like, it wouldn't be obvious that AI would start some different place.
00:32:17.220 Like, visible spectrum is interesting for a whole bunch of reasons.
00:32:20.800 Right. So, in a set of images that are human-derived, you're saying that there's...
00:32:26.200 The way I would conceptualize that is that there's two kinds of logos embedded in that.
00:32:31.400 One would be that you could extract out from that set of images what was relevant to human beings.
00:32:36.880 But you're saying that the fine structure of the objective world outside of human concern is also embedded in the set of images.
00:32:44.980 And that an AI system could extract out a representation of the world, but also a representation of what's motivating to human beings.
00:32:53.020 Yes. And then some human scientists already do look at the sun and radio waves and other things, because they're trying to, you know, get different angles on how things work.
00:33:02.440 Yeah, well, I guess it...
00:33:04.380 It's a curious thing. It's the same with, like, buildings and architecture. They mostly fit people.
00:33:11.180 Well, the other...
00:33:12.380 There's a reason for that.
00:33:13.900 The reason why I keep coming back to hammering the same point is that even in terms of the development of the AI, that is, developing AI requires an immense amount of money, energy, you know, and time.
00:33:27.780 That's a transient thing. In 30 years, it won't cost anything. So, that's going to change so fast, it's amazing.
00:33:34.760 So, that's a... Like, supercomputers used to cost millions of dollars, and now your phone is the supercomputer.
00:33:40.960 So, it's... The time between millions of dollars and $10 is about 30 years.
00:33:46.860 So, it's... Like, I'm just saying, it's... Like, the time and effort isn't a thing in technology. It's moving pretty fast.
00:33:55.420 It's just... That's just... That just sets the date.
00:33:58.440 Yeah. But even making... Even making... Let's say, even... I mean, I guess maybe this is the nightmare question.
00:34:06.480 Like, could you imagine an AI system which becomes completely autonomous, which is creating itself even physically through automated factories, which is, you know, programming itself, which is creating its own goals, which is not at all connected to human endeavor?
00:34:24.580 Yeah. I mean, individual researchers can... You know, I have a friend who... I'm going to introduce you to him tomorrow.
00:34:31.520 He wrote a program that scraped all of the internet and trained an AI model to be a language model on a relatively small computer.
00:34:39.020 And in 10 years, the computer he could easily afford would be as smart as a human.
00:34:43.720 And so, he could train that pretty easily.
00:34:47.060 And that model could go on Amazon and buy 100 more of those computers and copy itself.
00:34:52.940 So, yeah, we're 10 years away from that.
00:34:56.460 And then... Then why... Like, why would it do that?
00:34:59.340 I mean, what does... It does... Is it possible... It's all about the motivational question.
00:35:03.720 I think that that's what even Jordan and I both have been coming at from the outset.
00:35:07.660 It's like... So, you have an image, right? You have an image of Skynet or of the Matrix, you know, in which the sentient AI is actually fighting for its survival.
00:35:17.360 So, it has a survival instinct, which is pushing it to self-perpetuate, like to replicate itself and to create variation on itself in order to survive and identifies humans as an obstacle to that, you know.
00:35:32.500 Yeah, yeah. So, you have a whole bunch of implicit assumptions there.
00:35:36.360 So, humans, last I checked, are unbelievably competitive.
00:35:40.280 And when you let people get into power with no checks on them, they typically run amok.
00:35:45.460 It's been a historical experience.
00:35:48.340 And then humans are, you know, self-regulating to some extent, obviously, with some serious outliers, because they self-regulate with each other.
00:35:58.360 And humans and AI models, at some point, will have to find their own calculation of self-regulation and trade-offs about that stuff.
00:36:09.760 Yeah, because AI doesn't feel pain, at least that we don't know that it feels pain.
00:36:14.780 Well, lots of humans don't feel pain either.
00:36:16.780 So, I mean, that's... I mean, humans feeling pain or not doesn't stop a whole bunch of activity.
00:36:23.160 I mean, that's...
00:36:23.720 I mean, it doesn't... The fact that we feel pain doesn't stop...
00:36:26.580 It doesn't regulate that many people.
00:36:28.920 Right, right.
00:36:30.060 I mean, there's definitely people like, you know, children, if you threaten them with, you know, go to your room and stuff, you can regulate them that way.
00:36:36.420 But some kids ignore that completely, and adults are the same way.
00:36:39.300 And it's often counterproductive.
00:36:41.040 Yeah.
00:36:41.240 So, right, you know, culture and societies and organizations, we regulate each other, you know, sometimes in success and not in success.
00:36:51.280 In competition and cooperation.
00:36:53.500 Yeah.
00:36:53.700 Do you think that... Well, we've talked about this to some degree for decades.
00:36:58.260 I mean, when you look at how fast things are moving now, and as you push that along, when you look out 10 years, and you see the relationship between the AI systems that are being built and human beings, what do you envision?
00:37:14.440 Or can you envision it?
00:37:16.060 Well, can I... Yeah.
00:37:18.980 Like I said, I'm a computer guy.
00:37:21.480 And I'm watching this with, let's say, some fascination as well.
00:37:25.740 I mean, the last...
00:37:27.220 So, Ray Kurzweil said, you know, progress accelerates.
00:37:30.960 Yeah.
00:37:31.460 Right?
00:37:31.740 So, we have this idea that 20 years of progress is 20 years.
00:37:34.980 But, you know, the last 20 years of progress was 20 years, and the next 20 years will probably be, you know, 5 to 10.
00:37:41.540 Right, right, right.
00:37:42.340 And...
00:37:43.300 And you can really feel that happening.
00:37:45.100 To some level, that causes social stress, independent of whether it's AI or Amazon deliveries.
00:37:51.980 You know, there's so many things that are going into the stress of it all.
00:37:56.580 But there's progress, which is an extension of human capacity.
00:38:01.120 And then there's this progress, which I'm hearing about, the way that you're describing it, which seems to be an inevitable progress towards creating something which is more powerful than you.
00:38:14.740 Right?
00:38:15.040 And so, what is that?
00:38:16.040 I don't even understand that drive.
00:38:17.700 Like, what is that drive to create something which can supplant you?
00:38:22.160 So, look at the average person in the world, right?
00:38:24.800 So, the average person already exists in this world.
00:38:28.060 Because the average person is halfway up the human hierarchy.
00:38:31.680 There's already many people more powerful than any of us.
00:38:35.980 They could be smarter.
00:38:37.100 They could be richer.
00:38:37.880 They could be better connected.
00:38:39.460 We already live in a world.
00:38:41.340 Like, very few people are at the top of anything.
00:38:44.460 Right?
00:38:45.060 So, that's already a thing.
00:38:46.640 So, basically, the drive to make someone a superstar, let's say, or the drive to elevate someone above you.
00:38:52.980 So, that would be the same drive that is bringing us to creating these ultra-powerful machines.
00:38:59.680 Because we have that.
00:39:00.640 Like, we have a drive to elevate.
00:39:02.340 Like, you know, when we see a rock star that we like, people want to submit themselves to that.
00:39:07.200 They want to dress like them.
00:39:08.280 They want to raise them up above them as an example.
00:39:11.100 Something to follow.
00:39:12.260 Right?
00:39:12.380 Something to subject themselves to.
00:39:14.860 You see that with leaders.
00:39:15.880 You see that in the political world.
00:39:17.160 And in teams, you see that.
00:39:20.320 In sports teams, the same thing.
00:39:21.840 And so, you think that's the drive.
00:39:22.840 Well, we've always tried to build things that are beyond us.
00:39:25.620 You know?
00:39:26.600 I mean...
00:39:27.200 I mean, it's about, are we building a God?
00:39:29.920 Is that what people...
00:39:31.480 Is that the drive that is pushing someone towards...
00:39:34.400 Because when I hear what you're describing, Jim, I hear something that is extremely dangerous.
00:39:39.760 Right?
00:39:39.940 Sounds extremely dangerous to the very existence of humans.
00:39:43.200 Yet, I see humans acting and moving in that direction almost without being able to stop it.
00:39:48.980 As if there's no one...
00:39:49.800 I think it is unstoppable.
00:39:51.600 Well, that's one of the things we've also talked about.
00:39:54.100 Because I've asked Jim straight out, you know, because of the hypothetical danger associated with this, why not stop doing it?
00:40:02.740 And, well, part of his answer is the ambivalence about the outcome.
00:40:06.020 But also that it isn't obvious at all that in some sense it's stoppable.
00:40:10.460 I mean, it's the cumulative action of many, many people that are driving this along.
00:40:17.040 And even if you took out one player, even a key player, the probability that you do anything but slow it infinitesimally is quite...
00:40:25.860 Yeah, because there's also a massive payoff for those that will succeed.
00:40:29.380 It's also set up that way.
00:40:30.780 People know that at least until the AI take over or whatever, that whoever is on the line towards increasing the power of the AI will rake in major rewards.
00:40:44.340 Right?
00:40:44.660 Well, that's what you do with all cognitive acceleration, right?
00:40:48.040 Yeah.
00:40:48.240 I could recommend Iain Banks as an author, English author, I think.
00:40:53.580 He wrote a series of books on the...
00:40:55.440 He called them the Culture novels.
00:40:57.380 And it was a world where there were humans, and then there were AIs as smart as the smartest humans, and AIs that were dumber than humans.
00:41:04.000 But there were some AIs that were much, much smarter.
00:41:06.360 And they lived in harmony because they mostly all pursued what they wanted to pursue.
00:41:12.080 Humans pursued human goals and super smart AIs pursued super smart AI goals.
00:41:18.540 And, you know, they communicated and worked with each other.
00:41:22.360 But they mostly, you know, they're different.
00:41:25.600 When they were different enough that that was problematic, their goals were different enough that they didn't overlap.
00:41:31.320 Because one of the things that...
00:41:33.300 That would be my guess.
00:41:34.280 It's like these ideas where these super AIs get smart and the first thing they do is stomp out the humans.
00:41:39.860 It's like, you don't do that.
00:41:41.000 Like, you don't wake up in the morning and think, I have to stomp out all the cats.
00:41:45.100 No, it's not about...
00:41:46.400 The cats do cat things and the ants do ant things and the birds do bird things.
00:41:50.520 And super smart mathematicians do smart mathematician things.
00:41:54.880 And, you know, guys who like to build houses do build house things.
00:41:58.060 And, you know, everybody...
00:41:59.680 You know, the world...
00:42:00.820 There's so much space in the intellectual zone.
00:42:04.280 That people tend to go pursue the...
00:42:08.920 In a good society.
00:42:10.740 Like, you tend to pursue the stuff that you do.
00:42:13.540 And then the people in your zone, you self-regulate.
00:42:18.240 And you also, even in the social strategies, we self-regulate.
00:42:22.800 I mean, the recent political events of the last 10 years, the weird thing to me has been, why have, you know, people with power been overreaching to take too much from people with less?
00:42:36.800 Like, that's bad regulation.
00:42:38.400 But one of the aspects of increase in power is that increase in power is always mediated, at least in one aspect, by military, by, let's say, physical power on others.
00:42:54.040 You know, and we can see that technology is linked and has been linked always to military power.
00:42:59.500 And so the idea that there could be some AIs that will be our friends or whatever is maybe possible.
00:43:07.340 But the idea that there will be some AIs which will be weaponized seems absolutely inevitable because increase in power is always...
00:43:16.980 Increase in technological power always moves towards military.
00:43:20.240 So we've lived with atomic bombs since the 40s, right?
00:43:25.680 So the, I mean, the solution to this has been mostly, you know, some form of mutual assured destruction or attacking me, like the response to attacking me is so much worse than the...
00:43:39.460 Yeah, but it's also because we have reciprocity.
00:43:42.340 We recognize each other as the same.
00:43:44.500 So if I look into the face of another human, there's a limit of how different I think that person is from me.
00:43:51.680 But if I'm hearing something described as a possibility of superintelligences that have their own goals, their own cares, their own structures, then how much mirror is there between these two groups of people, these two groups?
00:44:05.380 Well, Jim's objection seems to be something like we're making, we may be making when we're doomsaying, let's say.
00:44:13.400 And I'm not saying there's no place for that.
00:44:16.280 We're making the presumption of something like a zero-sum competitive landscape, right?
00:44:21.720 Is that the idea and the idea behind movies like The Terminator is that there is only so much resources and the machines and the human beings would have to fight over it.
00:44:33.040 And you can see that that could easily be a preposterous assumption.
00:44:36.760 Now, I think that one of the fundamental points you're making, though, is also there will definitely be people that will weaponize AI.
00:44:47.140 And those weaponized AI systems will have as their goal something like the destruction of human beings, at least under some circumstances.
00:44:54.840 And then there's the possibility that that will get out of control because the most effective systems at destroying human beings might be the ones that win, let's say.
00:45:04.660 And that could happen independently of whether or not it is a true zero-sum competition.
00:45:08.760 Yeah, and also the effectiveness of military stuff doesn't need very smart AI to be a lot better than it is today.
00:45:18.680 You know, like the Star Wars movies where, like, you know, tens of thousands of years in the future, super highly trained, you know, fighters can't hit somebody running across a field.
00:45:30.040 Like, that's silly, right?
00:45:31.200 You can already make a gun that can hit everybody in the room without aiming at it.
00:45:36.720 It's, you know, there's, like, the military threshold is much lower than any intelligence threshold, like, for danger.
00:45:47.520 And, you know, like, to the extent that we self-regulated through the nuclear crisis is interesting.
00:45:53.880 I don't know if it's because we thought that the Russians were like us.
00:45:57.260 I kind of suspect the problem was that we thought they weren't like us.
00:46:01.820 And, but we still managed to make some calculation to say that any kind of attack would be mutually devastating.
00:46:11.120 Well, when you look at, you know, the destructive power of the military we already have so far exceeds the planet.
00:46:16.860 I'm not sure, like, adding intelligence to it is the tipping point.
00:46:21.320 Like, like, that's, I think the more likely thing is things that are truly smart in different ways will be interested in different things.
00:46:31.400 And then the possibility for, let's say, mutual flourishing is really interesting.
00:46:38.260 And I know artists using AI already to do really amazing things.
00:46:42.540 And that's already happening.
00:46:44.400 Well, when you're working on the frontiers of AI development and you see the development of increasingly intelligent machines, I mean, I know that part of what drives you is, I don't want to put words in your mouth, but what drives intelligent engineers in general, which is to take something that works and make it better and maybe to make it radically better and radically cheaper.
00:47:04.540 So, so there's this drive toward technological improvement.
00:47:08.140 And I know that you like to solve complex problems and you do that extraordinarily well.
00:47:12.920 But do you, do you, is there also a vision of a more abundant form of human flourishing emerging from the, from the development?
00:47:23.500 So what do you see happening?
00:47:24.820 Well, you said it years ago.
00:47:26.080 It's like, we're going to run out of energy.
00:47:27.680 What's next?
00:47:28.140 We're going to run out of matter.
00:47:29.300 Right.
00:47:29.540 Like our ability to do what we want in ways that are interesting and, you know, for some people, beautiful is limited by a whole bunch of things because we're, you know, partly it's technological and partly with, you know, we're stupidly divisive.
00:47:46.780 But there is, there's also a reality, which is one of the things that technology has been is, of course, an increase in power towards desire, towards human desire.
00:47:59.540 And that is represented in mythological stories where, let's say, technology is used to accomplish impossible desire.
00:48:09.140 Right.
00:48:09.340 We have, you know, the story of the mechanical cow built around the wife of the king of Minos, you know, in order for her to be inseminated by a bull.
00:48:21.040 We have the story, we have the story of Frankenstein, et cetera, the story of the golem, where we put our desire into this increased power.
00:48:33.800 And then what happens is that we don't know our desires.
00:48:36.300 That's one of the things that I've also been worried about in terms of AI is that we act, we have secret desires that enter into what we do that people aren't totally aware of.
00:48:47.980 And as we increase in power, these systems, those desires, let's say, the, like the idea, for example, of the possibility of having an AI friend and the idea that an AI friend would be the best friend you've ever had because that, that friend would be the nicest to you, would care the most about you, would do all those things.
00:49:09.000 That would be an exact example of what I'm talking about, which is, it's really the story of the genie, right?
00:49:14.680 It's the story of the genie in the lamp, where the genie says, what do you wish?
00:49:20.080 And the genie says, I have unlimited power to give it to you.
00:49:23.100 And so I state my wish, but that wish has all these underlying implications that I don't understand, all these underlying possibilities.
00:49:30.760 Yeah, but the cool thing is, the moral of almost all those stories is that having unlimited wishes will lead to your downfall.
00:49:41.160 And so humans, like, if you give a young person an unlimited amount of stuff to drink, for six months they're going to be falling down drunk, and then they're going to get over it, right?
00:49:52.780 Having a friend that's always your friend no matter what is probably going to get boring pretty quickly.
00:49:56.860 Well, the literature on marital stability indicates that.
00:50:00.860 So there's a, there's a sweet spot with regards to marital stability in terms of the ratio of negative to positive communication.
00:50:09.600 So if on average you receive five positive communications for every one negative communication from your spouse, that's at the lower threshold for stability.
00:50:21.680 If it's four positive to one negative, you're headed for divorce.
00:50:25.020 But interestingly enough, on the other end, there's a threshold as well, which is that if it exceeds 11 positive to one negative, you're also moving towards divorce.
00:50:36.000 So there might be self-regulating mechanisms that would, in a sense, take care of that.
00:50:43.160 You might find a yes man, AI friend, extraordinarily boring, very, very rapidly.
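A minimal sketch, assuming the 5:1 and roughly 11:1 figures cited above, of how that ratio band might be expressed; the function name and sample counts are hypothetical, and this is purely illustrative, not a clinical measure:

```python
# Minimal sketch of the "sweet spot" described above: a positive-to-negative
# communication ratio below ~5:1 or above ~11:1 is associated with instability.
# The thresholds are the figures cited in the conversation; the function name
# and sample data are illustrative assumptions only.

def stability_band(positive: int, negative: int) -> str:
    """Classify a positive-to-negative communication ratio against the cited band."""
    if negative == 0:
        return "undefined (no negative communications observed)"
    ratio = positive / negative
    if ratio < 5:
        return "below the lower threshold -- trending toward instability"
    if ratio > 11:
        return "above the upper threshold -- also trending toward instability"
    return "within the cited stability band"

if __name__ == "__main__":
    for pos, neg in [(4, 1), (5, 1), (8, 1), (12, 1)]:
        print(f"{pos}:{neg} -> {stability_band(pos, neg)}")
```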
00:50:48.740 As opposed to an AI friend that was interested in what you were interested in, which would actually be interesting.
00:50:54.920 Like, you know, we go through friends in the course of our lives, like different friends are interesting at different times.
00:51:00.480 And some friends we grow with, and that continues to be really interesting for years and years.
00:51:05.260 And other friends, you know, some people get stuck in their thing and then you've moved on or they've moved on or something.
00:51:10.840 So, yeah, I tend to think a world with more abundance, more possibilities, and more interesting things to do is an interesting place.
00:51:25.180 Okay, okay.
00:51:25.920 And modern society has let the human population grow, and some people think this is a bad thing, but I don't know.
00:51:31.900 I'm a fan of it.
00:51:33.540 You know, the human population has gone from a few hundred million to billions of people.
00:51:38.180 And that's generally been a good thing.
00:51:40.340 We're not running out of space.
00:51:41.800 I've been in, you know, some of your audience has probably been in an airplane.
00:51:45.360 If you look out the window, the country is actually mostly empty.
00:51:49.140 The oceans are mostly empty.
00:51:50.480 Like, we're weirdly good at polluting large areas, but as soon as we decide not to, we don't have to.
00:51:58.620 Most of our, you know, energy pollution problems are technical.
00:52:03.440 Like, we can stop polluting.
00:52:04.920 Like, electric cars are great.
00:52:06.160 So, there's so many things that we could do technically.
00:52:11.940 I forget the guy's name.
00:52:13.840 He said the Earth could easily support a population of a trillion people.
00:52:17.940 And a trillion people would be a lot more people doing, you know, random stuff.
00:52:21.780 And he didn't imagine that the future population would be a trillion humans and a trillion AIs, but it probably will be.
00:52:29.780 It will probably exist on multiple planets, which will be good the next time an asteroid shows up.
00:52:34.460 So, what do you think about one of the things that seems to be happening, and tell me if you think I'm wrong here, because I think it's germane to Jonathan's point.
00:52:42.080 And I just want to make the point of, you know, where we are compared to living in the Middle Ages, our lives are longer, our families are healthier, our children are more likely to survive.
00:52:51.460 Like, many, many good things happened.
00:52:54.940 Like, setting the clock back wouldn't be good.
00:52:57.640 And, you know, if we have some care and people who actually care about how culture interacts with technology for the next 50 years, you know, we'll get through this hopefully more successfully than we did the atomic bomb and the Cold War.
00:53:10.580 But it's a major change.
00:53:15.380 I mean, your worries are relevant, you know.
00:53:22.540 But, you know, but also, Jonathan, your stories about how humans have faced abundance and faced evil kings and evil overlords.
00:53:31.480 Like, we have thousands of years of history of facing the challenge of the future and the challenge of things that cause radical change.
00:53:39.960 Yeah.
00:53:40.540 You know, that's very valuable, you know, information.
00:53:44.900 But for the most part, nobody's succeeded by stopping change.
00:53:48.880 They've succeeded by bringing to bear on the change our capability to self-regulate and find a balance.
00:53:57.440 Like, a good life isn't having as much gold as possible.
00:54:00.340 That's a boring life.
00:54:01.480 A good life is, you know, having some quality friends and doing what you want and having some insight in life.
00:54:08.520 Yeah.
00:54:09.280 And some optimal challenge.
00:54:11.640 And, you know, a world where a larger percentage of people can live in relative abundance and have tools and opportunities is, I think, a good thing.
00:54:23.160 Yeah.
00:54:23.340 And I don't want to pull back abundance.
00:54:25.620 But what I have noticed is that our abundance brings a kind of nihilism to people.
00:54:33.600 And I don't, like I said, I don't want to go back.
00:54:35.840 I'm happy to live here and to have these tech things.
00:54:38.400 But it's something I've also noticed: when the capacity to get your desires increases past a certain point, it also leads to a kind of nihilism.
00:54:55.900 Well, I wonder, Jonathan, I wonder if that's partly a consequence of the erroneous maximization of short-term desire.
00:55:06.980 I mean, one of the things that you might think about that could be dangerous on the AI front is that we optimize the manner in which we interact with our electronic gadgets to capture short-term attention, right?
00:55:23.120 Because there's a difference between getting what you want right now, right now, and getting what you need in some more mature sense across a reasonable span of time.
00:55:31.960 And one of the things that does seem to be happening online, and I think it is driven by the development of AI systems, is that we're assaulted by systems that parasitize our short-term attention at the expense of longer-term attention.
00:55:48.120 And if the AI systems emerge to optimize attentional grip, it isn't obvious to me that they're going to optimize for the attention that works over the medium to long run, right?
00:55:59.420 They could conceivably maximize something like a whim-centered existence.
00:56:06.460 Yeah, because all the virality is based on that; all the social media networks are based on this reduction of attention, this reduction of desire to finding your rest, let's say, in that desire, right?
00:56:19.820 The like, the click, all these things, they're...
00:56:22.400 Yeah, now.
00:56:23.520 Yeah, exactly.
00:56:24.120 So, but that's something that, for reasons that are somewhat puzzling, but maybe not, works like this: the business models around a lot of those interfaces are built so that the user is the product, and the advertisers are trying to get your attention.
00:56:42.260 Yeah, yeah.
00:56:43.100 But that's something culture could regulate.
00:56:45.000 We could decide that, no, we don't want tech platforms to be driven by advertising money.
00:56:49.960 Like, that would be a smart decision, probably.
00:56:53.520 And that could be a big change.
00:56:56.140 And also...
00:56:56.460 What do you see as an alternative?
00:56:57.640 See, well, the problem with that might be that markets drive that in some sense, right?
00:57:02.780 Yeah.
00:57:03.160 And I know they're driving that in a short-term way.
00:57:04.900 But we can take steps.
00:57:06.220 Like, you know, at various times, you know, alcohol's been illegal.
00:57:09.400 Like, you can, society can decide to regulate all kinds of things.
00:57:14.220 And, you know, sometimes some things need to be regulated and some things don't.
00:57:18.800 Like, when you buy a hammer, you don't have to fight with your hammer for your attention, right?
00:57:23.180 A hammer's a tool.
00:57:24.460 You buy one when you need one.
00:57:26.820 Nobody's marketing hammers to you.
00:57:29.220 Like, that has a relationship that's transactional to your purpose, right?
00:57:34.860 Yeah.
00:57:35.100 And our technology has become a thing where, I mean...
00:57:38.860 But there's a relationship...
00:57:39.500 In the short run, things are so...
00:57:40.920 There's a relationship between human, let's say, high human goals.
00:57:44.680 Something like attention and status.
00:57:48.540 And what we talked about, which is the idea of elevating something higher in order to see it as a model.
00:57:53.740 See, these are where intelligence exists in the human person.
00:57:58.560 And so when we notice that in the systems, in the platforms, these are the aspects of intelligence which are being weaponized in some ways...
00:58:08.740 Not against us, necessarily, but weaponized because they're the most beneficial in the short term for generating our constant attention.
00:58:16.720 And so what I mean is that that is what the AIs are made of, right?
00:58:21.340 They're made of attention, prioritization, you know, good, bad.
00:58:26.640 What is it that is worth putting energy into in order to predict towards a telos?
00:58:31.820 And so I'm seeing that the idea that we could disconnect them suddenly seems very difficult to me.
00:58:39.600 Yeah, so I'll give you two...
00:58:41.620 First, I want to give an old example.
00:58:43.500 So after World War II, America went through this amazing building boom of building suburbs.
00:58:50.520 And the American dream was you could have your own house, your own yard in the suburb with a good school, right?
00:58:56.360 So in the 50s, 60s, early 70s, they were building that like crazy.
00:59:01.260 By the time I grew up, I lived in a suburban dystopia, right?
00:59:05.800 And we found that, as a goal, it wasn't a good thing, because people ended up in houses separated from social structures, with their social needs unmet.
00:59:16.760 And then new towns are built around like a hub with, you know, places to go and eat, you know.
00:59:22.260 So there was a good that was viewed in terms of opportunity and abundance, but it actually was a fail culturally.
00:59:30.380 And then some places it modified and it continues.
00:59:33.000 And some places are still dystopian, you know, suburban areas.
00:59:37.400 And some places people simply learn to live with it, right?
00:59:41.300 Yeah, but that has something that has to do with attention, by the way.
00:59:44.220 It has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set up in a way in which all the levels can have room to exist, let's say.
00:59:56.160 And so, you know, the new systems, the new way, let's say the new urbanist movement, similar to what you're talking about, that's what they've understood.
01:00:04.960 It's like we need places of intimacy in terms of the house.
01:00:07.760 We need places of communion in terms of, you know, parks and alleyways and buildings where we meet and a church, all these places that kind of manifest our communion together.
01:00:18.880 Yeah, so those existed coherently for long periods of time.
01:00:23.400 And then the abundance post-World War II and some ideas about, like, what life could be like caused this big change.
01:00:32.720 And that change satisfied some needs, people got houses, but broke community needs.
01:00:38.080 And then new sets of ideas about what's the synthesis, what's the possibility of having your own home but also having community, not having to drive 15 minutes for every single thing.
01:00:50.260 And some people live in those worlds and some people don't.
01:00:52.620 Do you think we'll be smart?
01:00:54.420 So one of the problems...
01:00:55.340 Well, why were we smart enough to solve some of those problems?
01:00:58.420 Because we had 20 years.
01:00:59.880 But now, one of the things that's happening, as you pointed out earlier, is that we're going to be producing equally revolutionary transformations, but on a much shorter timescale.
01:01:12.280 And so, like, one of the things I wonder about, and I think it's driving some of the concerns in this conversation, is: are we going to be intelligent enough to direct with regulation the transformations of technology as they start to accelerate?
01:01:29.680 I mean, we've already, you look what's happened online, I mean, we've inadvertently, for example, radically magnified the voices of narcissists, psychopaths, and Machiavellians.
01:01:41.880 And we've done that so intensely, partly, and I would say partly as a consequence of AI mediation, that I think it's destabilizing the entire political economy.
01:01:51.520 Well, it's destabilizing part of it, like as Scott Adams pointed out, you just block everybody that acts like that.
01:01:56.980 I don't pay attention to people that talk like that.
01:01:59.760 Yeah, but they seem to be raising the temperature in the entire culture.
01:02:01.660 Well, there's still places that are sensitive to it, like 10,000 people here can make a storm and some corporate, you know, person, you know, fires somebody.
01:02:10.760 But I think that's, like, we're five years from that being over.
01:02:13.500 A corporation will go, 10,000 people out of 10 billion?
01:02:17.380 Not a big deal.
01:02:18.380 Okay, so you think at the moment it's a calibration problem.
01:02:20.260 It's a learning moment, and we'll re-regulate.
01:02:25.520 What's natural to our children is so different than what's natural to us.
01:02:29.760 But what was natural to us was very different from our parents.
01:02:32.960 So some changes get accepted generationally really fast.
01:02:37.280 So what's made you so optimistic?
01:02:40.160 What's made, what do you mean optimistic?
01:02:42.080 Well, most of the things that you have said today, and maybe it's also because we're pushing you, but you really do seem optimistic.
01:02:49.760 My nephew, Kyle, is a really smart, clever guy.
01:02:53.560 He called me a, what did he call it, a cynical optimist.
01:02:59.340 Like, I believe in people.
01:03:02.340 Like, I like people, but also people are complicated.
01:03:04.800 They've all got all kinds of nefarious goals.
01:03:06.800 Like, I worry a lot more about people burning down the world than I do about artificial intelligence.
01:03:12.660 Just because, you know, people, well, you know people, they're difficult, right?
01:03:20.020 But the interesting thing is in aggregate we mostly self-regulate.
01:03:24.560 And when things change, you have these dislocations.
01:03:27.100 And then it's up to people who talk and think, and while we're having this conversation, I suppose, to talk about how do we re-regulate this stuff.
01:03:35.240 Well, one of the things that the increase in power has done in terms of AI, and you can see it with Google and you can see it online, is that there are certain people who hold the keys, let's say, to what you see and what you don't see.
01:03:51.680 So you see that on Google, right?
01:03:53.440 And you know it if you know what searches to make; you realize that this is actually being directed by someone who now has a huge amount of power to direct my attention towards their ideological purpose.
01:04:07.840 And so that's why, like, I think that to me, I always tend to see AI as an extension of human power.
01:04:19.280 Even though there is this idea that it could somehow become totally independent, I still tend to see it as an increase of the human care.
01:04:29.120 And whoever will be able to hold the keys to that will have increase in power.
01:04:33.980 And that can be, like, and I think we're already seeing it.
01:04:37.080 Well, that's not really any different, though, is it, Jonathan, than the situation that's always confronted us in the past?
01:04:43.460 I mean, we've always had to deal with the evil uncle of the king, and we've always had to deal with the fact that an increase in ability could also produce a commensurate increase in tyrannical power, right?
01:04:56.360 I mean, so that might be magnified now, and maybe the danger in some sense is more acute, but perhaps the positive possibility is more present as well.
01:05:06.380 Because you can train an AI to find hate speech, right?
01:05:10.180 You can train an AI to find hate speech and then to act on it immediately. And we're not only talking about social media; what we've seen is that this is now encroaching into payment systems, into people losing their bank accounts, their access to different services.
01:05:30.200 And so this idea of automation—
01:05:32.140 Yeah, there's an Australian bank that already has decided that it's a good thing to send all of their customers a carbon load report every month, right?
01:05:41.400 And to offer them hints about how they could reduce their polluting purchases, let's say.
01:05:47.980 Well, at the moment, that system is one of voluntary compliance, but you can certainly see in a situation like the one we're in now that the line between voluntary compliance and involuntary compulsion is very, very thin.
01:06:02.860 Yeah.
01:06:03.180 So I'd like to say—so during the early computer world, computers were very big and expensive.
01:06:09.140 And then they made many computers and workstations, but they were still corporate-only.
01:06:13.720 And then the PC world came in.
01:06:15.440 All of a sudden, PCs put everybody online.
01:06:18.320 Everybody could suddenly see all kinds of stuff.
01:06:21.600 And, you know, people could get a Freedom of Information Act request, put it online somewhere, and 100,000 people could see it.
01:06:28.520 Like, it was an amazing democratization moment.
01:06:32.200 And then there was a similar but smaller revolution with the world of, you know, smartphones and apps.
01:06:39.860 But then we've had a new, completely different set of companies, by the way, you know, from, you know, what happened in the 60s, 70s, and 80s to today.
01:06:49.260 It's very different companies that control it.
01:06:52.400 And there are people who worry that AI will be a winner-take-all thing.
01:06:56.260 Now, I think so many people are using it, and they're working on it in so many different places, and the cost is going to come down so fast that pretty soon you'll have your own AI app that you'll use to mediate the Internet to strip out, you know, the endless stream of ads.
01:07:12.980 And you can say, well, is this story objective?
01:07:16.380 Well, here's the 15 stories, and this is being manipulated this way, and this is being manipulated that way.
01:07:21.840 And you can say, well, I want—what's more like the real story?
01:07:25.140 And the funny thing is, information that's broadly distributed and has lots of inputs is very hard to fake the whole thing.
01:07:36.800 So right now, a story can be pushed through a major media outlet, and if they can control the narrative, everybody gets the fake story.
01:07:44.440 But if the media is distributed across a billion people who are all interacting in some useful way, somebody's standing up—
01:07:53.820 There's a signal there.
01:07:54.020 Yeah, there's real signal there, and if somebody stands up and says something that's not true, everybody goes, everybody knows that's not true.
01:08:01.360 So, like, a good outcome with people thinking seriously would be the democratization of information and, you know, objective facts in the same way.
01:08:13.020 The same thing that happened with PCs versus corporate central computers could happen again.
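A minimal sketch of the cross-checking idea described here, assuming a toy setup in which one claim is compared across several independent copies and divergent sources are flagged; the outlet names, the exact-string comparison, and the simple majority rule are all assumptions for illustration, not a description of any real system:

```python
# Toy illustration of "broadly distributed information is hard to fake":
# compare each source's version of a claim against the consensus of many
# independent copies and flag the ones that diverge.

from collections import Counter

def consensus_check(claims: dict[str, str]) -> tuple[str, list[str]]:
    """Return the majority version of a claim and the sources that disagree with it."""
    counts = Counter(claims.values())
    majority_claim, _ = counts.most_common(1)[0]
    dissenters = [src for src, claim in claims.items() if claim != majority_claim]
    return majority_claim, dissenters

if __name__ == "__main__":
    # Hypothetical example: five independent copies of the same story.
    reports = {
        "outlet_a": "the bridge closed for repairs on Monday",
        "outlet_b": "the bridge closed for repairs on Monday",
        "outlet_c": "the bridge closed for repairs on Monday",
        "outlet_d": "the bridge never closed",
        "outlet_e": "the bridge closed for repairs on Monday",
    }
    majority, outliers = consensus_check(reports)
    print("consensus:", majority)
    print("diverging sources:", outliers)
```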
01:08:17.920 Yeah, but if you have an increased problem, the problem is that these are—
01:08:22.460 And that's a real possibility.
01:08:23.900 The increasing power always creates the two at the same time.
01:08:27.780 And so we saw that, you know, increase in power creates first—or it depends in which direction it happens.
01:08:34.560 It creates an increase in decentralization, an increase in access, an increase in all that.
01:08:39.120 But then it also, at the same time, creates the counter reaction, which is an increase in control, an increase in centralization.
01:08:48.060 And now, the greater the power, the bigger the waves will be.
01:08:56.500 And so the image of—the image that 1984 presented to us, you know, of people going into newspapers and changing the headlines and taking the pictures out and doing that, that now obviously can happen with just a click.
01:09:11.680 So you can click and you can change the past.
01:09:14.340 You can change the past.
01:09:15.540 You can change facts about the world because they're all held, you know, online.
01:09:20.200 And we've seen it happen, obviously, in the media recently.
01:09:22.840 But it—so does decentralization win over centralization?
01:09:30.180 How would that even be possible?
01:09:30.180 I mean, and it's also interesting.
01:09:31.640 Like, when Amazon became a platform, suddenly any mom-and-pop business could have a, you know, Amazon, eBay, there's a bunch of platforms, which had an amazing impact.
01:09:43.840 Because any business could get to anybody.
01:09:47.040 But then the platform itself started to control the information flow, right?
01:10:03.640 But at some point, that'll turn into people going, well, why am I letting somebody control my information flow when Amazon objectively doesn't really have any unique capability, right?
01:10:03.640 So, like you point out, the waves are getting bigger, but they're real waves.
01:10:09.300 It's the same with information.
01:10:10.980 The information is all online.
01:10:12.220 It's also on a billion hard drives, right?
01:10:15.700 So, if somebody says, I'm going to erase objective fact, a distributed information system would say, yeah, go ahead and erase it anywhere you want.
01:10:24.680 There's another thousand copies of it.
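A minimal sketch of why "another thousand copies" makes erasure hard, assuming each copy-holder also keeps a cryptographic hash of the original so any altered or missing copy stands out; the document text and the copies below are hypothetical:

```python
# Sketch of distributed tamper-resistance: if a content hash of the original
# is published widely, any independent holder can detect an altered copy.

import hashlib

def fingerprint(text: str) -> str:
    """Content hash that any independent copy-holder can compute."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    original = "Minutes of the council meeting: the vote passed 7-2."
    tampered = "Minutes of the council meeting: the vote failed 2-7."

    reference = fingerprint(original)  # published alongside the text
    copies = [original, original, tampered, original]

    for i, copy in enumerate(copies):
        ok = fingerprint(copy) == reference
        print(f"copy {i}: {'intact' if ok else 'altered or replaced'}")
```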
01:10:27.060 Yeah.
01:10:27.540 And that's what—
01:10:28.060 But again, this is—
01:10:29.080 They tried to do that, didn't they?
01:10:30.500 Yeah.
01:10:31.440 Yeah.
01:10:31.760 And this is where thinking people have to say, yeah, this is a serious problem.
01:10:35.860 Like, if humans don't have anything to fight for, they get lazy and, you know, a little bit dopey, in my view.
01:10:41.100 Like, we do have something to fight for, and, you know, that's worth talking about.
01:10:47.800 Like, what would a great world look like, with distributed human intelligence and artificial intelligence working together in a collaborative way to create abundance and fairness and, you know, some better way of arriving at good decisions about what the truth is?
01:11:05.740 That would be a good thing.
01:11:07.400 But, you know, it's not, well, we'll leave it to the experts, and then the experts will tell us what to do.
01:11:12.720 That's a bad thing.
01:11:14.140 Yeah.
01:11:14.360 So, that's—
01:11:15.740 Well, so do you—the model that you just laid out, which I think is very interesting—
01:11:19.840 I'm somewhat optimistic about that.
01:11:21.340 Yeah.
01:11:21.920 Well, it did happen on the computational front.
01:11:24.080 I mean, it was the case—
01:11:24.920 It happened a couple times both directions.
01:11:26.720 Okay.
01:11:27.200 Right?
01:11:27.660 You know, the PC revolution was amazing.
01:11:29.960 Yeah.
01:11:30.720 Right?
01:11:30.940 And Microsoft was a fantastic company.
01:11:33.540 It enabled everybody to write a $10, $50 program to use.
01:11:37.620 Yeah.
01:11:38.040 Right?
01:11:38.320 And then at some point, they also became, you know, let's say, a difficult company, and they made money off a lot of people and became extremely valuable.
01:11:46.840 Now, for the most part, they haven't been that directional in telling you what to do and think and how to do it.
01:11:52.480 But they are a money-making company.
01:11:55.720 You know, Apple created the App Store, which is great, but then they also take 30% of the App Store profits,
01:12:00.340 and there's a whole section of the internet that's fighting with Apple about their control of that platform.
01:12:05.720 Mm-hmm.
01:12:06.340 Right?
01:12:06.760 And in Europe, you know, they've decided to regulate some of that, which—that should be a social, cultural conversation about how should that work.
01:12:18.280 Yeah.
01:12:18.520 So do you see the more likely, or certainly the more desirable, future as something like a set of distributed AIs, many of which are in personal relationship with us, in some sense,
01:12:35.080 the same way that we're in personal relationship with our phones and our computers, and that that would give people the chance to fight back, so to speak, against this?
01:12:43.660 And there's lots of people really interested in distributed platforms.
01:12:47.420 And one of the interesting things about the AI world is, you know, there's a company called OpenAI, and they open-source a lot of it.
01:12:54.080 The AI research is amazingly open.
01:12:56.820 It's all done in public.
01:12:58.140 People publish the new model all the time.
01:13:00.620 You can try them out.
01:13:02.280 People—there's a lot of startups doing AI in all different kinds of places.
01:13:07.180 You know, it's a very curious phenomenon.
01:13:11.780 Yeah.
01:13:12.500 There are two—
01:13:13.160 And it's kind of like a big, huge wave.
01:13:15.180 It's not like a—you can't stop a wave with your hand.
01:13:18.500 Yeah.
01:13:19.360 Well, when you think about the waves, there are two, actually, in the book of Revelation, which describes the end, the finality of all things, or, maybe as a way for people who are more secular to understand it, the totality of all things.
01:13:31.920 And in that book, there are two images, interesting images about technology.
01:13:36.760 One is that there is a dragon that falls from the heavens, and that dragon makes a beast.
01:13:42.740 And then that beast makes an image of the beast, and then the image speaks.
01:13:48.220 And when the image speaks, then people are so mesmerized by the speaking image that they worship the beast, ultimately.
01:13:56.680 So that is one image of, let's say, making and technology in Scripture, in Revelation.
01:14:02.200 But there's another image, which is the image of the heavenly Jerusalem.
01:14:06.480 And that image is more an image of balance.
01:14:08.780 It's an image of the city which comes down from heaven with a garden in the center and then becomes this glorious city.
01:14:15.720 And it says, the glory of all the kings is gathered into the city.
01:14:20.600 Like, so the glory of all the nations is gathered into this city.
01:14:23.720 So now you see a technology which is at the service of human flourishing and takes the best of humans and brings it into itself in order to kind of manifest.
01:14:34.220 And it also has hierarchy, which means it has the natural at the center and then has the artificial as serving the natural, you could say.
01:14:41.100 So those two images seem to reflect these two waves that we see.
01:14:45.840 And this kind of idea of an artificial intelligence which will be ruling over us or speaking over us.
01:14:52.480 But there's a secret person controlling it.
01:14:55.320 Even in Revelation, it's like, there's a beast controlling it and making it speak.
01:15:00.060 So now we're mesmerized by it.
01:15:02.240 And then this other image.
01:15:03.400 So I don't know, Jordan, if you ever thought about those two images in Revelation as being related to technology, let's say.
01:15:09.900 Well, I don't think I've thought about those two images in the specific manner that you described.
01:15:15.480 But I would say that the work that I've been doing and I think the work you've been doing, too, in the public front reflects the dichotomy between those images.
01:15:25.340 And it's relevant to the points that Jim has been making.
01:15:27.900 I mean, we are definitely increasing our technological power.
01:15:31.220 And you can imagine that that'll increase our capacity for tyranny and also our capacity for abundance.
01:15:37.000 And then the question becomes, what do we need to do in order to increase the probability that we tilt the future towards Jerusalem and away from the beast?
01:15:45.660 And the reason that I've been concentrating on helping people bolster their individual morality to the degree that I've managed that is because I think that whether the outcome is the positive outcome that in some sense Jim has been outlining or the negative outcomes that we've been querying him about,
01:16:04.100 I think that's going to be dependent on the individual ethical choices of people at the individual level, but then cumulatively, right?
01:16:11.740 So if we decide that we're going to worship the image of the beast, so to speak, because we're mesmerized by our own reflection, that's another way of thinking about it.
01:16:19.740 And if we want to be the victims of our own dark desires, then the AI revolution is going to go very, very badly.
01:16:25.900 But if we decide that we're going to aim up in some positive way and we make the right micro decisions, well, then maybe we can harness this technology to produce a time of abundance in the manner that Jim is hopeful about.
01:16:38.720 Yeah, and let me make two funny points.
01:16:42.040 So one is, I think there's going to be a continuum; like, the term artificial intelligence won't actually make any sense, right?
01:16:51.040 So humans collectively, like individuals know stuff, but collectively we know a lot more, right?
01:16:57.780 And the thing that's really good is that in a diverse society, with lots of people pursuing individual, interesting ideas and worlds, we have a lot of things.
01:17:09.780 And more people, more independence generates more diversity.
01:17:16.860 And that's a good thing, whereas in a totalitarian society everybody's told to wear the same shirt.
01:17:21.880 Like it's inherently boring, like the beast speaking through the monster is inherently dull, right?
01:17:30.460 But in an intelligent world, not only can we have more intelligent things, but in some places they can go far beyond what most humans are capable of, in pursuit of interesting variety.
01:17:46.260 And, you know, I believe that information and, well, let's say intelligence, is essentially unlimited, right?
01:17:54.520 Like, and the unlimited intelligence won't be this shiny thing that tells everybody what to do.
01:17:59.780 That's sort of the opposite of interesting intelligence.
01:18:03.760 Interesting intelligence will be more diverse, not less diverse.
01:18:08.180 Like that's a good future.
01:18:09.840 And your second description, that seems like a future worth working for and also worth fighting for.
01:18:17.380 And that means concrete things today.
01:18:20.600 And also, you know, it's a good conceptualization.
01:18:24.260 Like, I see the messages my kids are taught, you know, don't have children and the world's going to end.
01:18:29.740 We're going to run out of everything.
01:18:31.220 You're a bad person.
01:18:32.080 Why do you even exist?
01:18:33.140 It's like these messages are terrible.
01:18:35.720 The opposite is true.
01:18:37.700 More people would be better.
01:18:39.100 We live in a world of potential abundance, right?
01:18:43.220 It's right in front of us.
01:18:44.820 Like, there's so much energy available.
01:18:47.340 It's just amazing.
01:18:49.560 It's possible to build technology without, you know, pollution consequences.
01:18:54.100 Pollution is what's called externalizing costs.
01:18:56.080 Like, we know how to avoid that.
01:18:58.460 We can have very good, clean technology.
01:19:00.840 We can do lots of interesting things.
01:19:03.000 So, if the goal is maximum diversity, then the line we draw between human intelligence and artificial intelligence will blur, and you'll see all these kinds of really interesting partnerships and all kinds of things.
01:19:15.820 And more people doing what they want, which is the world I want to live in.
01:19:19.840 Yeah.
01:19:20.020 But to me, it seems like the question is going to be related to attention, ultimately.
01:19:27.020 That is, what are humans attending to at the highest?
01:19:30.380 What is it that humans care for in the highest?
01:19:33.180 You know, in some ways, you could say, what do humans, what are humans worshiping?
01:19:37.300 And, like, depending on what humans worship, then their actions will play out in the technology that they're creating, in the increase in power that they're creating.
01:19:46.620 Well, if we're guided by the negative vision, the sort of thing that Jim laid out that is being taught to his children, you can imagine that we're in for a pretty damn dismal future, right?
01:19:57.040 Human beings are a cancer on the face of the planet.
01:19:59.600 There's too many of us.
01:20:00.600 We have to accept top-down, compelled limits to growth.
01:20:04.140 There's not enough for everybody.
01:20:05.520 A bunch of us have to go because there are too many people on the planet.
01:20:09.580 We have to raise up the price of energy so that we don't, what, burn the planet up with carbon dioxide pollution, et cetera.
01:20:17.520 It's a pretty damn dismal view of the potential that's in front of us.
01:20:24.040 And so...
01:20:24.760 Yeah, the world should be exciting.
01:20:26.280 And the future should be exciting.
01:20:28.260 Well, we've been sitting here for about 90 minutes, bandying back and forth both visions of abundance
01:20:35.080 and visions of apocalypse.
01:20:36.460 And, I mean, I've been heartened, I would say, over the decades talking to Jim about what he's doing on the technological front.
01:20:44.660 And I think part of the reason I've been heartened is because I do think that his vision is guided primarily by a desire to help bring about something approximating life more abundant.
01:20:55.980 And I would rather see people on the AI front who are guided by that vision working on this technology.
01:21:01.460 But I also think it's useful to do what you and I have been doing in this conversation, Jonathan,
01:21:07.180 and acting in some sense as friendly critics and hopefully learning something in the interim.
01:21:13.320 Do you have anything you want to say in conclusion?
01:21:15.220 I mean, I just think that the question is linked very directly to what we've been talking about now for several years,
01:21:22.240 which is the question of attention, the question of what is the highest attention.
01:21:27.120 And I think the reason why I have more alarm, let's say, than Jim,
01:21:31.420 is that I've noticed that in some ways human beings have come to now, let's say, worship their own desires.
01:21:38.600 They've come to worship.
01:21:40.640 And that even the strange thing of worshiping their own desires has actually led to an anti-human narrative.
01:21:46.640 You know, this weird idea, this almost suicidal desire that humans have.
01:21:50.880 And so I think that seeing all of that together in the increase of power,
01:21:54.840 I do worry that the image of the beast is closer to what will manifest itself.
01:22:01.420 And I feel like during COVID, that sense in me was accelerated tenfold, noticing to what extent technology was used,
01:22:11.200 especially in Canada, to instigate something which looked like an authoritarian system.
01:22:18.420 And so I am worried about it.
01:22:19.940 But I think like Jim, honestly, although I say that, I do believe that in the end, truth wins.
01:22:24.600 I do believe that in the end, you know, these things will level themselves out.
01:22:29.180 But I think that because I see people rushing towards AI, almost, you know, almost like lemmings going off a cliff,
01:22:38.640 I feel like it is important to sound the alarm once in a while and say, you know,
01:22:43.980 we need to orient our desire before we go towards this extreme power.
01:22:48.900 So I think that's the thing that worries me the most and preoccupies me the most.
01:22:53.320 But I think that ultimately, in the end, I do share Jim's positive vision.
01:22:56.920 And I do believe the story has a happy ending.
01:23:00.640 It's just, we might have to go through hell before we get there.
01:23:04.240 I hope not.
01:23:07.160 So Jim, how about you?
01:23:08.520 What have you got to say in closing?
01:23:10.080 A couple of years ago, a friend who's, you know, my age said, oh, kids coming out of college,
01:23:15.220 they don't know anything anymore.
01:23:16.480 They're lazy.
01:23:17.140 And I thought, I work at Tesla.
01:23:18.500 I was working at Tesla at the time.
01:23:20.680 And we hired kids out of college and they couldn't wait to make things.
01:23:25.260 They were like, it's a hands-on place.
01:23:27.860 It's a great place.
01:23:29.020 And I've told people, like, if you're not in a place where you're doing stuff, where it's growing, where it's making things,
01:23:34.800 You need to go somewhere else.
01:23:37.060 Like, and also, I think you're right.
01:23:39.120 Like, the mindset of, if people are feeling this is a productive, creative technology that's really cool,
01:23:45.980 they're going to go build cool stuff.
01:23:47.880 And if they think it's a shitty job and they're just tuning the algorithm so they can get more clicks,
01:23:53.140 they're going to make something.
01:23:55.820 Beastly.
01:23:56.340 You know, beastly, perhaps.
01:23:58.620 And the stories, you know, our cultural tradition is super useful, both cautionary and, you know,
01:24:05.940 explanatory about something good.
01:24:08.960 Like, and I think it's up to us to go do something about this.
01:24:12.620 And I know people are working really hard to make, you know, the internet a more open place,
01:24:17.760 to make sure information is distributed, to make sure AI isn't a winner-take-all thing.
01:24:23.220 Like, these are real things and people should be talking about them and then they should be worrying.
01:24:29.260 But the upside is really high.
01:24:31.700 And we've faced these kinds of technological shifts before, but this is a big change.
01:24:37.400 Like, the AI is bigger than the internet.
01:24:39.340 Like, I've said this publicly.
01:24:41.280 Like, the internet was pretty big.
01:24:44.340 And, you know, this is bigger.
01:24:46.120 It's true.
01:24:48.020 But the possibilities are amazing.
01:24:51.340 And so, with some sense, we could actually...
01:24:53.700 So then let's get our act together and utilize them?
01:24:54.940 Yeah.
01:24:55.320 With some sense, we could achieve it.
01:24:57.040 Yeah.
01:24:57.220 Like, it's...
01:24:58.460 And the world is interesting.
01:25:00.060 Like, I think it'll be a more interesting place.
01:25:02.640 Well, that's an extraordinarily cynically optimistic place to end.
01:25:07.520 I'd like to thank everybody who is watching and listening.
01:25:11.360 And thank you, Jonathan, for participating in the conversation.
01:25:14.060 It's much appreciated, as always.
01:25:15.760 I'm going to talk to Jim Keller for another half an hour on the Daily Wire Plus platform.
01:25:20.900 I usually use that extra half an hour to walk people through their biography.
01:25:25.500 I'm very interested in how people develop successful careers and lives and how their destiny unfolded in front of them.
01:25:33.160 And so, for all of those of you who are watching and listening who might be interested in that, consider heading over to the Daily Wire Plus platform and partaking in that.
01:25:42.400 And otherwise, Jonathan, we'll see you in Miami in a month and a half to finish up the Exodus Seminar.
01:25:48.640 We're going to release the first half of the Exodus Seminar we recorded in Miami on November 25th, by the way.
01:25:56.340 So, that looks like it's in the can.
01:25:58.640 Yeah, I can't wait to see it.
01:25:59.760 The rest of you...
01:26:00.380 Yeah?
01:26:01.680 Yeah, yeah, absolutely.
01:26:03.040 I'm really excited about it.
01:26:04.340 And just for everyone watching and listening, I brought a group of scholars together.
01:26:08.540 About two and a half months ago, we spent a week in Miami, some of the smartest people I could gather around me, to walk through the book of Exodus.
01:26:16.600 We only got through halfway because it turns out there's more information there than I had originally considered.
01:26:22.320 But it went exceptionally well and I learned a lot.
01:26:25.440 And Exodus means ex hodos.
01:26:27.720 That means the way forward.
01:26:29.640 And, well, that's very much relevant to all of us as we strive to find our way forward through complex issues such as the ones we were discussing today.
01:26:39.140 So, I would also encourage people to check that out when it launches on November 25th.
01:26:43.880 I learned more in that seminar than any seminar I ever took in my life, I would say.
01:26:47.520 So, it was good to see you there.
01:26:49.220 We'll see you in a month and a half.
01:26:50.280 Jim, we're going to talk a little bit more on the Daily Wire Plus platform.
01:26:53.280 And I'm looking forward to meeting the rest of the people in your AI-oriented community tomorrow and learning more about, well, what seems to be an optimistic version of a life more abundant.
01:27:04.520 And to all of you watching and listening, thank you very much.
01:27:07.900 Your attention isn't taken for granted and it's much appreciated.
01:27:11.720 Hello, everyone.
01:27:12.580 I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.
01:27:19.280 Going online without ExpressVPN is like not paying attention to the safety demonstration on a flight.
01:27:25.540 Most of the time, you'll probably be fine, but what if one day that weird yellow mask drops down from overhead and you have no idea what to do?
01:27:33.300 In our hyper-connected world, your digital privacy isn't just a luxury.
01:27:37.100 It's a fundamental right.
01:27:38.260 Every time you connect to an unsecured network in a cafe, hotel, or airport, you're essentially broadcasting your personal information to anyone with the technical know-how to intercept it.
01:27:47.620 And let's be clear, it doesn't take a genius hacker to do this.
01:27:50.940 With some off-the-shelf hardware, even a tech-savvy teenager could potentially access your passwords, bank logins, and credit card details.
01:27:58.320 Now, you might think, what's the big deal?
01:28:00.440 Who'd want my data anyway?
01:28:01.980 Well, on the dark web, your personal information could fetch up to $1,000.
01:28:06.400 That's right, there's a whole underground economy built on stolen identities.
01:28:10.640 Enter ExpressVPN.
01:28:12.400 It's like a digital fortress, creating an encrypted tunnel between your device and the internet.
01:28:16.680 Their encryption is so robust that it would take a hacker with a supercomputer over a billion years to crack it.
01:28:22.760 But don't let its power fool you.
01:28:24.560 ExpressVPN is incredibly user-friendly.
01:28:26.920 With just one click, you're protected across all your devices.
01:28:29.940 Phones, laptops, tablets, you name it.
01:28:32.120 That's why I use ExpressVPN whenever I'm traveling or working from a coffee shop.
01:28:36.260 It gives me peace of mind knowing that my research, communications, and personal data are shielded from prying eyes.
01:28:41.980 Secure your online data today by visiting expressvpn.com slash jordan.
01:28:46.980 That's E-X-P-R-E-S-S-V-P-N dot com slash jordan, and you can get an extra three months free.
01:28:53.360 Expressvpn.com slash jordan.