Jonathan Pageau is a French-Canadian artist and icon carver. Jim Keller is a microprocessor engineer known in the relevant communities and beyond for his work at Apple and AMD, among other corporations. In this episode, we discuss the perils and promise of artificial intelligence, and how they intersect with religious and cultural ideas.
00:00:00.960Hey everyone, real quick before you skip, I want to talk to you about something serious and important.
00:00:06.480Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety.
00:00:12.740We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling.
00:00:20.100With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way in his new series.
00:00:27.420He provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward.
00:00:35.360If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better.
00:00:41.780Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson on depression and anxiety.
00:00:47.460Let this be the first step towards the brighter future you deserve.
00:00:57.420Hello everyone watching on YouTube or listening on Associated Platforms.
00:01:13.380I'm very excited today to be bringing you two of the people I admire most intellectually, I would say, and morally for that matter.
00:01:22.840Jonathan Pageau and Jim Keller, very different thinkers.
00:01:27.880Jonathan Pageau is a French-Canadian liturgical artist and icon carver known for his work,
00:01:33.800featured in museums across the world. He carves Eastern Orthodox icons, among other traditional images, and teaches an online carving class.
00:01:41.720He also runs a YouTube channel, The Symbolic World, dedicated to the exploration of symbolism across history and religion.
00:01:49.440Jonathan is one of the deepest religious thinkers I've ever met.
00:01:52.240Jim Keller is a microprocessor engineer known very well in the relevant communities and beyond them for his work at Apple and AMD, among other corporations.
00:02:03.840He served in the role of architect for numerous game-changing processors, has co-authored multiple instruction sets for highly complicated designs,
00:02:13.320and is credited for being the key player behind AMD's renewed ability to compete with Intel in the high-end CPU market.
00:02:23.520In 2016, Keller joined Tesla, becoming Vice President of Autopilot Hardware Engineering.
00:02:30.920In 2018, he became a Senior Vice President for Intel.
00:02:36.520In 2020, he resigned due to disagreements over outsourcing production, but quickly found a new position at Tenstorrent as Chief Technical Officer.
00:02:46.520We're going to sit today and discuss the perils and promise of artificial intelligence, and it's a conversation I'm very much looking forward to.
00:02:54.520So, welcome to all of you watching and listening.
00:02:58.120I thought it would be interesting to have a three-way conversation.
00:03:02.120Jonathan and I have been talking a lot lately, especially with John Vervaeke and some other people as well, about the fact that we seem...
00:03:10.520It seems necessary for us to view...for human beings to view the world through a story.
00:03:15.120In fact, that our...when we describe the structure that governs our action and our perception, that is a story.
00:03:26.120And so, we've been trying to puzzle out, I would say to some degree on the religious front, what might be the deepest stories.
00:03:34.120And I'm very curious about the fact that we perceive the world through a story, human beings do, and that seems to be a fundamental part of our cognitive architecture.
00:03:44.720And of cognitive architecture in general, according to some of the world's top neuroscientists.
00:03:49.720And I'm curious, and I know Jim is interested in cognitive processing and in building systems that, in some sense, seem to run in a manner analogous to the manner in which our brains run.
00:04:02.320And so, I'm curious about the overlap between the notion that we have to view the world through a story and what's happening on the AI front.
00:04:08.920There's all sorts of other places that we can take the conversation.
00:04:13.920Do you want to tell people what you've been working on and maybe give a bit of a background to everyone about how you conceptualize artificial intelligence?
00:04:34.520And I'd say my skill set goes from, you know, somewhere around the atom up to the program.
00:04:41.520So, we make transistors out of atoms, we make logical gates out of transistors, we make computers out of logical gates.
00:04:49.120So, we run programs on those, and recently we've been able to run programs fast enough to do something called an artificial intelligence model or a neural network, depending on how you say it.
00:05:04.120And then we're building chips now that run artificial intelligence models fast.
00:05:11.800And we have a novel way to do it, the company I work at.
00:05:15.800But lots of people are working on it, and I think we were sort of taken by surprise what's happened in the last five years, how quickly models started to do interesting and intelligent-seeming things.
00:05:32.800There's been an estimate that human brains do about 10 to the 18th operations a second, which sounds like a lot.
00:05:41.800It's a billion-billion operations a second.
00:05:44.800And a little computer, you know, the processor in your phone probably does 10 billion operations a second, you know, and then if you use the GPU, maybe 100 billion, something like that.
00:05:57.800And big modern AI computers like OpenAI uses or Google or somebody, they're doing like 10 to the 16th, maybe slightly more operations a second.
00:06:09.800So, they're within a factor of 100 of a human brain's raw computational ability.
00:06:16.800And by the way, that could be completely wrong, our understanding of how the human brain does computation could be wrong, but lots of people have estimated based on number of neurons, number of connections, how fast neurons fire, how many operations a neuron firing seems to involve.
00:06:31.800I mean, the estimates range by a couple orders of magnitude, but when our computers got fast enough, we started to build things called language models and image models that do fairly remarkable things.
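[Editor's note: to make those orders of magnitude concrete, here is a minimal back-of-envelope sketch in Python. The figures are simply the rough estimates quoted in the conversation, which, as Jim says, could be off by a couple of orders of magnitude; they are not measured values.]

```python
# Back-of-envelope comparison of the raw-compute estimates quoted above.
# These are the conversation's rough figures, not measurements.
estimates_ops_per_sec = {
    "human brain (estimate)": 1e18,  # "a billion-billion operations a second"
    "phone CPU": 1e10,               # "about 10 billion operations a second"
    "phone GPU": 1e11,               # "maybe 100 billion"
    "large AI cluster": 1e16,        # "10 to the 16th, maybe slightly more"
}

brain = estimates_ops_per_sec["human brain (estimate)"]
for name, ops in estimates_ops_per_sec.items():
    factor = brain / ops
    print(f"{name:>24}: {ops:.0e} ops/s  (brain estimate / this ~ {factor:,.0f})")
```

On these numbers, the largest clusters sit roughly a factor of 100 below the brain estimate, which is the comparison Jim draws above.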
00:06:45.800So, what have you seen in the last few years that's been indicative of this, of the change that you describe as revolutionary?
00:06:51.800What are computers doing now that you found surprising because of this increase in speed?
00:06:58.800Yeah, you can have a language model read a 200,000 word book and summarize it fairly accurately.
00:07:12.800And I'm going to introduce you to a friend who took a language model and changed it and fine-tuned it with Shakespeare and used it to write screenplays that are pretty good.
00:07:24.800And these kinds of things are really interesting.
00:07:27.800And we were talking about this a little bit earlier.
00:07:30.800So, when computers do computations, you know, a program will say, add: A equals B plus C.
00:07:38.800The computer does those operations on representations of information, ones and zeros.
00:07:46.800The computer has no understanding of it.
00:07:49.800But what we call a language model translates information like words and images and ideas into a space where the program, the ideas, and the operation it does on them are all essentially the same thing.
00:08:05.800So, a language model can produce words and then use those words as inputs.
00:08:10.800And it seems to have an understanding of what those words are, which is very different from how a computer operates on data.
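[Editor's note: as a rough illustration of that loop, where a model's own outputs are fed back in as inputs, here is a minimal, hypothetical sketch in Python. The `next_token_distribution` function is a toy stand-in for a trained language model, not any real library's API.]

```python
import random

def next_token_distribution(context):
    # A real language model would return probabilities over a large vocabulary,
    # conditioned on the whole context; this toy stand-in is uniform and random.
    vocabulary = ["the", "story", "turns", "here", "."]
    return {token: 1.0 / len(vocabulary) for token in vocabulary}

def generate(prompt_tokens, steps=8):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        dist = next_token_distribution(tokens)              # model reads its own prior output
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights)[0])  # new token is appended...
    return tokens                                           # ...and becomes input next step

print(" ".join(generate(["once", "upon", "a", "time"])))
```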
00:08:19.800I'm curious about the language models.
00:08:21.800I mean, my sense of, at least in part, how we understand a story is that maybe we're watching a movie, let's say, and we get some sense of the character's goals.
00:08:35.800And then we see the manner in which that character perceives the world.
00:08:40.800And we, in some sense, adopt his goals, which is to identify with the character.
00:08:45.800And then we play out a panoply of emotions and motivations on our body because we now inhabit that goal space.
00:08:52.800And we understand the character as a consequence of mimicking the character with our own physiology.
00:08:59.800And you have computers that can summarize the gist of a story, but they don't have that underlying physiology.
00:09:06.800Well, first of all, it's a theory that your physiology has anything to do with it.
00:09:12.800You could understand the character's goals and then get involved in the details of the story.
00:09:18.800And then you're predicting the path of the story and also having expectations and hopes for the story.
00:09:26.800And a good story kind of takes you on a ride because it teases you with doing some of the things you expect, but also doing things that are unexpected.
00:11:16.800That's simply the capacity of the model right now.
00:11:19.800And the model is not well-grounded enough in a set of, let's say, goals and reality or something to make sense for a while.
00:11:26.800So, what do you think would happen, Jonathan?
00:11:28.800This is, I think, associated with the kind of things that we've talked through to some degree.
00:11:34.800So, one of my hypotheses, let's say, about deep stories is that they're meta-gists in some sense.
00:11:46.800So, you could imagine a hundred people telling you a tragic story and then you could reduce each of those tragic stories to the gist of the tragic story.
00:11:55.800And then you could aggregate the gists and then you'd have something like a meta-tragedy.
00:11:59.800And I would say, the deeper the gist, the more religious-like the story gets.
00:12:06.800That idea is part of the reason that I wanted to bring you guys together.
00:12:10.800I mean, one of the things that what you just said makes me wonder is...
00:12:13.800Imagine that you took Shakespeare and you took Dante and you took, like, the canonical Western writers and you trained an AI system to understand the structure of each of them.
00:12:27.800And then, and now you have, you could pull out the summaries of those structures, the gists, and then couldn't you pull out another gist out of that?
00:12:37.800So, it would be like the essential element of Dante and Shakespeare.
00:12:41.800And I wonder when that would get biblical.
00:12:43.800I want to hear what Jonathan said so far.
00:12:45.800So, here's one funny thing to think about.
00:13:15.800Because if I killed you and scanned your brain and got the current state of all the synapses and stuff, A, you'd be dead, which would be sad.
00:13:23.800And B, I wouldn't know anything about your thoughts.
00:13:26.800Your thoughts are embedded in this model that your brain carries around.
00:13:31.800And you can express it in a lot of ways.
00:16:53.800So, in a story prediction model, the AI would predict the story, then compare what actually happens to its prediction, and then fine-tune itself slowly as it trains itself.
00:17:18.800So, I talked to Carl Friston about this prediction idea in some detail.
00:17:22.800And so, Friston, for those of you who are watching and listening, is one of the world's top neuroscientists.
00:17:27.800And he's developed an entropy enclosure model of conceptualization, which is analogous to one that I was working on, I suppose, across approximately the same timeframe.
00:17:38.800So, the first issue, and this has been well established in the neuropsychological literature for quite a long time, is that anxiety is an indicator of discrepancy between prediction and actuality.
00:17:51.800And then, positive emotion also looks like a discrepancy reduction indicator.
00:17:57.800So, imagine that you're moving towards a goal, and then you evaluate what happens as you move towards the goal.
00:18:04.800And if you're moving in the right direction, what happens is what, you might say, what you expect to happen.
00:18:10.800And that produces positive emotion, and it's actually an indicator of reduction in entropy.
00:18:30.800But what I'm trying to make a case for is that your emotions directly map that: both positive and negative emotion look like they're signifiers of discrepancy and its reduction.
00:18:43.800But then there's a complexity that I think is germane to part of Jonathan's query, which is that...
00:18:50.800So, the neuropsychologists and the cognitive scientists have talked a long time about expectation, prediction, and discrepancy reduction.
00:18:58.800But one of the things they haven't talked about is it isn't exactly that you expect things.
00:20:09.800So then you might be watching it much more attentively than somebody who doesn't have that worry, for example.
00:20:14.800But both of you can predict where it's going to fly, and you will both notice a discrepancy, right?
00:20:19.800The motivations, one way of conceptualizing fundamental motivation is they're like a priori prediction domains, right?
00:20:28.800And so it helps us narrow our attentional focus because I know when you're sitting and you're not motivated in any sense, you can be doing just, in some sense, trivial expectation computations.
00:20:41.800But often we're in a highly motivated state.
00:20:45.800And what we're expecting is bounded by what we desire and what we desire is oriented, as Jonathan pointed out, towards the fact that we want to exist.
00:20:52.800One of the things I don't understand and wanted to talk about today is how the computer models, the AI models, can generate intelligible sense without mimicking that sense of motivation.
00:21:09.800Because you've said, for example, they can just derive the patterns from observations of the objective world.
00:21:14.800So again, I don't want to do all the talking, but so AI, generally speaking, like when I first learned about it, had two behaviors.
00:21:34.800The model has been trained to know where a cat is.
00:21:36.800And training is the process of giving it an input and an expected output.
00:21:41.800And when you first start training the model, it gives you garbage out, like an untrained brain would.
00:21:47.800And then you take the difference between the garbage output and the expected output and call that the error.
00:21:53.800And then the big revelation they invented was something called backpropagation with gradient descent.
00:21:58.800But that means take the error and divide it up across the layers and correct those calculations so that when you put a new thing in, it gives you a better answer.
00:22:11.800And then, somewhat to my astonishment, if you have a model of sufficient capacity and you train it with 100 million images, if you give it a novel image and say, tell me where the cat is, it can do it.
00:22:25.800So training is the process of doing a pass with an expected output and propagating an error back through the network.
00:22:35.800And inference is the behavior of putting something in and getting an output.
00:23:18.800And then it turns out you can train this to do lots of things.
00:23:20.800You can train it to summarize a sentence.
00:23:23.800You can train it to answer a question.
00:23:27.800There's a big thing about, you know, like Google every day has hundreds of millions of people asking it questions and giving answers and then rating the results.
00:23:36.800You can train a model with that information.
00:23:38.800So you can ask it a question and it gives you a sensible answer.
00:23:41.800But I think in what you said, I actually have the issue that has been going through my mind so much is when you said, you know, people put in the question and then they rate the answer.
00:23:53.800But my intuition is that the intelligence still comes from humans in the sense that it seems like in order to train whatever AI, you have to be able to give it a lot of power and then say at the beginning, this is good.
00:26:03.800And we're all, and everything that happens to us, we process it on the inference pass, which generates outputs.
00:26:09.800And then sometimes we look at that and say, hey, that's unexpected or that got a bad result or that got bad feedback.
00:26:15.800And then we back propagate that and update our models.
00:26:19.800So really well trained models can then train other models.
00:26:24.800So humans right now are the smartest people in the world.
00:26:28.800So the biggest question that comes now, based on what you said, is, because my main point is to try to show how it seems like artificial intelligence is always an extension of human intelligence.
00:26:43.800Like it remains an extension of human intelligence.
00:26:48.800So do you think that at some point the artificial intelligence will be able to... because the goals, recognizing cats, you know, writing plays, all these goals are goals which are based on embodied human existence.
00:27:04.800Could an AI at some point develop a goal which would be incomprehensible to humans because of its own existence?
00:27:14.800I mean, like, for example, there's a small population of humans that enjoy math, right?
00:27:19.800And they are pursuing, you know, adventures in math space that are incomprehensible to 99.99% of humans, but they're interested in it.
00:27:30.800And you could imagine like an AI program working with those mathematicians and coming up with very novel math ideas and then interacting with them.
00:27:41.460But they could also, you know, if some AIs were elaborating out really interesting and detailed stories, they could come up with stories that are really interesting.
00:27:51.460We're going to see it pretty soon, like all of art, movie making and everything.
00:27:56.160Could there be a story that is interesting only to the AI and not interesting to us?
00:28:02.180So stories are like, I think, some high level information space.
00:28:07.180So, in the computing age of big data, there's all this data running on computers where only humans understood it, right?
00:28:55.860The AI systems will love you even when you're dull and miserable.
00:28:59.080Well, there's so much idea space to explore, and humans have a wide range.
00:29:05.880Some humans like to go through their everyday life doing their everyday things.
00:29:09.240And some people spend a lot of time, like you, a lot of time reading and thinking and talking and arguing and debating, you know.
00:29:17.000And, you know, there's going to be, like I'd say, a diversity of possibilities with what's, what a thinking thing can do when the thinking is fairly unlimited.
00:29:29.500So, I'm still curious about pursuing this issue that Jonathan has been developing.
00:29:39.100So, there's a virtually infinite number of ways that we could take images of this room.
00:29:48.580Right now, if a human being is taking images of this room, they're going to be, they're going to sample a very small space of that infinite range of possibilities.
00:29:56.380Because if I was taking pictures in this room, in all likelihood, I would take pictures of identities, objects that are identifiable to human beings, that are functional to human beings, at a level of focus that makes those objects clear.
00:30:11.520And so, then you could imagine that the set of all images on the internet has that implicit structure of perception built into it.
00:30:19.320And that's a function of what human beings find useful.
00:30:22.460You know, I mean, I could take a photo of you that was, the focal depth was here, and here, and here, and here, and here, and two inches past you.
00:31:09.040And it turns out, if you take a large number of images of things in general... so you've seen these things where you take a 2D image and turn it into a 3D image.
00:31:37.240But you could, you know, an AI scientist may cruise around the world with infrared and radio wave cameras, and they might take pictures of all different kinds of things.
00:31:48.020And every once in a while, they'd show up and go, hey, the sun, you know, I've been staring at the sun and the ultraviolet and radio waves for the last month.
00:31:55.620And it's way different than anybody thought, because humans tend to look at light in the visible spectrum.
00:32:02.800And, you know, there could be some really novel things coming out of that.
00:32:07.760But humans also, we live in the spectrum we live in, because it's a pretty good one for planet Earth.
00:32:13.320Like, it wouldn't be obvious that AI would start some different place.
00:32:17.220Like, visible spectrum is interesting for a whole bunch of reasons.
00:32:20.800Right. So, in a set of images that are human-derived, you're saying that there's...
00:32:26.200The way I would conceptualize that is that there's two kinds of logos embedded in that.
00:32:31.400One would be that you could extract out from that set of images what was relevant to human beings.
00:32:36.880But you're saying that the fine structure of the objective world outside of human concern is also embedded in the set of images.
00:32:44.980And that an AI system could extract out a representation of the world, but also a representation of what's motivating to human beings.
00:32:53.020Yes. And then some human scientists already do look at the sun and radio waves and other things, because they're trying to, you know, get different angles on how things work.
00:33:13.900The reason why I keep coming back to hammering the same point is that even the development of AI, that is, developing AI, requires an immense amount of money, energy, you know, and time.
00:33:27.780That's a transient thing. In 30 years, it won't cost anything. So, that's going to change so fast, it's amazing.
00:33:34.760So, that's a... Like, supercomputers used to cost millions of dollars, and now your phone is the supercomputer.
00:33:40.960So, it's... The time between millions of dollars and $10 is about 30 years.
00:33:46.860So, it's... Like, I'm just saying, it's... Like, the time and effort isn't a thing in technology. It's moving pretty fast.
00:33:55.420It's just... That's just... That just sets the date.
00:33:58.440Yeah. But even making... I guess maybe this is the nightmare question.
00:34:06.480Like, could you imagine an AI system which becomes completely autonomous, which is creating itself even physically through automated factories, which is, you know, programming itself, which is creating its own goals, which is not at all connected to human endeavor?
00:34:24.580Yeah. I mean, individual researchers can... You know, I have a friend who... I'm going to introduce you to him tomorrow.
00:34:31.520He wrote a program that scraped all of the internet and trained an AI model to be a language model on a relatively small computer.
00:34:39.020And in 10 years, the computer he could easily afford would be as smart as a human.
00:34:43.720And so, he could train that pretty easily.
00:34:47.060And that model could go on Amazon and buy 100 more of those computers and copy itself.
00:34:52.940So, yeah, we're 10 years away from that.
00:34:56.460And then... why would it do that?
00:34:59.340I mean, is it possible... It's all about the motivational question.
00:35:03.720I think that that's what even Jordan and I both have been coming at from the outset.
00:35:07.660It's like... So, you have an image, right? You have an image of Skynet or of the Matrix, you know, in which the sentient AI is actually fighting for its survival.
00:35:17.360So, it has a survival instinct, which is pushing it to self-perpetuate, like to replicate itself and to create variation on itself in order to survive and identifies humans as an obstacle to that, you know.
00:35:32.500Yeah, yeah. So, you have a whole bunch of implicit assumptions there.
00:35:36.360So, humans, last I checked, are unbelievably competitive.
00:35:40.280And when you let people get into power with no checks on them, they typically run amok.
00:35:48.340And then humans are, you know, self-regulating to some extent, obviously, with some serious outliers, because they self-regulate with each other.
00:35:58.360And humans and AI models, at some point, will have to find their own calculation of self-regulation and trade-offs about that stuff.
00:36:09.760Yeah, because AI doesn't feel pain, at least that we don't know that it feels pain.
00:36:14.780Well, lots of humans don't feel pain either.
00:36:16.780So, I mean, that's... I mean, humans feeling pain or not doesn't stop a whole bunch of activity.
00:36:30.060I mean, there's definitely people like, you know, children, if you threaten them with, you know, go to your room and stuff, you can regulate them that way.
00:36:36.420But some kids ignore that completely, and adults are the same way.
00:36:53.700Do you think that... Well, we've talked about this to some degree for decades.
00:36:58.260I mean, when you look at how fast things are moving now, and as you push that along, when you look out 10 years, and you see the relationship between the AI systems that are being built and human beings, what do you envision?
00:37:43.300And you can really feel that happening.
00:37:45.100To some level, that causes social stress, independent of whether it's AI or Amazon deliveries.
00:37:51.980You know, there's so many things that are going into the stress of it all.
00:37:56.580But there's progress, which is an extension of human capacity.
00:38:01.120And then there's this progress, which I'm hearing about, the way that you're describing it, which seems to be an inevitable progress towards creating something which is more powerful than you.
00:40:30.780People know that, at least until the AIs take over or whatever, whoever is on the line towards increasing the power of the AI will rake in major rewards.
00:42:10.740Like, you tend to pursue the stuff that you do.
00:42:13.540And then the people in your zone, you self-regulate.
00:42:18.240And also, even in our social strategies, we self-regulate.
00:42:22.800I mean, the recent political events of the last 10 years, the weird thing to me has been, why have, you know, people with power been overreaching to take too much from people with less?
00:42:38.400But one of the aspects of increase in power is that increase in power is always mediated, at least in one aspect, by military, by, let's say, physical power on others.
00:42:54.040You know, and we can see that technology is linked and has been linked always to military power.
00:42:59.500And so the idea that there could be some AIs that will be our friends or whatever is maybe possible.
00:43:07.340But the idea that there will be some AIs which will be weaponized seems absolutely inevitable because increase in power is always...
00:43:16.980Increase in technological power always moves towards military.
00:43:20.240So we've lived with atomic bombs since the 40s, right?
00:43:25.680So the, I mean, the solution to this has been mostly, you know, some form of mutual assured destruction: the response to attacking me is so much worse than the...
00:43:39.460Yeah, but it's also because we have reciprocity.
00:43:44.500So if I look into the face of another human, there's a limit of how different I think that person is from me.
00:43:51.680But if I'm hearing something described as a possibility of superintelligences that have their own goals, their own cares, their own structures, then how much mirroring is there between these two groups?
00:44:05.380Well, Jim's objection seems to be something like this: when we're doomsaying, let's say...
00:44:13.400And I'm not saying there's no place for that.
00:44:16.280We're making the presumption of something like a zero-sum competitive landscape, right?
00:44:21.720Is that the idea? The idea behind movies like The Terminator is that there are only so many resources, and the machines and the human beings would have to fight over them.
00:44:33.040And you can see that that could easily be a preposterous assumption.
00:44:36.760Now, I think that one of the fundamental points you're making, though, is also there will definitely be people that will weaponize AI.
00:44:47.140And those weaponized AI systems will have as their goal something like the destruction of human beings, at least under some circumstances.
00:44:54.840And then there's the possibility that that will get out of control because the most effective systems at destroying human beings might be the ones that win, let's say.
00:45:04.660And that could happen independently of whether or not it is a true zero-sum competition.
00:45:08.760Yeah, and also the effectiveness of military stuff doesn't need very smart AI to be a lot better than it is today.
00:45:18.680You know, like the Star Wars movies where, like, you know, tens of thousands of years in the future, super highly trained, you know, fighters can't hit somebody running across a field.
00:46:44.400Well, when you're working on the frontiers of AI development and you see the development of increasingly intelligent machines, I mean, I know that part of what drives you is, I don't want to put words in your mouth, but what drives intelligent engineers in general, which is to take something that works and make it better and maybe to make it radically better and radically cheaper.
00:47:04.540So, so there's this drive toward technological improvement.
00:47:08.140And I know that you like to solve complex problems and you do that extraordinarily well.
00:47:12.920But is there also a vision of a more abundant form of human flourishing emerging from the development?
00:47:29.540Like, our ability to do what we want in ways that are interesting and, you know, for some people, beautiful, is limited by a whole bunch of things, because, you know, partly it's technological and partly, you know, we're stupidly divisive.
00:47:46.780But there's also a reality, which is that one of the things technology has been, of course, is an increase in power towards desire, towards human desire.
00:47:59.540And that is represented in mythological stories where, let's say, technology is used to accomplish impossible desire.
00:48:09.340We have, you know, the story of building the mechanical cow around the wife of the king of Minos, you know, in order for her to be inseminated by a bull.
00:48:21.040We have the story, we have the story of Frankenstein, et cetera, the story of the golem, where we put our desire into this increased power.
00:48:33.800And then what happens is that we don't know our desires.
00:48:36.300That's one of the things that I've also been worried about in terms of AI: we have secret desires that enter into what we do that people aren't totally aware of.
00:48:47.980And as we increase the power of these systems, those desires... Take, for example, the idea of the possibility of having an AI friend, and the idea that an AI friend would be the best friend you've ever had, because that friend would be the nicest to you, would care the most about you, would do all those things.
00:49:09.000That would be an exact example of what I'm talking about, which is, it's really the story of the genie, right?
00:49:14.680It's the story of the genie in the lamp, where the genie says, what do you wish?
00:49:20.080And I have unlimited power to give it to you.
00:49:23.100And so I give him my wish, but that wish has all these underlying implications that I don't understand, all these underlying possibilities.
00:49:30.760Yeah, but the cool thing, the moral of almost all those stories, is that having unlimited wishes will lead to your downfall.
00:49:41.160And so humans, like, if you give, you know, a young person an unlimited amount of stuff to drink, for six months they're going to be falling down drunk and they're going to get over it, right?
00:49:52.780Having a friend that's always your friend no matter what is probably going to get boring pretty quickly.
00:49:56.860Well, the literature on marital stability indicates that.
00:50:00.860So there's a, there's a sweet spot with regards to marital stability in terms of the ratio of negative to positive communication.
00:50:09.600So if on average you receive five positive communications and one negative communication from your spouse, that's on the low threshold for stability.
00:50:21.680If it's four positive to one negative, you're headed for divorce.
00:50:25.020But interestingly enough, on the other end, there's a threshold as well, which is that if it exceeds 11 positive to one negative, you're also moving towards divorce.
00:50:36.000So there might be self-regulating mechanisms that would, in a sense, take care of that.
00:50:43.160You might find a yes man, AI friend, extraordinarily boring, very, very rapidly.
00:50:48.740As opposed to an AI friend that was interested in what you were interested in, which would actually be interesting.
00:50:54.920Like, you know, we go through friends in the course of our lives, like different friends are interesting at different times.
00:51:00.480And some friends we grow with, and that continues to be really interesting for years and years.
00:51:05.260And other friends, you know, some people get stuck in their thing and then you've moved on or they've moved on or something.
00:51:10.840So, yeah, I tend to think a world where there was more abundance and more possibilities and more interesting things to do is an interesting place.
00:52:13.840He said the Earth could easily support a population of a trillion people.
00:52:17.940And a trillion people would be a lot more people doing, you know, random stuff.
00:52:21.780And he didn't imagine that the future population would be a trillion humans and a trillion AIs, but it probably will be.
00:52:29.780It will probably exist on multiple planets, which will be good the next time an asteroid shows up.
00:52:34.460So, what do you think about... so one of the things that seems to be happening, tell me if you think I'm wrong here, and I think it's germane to Jonathan's point.
00:52:42.080And I just want to make the point of, you know, where we are compared to living in the Middle Ages, our lives are longer, our families are healthier, our children are more likely to survive.
00:52:51.460Like, many, many good things happened.
00:52:54.940Like, setting the clock back wouldn't be good.
00:52:57.640And, you know, if we have some care and people who actually care about how culture interacts with technology for the next 50 years, you know, we'll get through this hopefully more successfully than we did the atomic bomb and the Cold War.
00:54:11.640And, you know, in a world where a larger percentage of people can have, well, live in relative abundance and have tools and opportunities, I think is a good thing.
00:54:23.340And I don't want to pull back abundance.
00:54:25.620But what I have noticed is that our abundance brings a kind of nihilism to people.
00:54:33.600And I don't, like I said, I don't want to go back.
00:54:35.840I'm happy to live here and to have these tech things.
00:54:38.400But it's something that I've also noticed: the capacity to get your desires, when it increases to a certain extent, also leads to exactly that kind of nihilism.
00:54:55.900Well, I wonder, Jonathan, I wonder if that's partly a consequence of the erroneous maximization of short-term desire.
00:55:06.980I mean, one of the things that you might think about that could be dangerous on the AI front is that we optimize the manner in which we interact with our electronic gadgets to capture short-term attention, right?
00:55:23.120Because there's a difference between getting what you want right now, right now, and getting what you need in some more mature sense across a reasonable span of time.
00:55:31.960And one of the things that does seem to be happening online, and I think it is driven by the development of AI systems, is that we're assaulted by systems that parasitize our short-term attention at the expense of longer-term attention.
00:55:48.120And if the AI systems emerge to optimize attentional grip, it isn't obvious to me that they're going to optimize for the attention that works over the medium to long run, right?
00:55:59.420They're going to be, they could conceivably maximize something like whim-centered existence.
00:56:06.460Yeah, because all the virality is based on that, all the social media networks are all based on this reduction of attention, this reduction of desire to reaching your rest, let's say, in that desire, right?
00:56:19.820The like, the click, all these things, they're...
00:56:24.120So, but that's something that, you know, for reasons that are somewhat puzzling, but maybe not: the business models around a lot of those interfaces are ones where the user is the product, and, you know, the advertisers are trying to get your attention.
00:57:48.540And what we talked about, which is the idea of elevating something higher in order to see it as a model.
00:57:53.740See, these are where intelligence exists in the human person.
00:57:58.560And so when we notice that in the systems, in the platforms, these are the aspects of intelligence which are being weaponized in some ways...
00:58:08.740Not against us, but are just kind of being weaponized because they're the most beneficial in the short term to be able to generate our constant attention.
00:58:16.720And so what I mean is that that is what the AIs are made of, right?
00:58:21.340They're made of attention, prioritization, you know, good, bad.
00:58:26.640What is it that is worth putting energy into in order to predict towards a telos?
00:58:31.820And so I'm seeing that the idea that we could disconnect them suddenly seems very difficult to me.
00:58:43.500So after World War II, America went through this amazing building boom of building suburbs.
00:58:50.520And the American dream was you could have your own house, your own yard in the suburb with a good school, right?
00:58:56.360So in the 50s, 60s, early 70s, they were building that like crazy.
00:59:01.260By the time I grew up, I lived in a suburban dystopia, right?
00:59:05.800And we found that that as a goal wasn't a good thing because people ended up in houses separated from social needs and structures.
00:59:16.760And then new towns are built around like a hub with, you know, places to go and eat, you know.
00:59:22.260So there was a good that was viewed in terms of opportunity and abundance, but it actually was a fail culturally.
00:59:30.380And then some places it modified and it continues.
00:59:33.000And some places are still dystopian, you know, suburban areas.
00:59:37.400And some places people simply learn to live with it, right?
00:59:41.300Yeah, but that has something that has to do with attention, by the way.
00:59:44.220It has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set up in a way in which all the levels can have room to exist, let's say.
00:59:56.160And so, you know, the new systems, the new way, let's say the new urbanist movement, similar to what you're talking about, that's what they've understood.
01:00:04.960It's like we need places of intimacy in terms of the house.
01:00:07.760We need places of communion in terms of, you know, parks and alleyways and buildings where we meet and a church, all these places that kind of manifest our communion together.
01:00:18.880Yeah, so those existed coherently for long periods of time.
01:00:23.400And then the abundance post-World War II and some ideas about, like, what life could be like caused this big change.
01:00:32.720And that change satisfied some needs, people got houses, but broke community needs.
01:00:38.080And then new sets of ideas about what's the synthesis, what's the possibility of having your own home but also having community, not having to drive 15 minutes for every single thing.
01:00:50.260And some people live in those worlds and some people don't.
01:00:59.880But now, because one of the things that's happening now is, as you pointed out earlier, is we're going to be producing equally revolutionary transformations, but at a much smaller scale of time.
01:01:12.280And so, like, one of the things I wonder about, and I think it's driving some of the concerns in the conversation, is: are we going to be intelligent enough to direct with regulation the transformations of technology as they start to accelerate?
01:01:29.680I mean, we've already, you look what's happened online, I mean, we've inadvertently, for example, radically magnified the voices of narcissists, psychopaths, and Machiavellians.
01:01:41.880And we've done that so intensely, partly, and I would say partly as a consequence of AI mediation, that I think it's destabilizing the entire political economy.
01:01:51.520Well, it's destabilizing part of it, like as Scott Adams pointed out, you just block everybody that acts like that.
01:01:56.980I don't pay attention to people that talk like that.
01:01:59.760Yeah, but they seem to be raising the temperature in the entire culture.
01:02:01.660Well, there's still places that are sensitive to it, like 10,000 people here can make a storm and some corporate, you know, person, you know, fires somebody.
01:02:10.760But I think that's, like, we're five years from that being over.
01:02:13.500A corporation will go 10,000 people out of 10 billion.
01:03:02.340Like, I like people, but also people are complicated.
01:03:04.800They all got all kinds of nefarious goals.
01:03:06.800Like, I worry a lot more about people burning down the world than I do about artificial intelligence.
01:03:12.660Just because, you know, people, well, you know people, they're difficult, right?
01:03:20.020But the interesting thing is in aggregate we mostly self-regulate.
01:03:24.560And when things change, you have these dislocations.
01:03:27.100And then it's up to people who talk and think, and while we're having this conversation, I suppose, to talk about how do we re-regulate this stuff.
01:03:35.240Well, because one of the things that the increase in power has done in terms of AI, and you can see it with Google and you can see it online, is that there are certain people who hold the keys, let's say, to what you see and what you don't see.
01:03:53.440And you know it if you know which searches to make, where you realize that this is actually being directed by someone who now has a huge amount of power to direct my attention towards their ideological purpose.
01:04:07.840And so that's why, like, I think that to me, I always tend to see AI as an extension of human power.
01:04:19.280Even though there is this idea that it could somehow become totally independent, I still tend to see it as an increase of the human care.
01:04:29.120And whoever will be able to hold the keys to that will have increase in power.
01:04:33.980And that can be, like, and I think we're already seeing it.
01:04:37.080Well, that's not really any different, though, is it, Jonathan, than the situation that's always confronted us in the past?
01:04:43.460I mean, we've always had to deal with the evil uncle of the king, and we've always had to deal with the fact that an increase in ability could also produce a commensurate increase in tyrannical power, right?
01:04:56.360I mean, so that might be magnified now, and maybe the danger in some sense is more acute, but possibly the possibility is more present as well.
01:05:06.380Because you can train an AI to find hate speech, right?
01:05:10.180You can train an AI to find hate speech and then to act on it immediately. Now we're not only talking about social media; what we've seen is that this is encroaching into payment systems, into people losing their bank accounts, their access to different services.
01:05:32.140Yeah, there's an Australian bank that already has decided that it's a good thing to send all of their customers a carbon load report every month, right?
01:05:41.400And to offer them hints about how they could reduce their polluting purchases, let's say.
01:05:47.980Well, at the moment, that system is one of voluntary compliance, but you can certainly see in a situation like the one we're in now that the line between voluntary compliance and involuntary compulsion is very, very thin.
01:06:15.440All of a sudden, PCs put everybody online.
01:06:18.320Everybody could suddenly see all kinds of stuff.
01:06:21.600And, you know, people could get a Freedom of Information Act request, put it online somewhere, and 100,000 people could see it.
01:06:28.520Like, it was an amazing democratization moment.
01:06:32.200And then there was a similar but smaller revolution with the world of, you know, smartphones and apps.
01:06:39.860But then we've had a new, completely different set of companies, by the way, you know, from, you know, what happened in the 60s, 70s, and 80s to today.
01:06:49.260It's very different companies that control it.
01:06:52.400And there are people who worry that AI will be a winner-take-all thing.
01:06:56.260Now, I think so many people are using it, and they're working on it in so many different places, and the cost is going to come down so fast that pretty soon you'll have your own AI app that you'll use to mediate the Internet to strip out, you know, the endless stream of ads.
01:07:12.980And you can say, well, is this story objective?
01:07:16.380Well, here's the 15 stories, and this is being manipulated this way, and this is being manipulated that way.
01:07:21.840And you can say, well, I want—what's more like the real story?
01:07:25.140And the funny thing is, information that's broadly distributed and has lots of inputs is very hard to fake the whole thing.
01:07:36.800So right now, a story can pull through a major media outlet, and if they can control the narrative, everybody gets the fake story.
01:07:44.440But if the media is distributed across a billion people who are all interacting in some useful way, somebody's standing up—
01:07:54.020Yeah, there's real signal there, and if somebody stands up and says something that's not true, everybody goes, everybody knows that's not true.
01:08:01.360So, like, a good outcome with people thinking seriously would be the democratization of information and, you know, objective facts in the same way.
01:08:13.020The same thing that happened with PCs versus corporate central computers could happen again.
01:08:17.920Yeah, but if you have an increase in power, the problem is that these are...
01:08:23.900Increasing power always creates the two at the same time.
01:08:27.780And so we saw that, you know, increase in power creates first—or it depends in which direction it happens.
01:08:34.560It creates an increase in decentralization, an increase in access, an increase in all that.
01:08:39.120But then it also, at the same time, creates the counter reaction, which is an increase in control, an increase in centralization.
01:08:48.060And so now, the greater the power is, the bigger the waves will be.
01:08:56.500And so the image of—the image that 1984 presented to us, you know, of people going into newspapers and changing the headlines and taking the pictures out and doing that, that now obviously can happen with just a click.
01:09:11.680So you can click and you can change the past.
01:09:31.640Like, when Amazon became a platform, suddenly any mom-and-pop business could have a, you know, Amazon, eBay, there's a bunch of platforms, which had an amazing impact.
01:09:43.840Because any business could get to anybody.
01:09:47.040But then the platform itself started to control the information flow, right?
01:09:51.460But at some point, that'll turn into—people go, well, why am I letting somebody control my information flow when Amazon objectively doesn't really have any capability, right?
01:10:03.640So, like you point out, the waves are getting bigger, but they're real waves.
01:10:12.220It's also on a billion hard drives, right?
01:10:15.700So, if somebody says, I'm going to erase objective fact, a distributed information system would say, yeah, go ahead and erase it anywhere you want.
01:10:24.680There's another thousand copies of it.
01:10:31.760And this is where thinking people have to say, yeah, this is a serious problem.
01:10:35.860Like, if humans don't have anything to fight for, they get lazy and, you know, a little bit dopey, in my view.
01:10:41.100Like, we do have something to fight for, and, you know, that's worth talking about.
01:10:47.800Like, what would a great world look like, with, you know, distributed human intelligence and artificial intelligence working together in a collaborative way to create abundance and fairness and, you know, some better way of arriving at good decisions and at what the truth is?
01:11:38.320And then at some point, they're also, you know, let's say a difficult program company, and they made money off a lot of people and became extremely valuable.
01:11:46.840Now, for the most part, they haven't been that directional in telling you what to do and think and how to do it.
01:12:06.760And in Europe, you know, they've decided to regulate some of that, which—that should be a social, cultural conversation about how should that work.
01:12:18.520So do you see the more likely, certainly the more desirable future, is something like a set of distributed AIs, many of which are under personal—in personal relationship, in some sense,
01:12:35.080the same way that we're in personal relationship with our phones and our computers, and that that would give people the chance to fight back, so to speak, against this.
01:12:43.660And there's lots of people really interested in distributed platforms.
01:12:47.420And one of the interesting things about the AI world is, you know, there's a company called OpenAI, and they open-source a lot of it.
01:13:19.360Well, when you think about the waves, there are two images, actually, in the book of Revelation, which describes the end, or the finality of all things, or the totality of all things, as maybe a way for people who are more secular to kind of understand it.
01:13:31.920And in that book, there are two images, interesting images about technology.
01:13:36.760One is that there is a dragon that falls from the heavens, and that dragon makes a beast.
01:13:42.740And then that beast makes an image of the beast, and then the image speaks.
01:13:48.220And when the image speaks, then people are so mesmerized by the speaking image that they worship the beast, ultimately.
01:13:56.680So that is one image of, let's say, making and technology in Scripture, in Revelation.
01:14:02.200But there's another image, which is the image of the heavenly Jerusalem.
01:14:06.480And that image is more an image of balance.
01:14:08.780It's an image of the city which comes down from heaven with a garden in the center and then becomes this glorious city.
01:14:15.720And it says, the glory of all the kings is gathered into the city.
01:14:20.600Like, so the glory of all the nations is gathered into this city.
01:14:23.720So now you see a technology which is at the service of human flourishing and takes the best of humans and brings it into itself in order to kind of manifest.
01:14:34.220And it also has hierarchy, which means it has the natural at the center and then has the artificial as serving the natural, you could say.
01:14:41.100So those two images seem to reflect these two waves that we see.
01:14:45.840And this kind of idea of an artificial intelligence which will be ruling over us or speaking over us.
01:14:52.480But there's a secret person controlling it.
01:14:55.320Even in Revelation, it's like, there's a beast controlling it and making it speak.
01:15:03.400So I don't know, Jordan, if you ever thought about those two images in Revelation as being related to technology, let's say.
01:15:09.900Well, I don't think I've thought about those two images in the specific manner that you described.
01:15:15.480But I would say that the work that I've been doing and I think the work you've been doing, too, in the public front reflects the dichotomy between those images.
01:15:25.340And it's relevant to the points that Jim has been making.
01:15:27.900I mean, we are definitely increasing our technological power.
01:15:31.220And you can imagine that that'll increase our capacity for tyranny and also our capacity for abundance.
01:15:37.000And then the question becomes, what do we need to do in order to increase the probability that we tilt the future towards Jerusalem and away from the beast?
01:15:45.660And the reason that I've been concentrating on helping people bolster their individual morality to the degree that I've managed that is because I think that whether the outcome is the positive outcome that in some sense Jim has been outlining or the negative outcomes that we've been querying him about,
01:16:04.100I think that's going to be dependent on the individual ethical choices of people at the individual level, but then cumulatively, right?
01:16:11.740So if we decide that we're going to worship the image of the beast, so to speak, because we're mesmerized by our own reflection, that's another way of thinking about it.
01:16:19.740And if we want to be the victim of our own dark desires, then the AI revolution is going to go very, very badly.
01:16:25.900But if we decide that we're going to aim up in some positive way and we make the right micro decisions, well, then maybe we can harness this technology to produce a time of abundance in the manner that Jim is hopeful about.
01:16:38.720Yeah, and let me make two funny points.
01:16:42.040So one is, I think there's going to be continuum, like the word artificial intelligence won't actually make any sense, right?
01:16:51.040So humans collectively, like individuals know stuff, but collectively we know a lot more, right?
01:16:57.780And the thing that's really good is in a diverse society with lots of people pursuing individual, interesting, you know, ideas, worlds, like we have a lot of things.
01:17:09.780And more people, more independence generates more diversity.
01:17:16.860And that's a good thing, whereas, you know, a totalitarian society where everybody's told to wear the same shirt...
01:17:21.880Like it's inherently boring, like the beast speaking through the monster is inherently dull, right?
01:17:30.460Like, but in an intelligent world where not only can we have more intelligent things, but in some places go far beyond what most humans are capable of in pursuit of interesting variety.
01:17:46.260And, you know, like I believe the information and, well, let's say intelligence is essentially unlimited, right?
01:17:54.520Like, and the unlimited intelligence won't be this shiny thing that tells everybody what to do.
01:17:59.780That's sort of the opposite of interesting intelligence.
01:18:03.760Interesting intelligence will be more diverse, not less diverse.
01:19:03.000So, if the goal is maximum diversity, then the line between human intelligence, artificial intelligence that we draw, like, you'll see all these kind of really interesting partnerships and all kinds of things.
01:19:15.820And more people doing what they want, which is the world I want to live in.
01:19:20.020But to me, it seems like the question is going to be related to attention, ultimately.
01:19:27.020That is, what are humans attending to at the highest?
01:19:30.380What is it that humans care for in the highest?
01:19:33.180You know, in some ways, you could say, what do humans, what are humans worshiping?
01:19:37.300And, like, depending on what humans worship, then their actions will play out in the technology that they're creating, in the increase in power that they're creating.
01:19:46.620Well, that's, well, and if we're guided by the negative vision, the sort of thing that Jim laid out that is being taught to his children, you can imagine that we're in for a pretty damn dismal future, right?
01:19:57.040Human beings are a cancer on the face of the planet.
01:20:36.460And, I mean, I've been heartened, I would say, over the decades talking to Jim about what he's doing on the technological front.
01:20:44.660And I think part of the reason I've been heartened is because I do think that his vision is guided primarily by desire to help bring about something approximating life more abundant.
01:20:55.980And I would rather see people on the AI front who are guided by that vision working on this technology.
01:21:01.460But I also think it's useful to do what you and I have been doing in this conversation, Jonathan,
01:21:07.180and acting in some sense as friendly critics and hopefully learning something in the interim.
01:21:13.320Do you have anything you want to say in conclusion?
01:21:15.220I mean, I just think that the question is linked very directly to what we've been talking about now for several years,
01:21:22.240which is the question of attention, the question of what is the highest attention.
01:21:27.120And I think the reason why I have more alarm, let's say, than Jim,
01:21:31.420is that I've noticed that in some ways human beings have come to now, let's say, worship their own desires.
01:25:15.760I'm going to talk to Jim Keller for another half an hour on the Daily Wire Plus platform.
01:25:20.900I use that extra half an hour to usually walk people through their biography.
01:25:25.500I'm very interested in how people develop successful careers and lives and how their destiny unfolded in front of them.
01:25:33.160And so, for all of those of you who are watching and listening who might be interested in that, consider heading over to the Daily Wire Plus platform and partaking in that.
01:25:42.400And otherwise, Jonathan, we'll see you in Miami in a month and a half to finish up the Exodus Seminar.
01:25:48.640We're going to release the first half of the Exodus Seminar we recorded in Miami on November 25th, by the way.
01:26:04.340And just for everyone watching and listening, I brought a group of scholars together.
01:26:08.540About two and a half months ago, we spent a week in Miami, some of the smartest people I could gather around me, to walk through the book of Exodus.
01:26:16.600We only got through halfway because it turns out there's more information there than I had originally considered.
01:26:22.320But it went exceptionally well and I learned a lot.
01:26:29.640And, well, that's very much relevant to everyone today as we strive to find our way forward through all these complex issues, such as the ones we were talking about today.
01:26:39.140So, I would also encourage people to check that out when it launches on November 25th.
01:26:43.880I learned more in that seminar than any seminar I ever took in my life, I would say.
01:26:50.280Jim, we're going to talk a little bit more on the Daily Wire Plus platform.
01:26:53.280And I'm looking forward to meeting the rest of the people in your AI-oriented community tomorrow and learning more about, well, what seems to be an optimistic version of a life more abundant.
01:27:04.520And to all of you watching and listening, thank you very much.
01:27:07.900Your attention isn't taken for granted and it's much appreciated.
01:27:12.580I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.
01:27:19.280Going online without ExpressVPN is like not paying attention to the safety demonstration on a flight.
01:27:25.540Most of the time, you'll probably be fine, but what if one day that weird yellow mask drops down from overhead and you have no idea what to do?
01:27:33.300In our hyper-connected world, your digital privacy isn't just a luxury.
01:27:38.260Every time you connect to an unsecured network in a cafe, hotel, or airport, you're essentially broadcasting your personal information to anyone with a technical know-how to intercept it.
01:27:47.620And let's be clear, it doesn't take a genius hacker to do this.
01:27:50.940With some off-the-shelf hardware, even a tech-savvy teenager could potentially access your passwords, bank logins, and credit card details.
01:27:58.320Now, you might think, what's the big deal?