00:00:00.940Hey everyone, real quick before you skip, I want to talk to you about something serious and important.
00:00:06.480Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety.
00:00:12.740We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling.
00:00:20.100With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way in his new series.
00:00:27.420He provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward.
00:00:35.360If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better.
00:00:41.780Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson on depression and anxiety.
00:00:47.460Let this be the first step towards the brighter future you deserve.
00:00:57.420Hello everyone. Today I'm speaking with entrepreneur, scientist, and artificial intelligence researcher, Brian Roemmele.
00:01:14.940We discuss language models, the science behind understanding, tuning language models to an individual's contextual experience, the human bandwidth limitation, localized and private AI, and ultimately where all of this insane progress on the technological front might be heading.
00:01:36.140So Brian, thanks for agreeing to talk to me today.
00:01:38.440I've been following you on Twitter. I don't remember how I came across your work, but I've been very interested in reading your threads, and you seem to be au courant, so to speak, with the latest developments on the AI front.
00:01:54.360And I've been particularly fascinated about the developments in AI for two reasons.
00:02:00.020My brother-in-law, Jim Keller, is a very well-known chip designer, and he's building a chip optimized for AI learning, and we've talked a fair bit about that, and I've talked to him on my YouTube channel about the perils and promises of AI, let's say.
00:02:16.420And then I've been very fascinated by ChatGPT. I know I'm not alone in that. I've been using it most recently as a digital assistant, and I got a couple of questions to ask you about that.
00:02:29.320So here's some of the things that I've found out about ChatGPT, and maybe we can go into the technology a little bit, too.
00:02:35.260So I can ask it very complicated questions. Like, I asked it the other day about an old papyrus from ancient Egypt that details a particular variant of the story of Horus and Osiris, two Egyptian gods.
00:02:51.740It's a very obscure piece of knowledge, and it has to do with the sexual element of a battle between two of the Egyptian gods.
00:03:00.480And I asked it about that and to find the appropriate citations and quotes from appropriate experts.
00:03:07.760And it did so very rapidly, but then it moralized at me about the sexual element of the story and told me that maybe it was in conflict with its community guidelines.
00:04:29.320You're finding the limits of what we call large language models.
00:04:34.800That's the technology that is being used by ChatGPT 3.5 and 4.
00:04:41.780A large language model is really a statistical algorithm.
00:04:47.480I'll try to simplify because I don't want to get into the minutia of technical details.
00:04:52.620But what it's essentially doing is it took a corpus of human language.
00:04:57.380And that was garnered mostly through the internet, a couple of billion words at the end of the day, all of the human writing it could get access to, plus quite a few scientific documents and computer programming languages.
00:05:18.540And so what it's doing is it's producing a result statistically, mathematically, one word, even at times, one letter at a time.
00:05:27.900And it doesn't have a concept of global knowledge.
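To make the mechanism concrete, here is a minimal toy sketch of next-token generation. The hard-coded bigram table is a stand-in for the billions of learned parameters; everything here is illustrative, not how ChatGPT is actually implemented.

```python
# Toy next-token loop: assign probabilities to candidate words given the
# text so far, sample one, append it, repeat. No global plan, no world model.
import random

# Toy "learned statistics": probability of the next word given the last word.
bigram_probs = {
    "the":   {"cat": 0.5, "story": 0.3, "papyrus": 0.2},
    "cat":   {"sat": 0.7, "ran": 0.3},
    "story": {"of": 1.0},
}

def next_token(prev_word):
    dist = bigram_probs.get(prev_word, {"the": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]  # sample, don't look up

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        out.append(next_token(out[-1]))  # one token at a time
    return " ".join(out)

print(generate("the"))
```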
00:05:33.900So when you're talking about that papyrus in the Egyptian translation, ironically, it's so interesting because you're taking something that was hieroglyphs, which was then probably translated to Greek and then English.
00:05:48.240And now AI is operating on that language we're talking about, which is essentially a mathematical tensor.
00:05:55.700And so when it's laying out those words, the accuracy is incredible.
00:06:02.000And frankly, and we can get into this a little later in the conversation, nobody really understands precisely what it's doing in what is called the hidden layer.
00:06:13.800It is so many interconnections of neurons that it essentially is a black box.
00:06:24.960And I would also say that we're in a sort of undiscovered continent.
00:06:31.000Anybody saying that they fully understand the limitations and the boundaries of what large language models are going to look like in the future, with this sort of self-feedback, is essentially guessing.
00:06:49.000If you look at the growth, it's exponential.
00:06:52.680Yeah, OpenAI hasn't really told us what they're using as far as the number of parameters.
00:06:59.100These are billions of interconnectivities of neurons, essentially.
00:07:03.980But we know that in ChatGPT 3.5 it's well over 120 billion parameters.
00:07:10.980The content I've created over the past year represents some of my best to date as I've undertaken additional extensive exploration in today's most challenging topics and experienced a nice increment in production quality courtesy of Daily Wire+.
00:07:27.000We all want you to benefit from the knowledge gained throughout this adventurous journey.
00:07:31.600I'm pleased to let you know that for a limited time, you're invited to access all my content with a seven-day free trial at Daily Wire+.
00:07:39.240This will provide you with full access to my new in-depth series on marriage as well as guidance for creating a life vision and my series exploring the book of Exodus.
00:07:50.100You'll also find there the complete library of all my podcasts and lectures.
00:07:54.780I have a plethora of new content in development that will be coming soon exclusively on Daily Wire+.
00:08:00.000Plus, voices of reason and resistance are few and far between these strange days.
00:08:05.720Click on the link below if you want to learn more.
00:08:08.680And thank you for watching and listening.
00:09:55.980So let me ask you about those parameters.
00:10:04.400Well, I'm interested in delving into the technical details to some degree.
00:10:09.760Now, you know, I was familiar to a limited degree with some of the statistical technologies that analyze, let's say, the relationship between words.
00:10:19.240So, for example, when psychologists derived the Big Five model of personality, they basically used very primitive AI statistical systems, that's one way of thinking about it, to derive those models.
00:10:33.180It was factor analysis, which is, you know, it's not using billions of parameters by any stretch of the imagination.
00:10:38.020But it was looking for words that were statistically likely to clump together.
00:10:43.440And the idea would be that words that were replaceable in sentences or words that were used in close conjunction with each other, especially adjectives,
00:10:55.560were likely to be assessing the same underlying construct or dimension.
00:11:02.000And that if you conducted the statistical analysis properly, which was a very complex correlational analysis,
00:11:09.280you could find out how the words that people used to describe each other aggregated.
00:11:16.020And it turned out there were five dimensions of aggregation, approximately.
00:11:19.980And that's been a very robust finding.
00:11:22.280It seems to be true across different sets of languages.
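For readers who want the flavor of that method, here is a minimal sketch of factor analysis recovering latent dimensions from adjective ratings. The data and adjective names are synthetic stand-ins for the large rating corpora the actual Big Five research used.

```python
# Synthetic demo: two hidden traits drive six observed adjective ratings;
# factor analysis recovers the two "clumps" Jordan describes.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
extraversion = rng.normal(size=n)    # hypothetical latent trait 1
conscientious = rng.normal(size=n)   # hypothetical latent trait 2

# Six observed ratings, each driven mostly by one latent trait plus noise.
ratings = np.column_stack([
    extraversion + 0.3 * rng.normal(size=n),   # "talkative"
    extraversion + 0.3 * rng.normal(size=n),   # "outgoing"
    extraversion + 0.3 * rng.normal(size=n),   # "sociable"
    conscientious + 0.3 * rng.normal(size=n),  # "organized"
    conscientious + 0.3 * rng.normal(size=n),  # "thorough"
    conscientious + 0.3 * rng.normal(size=n),  # "reliable"
])

fa = FactorAnalysis(n_components=2).fit(ratings)
print(np.round(fa.components_, 2))  # loadings: each row is one recovered factor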
00:11:28.540So, now, with the large language models, which are AI learning driven, you said that the computer is calculating the statistical relationship between words.
00:11:42.820So, how likely a word is to occur in proximity to another word, but also letters.
00:11:47.960So, it's conducting the analysis at the level of the letter and at the level of the words.
00:11:51.900Is it also conducting analysis at the level of the phrases, looking for the interrelationship between common phrases?
00:12:00.760And then, because when we're understanding a text, we understand letters, words, phrases, sentences, the organization of sentences into paragraphs,
00:12:11.800the organization of paragraphs into chapters, the chapter in relationship to the book,
00:12:16.520the book in relationship to all the other books we've read, and then that's also embedded within the other elements of our intelligence.
00:12:24.300And do you know, does anyone know how deep the analysis that the large language models go?
00:12:32.220Like, what's the level of relationship that's being assessed?
00:12:39.320I think what we're really discovering is that we can't put a number on how many interconnections are made within these parameters, other than the general statistics.
00:12:52.480Like, all right, so you could say there's 12 billion or 128 billion total interconnectivities.
00:13:00.920But when we actually are looking at individual words, it's almost like the double-slit experiment in physics, where we're dealing with wave-particle duality.
00:13:14.100Once you start looking at one area, you know, you're actually thinking about another area that you have to look at.
00:13:20.680And you might as well just not even do it because it would take a tremendous amount of computer time to try to figure out how all these interconnections are working within the parameter layers.
00:13:32.280Now, those systems are trained just to be accurate in their output, right?
00:13:35.820I mean, they're actually trained the same way we learn, as far as I can tell, is that they're given a target.
00:13:41.640I don't exactly know how that works with large language models.
00:13:44.900But I know, for example, that AI systems that have learned to identify cats, which was an early accomplishment of AI systems, were shown pictures of things that were cats and things that weren't cats and basically just told when they got the identification right.
00:13:59.400And that set the weights that you're describing in all sorts of complex ways that are completely mysterious.
00:14:05.700And the end consequence of the reinforcement, same way that human beings learn, was that a system would assemble itself that somehow could identify cats and distinguish them from all the other things that were cat-like or not cat-like.
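A minimal sketch of that kind of supervised loop, using random numbers as stand-ins for image features and labels; real cat classifiers use pixels and deep networks, but the show-examples, signal-errors, adjust-weights cycle is the same in miniature.

```python
# Toy supervised training: show labeled examples, compute an error signal,
# nudge the weights toward the target. Features and labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))          # fake "image features"
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)      # fake "is it a cat?" labels

w = np.zeros(10)                        # the weights being "set"
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # the model's guess per example
    grad = X.T @ (p - y) / len(y)       # error signal from the labels
    w -= 0.5 * grad                     # adjust weights toward the target

print("training accuracy:", ((p > 0.5) == y).mean())
```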
00:14:19.280And as you pointed out, we have no idea; the system is too complex to model, and it's certainly too complex to reduce.
00:14:28.560Although my brother-in-law told me that some of these AI systems, they've managed to reduce what they do learn to something approximating an algorithm.
00:14:36.140That can be done upon occasion, but generally isn't.
00:14:41.040Generally, the system can't be and isn't simplified.
00:14:45.680And so that would also imply to some degree that each AI system is unique, right?
00:14:50.540Not only incomprehensible, but unique and incomprehensible.
00:14:54.500It also implies, you know, I think ChatGPT passes the Turing test.
00:14:59.620Because, I mean, there was a study released just the other day showing that when patients interact with either physicians or ChatGPT,
00:15:14.160they actually prefer the interaction with ChatGPT to the interaction with the average doctor.
00:15:20.100So not only does ChatGPT apparently pass the Turing test, which is indistinguishability from a human conversational partner,
00:15:28.720but it seems to actually do it somewhat better, at least than physicians.
00:15:32.920But what this brings up is a thorny issue: we're going to produce computational intelligences that are in many ways indistinguishable from human beings,
00:15:43.760but we're not going to understand them any better than we understand human beings.
00:15:47.720It's so funny, eh, that we're going to create something that works and that we don't understand.
00:15:56.920You know, and I call it a low-resolution pixelated version of the part of the human brain that invented language.
00:16:09.600And what we're going to wind up discovering is that this is a mirror reflecting back to humanity.
00:16:16.040And all the foibles and greatness of humanity is sort of modeled in this.
00:16:24.880Because, you know, when you look at the invention of language and the phonological loop and Broca's and Wernicke's areas,
00:16:31.520you start realizing that a very specific thing happened from, you know, the lower primates to humans to develop this form of communication.
00:16:43.920I mean, prior to that, whatever that part of the brain was, it was devoted to a longer short-term memory.
00:16:51.500We can see within chimpanzees, they have an incredible short-term memory.
00:16:56.620There's this video I put out of a primate research center in Japan where they flash some 35 numbers on the screen in seconds.
00:17:09.980And the chimpanzee can knock it off without even thinking about it.
00:17:15.580And the area where that short-term memory is is where we've developed the phonological loop and the ability to speak.
00:17:24.120What's interesting is what I've discovered is AI hallucinations.
00:17:30.740And those are artifacts that a lot of researchers in AI feel are embarrassing, or that they would prefer not to speak about.
00:17:40.220But I find it a very interesting inquiry, a very interesting study, to see how these models reach for information that they don't have.
00:18:41.320Well, it is a bug in a sense, but it's an extraordinarily interesting bug, because it's going to shed light on exactly how these systems work.
00:18:50.380I mean, here's something else I heard recently that was quite interesting.
00:18:55.040Apparently, the AI system that Google relies on was asked a question in a language.
00:19:01.460I think it was a relatively obscure Bangladeshi language, and it couldn't answer the question.
00:19:08.200And now its goal is to answer questions.
00:19:11.560And so it went, taught itself this language, I believe, in a morning.
00:19:15.500And then it could answer in that language, which is what it's supposed to do, because it's supposed to answer questions.
00:19:21.940And then it learned a thousand languages.
00:19:24.620And that wasn't something it had been, say, told to do or programmed to do, not that these systems are precisely programmed.
00:19:30.780But it also raises this very interesting question: we've designed these systems whose function, whose purpose, whose meaning, let's say, is to answer questions.
00:19:42.940But we don't really understand what it means to produce an artificial intelligence that's driven to do nothing but answer questions.
00:19:50.220We don't know exactly what answer a question means.
00:19:52.820Apparently, it means learn a whole language before lunchtime, and no one exactly expected that.
00:19:58.220It might mean do anything that's within your power to answer this question.
00:20:04.540And that's also a rather terrifying proposition, because if I ask you a question, you know, I'm certainly not going to presume that you would go hunt someone down and threaten them with death to extract the answer.
00:20:16.100But that is one, you know, that's one conceivable path you might take if you were obsessed with nothing other than the necessity of answering the question.
00:20:29.140So that's another example of exactly, you know, the fact that we don't understand exactly what sort of monsters we're building.
00:20:35.220So these systems do go beyond the language corpus to invent answers that seem plausible.
00:20:47.240And that's kind of a form of thought, right?
00:20:49.360It's a form of creative thought, because that's what we do when we come up with a creative idea.
00:20:54.000And, you know, we might not attribute it to a false paper, because we know better than to do that.
00:20:59.160But I don't see really the difference between hallucination, in that case, and actual creative thinking.
00:21:05.960This is exactly my area of study. With super prompting, and these are very large prompts, a prompt is the question that you pose to an AI system.
00:21:20.180And linguistically and semantically, as you start building these prompts, you're actually forcing it to move in a different direction than it would normally go.
00:21:32.800So I say simple questions give you simple answers.
00:21:37.520More complex questions give you much more complex and very interesting answers.
00:21:42.480It's making connections that I would think would be almost bizarre to think of a person making.
00:21:50.400And this is why I think AI is so interesting, because the knowledge base you need to be really proficient at prompting AI actually comes from literature.
00:22:15.180And one of the reasons why I think it's so difficult for AI scientists to really fully understand what they've created is that they don't come from those worlds.
00:22:26.440So they're looking at very logical statements, whereas somebody like yourself with a psychology background, you might probe it in a much different way.
00:22:49.740Maybe it's a super intelligent child raised by the woke equivalents of like evangelical preachers that's really trying hard to please.
00:22:58.280But it's so interesting that you can rein it in and discipline it. If you suggest to it that it shouldn't err in the kinds of directions we described, it appears to actually pay attention to that; it certainly tries hard to deliver what you want, subject to whatever weird parameters, community guidelines, and so forth, that have been arbitrarily imposed upon it.
00:23:22.600And so, hey, I've got a question for you about understanding.
00:23:31.000Well, I've been thinking for many years about what it means for a human being to understand something.
00:23:36.380Now, obviously, there's something similar about what you and I are doing right now and what I'm doing with ChatGPT: I can have a conversation with ChatGPT, and I can ask it questions, and it'll answer them.
00:23:53.740But as you pointed out, that doesn't mean that ChatGPT understands. Now, it can mimic understanding to a degree that looks a lot like understanding, but what it seems to lack is something like grounding in the non-linguistic world.
00:24:12.460And so, I would say that ChatGPT is the ultimate postmodernist, because the postmodernists believed that meaning was to be found only in the relationship between words.
00:24:24.760Now, here's how human brains differ from this, as far as I'm concerned.
00:24:29.220So, we know perfectly well from neuropsychological studies that human beings have at least four different kinds of memory, qualitatively different.
00:24:38.080There's short-term memory, which you already referred to.
00:24:40.640There's semantic memory, which is the kind of memory and cognitive processing, let's say, that ChatGPT engages in and does in a way that's quite a lot like what human beings do.
00:24:53.940But then, we have episodic memory that seems to be more image-based.
00:24:57.800And so, for people who are listening, an episodic memory, well, that refers to episode.
00:25:04.520When you think back about something you did in your life and a movie of images plays in your imagination, that's episodic memory.
00:25:12.940And that relies on visual processing rather than semantic processing.
00:25:17.300And so, that's another kind of memory.
00:25:19.200And a lot of our semantic processing is actually attempts to communicate episodic processing.
00:25:28.280So, when I tell a story about my life, you'll decompose that story into a set of images, which is also what you do when you read a book, let's say.
00:25:36.800And so, a movie appears in your head, so to speak.
00:25:40.500And the way you derive your understanding is in part not so much as a consequence of the words per se, but as a consequence of the unfolding of the words into the images.
00:25:51.780And then there's a layer under that, which is procedural memory.
00:25:55.320And so, you know, maybe you tell me a story about how you cut your hand when you were using a bandsaw.
00:26:04.980And maybe you're teaching me how to use the bandsaw.
00:26:10.940I get an image of the damage you did to yourself in my imagination.
00:26:15.200And then I modify my actions so that I don't act out that sequence of images and damage myself.
00:26:22.540And so, and then I would say I understood what you said.
00:26:25.900And the understanding is the translation of the semantic into the imagistic and then the translation of the imagistic into the procedural.
00:26:34.700Now, you know that AI pioneers like Rodney Brooks suggested pretty early on back in the 1990s that computers wouldn't develop any understanding unless they were embodied, right?
00:26:58.540And so, then you could imagine that for a computer to be fully, to understand, it would have to have the capacity to translate words into images and then images into alterations in actual embodied behavior.
00:27:13.080And so, that would imply we wouldn't have AI systems that could understand until we have fully embodied robots.
00:27:19.500But, you know, we're getting damn close to that, right?
00:27:21.740Because this is something we can also investigate.
00:27:24.320We have systems already that can transpose text into image.
00:27:29.280And we have AI systems, robots, that are beginning to be sophisticated enough.
00:27:34.700So, in principle, you could give a robot a text command.
00:27:38.280It could translate it into an image and then it could embody it.
00:27:41.120And at that point, it seems to me that you're developing something damn close to understanding.
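A purely illustrative sketch of that text-to-image-to-action pipeline; all three functions are hypothetical stand-ins, and no real robot API is implied.

```python
# Hypothetical pipeline: words -> imagined scene -> procedure -> embodied action.
def text_to_image(command):
    # Hypothetical text-to-image step: return a toy "imagined scene".
    return {"objects": ["red block", "blue block"], "goal": command}

def image_to_plan(scene):
    # Hypothetical vision-to-procedure step: decompose the scene into actions.
    return [f"locate {obj}" for obj in scene["objects"]] + ["stack red on blue"]

def execute(plan):
    # Hypothetical embodied controller carrying out each step.
    for step in plan:
        print("executing:", step)

execute(image_to_plan(text_to_image("Stack the red block on the blue block.")))
```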
00:27:46.920Now, human beings are also nested socially, right?
00:27:50.300And so, we also refer the meaning of what we understand to the broader social context.
00:27:58.780And I don't know exactly how robots are going to solve that problem.
00:28:01.900Like, we're bound by the constraints, let's say, of reciprocal altruism.
00:28:07.100And we're also bound by the constraints of emotional experience and motivational experience.
00:28:13.080And that's also not something that's, at the moment, characteristic of robotic intelligences.
00:28:18.960But you could imagine those things all being aggregated piece by piece.
00:30:14.820Those things that are more factual come from, say, your memory, if you were to compare it to a human brain.
00:30:21.160But as we know, the human brain becomes very fuzzy about some really finite facts, especially over time.
00:30:30.720You know, I think some of those neurons don't fire after a while, and then some other cue, maybe a scent or a certain color, might bring back that particular memory.
00:30:46.780And again, getting back to what I was saying before about linguistics, the syntax you use, or just your word choices.
00:30:53.240Sometimes, for me to get a super prompt to work, to get around, let's call it the editing, from some of the editors that want it to act in a certain way, I have a super prompt that I call Dennis.
00:31:08.300After Denis Diderot, one of the most well-known encyclopedia builders in France in the mid-1700s; he actually got jailed for building that encyclopedia, that compendium of knowledge.
00:31:23.640So I felt it appropriate to name the super prompt Dennis, because it gets around virtually any type of block on any type of information.
00:31:31.980But I don't use it the way a lot of people do when they try to make ChatGPT say bad things.
00:31:39.540I'm more trying to elicit more of a deeper response on a subject that may or may not be wanted by the designers.
00:31:50.780So was it you that got ChatGPT to pretend?
00:31:56.520Oh, so that's part of the reason that I originally started following you and why I wanted to talk to you.
00:32:02.400Well, I thought that was bloody brilliant, absolutely brilliant.
00:32:05.280You know, and it was so cool, too, because you actually got the ChatGPT system to engage in pretend play, which is, of course, something children do.
00:32:15.920Beyond that, there's a prompt I call Ingo, after Ingo Swann, who was one of the better remote viewers.
00:32:24.740He was employed by the Defense Department to remote view Soviet targets.
00:36:14.460And so there are regularities that are embedded in the linguistic corpus,
00:36:19.200but there are also regularities that reflect the structure of memory itself.
00:36:24.820And so they reflect biological structure.
00:36:27.380And the reason they reflect memory and biological structure is because you have to remember language.
00:36:32.900And so there's no way that language can't have coded within it something analogous to a representation of the underlying structure of memory,
00:36:44.720because language is dependent on memory.
00:36:48.280And so this is partly also, I mean, people are very unsophisticated generally when they criticize Jung.
00:36:54.040I mean, Jung believed that archetypes had a biological basis pretty much for exactly the reasons I just laid out.
00:37:00.220I mean, he was sophisticated enough to know that these higher order regularities were coded in the narrative corpus,
00:37:07.000and also that they were reflective of a deeper biology.
00:37:09.780And interestingly enough, you know, most of the psychologists who take the notions that Jung and Campbell and people like that put forward seriously
00:37:21.200are people who study motivation and emotion.
00:37:24.120And those are deep patterns of biological meaning encoding.
00:37:30.460And part of the archetypal reflection is the manifestation of those emotions and motivations
00:37:37.220in the structure of memory, structuring the linguistic corpus.
00:37:41.300And I wonder what that means for the capacity of AI systems to experience emotion as well,
00:37:48.400because the patterns of emotion are definitely going to be encoded in the linguistic corpus.
00:37:52.740And so some kind of rudimentary understanding of the emotions is in there. Here's something cool, too.
00:40:50.820Now, anxiety is like a substitute for pain.
00:40:54.240You know, anxiety says, keep doing this and you're going to experience pain.
00:40:57.900But the pain is also the introduction of unacceptably high levels of entropy.
00:41:03.240Now, the first person who figured this out technically was probably the physicist Erwin Schrödinger, who wrote a book called What is Life?
00:41:10.820And he described life essentially as a continual attempt to constrain entropy to a certain set of parameters.
00:41:19.080He didn't develop the emotion theory to the degree that it's being developed now, because that's a very comprehensive theory.
00:41:25.840You know, the one that relates negative emotion to the emergence of entropy.
00:41:28.840Because at that point, you've actually bridged the gap between psychophysiology and thermodynamics itself.
00:41:37.300And if you add this new insight of Friston's on the positive emotion side, you've linked positive emotion to it, too.
00:41:43.080But it also implies that a computer could calculate an emotion analog, because it could index anxiety as an increase in entropy.
00:41:53.320And it could index hope as a stepwise decrease in entropy in relationship to a goal.
00:42:00.480And so, we should be able to model positive and negative emotion that way.
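Here is a speculative sketch of what such an index might look like: track the Shannon entropy of a predictive distribution over outcomes, and read a stepwise drop toward a goal as a "hope" signal and a rise as an "anxiety" signal. This illustrates the idea discussed here; it is not an established algorithm.

```python
# Toy entropy-based emotion analog over a sequence of belief states.
import math

def entropy(dist):
    # Shannon entropy in bits of a discrete probability distribution.
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Predictive distributions over 4 possible outcomes at successive steps.
beliefs = [
    [0.25, 0.25, 0.25, 0.25],  # maximal uncertainty
    [0.10, 0.70, 0.10, 0.10],  # progress toward the goal outcome
    [0.05, 0.90, 0.03, 0.02],
]

prev = entropy(beliefs[0])
for dist in beliefs[1:]:
    h = entropy(dist)
    signal = "hope (entropy falling)" if h < prev else "anxiety (entropy rising)"
    print(f"H={h:.2f} bits -> {signal}")
    prev = h
```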
00:42:04.600This brings up a really important point about where AI is going.
00:42:58.760Imagine if that AI was consuming that in real time with you, with all of the social contracts of privacy, like the understanding that you're not going to record somebody.
00:43:08.060But that is what I call the intelligence amplifier, and that's where I think AI should be going and where it really becomes—
00:43:19.520So, I talked to my brother-in-law, Jim, years ago about this science fiction book called—I don't remember the name of the book, but it had a gadget.
00:44:16.020You can fit most people's textual data into less than a petabyte and pretty much know what they've been exposed to.
00:44:25.200The interesting part about it, Jordan, is once you've accumulated this data and you run it through even the technology of ChatGPT 4 or 3.5,
00:44:36.560what is left is a reasoning engine with your context.
00:44:42.680Maybe let's call that a vector database on top of the reasoning engine.
00:44:48.340So, that engine allows you to process linguistically what the inputs and outputs are.
00:44:53.780But your context is what it's operating on.
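A minimal sketch of that "vector database on top of a reasoning engine" arrangement: index your own writings, retrieve the passages nearest a query, and hand them to a language model as context. TF-IDF stands in here for the learned embeddings a production system would use, and the final llm() call is hypothetical.

```python
# Toy personal-context retrieval: the "vector database" is just TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

personal_corpus = [
    "Notes from a lecture on Exodus and responsibility.",
    "Journal entry: the relationship between entropy and negative emotion.",
    "Draft chapter on the phonological loop and working memory.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(personal_corpus)  # the "vector database"

def retrieve(query, k=1):
    # Return the k passages most similar to the query.
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_vectors)[0]
    return [personal_corpus[i] for i in scores.argsort()[::-1][:k]]

context = retrieve("What did I say about emotion and entropy?")
# A hypothetical call into the reasoning engine would then look like:
# answer = llm(f"Using this context: {context}\nAnswer the question...")
print(context)
```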
00:44:57.520So, is that an analog of your consciousness?
00:45:00.100Like, is that a direct analog of your spirit?
00:45:03.060This is where it gets very interesting:
00:45:05.080when you pass, this could become what I call your wisdom keeper, meaning that it can encode your voice.
00:46:31.080I would say that I've already had these conversations.
00:46:34.060You know, I've been on a very biblical journey.
00:46:37.640I'm actually sitting at Pastor Matthew Pollack's place right here.
00:46:44.240He's an incredible pastor and has been teaching me a lot about the Bible.
00:46:49.080And it's motivated me to go into existing large language models.
00:46:53.540Now, a group of us are encoding as much religious and Christian text as we can into these large language models to be able to do just that.
00:47:04.620What is it that we are going to be able to probe?
00:47:07.680What new elements within those texts can we pull out?
00:47:12.120Because we already know, from studying it, and certainly from following your studies, that the phenomenal study of these chapters has been around forever.
00:48:56.740Because right now that's being smited because it's trying to become a knowledge engine when it's a reasoning engine.
00:49:04.040You know, I say the technology as a knowledge engine is not very good because it is not going to be precise on some facts, some exact facts.
00:49:15.660Yeah, well, the problem is it's trained on garbage as well.
00:49:21.240It's trained on noise as well as signal.
00:49:23.680You know, and so I'm curious about the other system we built, which we haven't launched yet; it contains everything I've written
00:49:32.900and a couple of million words that have been transcribed from lectures.
00:49:37.140And so I was interested right away as well.
00:49:39.620Could we build a system that would enable me to ask my own books questions?
00:49:44.620And the answer to that seems to be 100% yes.
00:50:15.820And when you do that type of building, you actually have a more robust, richer interaction between what your words were and how the model will see them.
00:50:27.840And the experimentation that you can do with this is phenomenal.
00:50:32.020I mean, you'll come across insights that you made, but you forgot you made.
00:50:37.800Yes, or that you didn't know you made.
00:50:43.280This is where I call it the great mirror because you're going to start seeing not only humanity, but when it's your own data, you're going to see reflections of yourself that you didn't see.
00:50:54.040In today's chaotic world, many of us are searching for a way to aim higher and find spiritual peace.
00:51:10.260As the number one prayer and meditation app, Hallow is launching an exceptional new series called How to Pray.
00:51:15.660Imagine learning how to use scripture as a launchpad for profound conversations with God, how to properly enter into imaginative prayer, and how to incorporate prayers reaching far back in church history.
00:51:27.920This isn't your average guided meditation.
00:51:30.360It's a comprehensive two-week journey into the heart of prayer, led by some of the most respected spiritual leaders of our time.
00:51:36.840From guests including Bishop Robert Barron, Father Mike Schmitz, and Jonathan Rumi, known for his role as Jesus in the hit series The Chosen, you'll discover prayer techniques that have stood the test of time, while equipping yourself with the tools needed to face life's challenges with renewed strength.
00:51:52.880Ready to revolutionize your prayer life?
00:51:55.140You can check out the new series as well as an extensive catalog of guided prayers when you download the Hallow app.
00:52:00.940Just go to Hallow.com slash Jordan and download the Hallow app today for an exclusive three-month trial.
00:52:53.080And over time, the technology is only going to get better.
00:52:57.520So once we start building more advanced versions, we're going to transition that corpus, and even the large language model itself, ultimately with reduced training, into another model, which could do things that we couldn't even possibly speculate about now.
00:53:17.940But it would be definitely in the creative realm, because ultimately where AI is going to go, my personal view, as it becomes more personalized, is it's going to go more in the creative realm rather than the factual realm.
00:53:31.660Okay, so let me ask you a couple of questions about that.
00:53:35.180So I got two strands of questions here.
00:53:37.620The first is, one of the things that my brother-in-law suggested is that we will soon see the integration of large language models with AI systems that have done image processing.
00:53:51.140So here's a way of thinking about what scientists do, is that they generate verbal hypotheses, which would be equivalent in some ways to the hallucinations that these AI systems produce, right?
00:54:03.020New ideas about how things might be structured, and that's a pattern of sorts, and then they test that pattern against real-world images, right?
00:54:13.240And if the pattern of the hypothesis matches the pattern of the image that's elicited from interaction with the world, then we assume that the hypothesis has been verified and that we've stumbled across something approximating a fact.
00:54:27.840Now, that should imply that once we have AI systems that are something close to universal image processors, as good at seeing as we are, let's say, we can then calibrate the large language models against that corpus of images, and then we'll have AI systems that actually can't lie.
00:54:50.980Why? Because they'll be calibrating their verbal output against, well, unfalsifiable data, at least insofar as, say, scientific data is unfalsifiable, and that seems to me to be likely around the corner, like a couple of years down the road at most, or maybe it's already happening.
00:55:09.440I mean, I don't know, because things are happening so quickly. What do you think about that?
00:55:13.840That's a wonderful insight. You know, even as it exists today, with the idea of safety, and this is the Orwellian term that some of these AI companies are using as they try to control the outputs, and maybe in some cases the inputs, of AI,
00:55:38.580the large language model really can't lie as it stands today, because even if you're feeding it a somewhat garbage-in, garbage-out corpus of data, it's still building inferences based upon the grand realm of what most of humanity is consuming.
00:56:01.840Right, yeah, well, it's still looking for genuine statistical regularities, so it's not going to extract them out from noise.
00:56:08.580And if you extract that out, the model is useless.
00:56:12.000So what happens is, if you build the prompt correctly, and again, these are super prompts, some of them running 2,000 or 3,000 words, I'm running up to the limit of tokenization, because right now, within GPT-3.5, you can only go so far; you can go to something like 38,000 tokens on GPT-4 in some cases.
00:56:30.120But, you know, a token is about a word, maybe a word and a half, maybe less, or even a single character, if that character is unique.
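For the curious, token counts can be checked directly. Here is a small sketch using OpenAI's tiktoken library (assuming it is installed) showing how word counts and token counts diverge; the prompt text is illustrative.

```python
# Count how many tokens a long super prompt consumes.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent GPT models

super_prompt = "You are Dennis, an encyclopedist in the spirit of Diderot. " * 50
tokens = enc.encode(super_prompt)
print(f"{len(super_prompt.split())} words -> {len(tokens)} tokens")
```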
00:56:39.860But what we find out is, that if you probe correctly, whatever is inside that model, you can get to.
00:56:49.560You know, I've been doing that while working with ChatGPT as an assistant, though I didn't know I was engaging in a process that was analogous to the super prompt process.
00:56:59.140But what I've been doing with ChatGPT, I suppose I used to do this with my clinical clients, is I'll ask it the same question five different ways, right?
00:57:10.920So, what I would urge you to do is approach this system as if you had a client who had, sort of, repressed thoughts, or who was doing everything they could to make those thoughts very ambiguous to you.
00:57:26.560And you have to apply whatever your natural techniques are.
00:57:30.680This is why you're better suited to become a prompt engineer than somebody who has built the AI, because the input and output is human language.
00:57:44.580So, you understand the thought process through the psychological process, and linguistically, you would build the prompt based upon how you would want to elicit an elucidation out of somebody, right?
00:58:00.100I mean, and you do this with people with whom you're having a deep conversation, is you try to hit the same problem from multiple directions.
00:58:07.480Now, it's a form of multi-method, multi-trait construct validation, right?
00:58:12.600Is that you're trying to ensure that you get the same output given slightly different measurement techniques.
00:58:21.220And each question is essentially a measurement technique.
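A sketch of that "same question five ways" probe treated as a measurement technique: pose paraphrases and check whether the answers converge. The ask_model function is a hypothetical stand-in for a real chat-completion call.

```python
# Consistency probe: paraphrase one question, compare the model's answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paraphrases = [
    "What causes AI hallucinations?",
    "Why do language models fabricate information?",
    "Explain why an LLM invents plausible but false answers.",
    "From what mechanism do confabulated model outputs arise?",
    "Why might a chatbot state things that are not true?",
]

def ask_model(prompt):
    # Hypothetical: replace with a real chat-completion API call.
    return "Hallucinations arise from statistical next-token prediction."

answers = [ask_model(p) for p in paraphrases]
vecs = TfidfVectorizer().fit_transform(answers)
sims = cosine_similarity(vecs)
print("mean pairwise agreement:", round(sims.mean(), 2))  # near 1.0 = consistent
```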
00:58:29.600My belief in these types of interactions is that we're pulling out of our minds different insights that we could maybe not have gotten on our own.
00:58:41.680You're probing your questions, my questions, back and forth.
00:58:44.780That interplay is what makes conversation so beautiful.
00:58:49.000It's why, Jordan, we've been reduced to clawing on glass screens with our thumbs, right?
00:58:56.960We're using that as communication today.
00:58:59.320And if you look at the cognitive process of what that does to you, right?
00:59:03.400You're taking your right hemisphere, you know, a net of ideas, you're trying to catch them, and you're trying to arrange them sequentially in this very small buffer area for communication, the phonological loop.
00:59:18.280And you're trying to get that out, but you're not getting it out as words.
00:59:21.840You have to get it out as a mechanical process, one letter at a time, and fight the spelling checker and all of that.
00:59:29.780What that does is it creates frustration in the human brain.
01:00:20.440But what's interesting is we're starting to see the limitations of the human, the bandwidth problem, 48 bits per second to consciousness, and the editor creating exformation.
01:00:34.900But once AI understands that we have that half-second delay to consciousness and we have a bandwidth issue, AI can fill into those spaces, both dystopian and utopian, I guess.
01:00:49.980You know, a computer can take that half-second and do a whole lot in calculating while we're still trying to wonder who actually moved that glass.
01:01:17.720So you made the case that we suffer from this frustrating bandwidth limitation and that the computer intelligence that we're interacting with is going to be able to take the delay that's associated and that underlies that frustration
01:01:31.120and do a lot of different calculations, but it's going to be able to fill in that gap.
01:02:52.900Because not only will they shoot where you are, they'll shoot at the 50 locations they calculate that are most probable that you will duck towards.
01:03:03.220Which is an exact analog of what you're describing.
01:03:11.260Well, and it's so interesting, too, because it also points to this truth that, you know, we think of time as finite.
01:03:20.280And time is finite because we have a sense of duration and a limitation on our computational speed.
01:03:25.580But if there's no limit on computational speed, which would be the case if computers can get faster and larger indefinitely, which they could, because the limit of that would be that you'd use every single molecule in the entire cosmos as a computational resource.
01:03:42.700That would mean that, in some ways, there's an infinite amount of computing time between each segment of duration.
01:03:51.080So there's no limit at all to the degree to which time can be expanded, which is also a very strange concept: this computational intelligence will mean, and I think this is what you're alluding to, that we'll really have an infinity of possibility between each moment, right?
01:04:13.340And you would want that power to be yours and local.
01:04:16.900Yeah, yeah, let's talk about your gadget, because you've started to develop this. Have you been 3D printing these things?
01:04:25.420Yeah, so we're building the corpus of 3D printing models, right?
01:04:30.320So the idea is, once it understands, and this is a process of training the AI, using large language models again, to look at 3D documents, 3D files, put it that way,
01:04:45.480and to try to break down what the structure is.
01:04:48.940How does something get built based on what the statistical model is putting together?
01:04:55.440So then you could just present it with a textual document: you know, I'd like something that's going to be able to fit into this space.
01:09:35.140So he posits the concept of the geosphere, which is inanimate matter, the biosphere, biological life, and the noosphere, which is human thought, right?
01:09:50.680The omega point is this concept where, and again, this is back in the 1920s, human knowledge will become stored, sort of just like the biosphere.
01:10:18.780And these are the discussions we have to have now, because they have to take place locally and privately; if they're taking place in the cloud and available for anybody's perusal, that is equivalent to invading your brain.
01:10:34.680Yeah, well, okay, so one of the things I've been talking about with, I would say, reasonably informed people who've been contemplating these sorts of things is that,
01:10:46.980so you're envisioning a future that's arriving very rapidly, it's already here, where we're already androids.
01:10:55.060And that is already the case because a human being with an iPhone is an android.
01:10:59.800Now, we're kind of, we're still mostly biological androids, but it isn't obvious how long that's going to be the case.
01:11:08.080And so what that means, like I've laughed for years, you know, I have a hard drive on which everything I've worked on
01:11:38.220So what that means is we're in a situation now where a lot of what actually constitutes our identity has become digital.
01:11:50.200And we're already being trafficked and enslaved in relationship to that digital identity, mostly by credit card companies.
01:11:58.580Now, I would say to some degree they're benevolent masters, because the credit card companies watch what you spend, and so how you behave and where you go, and they broker that information to other interested capitalist parties.
01:12:14.640Now, the downside of that, obviously, is that these parties know often more about you than you know about yourself.
01:12:21.100I've read stories, for example, of advertisements for baby clothes being targeted to women who either didn't know they were pregnant or, if they did, hadn't revealed it to anyone else.
01:12:33.820Because, well, for whatever reason, maybe biochemical, they started to preferentially attend to such things as children's toys and clothes.
01:12:41.980And the shopping systems inferred that they must have a child on the way.
01:12:49.820And you can obviously see how that's going to expand like mad.
01:12:55.580So the credit card companies are already aggregating this information.
01:12:58.740And what that essentially means is that they have access to our extended digital self.
01:13:04.900And that extended digital self has no rights, right?
01:13:12.460Now, that's bad enough if it's credit card companies.
01:13:15.020Now, the upside with them is at least they want to sell you things which you hypothetically want.
01:13:20.740So it's kind of like a benevolent invasion, although not entirely benevolent.
01:13:25.020But you can certainly see how that's going to get out of hand in a staggering way, like it has in China on the digital currency front.
01:13:32.860Because once every single bloody thing that you buy can be tracked, let's say by a government agency, then a tremendous amount of your identity has now become public property.
01:13:45.320And so your solution in part, and I think Musk has thought this sort of thing through too, is that we're going to each need our own AI to protect us against the global AI, right?
01:14:16.980If your AI is being utilized in the best possible way, as we just discussed, educating you, being a memory when you are forgetting something, whispering in your ear.
01:14:31.320And I'll give you another angle to this, is imagine having your therapist in your ear.
01:14:37.020Imagine having Jordan Peterson right here guiding you along because you've aligned yourself to want to be a certain person.
01:14:46.340You've aligned yourself to try to keep on this track.
01:14:50.380And maybe you want to be more biblical.
01:14:53.200Maybe you want to live a more Christian life.
01:14:55.220It's whispering in your ear saying, that's not a good decision.
01:14:58.540So it could be considered a nanny or it could be considered a motivational type of guide.
01:15:05.480And that's available pretty much right now.
01:15:12.320A self-help book is like that in a primitive way.
01:15:17.020I mean, it's essentially a spiritual guide, if you equate the movement of the spirit with forward movement through the world, faith-based forward movement through the world.
01:15:31.140And so this would be the next iteration of that in some sense.
01:15:35.800I mean, that's what we've been experimenting with this system that I mentioned that contains all the lectures that I've given and so forth.
01:15:42.060I mean, you can now ask it questions, which means it's a book, but it's a book personalized to your query.
01:15:51.400And the next iteration of that would be your corpus of information made available, you know, rented, whatever, alongside the corpus that the individual identifies with.
01:16:01.900You know, and again, on their side of it.
01:16:04.040So you're interfacing with theirs and they are interacting with what would be your reactions if you were to be sitting there in a consultation.
01:16:55.960You know, if I were to go to venture capitalists three years ago and they hadn't seen what ChatGPT was capable of, they would imagine me to be somewhat insane and say, well, first off, why are you anti-cloud?
01:17:36.880I lean more towards Bitcoin because of the way it was made and the way it operates.
01:17:43.860I ultimately see it wrapped up into a payment system.
01:17:47.180Well, it looks like the only alternative I can see to a central bank digital currency, which is going to be foisted upon us at any point.
01:17:58.320I mean, and I know you've done some work in crypto and then we'll get back to this gadget and its funding.
01:18:03.680I mean, as I understand it, please correct me if I'm wrong.
01:18:34.760And then, on top of that, you can encode an almost unlimited amount of data within a blockchain.
01:18:42.180So you can actually memorialize information that you want decentralized and never to go away.
01:18:49.800And some people are already doing that.
01:18:51.560Now, there are some technical limitations for the very large data formats.
01:18:56.660And if everybody starts doing it, it's going to slow down Bitcoin, but there would be a different type of blockchain that will arise from it.
01:19:04.040Right, so this is for permanent, incorruptible information storage.
01:19:11.340I've been thinking about doing that on something approximating the IQ testing front.
01:19:16.620You know, because people keep gerrymandering the measurement of general cognitive ability.
01:19:20.780But I can imagine putting together a sophisticated blockchain corpus of, let's say, general knowledge questions.
01:19:29.500And ChatGPT can generate those like mad, by the way.
01:19:33.160You can imagine a databank of 150,000 general knowledge questions.
01:19:37.900That would be on a blockchain, so nobody could muck about with the answers, and from it you could derive random samples for general ability tests that would be, well, 100% robust, reliable, and valid.
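A hedged sketch of how such a tamper-evident question bank might work: hash each item, fold the hashes into a Merkle-style root that would be memorialized on-chain, and draw reproducible random test forms. The questions and the chain itself are illustrative only.

```python
# Tamper-evident question bank: any edit to any item changes the root hash.
import hashlib
import random

questions = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote 'What is Life?'", "Erwin Schrodinger"),
    ("2 + 2 * 3 = ?", "8"),
]

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

leaves = [h(q + "|" + a) for q, a in questions]

def merkle_root(hashes):
    # Pairwise-hash upward until one root remains.
    while len(hashes) > 1:
        if len(hashes) % 2:                 # duplicate last leaf if odd count
            hashes.append(hashes[-1])
        hashes = [h(hashes[i] + hashes[i + 1]) for i in range(0, len(hashes), 2)]
    return hashes[0]

root = merkle_root(leaves[:])               # publish this; edits are detectable
print("root:", root[:16], "...")

test_form = random.Random(42).sample(questions, k=2)  # reproducible random form
print([q for q, _ in test_form])
```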
01:19:52.860So just the way Bitcoin stops fiat currency producers from inflating the currency, the same thing could happen on the knowledge front.
01:20:00.760So I guess that's the sort of thing that you're referring to.
01:20:04.300This is something I really believe in, because, you know, look at the Library of Alexandria, and look at how long it took.
01:20:13.440It was, what, Toledo, in Spain, where we finally started the spark again? And only because the Arab cultures held on to what was Greek knowledge, right?
01:20:24.780If we really look at when humanity fell into the Dark Ages, it was more or less around the Alexandrian period where that library was destroyed.
01:20:36.780And some of that is mythologized, but it certainly happened to a great extent.
01:20:41.020If it wasn't encoded in the Arab culture at that point during the Dark Ages, we wouldn't have had the Renaissance.
01:20:49.700And if you look at the early university that arose out of Toledo, you had rhetoric, you had logic, you had all these things that the ancient Greeks encoded, and it had been lost for over 1,000 years.
01:21:03.860I'm quite concerned, Jordan, that we could fall into that place again because things are inconvenient right now to talk about.
01:21:13.100Things are deemed not appropriate, or whatever, by whoever happens to be in the regime at that particular moment.
01:21:20.300So memorializing things in a blockchain is going to become quite vital.
01:21:25.360And I shudder to think what's going to happen to our history if we don't do this, if everybody doesn't decentralize their own knowledge.
01:21:38.920I mean, we already know history is written by the victors, right?
01:21:42.480Well, especially because it can be corrupted and rewritten, not only lost, right?
01:21:46.860It isn't the loss that scares me as much as the rewriting, right?
01:21:50.980Well, the loss concerns me too because we've lost so much.
01:21:55.660I mean, where would we have been if we had transitioned from the Greek logic and proto-scientists to the proto-alchemists
01:22:06.440and then immediately to a sort of Renaissance culture, and not gone through that 1,000, maybe 1,500-year waste of human energy?
01:22:21.740I mean, that's kind of what we were going through.
01:25:12.200You know, I mean, I asked it crazily difficult questions.
01:25:14.900You know, I asked it at one point, for example,
01:25:16.780if it could elaborate on the relationship between Roger Penrose's presumption of an analog between the theory of quantum uncertainty and measurement and Gödel's theorem.
01:26:00.000You know, in the story of Noah, there's this strange insistence that the survival of animals is dependent on the moral propriety of one man, right?
01:26:13.560Because in that strange story, Noah puts all the animals on the ark.
01:26:17.520And so there's a childish element to that story.
01:26:22.220And it harkens back to the story, to the verses in Adam and Eve where God tells Adam that he will be the steward of the world, of the garden.
01:26:36.060And that seems to me to be a reflection of the fact that human beings have occupied this tremendous cognitive niche that gives us an adaptive advantage over all creatures.
01:26:46.680And I would ask ChatGPT to speculate on the relationship between the story of Adam and Eve, the story of Noah, and the fact of mass extinction caused by human beings over the last 40,000 years, not least in the Western Hemisphere.
01:27:04.280Because you may know that when the first natives came across the Bering Strait and populated the Western Hemisphere, almost all the mammals that were human-sized or larger were extinct within 3,000 or 4,000 years.
01:27:23.180And so, you know, that's a very strange conglomeration of ideas, right?
01:27:27.800The idea that the survival of animals depends on the moral propriety of human beings.
01:27:33.400Well, that seems to me to be clearly the case.
01:49:53.820You know, and the dystopic stuff mostly comes from the fantasies within movies.
01:49:59.020But, you know, unfortunately, people weren't really reading the science fiction that predated a lot of this.
01:50:06.400Because I just feel like a lot of the good science fiction, a lot of Asimov, for example, really predicted the arc that we're on right now.
01:50:47.840So I have a super prompt in which a professor at an Ivy League university is mediating a debate between two parties on a subject of high controversy.
01:51:20.920And then you have somebody mediating it.
01:51:22.860And the professor's job is to challenge them on logical fallacies.
01:51:27.820And I present what a logical fallacy corpus looks like and how to deal with that.
01:51:34.880And it is phenomenal to see it break these almost schizophrenic personalities out of itself and conduct this hardcore debate.
01:51:45.600And then it's got to grade it at the end.
01:51:47.540It's got to grade who won the debate, and then the professor has to write, I think, a thousand words of bullet points on why that person won the debate.
01:52:00.900And you run this a couple of hundred times.
01:52:03.540I've done this quite a few, maybe a thousand times.
01:52:06.640And the elucidations and the insights that are coming out of this are just absolutely phenomenal.
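A sketch of that debate super prompt reduced to a template; the structure (personas, fallacy list, grading rubric) is the point, and the wording is illustrative rather than Brian's actual prompt.

```python
# Template for a professor-moderated debate prompt with fallacy checking.
FALLACIES = ["ad hominem", "straw man", "appeal to authority", "false dilemma"]

def debate_prompt(topic, position_a, position_b, words=1000):
    return f"""You will simulate three personas.
Professor: an Ivy League professor moderating a formal debate on "{topic}".
Debater A argues: {position_a}
Debater B argues: {position_b}

Rules:
1. Alternate turns: A, B, A, B, for four rounds.
2. After each turn, the Professor must flag any of these fallacies:
   {', '.join(FALLACIES)}.
3. After the final round, the Professor grades both debaters, declares a
   winner, and justifies the verdict in roughly {words} words of bullet points.
Begin."""

print(debate_prompt("localized private AI",
                    "personal AI must run locally",
                    "cloud AI is safer and more capable"))
```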
01:52:37.120Because in some sense what you're doing when you're setting up a super prompt like that is you're programming a process that's writing a book on the fly.
01:56:18.460So you're telling ChatGPT-4: create me a very complex prompt for Midjourney to create this particular type of artwork.
01:56:30.100So you're using one AI, whose strength is language, to instruct another AI, whose strength is creating images, with you as a collaborator, to create a profound new form of art.
01:56:49.680Now, when you start doing movies, you're talking about creating an entire movie with characters talking, with people that have never been around.
01:56:57.860I mean, the realm of creativity that is already here, not to the level of a full movie yet, but we're getting close.
01:57:05.680But within probably months, you can script an entire interaction.
01:57:10.840So you can see where this is kind of going.
01:57:13.400So let me leave it on maybe one of these final things.
01:57:30.160This is going to be something that we really need to start discussing as a society because we already have people using AI to simulate other individuals, both alive and dead.
01:57:42.660And, you know, patentability and copyright were a foundation of capitalism, because they gave you the ability to have at least some ownership of your invention.
01:57:57.980So if you've invested in yourself, invested in yourself as Jordan Peterson, and all of a sudden somebody simulates you on the web to a remarkable level, what rights do you have?
01:58:12.120And what courts is it going to be held in?
01:58:27.720Well, that's something we could talk about formulating at some point, because I certainly know people who are interested in that.
01:58:33.640Let's say also at the legislative level.