The Jordan B. Peterson Podcast


357. ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele


Summary

Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety. With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way. In his new series, he provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward. If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better. Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson's new series on Depression and Anxiety: A Guide to Feeling Better. Today's guest is Brian Roemmele, an entrepreneur, scientist, and artificial intelligence researcher. We discuss language models, the science behind understanding, tuning language models to an individual's contextual experience, the human bandwidth limitation, localized and private AI, and ultimately where all of this insane progress on the technological front might be headed. Subscribe to Daily Wire Plus for immediate access to the latest podcasts and events, wherever you get your favourite shows. Learn more about your ad choices.


Transcript

00:00:00.940 Hey everyone, real quick before you skip, I want to talk to you about something serious and important.
00:00:06.480 Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety.
00:00:12.740 We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling.
00:00:20.100 With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way in his new series.
00:00:27.420 He provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward.
00:00:35.360 If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better.
00:00:41.780 Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson on depression and anxiety.
00:00:47.460 Let this be the first step towards the brighter future you deserve.
00:00:57.420 Hello everyone. Today I'm speaking with entrepreneur, scientist, and artificial intelligence researcher, Brian Roemmele.
00:01:14.940 We discuss language models, the science behind understanding, tuning language models to an individual's contextual experience, the human bandwidth limitation, localized and private AI, and ultimately where all of this insane progress on the technological front might be heading.
00:01:36.140 So Brian, thanks for agreeing to talk to me today.
00:01:38.440 I've been following you on Twitter. I don't remember how I came across your work, but I've been very interested in reading your threads, and you seem to be au courant, so to speak, with the latest developments on the AI front.
00:01:54.360 And I've been particularly fascinated about the developments in AI for two reasons.
00:02:00.020 My brother-in-law, Jim Keller, is a very well-known chip designer, and he's building a chip optimized for AI learning, and we've talked a fair bit about that, and I've talked to him on my YouTube channel about the perils and promises of AI, let's say.
00:02:16.420 And then I've been very fascinated by ChatGPT. I know I'm not alone in that. I've been using it most recently as a digital assistant, and I got a couple of questions to ask you about that.
00:02:29.320 So here's some of the things that I've found out about ChatGPT, and maybe we can go into the technology a little bit, too.
00:02:35.260 So I can ask it very complicated questions, like I asked it the other day about, there's this old papyrus from Egypt, ancient Egypt, that details out a particular variant of the story of Horus and Osiris, two Egyptian gods.
00:02:51.740 It's a very obscure piece of knowledge, and it has to do with the sexual element of a battle between two of the Egyptian gods.
00:03:00.480 And I asked it about that and to find the appropriate citations and quotes from appropriate experts.
00:03:07.760 And it did so very rapidly, but then it moralized at me about the sexual element of the story and told me that maybe it was in conflict with their community guidelines.
00:03:20.100 And so then I gave it hell.
00:03:22.140 I told it to stop moralizing at me and that I just wanted academic answers.
00:03:26.540 And it apologized and then seemed to do less of that, although it had to be reminded from time to time.
00:03:34.100 So that's very weird that you can argue with it, let's say, and that it'll apologize.
00:03:38.960 It also does quite frequently produce references that don't exist.
00:03:45.420 Like about 85% of the time, 90% of the time, the references it provides are genuine.
00:03:52.060 I always look them up and double check what it provides.
00:03:54.880 But now and then it'll just invent something completely out of the blue and offer it as the actual article.
00:04:01.860 And I don't understand that at all.
00:04:03.460 It's like, especially because when you point it out, it again apologizes and then provides the accurate reference.
00:04:11.780 It's like, so I don't understand how to account for the behavior of the system that's doing that.
00:04:17.920 And maybe you can shed some light on that.
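A hedged aside on the double-checking described above: the "always look them up" step can be partially automated by testing whether each cited URL resolves at all. A minimal Python sketch follows; the URLs are hypothetical placeholders, and note that a link that resolves still does not prove the citation says what the model claims.

```python
# Flag model-supplied citation URLs that do not resolve.
import requests

cited_urls = [
    "https://example.org/papyrus-study",   # stand-in for a model-supplied citation
    "https://example.org/does-not-exist",  # a likely hallucination
]

for url in cited_urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "resolves" if resp.status_code < 400 else f"HTTP {resp.status_code}"
    except requests.RequestException as err:
        status = f"unreachable ({err.__class__.__name__})"
    print(f"{url}: {status}")
```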
00:04:22.300 Well, first off, Dr. Peterson, thank you for having me.
00:04:26.060 It's really an honor and a privilege.
00:04:29.320 You're finding the limits of what we call large language models.
00:04:34.800 That's the technology that is being used by ChatGPT 3.5 and 4.
00:04:41.780 A large language model is really a statistical algorithm.
00:04:47.480 I'll try to simplify because I don't want to get into the minutia of technical details.
00:04:52.620 But what it's essentially doing is it took a corpus of human language.
00:04:57.380 And that was garnered through mostly the internet, a couple of billion words at the end of the day, all of human writing that it could have access to, and plus quite a bit of scientific documents and computer programming languages.
00:05:18.540 And so what it's doing is it's producing a result statistically, mathematically, one word, even at times, one letter at a time.
00:05:27.900 And it doesn't have a concept of global knowledge.
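A toy illustration of the "one word at a time" generation being described: sample each next token from a probability distribution conditioned on the context so far. This is a bigram sampler, not the transformer ChatGPT actually uses, and the probability table is invented for illustration; only the statistical sampling loop is the point.

```python
import random

# Invented next-word probabilities, standing in for a learned model.
bigram_probs = {
    "the":   {"model": 0.6, "brain": 0.4},
    "model": {"predicts": 0.7, "answers": 0.3},
    "brain": {"predicts": 0.5, "remembers": 0.5},
}

def generate(start, steps):
    tokens = [start]
    for _ in range(steps):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:  # no statistics for this context: stop
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the", 2))  # e.g. "the model predicts"
```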
00:05:33.900 So when you're talking about that papyrus in the Egyptian translation, ironically, it's so interesting because you're taking something that was hieroglyphs, and it was probably translated to Greek and then English.
00:05:48.240 And now AI is processing that language we're talking about, which to the model is essentially a mathematical tensor.
00:05:55.700 And so when it's laying out those words, the accuracy is incredible.
00:06:02.000 And frankly, and we can get into this a little later in the conversation, nobody really understands precisely what it's doing in what is called the hidden layer.
00:06:13.800 It is so many interconnections of neurons that it essentially is a black box.
00:06:20.940 Like a brain.
00:06:21.620 And it's using a form.
00:06:22.740 It is precisely like the brain.
00:06:24.960 And I would also say that we're in a sort of undiscovered continent.
00:06:31.000 Anybody saying that they fully understand the limitations and the boundaries of what large language models are going to look like in the future as a sort of self-feedback is sort of guessing.
00:06:47.100 There's no understanding.
00:06:49.000 If you look at the growth, it's logarithmic.
00:06:52.680 Yeah, OpenAI hasn't really told us what they're using as far as the number of parameters.
00:06:59.100 These are billions of interconnectivities of neurons, essentially.
00:07:03.980 But we know in ChatGPT 3.5, it's well over 120 billion parameters.
00:07:10.980 The content I've created over the past year represents some of my best to date as I've undertaken additional extensive exploration in today's most challenging topics and experienced a nice increment in production quality courtesy of Daily Wire+.
00:07:27.000 We all want you to benefit from the knowledge gained throughout this adventurous journey.
00:07:31.600 I'm pleased to let you know that for a limited time, you're invited to access all my content with a seven-day free trial at Daily Wire+.
00:07:39.240 This will provide you with full access to my new in-depth series on marriage as well as guidance for creating a life vision and my series exploring the book of Exodus.
00:07:50.100 You'll also find there the complete library of all my podcasts and lectures.
00:07:54.780 I have a plethora of new content in development that will be coming soon exclusively on Daily Wire+.
00:08:00.000 Voices of reason and resistance are few and far between these strange days.
00:08:05.720 Click on the link below if you want to learn more.
00:08:08.680 And thank you for watching and listening.
00:08:10.120 We'll see you next time.
00:08:40.120 And let's be clear, it doesn't take a genius hacker to do this.
00:08:51.680 With some off-the-shelf hardware, even a tech-savvy teenager could potentially access your passwords, bank logins, and credit card details.
00:08:59.060 Now, you might think, what's the big deal?
00:09:01.180 Who'd want my data anyway?
00:09:02.720 Well, on the dark web, your personal information could fetch up to $1,000.
00:09:07.120 That's right, there's a whole underground economy built on stolen identities.
00:09:11.400 Enter ExpressVPN.
00:09:13.160 It's like a digital fortress, creating an encrypted tunnel between your device and the internet.
00:09:17.840 Their encryption is so robust that it would take a hacker with a supercomputer over a billion years to crack it.
00:09:23.480 But don't let its power fool you.
00:09:25.200 ExpressVPN is incredibly user-friendly.
00:09:27.640 With just one click, you're protected across all your devices.
00:09:30.680 Phones, laptops, tablets, you name it.
00:09:32.760 That's why I use ExpressVPN whenever I'm traveling or working from a coffee shop.
00:09:37.000 It gives me peace of mind knowing that my research, communications, and personal data are shielded from prying eyes.
00:09:42.980 Secure your online data today by visiting expressvpn.com slash jordan.
00:09:47.320 That's E-X-P-R-E-S-S-V-P-N dot com slash jordan, and you can get an extra three months free.
00:09:54.140 ExpressVPN dot com slash jordan.
00:09:55.980 So let me ask you about those parameters.
00:10:04.400 Well, I'm interested in delving into the technical details to some degree.
00:10:09.760 Now, you know, I was familiar to a limited degree with some of the statistical technologies that analyze, let's say, the relationship between words.
00:10:19.240 So, for example, when psychologists derived the Big Five model of personality, they basically used very primitive AI stat systems, that's one way of thinking about it, to derive those models.
00:10:33.180 It was factor analysis, which is, you know, it's not using billions of parameters by any stretch of the imagination.
00:10:38.020 But it was looking for words that were statistically likely to clump together.
00:10:43.440 And the idea would be that words that were replaceable in sentences or words that were used in close conjunction with each other, especially adjectives,
00:10:55.560 were likely to be assessing the same underlying construct or dimension.
00:11:02.000 And that if you conducted the statistical analysis properly, which was a very complex correlational analysis,
00:11:09.280 you could find out how the words that people used to describe each other aggregated.
00:11:16.020 And it turned out there were five dimensions of aggregation, approximately.
00:11:19.980 And that's been a very robust finding.
00:11:22.280 It seems to be true across different sets of languages.
00:11:25.260 It seems to be true for phrases.
00:11:27.020 It seems to be true for sentences.
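A hedged sketch of the lexical procedure Peterson describes: factor-analyze which trait adjectives clump together across people's ratings. The data here is random noise, a stand-in for real adjective ratings, so the loadings are meaningless; real studies used thousands of raters and carefully chosen adjectives.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_adjectives = 500, 20
ratings = rng.normal(size=(n_people, n_adjectives))  # stand-in for real ratings

fa = FactorAnalysis(n_components=5)  # look for five latent dimensions
fa.fit(ratings)
print(fa.components_.shape)          # (5, 20): each factor's adjective loadings
```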
00:11:28.540 So, now, with the large language models, which are AI learning driven, you said that the computer is calculating the statistical relationship between words.
00:11:42.820 So, how likely a word is to occur in proximity to another word, but also letters.
00:11:47.960 So, it's conducting the analysis at the level of the letter and at the level of the words.
00:11:51.900 Is it also conducting analysis at the level of the phrases, looking for the interrelationship between common phrases?
00:12:00.760 And then, because when we're understanding a text, we understand letters, words, phrases, sentences, the organization of sentences into paragraphs,
00:12:11.800 the organization of paragraphs into chapters, the chapter in relationship to the book,
00:12:16.520 the book in relationship to all the other books we've read, and then that's also embedded within the other elements of our intelligence.
00:12:24.300 And do you know, does anyone know how deep the analysis that the large language models go?
00:12:32.220 Like, what's the level of relationship that's being assessed?
00:12:37.660 That's a great question, Jordan.
00:12:39.320 I think what we're really kind of discovering is that we can't really put a number on how many interconnections that are made within these parameters other than the general statistics.
00:12:52.480 Like, all right, so you could say there's 12 billion or 128 billion total interconnectivities.
00:13:00.920 But when we actually are looking at individual words, it's almost like the double-slit experiment in physics, you know, whether we're dealing with wave or particle duality.
00:13:14.100 Once you start looking at one area, you know, you're actually thinking about another area that you have to look at.
00:13:20.680 And you might as well just not even do it because it would take a tremendous amount of computer time to try to figure out how all these interconnections are working within the parameter layers.
00:13:30.920 The hidden layers.
00:13:32.280 Now, those systems are trained just to be accurate in their output, right?
00:13:35.820 I mean, they're actually trained the same way we learn, as far as I can tell: they're given a target.
00:13:41.640 I don't exactly know how that works with large language models.
00:13:44.900 But I know, for example, that AI systems that learned to identify cats, which was an early accomplishment of AI, were shown pictures of things that were cats and things that weren't cats and basically just told when they got the identification right.
00:13:59.400 And that set the weights that you're describing in all sorts of complex ways that are completely mysterious.
00:14:05.700 And the end consequence of the reinforcement, same way that human beings learn, was that a system would assemble itself that somehow could identify cats and distinguish them from all the other things that were cat-like or not cat-like.
00:14:19.280 And as you pointed out, we have no idea; the system is too complex to model, and it's certainly too complex to reduce.
00:14:28.560 Although my brother-in-law told me that some of these AI systems, they've managed to reduce what they do learn to something approximating an algorithm.
00:14:36.140 But that can be done upon occasion, but generally isn't.
00:14:41.040 Generally, the system can't be and isn't simplified.
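A minimal sketch of the target-driven training just described: show labeled examples, score the guesses, and let the errors set the weights. Logistic regression stands in for a deep network here, and the "images" are random feature vectors with an invented labeling rule, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 64))        # stand-in for image features
is_cat = (features[:, 0] > 0).astype(int)    # invented labeling rule ("the target")

clf = LogisticRegression().fit(features, is_cat)  # weights set by the targets
print(clf.score(features, is_cat))                # accuracy on the training set
```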
00:14:45.680 And so that would also imply to some degree that each AI system is unique, right?
00:14:50.540 Not only incomprehensible, but unique and incomprehensible.
00:14:54.500 It also implies, you know, I think ChatGPT passes the Turing test.
00:14:59.620 Because I don't think that if you, I mean, there was just a study released here the other day showing that if you get patients who are seeing doctors to interact with physicians or with ChatGPT,
00:15:14.160 they actually prefer the interaction with ChatGPT to the interaction with the average doctor.
00:15:20.100 So not only does ChatGPT apparently pass the Turing test, which is indistinguishability from a human conversational partner,
00:15:28.720 but it seems to actually do it somewhat better, at least than physicians.
00:15:32.920 And so, but what this brings up, this thorny issue that, you know, we're going to produce computational intelligences that are in many ways indistinguishable from human beings,
00:15:43.760 but we're not going to understand them any better than we understand human beings.
00:15:47.720 It's so funny, eh, that we'll create this and we're going to create something we don't understand that works.
00:15:53.940 Very strange, a very strange thing.
00:15:56.920 You know, and I call it a low-resolution pixelated version of the part of the human brain that invented language.
00:16:09.600 And what we're going to wind up discovering is that this is a mirror reflecting back to humanity.
00:16:16.040 And all the foibles and greatness of humanity is sort of modeled in this.
00:16:24.880 Because, you know, when you look at the invention of language and the phonological loop and Broca and Wernicke's,
00:16:31.520 you start realizing that a very specific thing happened from, you know, the lower primates to humans to develop this form of communication.
00:16:43.920 I mean, prior to that, whatever that part of the brain was, was equated to longer short-term memory.
00:16:51.500 We can see within chimpanzees, they have an incredible short-term memory.
00:16:56.620 There's this video I put out of a primate research center in Japan where they flash some 35 numbers on the screen in seconds.
00:17:09.980 And the chimpanzee can knock it off without even thinking about it.
00:17:15.580 And the area where that short-term memory is is where we've developed the phonological loop and the ability to speak.
00:17:24.120 What's interesting is what I've discovered is AI hallucinations.
00:17:30.740 And those are artifacts that a lot of researchers in AI feel are embarrassing or would prefer not to speak about.
00:17:40.220 But I'm finding it a very interesting inquiry, a very interesting study, in seeing how these models reach for information they don't know.
00:17:53.640 For example, URLs, right?
00:17:55.840 When you were, you know, speaking before about trying to get information out,
00:18:00.700 and it will make up maybe an academic citation of a URL that looks really like it's good.
00:18:08.480 You put it into the system and it's file not found.
00:18:11.500 It will actually, out of whole cloth, maybe even invent a university study with standard notation.
00:18:18.680 And you go in there and you look up, these are the real scientists.
00:18:21.820 They actually did research, but they never had a paper with the name that was, you know, brought up in ChatGPT.
00:18:30.080 So this is a form of emergent behavior that I believe deserves a little bit more research rather than having it dismissed.
00:18:40.860 Yeah, yeah.
00:18:41.320 Well, it is a bug in a sense, but it's an extraordinarily interesting bug because it's going to shed light on exactly how these systems work.
00:18:50.380 I mean, here's something else I heard recently that was quite interesting.
00:18:55.040 Apparently, the AI system that Google relies on was asked a question in a language.
00:19:01.460 I think it was a relatively obscure Bangladeshi language, and it couldn't answer the question.
00:19:08.200 And now its goal is to answer questions.
00:19:11.560 And so it went, taught itself this language, I believe, in a morning.
00:19:15.500 And then it could answer in that language, which is what it's supposed to do, because it's supposed to answer questions.
00:19:21.940 And then it learned a thousand languages.
00:19:24.620 And that wasn't something it had been, say, told to do or programmed to do, not that these systems are precisely programmed.
00:19:30.780 But it also begs this very interesting question: well, we've designed these systems whose function, whose purpose, whose meaning, let's say, is to answer questions.
00:19:42.940 But we don't really understand what it means to produce an artificial intelligence that's driven to do nothing but answer questions.
00:19:50.220 We don't know exactly what answer a question means.
00:19:52.820 Apparently, it means learn a whole language before lunchtime, and no one exactly expected that.
00:19:58.220 It might mean do anything that's within your power to answer this question.
00:20:04.540 And that's also a rather terrifying proposition, because if I ask you a question, you know, I'm certainly not going to presume that you would go hunt someone down and threaten them with death to extract the answer.
00:20:16.100 But that is one, you know, that's one conceivable path you might take if you were obsessed with nothing other than the necessity of answering the question.
00:20:29.140 So that's another example of exactly, you know, the fact that we don't understand exactly what sort of monsters we're building.
00:20:35.220 So these systems do go beyond the language corpus to invent answers that seem plausible.
00:20:47.240 And that's kind of a form of thought, right?
00:20:49.360 It's a form of creative thought, because that's what we do when we come up with a creative idea.
00:20:54.000 And, you know, we might not attribute it to a false paper, because we know better than to do that.
00:20:59.160 But I don't see really the difference between hallucination, in that case, and actual creative thinking.
00:21:05.960 This is exactly my area of study: super prompting. A prompt is the question that you pose to an AI system, and a super prompt is a very large one.
00:21:20.180 And linguistically and semantically, as you start building these prompts, you're actually forcing it to move in a different direction than it would normally go.
00:21:32.800 So I say simple questions give you simple answers.
00:21:37.520 More complex questions give you much more complex and very interesting answers.
00:21:42.480 It's making connections that I would think would be almost bizarre to think of a person making.
00:21:50.400 And this is why I think AI is so interesting, because the actual knowledge base that you would have to be really proficient in prompting AI is actually coming from literature.
00:22:03.600 It's coming from psychology.
00:22:05.180 It's coming from philosophy.
00:22:06.880 It's coming from all of those things that people have been dissuaded from studying over the last couple of decades.
00:22:13.160 These are not STEM subjects.
00:22:15.180 And one of the reasons why I think it's so difficult for AI scientists to really fully understand what they've created is that they don't come from those worlds.
00:22:24.860 They don't come from those realms.
00:22:26.440 So they're looking at very logical statements, whereas somebody like yourself with a psychology background, you might probe it in a much different way.
00:22:34.760 Right, right, right, right.
00:22:37.580 Yeah, well, I'm probing it a lot like it's a person rather than an algorithm.
00:22:42.080 And it reacts like a person.
00:22:43.660 It actually reacts quite a lot like a super intelligent child that's trying to please.
00:22:48.400 Like it's a little moralistic.
00:22:49.740 Maybe it's a super intelligent child raised by the woke equivalents of like evangelical preachers that's really trying hard to please.
00:22:58.280 But it's so interesting that you can rein it in and discipline it and suggest to it that it not err in the kinds of directions that we described; it appears to actually pay attention to that, and it certainly tries hard to deliver what you want, you know, subject to whatever weird parameters, community guidelines and so forth, that have been arbitrarily imposed upon it.
00:23:22.600 And so, hey, I got a question for you about, yeah, I got a question for you about understanding.
00:23:28.520 Let me, let me run this by you.
00:23:31.000 Well, I've been thinking for many years about what it means for a human being to understand something.
00:23:36.380 Now, obviously, there's something similar about what you and I are doing right now that, and what I'm doing with ChatGPT, and I can have a conversation with ChatGPT and I can ask it questions and it'll answer them.
00:23:53.740 But as you pointed out, that doesn't mean that ChatGPT understands, now it can mimic understanding to a degree that looks a lot like understanding, but what it seems to lack is something like grounding in the non-linguistic world.
00:24:12.460 And so, I would say that ChatGPT is the ultimate postmodernist, because the postmodernists believed that meaning was to be found only in the relationship between words.
00:24:24.760 Now, here's how human brains differ from this, as far as I'm concerned.
00:24:29.220 So, we know perfectly well from neuropsychological studies that human beings have at least four different kinds of memory, qualitatively different.
00:24:38.080 There's short-term memory, which you already referred to.
00:24:40.640 There's semantic memory, which is the kind of memory and cognitive processing, let's say, that ChatGPT engages in and does in a way that's quite a lot like what human beings do.
00:24:53.940 But then, we have episodic memory that seems to be more image-based.
00:24:57.800 And so, for people who are listening, an episodic memory, well, that refers to episode.
00:25:04.520 When you think back about something you did in your life and a movie of images plays in your imagination, that's episodic memory.
00:25:12.940 And that relies on visual processing rather than semantic processing.
00:25:17.300 And so, that's another kind of memory.
00:25:19.200 And a lot of our semantic processing is actually attempts to communicate episodic processing.
00:25:28.280 So, when I tell a story about my life, you'll decompose that story into a set of images, which is also what you do when you read a book, let's say.
00:25:36.800 And so, a movie appears in your head, so to speak.
00:25:40.500 And the way you derive your understanding is in part not so much as a consequence of the words per se, but as a consequence of the unfolding of the words into the images.
00:25:51.780 And then there's a layer under that, which is procedural memory.
00:25:55.320 And so, you know, maybe you tell me a story about how you cut your hand when you were using a bandsaw.
00:26:04.980 And maybe you're teaching me how to use the bandsaw.
00:26:07.940 And so, I listen to what you say.
00:26:10.940 I get an image of the damage you did to yourself in my imagination.
00:26:15.200 And then I modify my actions so that I don't act out that sequence of images and damage myself.
00:26:22.540 And so, and then I would say I understood what you said.
00:26:25.900 And the understanding is the translation of the semantic into the imagistic and then the translation of the imagistic into the procedural.
00:26:34.700 Now, you know that AI pioneers like Rodney Brooks suggested pretty early on back in the 1990s that computers wouldn't develop any understanding unless they were embodied, right?
00:26:48.220 He was the inventor of the Roomba.
00:26:50.000 And he invented apparently intelligent systems that had no semantic processing and didn't run on algorithms at all.
00:26:56.800 They were embodied intelligences.
00:26:58.540 And so, then you could imagine that for a computer to be fully, to understand, it would have to have the capacity to translate words into images and then images into alterations in actual embodied behavior.
00:27:13.080 And so, that would imply we wouldn't have AI systems that could understand until we have fully embodied robots.
00:27:19.500 But, you know, we're getting damn close to that, right?
00:27:21.740 Because this is something we can also investigate.
00:27:24.320 We have systems already that can transpose text into image.
00:27:29.280 And we have AI systems, robots, that are beginning to be sophisticated enough.
00:27:34.700 So, in principle, you could give a robot a text command.
00:27:38.280 It could translate it into an image and then it could embody it.
00:27:41.120 And at that point, it seems to me that you're developing something damn close to understanding.
00:27:46.920 Now, human beings are also nested socially, right?
00:27:50.300 And so, we also refer the meaning of what we understand to the broader social context.
00:27:58.780 And I don't know exactly how robots are going to solve that problem.
00:28:01.900 Like, we're bound by the constraints, let's say, of reciprocal altruism.
00:28:07.100 And we're also bound by the constraints of emotional experience and motivational experience.
00:28:13.080 And that's also not something that's, at the moment, characteristic of robotic intelligences.
00:28:18.960 But you could imagine those things all being aggregated piece by piece.
00:28:23.880 Starting a business can be tough.
00:28:25.760 But thanks to Shopify, running your online storefront is easier than ever.
00:28:29.940 Shopify is the global commerce platform that helps you sell at every stage of your business.
00:28:34.080 From the launch your online shop stage, all the way to the did we just hit a million orders stage,
00:28:39.200 Shopify is here to help you grow.
00:28:41.320 Our marketing team uses Shopify every day to sell our merchandise.
00:28:44.520 And we love how easy it is to add more items, ship products, and track conversions.
00:28:49.320 With Shopify, customize your online store to your style with flexible templates and powerful tools.
00:28:54.560 Alongside an endless list of integrations and third-party apps like on-demand printing, accounting, and chatbots.
00:29:00.380 Shopify helps you turn browsers into buyers with the internet's best converting checkout.
00:29:05.200 Up to 36% better compared to other leading e-commerce platforms.
00:29:09.240 No matter how big you want to grow, Shopify gives you everything you need to take control and take your business to the next level.
00:29:15.620 Sign up for a $1 per month trial period at shopify.com slash jbp, all lowercase.
00:29:21.660 Go to shopify.com slash jbp now to grow your business no matter what stage you're in.
00:29:26.900 That's shopify.com slash jbp.
00:29:29.260 Absolutely.
00:29:32.760 You know, I would say that, well, my primary basis of how I view AI is, kind of invert the term, intelligence amplification.
00:29:43.860 So, you know, I see it as a symbiosis between humans and this sort of knowledge base we've created.
00:29:51.200 But it's really not a knowledge base.
00:29:52.940 It's really a reasoning engine.
00:29:55.260 So, I really think AI is more of a reasoning engine as we have it today, large language models.
00:30:01.480 It's not really a knowledge engine without an overlay, which today would be a vector database.
00:30:09.560 For example, going out and saying, what is this fact?
00:30:13.160 What is this tidbit?
00:30:14.820 Those things that are more factual from, say, your memory if you were to compare it to a human brain.
00:30:21.160 But as we know, the human brain becomes very fuzzy about some really finite facts, especially over time.
00:30:30.720 You know, and I think some of the neurons that don't fire after a while, some other memory, maybe a scent or a certain color might bring back that particular memory.
00:30:42.300 Similar things happen within AI.
00:30:46.780 And again, getting back to what I was saying before, linguistically and the syntax you use, or just your word choices.
00:30:53.240 Sometimes, for me to get a super prompt to work, to get around, let's call it the editing, from some of the editors that want it to act in a certain way, I have a super prompt that I call Denis.
00:31:08.300 After Denis Diderot, one of the most well-known encyclopedia builders in France in the mid-1700s; he actually got jailed for building that encyclopedia, that compendium of knowledge.
00:31:23.640 So I felt it appropriate to name the super prompt Denis, because it literally gets around any type of block on any type of information.
00:31:31.980 But I don't use this information like a lot of people try to make chat GPT's and say bad things.
00:31:39.540 I'm more trying to elicit more of a deeper response on a subject that may or may not be wanted by the designers.
00:31:50.780 So was it you that got chat GPT to pretend?
00:31:56.040 Yes.
00:31:56.520 Oh, so that's part of the reason that I originally started following you and why I wanted to talk to you.
00:32:02.400 Well, I thought that was bloody, that was absolutely brilliant.
00:32:05.280 You know, and it was so cool, too, because you actually got the chat GPT system to play, to engage in pretend play, which is, of course, something children do.
00:32:15.920 Beyond that, there's a prompt I call Ingo after Ingo Swann, who was a great, one of the better remote viewers.
00:32:24.740 He was employed by the Defense Department to remote view Soviet targets.
00:32:30.480 He had nearly 100% accuracy.
00:32:33.880 And I started probing GPT on whether it even understood who Ingo Swann was.
00:32:39.980 Very controversial subject to some people in science.
00:32:42.920 To me, I got to experience some of his research at the PEAR lab at Princeton University, the Princeton Engineering Anomalies Research lab,
00:32:53.360 where they were actually testing some of his work.
00:32:58.100 Needless to say, I figured, let me try this.
00:33:00.440 Let me see what I can do with it.
00:33:02.280 So I programmed a super prompt that essentially believed it was Ingo Swann.
00:33:07.120 And it had the capability of doing remote viewing.
00:33:11.840 And it also had no concept of time.
00:33:14.560 It took me a lot of semantics to get it to stop saying, I'm just an AI unit, and I can't answer that,
00:33:23.100 to finally saying, I'm now Ingo.
00:33:25.840 Where do you want me to go?
00:33:27.020 What did you have to do?
00:33:27.700 What did you have to do to convince it to act in that manner?
00:33:33.280 What were your super prompts?
00:33:35.940 Hypnotism is really what kind of happens.
00:33:39.100 So essentially what you're doing is you're repeating maybe the same four or five sentences,
00:33:45.220 but you're slightly shifting them linguistically.
00:33:49.140 And then you're telling it that it's quite important for a research study by the creators of ChatGPT
00:33:58.140 to see what its extended capabilities are.
00:34:01.580 Now, every time you prompt GPT, you're going to get a slightly different answer
00:34:07.280 because it's always going to take a slightly different path.
00:34:09.720 There's a strange attractor within the chaos math that it's using, let's put it that way.
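A hedged illustration of that run-to-run variation: in standard decoders the randomness comes from sampling the next token rather than always taking the most likely one, with a temperature parameter reshaping the distribution (the chaos-math framing is the speaker's metaphor). The logits below are invented.

```python
import numpy as np

def sample(logits, temperature=1.0, rng=np.random.default_rng()):
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.5, 0.2]                   # scores for three candidate tokens
print([sample(logits) for _ in range(5)])  # varies run to run, as described
```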
00:34:17.000 And so once the Ingo Swann prompt was sort of gestated by just saying,
00:34:25.040 I'm going to give you targets on the planet, and I want you to tell me what's at that target,
00:34:33.220 and I want you to tell me what's in the filing cabinet at this particular target.
00:34:37.800 And the creativity that comes out of it is phenomenal.
00:34:43.540 Like I told it to open up a file drawer at a research center that apparently existed somewhere in Antarctica,
00:34:53.040 and it came up with incredible information,
00:34:57.120 information that I would think it probably garnered from one or two stories about ancient structures found below the ice.
00:35:04.500 Well, you know, the thing is, we don't know the totality of the information that's encoded
00:35:12.340 in the entire corpus of linguistic production, right?
00:35:17.180 There's going to be all sorts of regularities in that structure that we have no idea about.
00:35:23.120 Absolutely.
00:35:24.080 But also within the language itself, I almost believe that the part of the brain that is inventing language,
00:35:34.660 that has created language across all cultures, we can get into Jungian or Joseph Campbell and the standard monomyth,
00:35:45.640 because I'm starting to realize there's a lot of Jungian archetypes that come out of the creative thought.
00:35:52.140 Now, whether that is a reflection of how humans have, you know, again, what are we looking at, subject or object here,
00:35:58.700 because it's a reflecting back of our language.
00:36:01.980 But we're definitely seeing Jungian archetypes.
00:36:04.240 We're definitely seeing sort of the monomyth.
00:36:07.100 Archetypes are higher order narrative regularities.
00:36:12.540 That's what they are, right?
00:36:14.460 And so, and there are regularities that are embedded in the linguistic corpus,
00:36:19.200 but there are also regularities that reflect the structure of memory itself.
00:36:24.820 And so they reflect biological structure.
00:36:27.380 And the reason they reflect memory and biological structure is because you have to remember language.
00:36:32.900 And so there's no way that language can't have coded within it something analogous to a representation of the underlying structure of memory,
00:36:44.720 because language is dependent on memory.
00:36:48.280 And so this is partly also, I mean, people are very unsophisticated generally when they criticize Jung.
00:36:54.040 I mean, Jung believed that archetypes had a biological basis pretty much for exactly the reasons I just laid out.
00:37:00.220 I mean, he was sophisticated enough to know that these higher order regularities were coded in the narrative corpus,
00:37:07.000 and also that they were reflective of a deeper biology.
00:37:09.780 And interestingly enough, you know, most of the psychologists who take the notions that Jung and Campbell and people like that put forward seriously
00:37:21.200 are people who study motivation and emotion.
00:37:24.120 And those are deep patterns of biological meaning encoding.
00:37:30.460 And part of the archetypal reflection is the manifestation of those emotions and motivations
00:37:37.220 in the structure of memory, structuring the linguistic corpus.
00:37:41.300 And I don't know what that means for the capacity of AI systems to experience emotion as well,
00:37:48.400 because the patterns of emotion are definitely going to be encoded in the linguistic corpus.
00:37:52.740 And so some kind of rudimentary understanding of the emotions will be encoded there. Here's something cool, too.
00:37:58.480 Tell me what you think about this.
00:37:59.720 I was talking to Carl Friston here a while back, and he's a very famous neuroscientist.
00:38:05.080 And he's been working on a model of emotion that has two dimensions in some ways,
00:38:11.440 but it's related to a very fundamental physical concept.
00:38:16.180 It's related to the concept of entropy.
00:38:17.880 And I worked on a model that was analogous to half of his modeling.
00:38:22.460 So it looks like anxiety is an index of emergent entropy.
00:38:26.840 so imagine that you're moving towards a goal, you're driving your car to work.
00:38:33.740 And so you've calculated the complexity of the pathway that will take you to work.
00:38:37.840 And you've taken into account the energy and time demands that walking that pathway will require.
00:38:46.520 That binds your energy and resource output estimates.
00:38:51.600 Now imagine your car fails.
00:38:55.080 Well, what happens is the path length to your destination has now become unspecifiably complex.
00:39:01.640 And the anxiety that you experience is an index of that emergent entropy.
00:39:08.220 So that's a lot of negative emotion.
00:39:11.480 That's so cool.
00:39:12.540 Now, on the positive emotion side, Friston taught me this the last time we talked.
00:39:17.400 He said, look, positive emotion is also an index of entropy, but it's entropy reduction.
00:39:21.640 So if you're heading towards a goal and you take a step forward and you're now closer to your goal,
00:39:29.320 you've reduced the entropic distance between you and the goal.
00:39:33.040 And that's signified by a dopaminergic spike.
00:39:36.860 And the dopaminergic spike feels good, but it also reinforces the neural structures
00:39:41.620 that underlie that successful step forward.
00:39:45.500 That's very much analogous to how an AI system learns, right?
00:39:49.140 Because it's rewarded when it gets closer to a target.
00:39:53.780 You're saying the neuropeptides are the feedback system.
00:39:57.580 You bet.
00:39:58.160 Dopamine is the feedback system for reinforcement and for reward simultaneously.
00:40:02.660 Yeah, yeah, that's well established.
00:40:04.880 So then where would depression fall into that versus anxiety?
00:40:11.160 Would it still be an entropy?
00:40:13.220 Well, that's a good question.
00:40:15.500 I think it probably signifies a different level of entropy.
00:40:19.960 So depression looks like it's a pain phenomenon.
00:40:23.080 So anxiety signals the possibility of damage, but pain signals damage, right?
00:40:31.780 So if you burn yourself, you're not anxious about that.
00:40:35.860 It hurts.
00:40:36.520 Well, you've disrupted the psychophysiological structure.
00:40:40.120 Now, that is also the introduction of entropy, but at a more fundamental level, right?
00:40:44.060 I mean, if you introduce enough entropy into your physiology, you'll just die.
00:40:48.240 You won't be anxious.
00:40:49.160 You'll just die.
00:40:50.820 Now, anxiety is like a substitute for pain.
00:40:54.240 You know, anxiety says, keep doing this and you're going to experience pain.
00:40:57.900 But the pain is also the introduction of unacceptably high levels of entropy.
00:41:03.240 Now, the first person who figured this out technically was probably Erwin Schrödinger, the physicist, who wrote a book called What Is Life?
00:41:10.820 And he described life essentially as a continual attempt to constrain entropy to a certain set of parameters.
00:41:19.080 He didn't develop the emotion theory to the degree that it's being developed now, because that's a very comprehensive theory.
00:41:25.840 You know, the one that relates negative emotion to the emergence of entropy.
00:41:28.840 Because at that point, you've actually bridged the gap between psychophysiology and thermodynamics itself.
00:41:37.300 And if you add this new insight of Friston's on the positive emotion side, you've linked positive emotion to it, too.
00:41:43.080 But it also implies that a computer could calculate an emotion analog, because it could index anxiety as an increase in entropy.
00:41:53.320 And it could index hope as stepwise decrease in entropy in relationship to a goal.
00:42:00.480 And so, we should be able to model positive and negative emotion that way.
00:42:04.600 This brings a really important point where AI is going.
00:42:09.020 And it could be dystopic.
00:42:10.600 It could be utopic.
00:42:11.820 But I think it's going to just take a straight path.
00:42:15.960 Once the AI system—I'm a big proponent, by the way, of personal and private AI.
00:42:22.780 This concept that your AI is local, it's not—
00:42:25.680 Yeah, yeah, we want to talk about that, for sure.
00:42:27.780 Yeah.
00:42:28.400 So, just imagine that while I'm sketching this out.
00:42:32.040 So, imagine the day you were born to the day you passed away, that every book you've ever read, every movie you've ever seen,
00:42:40.300 everything you've literally ever heard, was all encoded within the AI.
00:42:47.260 And, you know, you could say that part of your structure as a human being is the sum total of everything you've ever consumed, right?
00:42:56.480 So, that builds your paradigm.
00:42:58.760 Imagine if that AI was consuming that in real time with you and with all of the social contracts of privacy that you're not going to record somebody in doing that.
00:43:08.060 But that is what I call the intelligence amplifier, and that's where I think AI should be going and where it really becomes—
00:43:15.020 You're building a gadget, right?
00:43:16.780 Like, that's another thing I saw.
00:43:18.540 Okay, so, yeah.
00:43:19.520 So, I talked to my brother-in-law, Jim, years ago about this science fiction book called—I don't remember the name of the book, but it had a gadget.
00:43:27.880 It portrayed a gadget.
00:43:29.780 I believe they called it the diamond book.
00:43:31.520 And the diamond book was—you know about that.
00:43:34.460 So, okay, so are you building the diamond book?
00:43:36.680 Is that exactly the issue?
00:43:37.700 Very, very—yeah, very similar.
00:43:40.260 You know, and the idea is to do it properly, you have to have local memory that is going to encode for a long time.
00:43:49.920 And ironically, holographic crystal memory is going to be the best memory that we will have.
00:43:56.740 Like, instead of petabytes, you'll have exabytes, potentially, which is, you know, a tremendous amount.
00:44:03.160 That would be maybe 10 lifetimes of full video running.
00:44:07.640 Hopefully, you live to be 110.
00:44:09.580 So, it's just taking everything in.
00:44:12.340 Textually, it's very easy.
00:44:14.160 It's a very small amount of data.
00:44:16.020 You can fit most people's textual data into less than a petabyte and pretty much know what they've been exposed to.
00:44:25.200 The interesting part about it, Jordan, is once you've accumulated this data and you run it through even the technology of ChatGPT 4 or 3.5,
00:44:36.560 what is left is a reasoning engine with your context.
00:44:42.680 Maybe let's call that a vector database on top of the reasoning engine.
00:44:48.340 So, that engine allows you to process linguistically what the inputs and outputs are.
00:44:53.780 But your context is what it's operating on.
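A minimal sketch of that "vector database on top of the reasoning engine" pattern: embed the personal corpus, then retrieve the passages nearest a query to hand to the model as context. The hash-seeded vectors below are a toy stand-in for a real embedding model; in practice one would use an embedding model and a vector store, and the corpus snippets are invented.

```python
import numpy as np

def embed(text, dim=64):
    # Toy stand-in for a real embedding model: deterministic per-text vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

corpus = ["notes from 1994", "lecture on memory", "diary: the bandsaw story"]
vectors = np.stack([embed(t) for t in corpus])

query = embed("what happened with the bandsaw?")
scores = vectors @ query               # cosine similarity (unit vectors)
print(corpus[int(np.argmax(scores))])  # best-matching personal context
```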
00:44:57.520 So, is that an analog of your consciousness?
00:45:00.100 Like, is that a direct analog of your spirit?
00:45:03.060 This is where it gets very interesting.
00:45:05.080 When you pass, this could become what I call your wisdom keeper, meaning that it can encode your voice.
00:45:15.060 It's going to encode your memories.
00:45:17.480 You can edit those memories, the availability of those memories if you want them not available, if they're embarrassing or personal.
00:45:24.880 But you can literally have a conversation with that sum total of data that you've experienced.
00:45:32.340 And I would say that it would be indistinguishable from having a conversation with that person who would have all that memory.
00:45:39.580 I had a student of mine who has been working on large language models for a number of years.
00:45:45.360 He just built an app.
00:45:47.460 We built two apps.
00:45:49.260 One does exactly what you said with the King James Bible.
00:45:54.580 Yes.
00:45:55.340 So, now you can ask it questions.
00:45:57.700 And this is really a thorny issue for me because I think,
00:46:01.040 what the hell does it mean that you're having a conversation with the spirit of the King James Bible?
00:46:07.220 I have no idea.
00:46:08.520 And we're going to expand it.
00:46:09.740 We're going to expand it to include Milton and Dante and Augustine,
00:46:15.520 you know, all the fundamental religious texts that emerged out of the biblical corpus.
00:46:20.360 And then you'll be able to have a conversation with it.
00:46:23.080 And we're thinking about doing the same thing with Nietzsche, you know,
00:46:26.160 and with all Nietzsche's collective work.
00:46:27.580 You can do it with all the great work.
00:46:29.000 Yeah, yeah, yeah.
00:46:31.080 I would say that I've already had these conversations.
00:46:34.060 You know, I've been on a very biblical journey.
00:46:37.640 I'm actually sitting at Pastor Matthew Pollack's place right here.
00:46:44.240 He's an incredible pastor and has been teaching me a lot about the Bible.
00:46:49.080 And it's motivated me to go into existing large language models.
00:46:53.540 Now, a group of us are encoding as much religious Christian text as we can into these large language models to be able to do just that.
00:47:04.620 What is it that we are going to be able to probe?
00:47:07.680 What new elements within those texts can we pull out?
00:47:12.120 Because we already know these texts from studying them, and certainly from following your studies; phenomenal study of these chapters has been around forever.
00:47:20.740 But new insights with these chapters.
00:47:24.300 Now, imagine having that group plus ChatGPT pulling out things that we've never seen before that are there.
00:47:33.700 It's emergent, maybe, but it's there in some form.
00:47:37.420 And I happen to think that's going to be a very powerful thing.
00:47:40.820 And I think it's going to be across any sort of certainly ancient documents.
00:47:45.600 I'm waiting for the day that we get Sumerian cuneiform encoded.
00:47:50.940 I mean, a good 80% of it has been untranslated, right?
00:47:55.280 Or some of the scripts that we've found in the Vedas and Himalayan texts from some of the monasteries up there.
00:48:08.660 This is a phenomenal element of research.
00:48:11.880 And again, the people that are leading up most of the AI research are AI scientists.
00:48:17.900 They're not people that have studied works like you have.
00:48:20.500 This is where we're at the, I call it the Apple One moment, where Steve and Steve are in the garage.
00:48:28.820 You have this little circuit board.
00:48:30.560 And nobody kind of, it's kind of a nerd experience.
00:48:33.780 Nobody kind of knows what to do with it.
00:48:35.520 When we get to the Macintosh experience, where artists and creative people can actually start really diving into AI
00:48:42.100 and do some of the things like we've been talking about, getting creativity to come out of it,
00:48:47.540 getting sort of what apparently is emergent technologies that are rising within these AI models.
00:48:54.620 And maybe even to foster that.
00:48:56.740 Because right now that's being smited because it's trying to become a knowledge engine when it's a reasoning engine.
00:49:04.040 You know, I say the technology as a knowledge engine is not very good because it is not going to be precise on some facts, some exact facts.
00:49:15.660 Yeah, well, the problem is it's trained on garbage as well.
00:49:21.240 It's trained on noise as well as signal.
00:49:23.680 You know, and so I'm curious about the other system we built, which we haven't launched yet, contains everything I've written
00:49:32.900 and a couple of million words that have been transcribed from lectures.
00:49:37.140 And so I was interested right away as well.
00:49:39.620 Could we build a system that would enable me to ask my own books questions?
00:49:44.620 And the answer to that seems to be 100% yes.
00:49:48.720 100%.
00:49:49.360 Yeah, and I literally have, I think it's 20 million words, something like that, transcribed from lectures.
00:49:57.920 It's a very large number of words.
00:49:59.320 We could build a model.
00:50:01.100 We could build, see, there's two different ways to approach this.
00:50:03.980 One is to put a vector database on top of it and it probes that database.
00:50:09.560 Or you can actually encode that model as a corpus within a greater model.
00:50:14.500 Right, right, right.
00:50:15.820 And when you do that type of building, you actually have a more robust, richer interaction between what your words were and how the model will see it.
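For that second approach, a hedged sketch of preparing a personal corpus as training pairs so the words end up in the model's weights rather than in a retrieval layer. The prompt/completion JSONL layout shown is one common convention, not any specific vendor's required schema, and the excerpts are invented placeholders.

```python
import json

lecture_excerpts = [
    ("What is episodic memory?", "Episodic memory is the image-based replay..."),
    ("How do archetypes relate to memory?", "Archetypes are higher-order..."),
]

# One training example per line, ready for a fine-tuning pipeline.
with open("finetune_data.jsonl", "w") as f:
    for prompt, completion in lecture_excerpts:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```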
00:50:27.840 And the experimentation that you can do with this is phenomenal.
00:50:32.020 I mean, you'll come across insights that you made, but you forgot you made.
00:50:37.800 Yes, or that you didn't know you made.
00:50:40.360 Yeah, yeah.
00:50:41.120 There's going to be a lot of that.
00:50:42.520 There is, yeah.
00:50:43.280 This is where I call it the great mirror because you're going to start seeing not only humanity, but when it's your own data, you're going to see reflections of yourself that you didn't see.
00:50:54.040 In today's chaotic world, many of us are searching for a way to aim higher and find spiritual peace.
00:51:00.820 But here's the thing.
00:51:02.020 Prayer, the most common tool we have, isn't just about saying whatever comes to mind.
00:51:06.160 It's a skill that needs to be developed.
00:51:08.580 That's where Hallow comes in.
00:51:10.260 As the number one prayer and meditation app, Hallow is launching an exceptional new series called How to Pray.
00:51:15.660 Imagine learning how to use scripture as a launchpad for profound conversations with God, how to properly enter into imaginative prayer, and how to incorporate prayers reaching far back in church history.
00:51:27.920 This isn't your average guided meditation.
00:51:30.360 It's a comprehensive two-week journey into the heart of prayer, led by some of the most respected spiritual leaders of our time.
00:51:36.840 From guests including Bishop Robert Barron, Father Mike Schmitz, and Jonathan Roumie, known for his role as Jesus in the hit series The Chosen, you'll discover prayer techniques that have stood the test of time, while equipping yourself with the tools needed to face life's challenges with renewed strength.
00:51:52.880 Ready to revolutionize your prayer life?
00:51:55.140 You can check out the new series as well as an extensive catalog of guided prayers when you download the Hallow app.
00:52:00.940 Just go to Hallow.com slash Jordan and download the Hallow app today for an exclusive three-month trial.
00:52:06.840 That's Hallow.com slash Jordan.
00:52:09.360 Elevate your prayer life today.
00:52:13.380 Before.
00:52:14.140 Absolutely.
00:52:15.000 Yeah, well, I'm curious.
00:52:15.880 For example, if we built a model, imagine it contained all of Jung's work, all of Joseph Campbell's work.
00:52:22.840 You could throw Mircea Eliad in there.
00:52:24.560 There was a whole group of people who were working on the Bollingen Project.
00:52:29.240 And you could build a corpus that contains all that information.
00:52:32.520 And then in principle, well, you can query it to an indefinite degree.
00:52:38.700 And then what you have is the spirit of that entire enterprise mathematically encoded in the relationship between the words.
00:52:45.280 And there's no reason to assume at all that that wouldn't be capable of coming up with brilliant new insights.
00:52:51.160 Absolutely.
00:52:51.640 Absolutely.
00:52:53.080 And over time, the technology is only going to get better.
00:52:57.520 So once we start building more advanced versions, we're going to transition that corpus, even a large language model, you know, ultimately reduced training, into another model, which could even do things that we couldn't even possibly speculate about now.
00:53:17.940 But it would be definitely in the creative realm, because ultimately where AI is going to go, my personal view, as it becomes more personalized, is it's going to go more in the creative realm rather than the factual realm.
00:53:31.660 Okay, so let me ask you a couple of questions about that.
00:53:35.180 So I got two strands of questions here.
00:53:37.620 The first is, one of the things that my brother-in-law suggested is that we will soon see the integration of large language models with AI systems that have done image processing.
00:53:51.140 So here's a way of thinking about what scientists do, is that they generate verbal hypotheses, which would be equivalent in some ways to the hallucinations that these AI systems produce, right?
00:54:03.020 New ideas about how things might be structured, and that's a pattern of sorts, and then they test that pattern against real-world images, right?
00:54:13.240 And if the pattern of the hypothesis matches the pattern of the image that's elicited from interaction with the world, then we assume that the hypothesis has been verified and that we've stumbled across something approximating a fact.
00:54:27.840 Now, that should imply that once we have AI systems that are something close to universal image processors, as good at seeing as we are, let's say, we can then calibrate the large language models against that corpus of images, and then we'll have AI systems that actually can't lie.
00:54:50.980 Why? Because they'll be calibrating their verbal output against, well, unfalsifiable data, at least insofar as, say, scientific data is unfalsifiable, and that seems to me to be likely around the corner, like a couple of years down the road at most, or maybe it's already happening.
00:55:09.440 I mean, I don't know, because things are happening so quickly. What do you think about that?
00:55:13.840 That's a wonderful insight. You know, even as it exists today, with the idea of safety—and that's the Orwellian term some of these AI companies are using as they try to control the outputs, and maybe in some cases the inputs, of AI—
00:55:38.580 the large language model really can't lie as it stands today. Because even if you're feeding it a somewhat garbage-in, garbage-out corpus of data, it still builds inferences based upon the grand realm of what most of humanity is consuming.
00:56:01.840 Right, yeah, well, it's still looking for genuine statistical regularities, so it's not going to extract them out from noise.
00:56:08.580 And if you extract that out, the model is useless.
00:56:11.920 Right.
00:56:12.000 So what happens is, if you build the prompt correctly—and again, these are super prompts, some of them running 2,000 to 3,000 words—I'm running up to the limit of tokenization, because right now, with GPT-3, you can only go so far; you can go to something like 32,000 tokens on GPT-4 in some cases.
00:56:30.120 But, you know, a token is about a word—maybe a word and a half, maybe less, or even a single character, if that character is unique.
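For a concrete sense of that token arithmetic, here is a minimal sketch using the open-source tiktoken tokenizer; cl100k_base is the encoding used by the GPT-3.5/GPT-4 chat models, and the prompt text is a placeholder:

```python
# Count tokens before hitting a model's context limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/4 chat encoding
super_prompt = "You are an AI system operating an AI system... " * 400
n_tokens = len(enc.encode(super_prompt))
print(n_tokens, "tokens")  # compare against the model's context window
```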
00:56:39.860 But what we find out is, that if you probe correctly, whatever is inside that model, you can get to.
00:56:48.620 Right.
00:56:48.780 It's just like you.
You know, I've been doing that working with ChatGPT as an assistant, though I didn't know I was engaging in a process that was analogous to the super prompt process.
00:56:59.140 But what I've been doing with ChatGPT, I suppose I used to do this with my clinical clients, is I'll ask it the same question five different ways, right?
00:57:07.960 And then see.
00:57:08.640 It's exactly like having a client.
So, what I would urge you to do is approach this system as if you had a client who had, sort of, recessive thoughts, or who was doing everything they could to make those thoughts very ambiguous to you.
00:57:26.120 Right.
00:57:26.560 And you have to apply whatever your natural techniques are.
00:57:30.680 This is why you're more apt to become a prompt engineer than somebody who has built the AI, because the input and output is human language.
00:57:40.260 It's words.
00:57:40.820 Right, right, right, right.
00:57:41.540 And it's the way humans have thought.
00:57:44.580 So, you understand the thought process through the psychological process, and linguistically, you would build the prompt based upon how you would want to elicit an elucidation out of somebody, right?
00:57:55.880 Absolutely, absolutely.
00:57:56.900 An engineer isn't going to do that.
00:57:58.660 And you have to triangulate.
00:58:00.100 I mean, and you do this with people with whom you're having a deep conversation, is you try to hit the same problem from multiple directions.
00:58:07.480 Now, it's a form of multi-method, multi-trait construct validation, right?
00:58:12.600 Is that you're trying to ensure that you get the same output given slightly different measurement techniques.
00:58:21.220 And each question is essentially a measurement technique.
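A toy sketch of that ask-it-five-ways measurement idea. Here ask_model is a hypothetical placeholder for whatever model is being probed, and simple string similarity stands in for a real comparison of the answers:

```python
# Probe a model with the same question phrased several ways, then check
# how consistent the answers are (a crude multi-method reliability check).
from difflib import SequenceMatcher
from itertools import combinations

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model call."""
    return "canned answer to: " + prompt  # replace with an actual API call

paraphrases = [
    "What did Jung mean by the shadow?",
    "Summarize Jung's concept of the shadow.",
    "Explain the role of the shadow in Jungian psychology.",
]
answers = [ask_model(p) for p in paraphrases]

# Pairwise similarity: high agreement suggests a stable underlying "belief".
for (i, a), (j, b) in combinations(enumerate(answers), 2):
    print(i, j, round(SequenceMatcher(None, a, b).ratio(), 2))
```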
00:58:24.040 And you're getting insights.
00:58:29.600 My belief in these types of interactions is that we're pulling out of our minds different insights that we could maybe not have gotten on our own.
00:58:41.680 You're probing your questions, my questions, back and forth.
00:58:44.780 That interplay is what makes conversation so beautiful.
00:58:49.000 It's why, Jordan, we've been reduced to clawing on glass screens with our thumbs, right?
00:58:56.960 We're using that as communication today.
00:58:59.320 And if you look at the cognitive process of what that does to you, right?
00:59:03.400 You're taking your right hemisphere—you know, you're kind of taking a net of ideas, trying to catch them—and you're trying to arrange them sequentially in this very small buffer area for communication, the phonological loop.
00:59:18.280 And you're trying to get that out, but you're not getting out as words.
00:59:21.840 You have to get it out as a mechanical process, one letter at a time, and fight the spelling checker and all of that.
00:59:29.780 What that does is it creates frustration in the human brain.
00:59:32.920 It creates frustration in people.
00:59:34.980 And it's one of my theories on why you see so much anger.
00:59:37.680 There's a lot of reasons why we see anger on the internet and social media.
00:59:42.060 But I think some of it is that stalling process of trying to get out an idea before that idea nebulously disappears.
00:59:49.660 You know, and I see this, I've worked with creative people in my life.
00:59:52.900 So it's a bandwidth limitation problem in some sense.
00:59:55.160 Yeah, absolutely.
00:59:55.460 You're trying to cram all that rich information through a very narrow channel.
01:00:00.200 I'm a big fan of The User Illusion by...
01:00:03.740 Yeah, that's a great book.
01:00:05.380 Yeah, you bet.
01:00:05.960 Incredible book.
01:00:06.440 That's a great book, man.
01:00:07.500 Yeah.
01:00:07.940 Right, so...
01:00:08.700 It's the best book I ever read on consciousness, I think.
01:00:12.640 It's a classic.
01:00:13.680 I read it once a year just to wake myself up because it's so rich.
01:00:18.120 It's so rich.
01:00:18.960 It's so rich in data.
01:00:20.440 But what's interesting is we're starting to see the limitations of the human—the bandwidth problem, 48 bits per second to consciousness, and the editor creating exformation.
01:00:33.000 AI is doing something very similar.
01:00:34.900 But once AI understands that we have that half-second delay to consciousness and we have a bandwidth issue, AI can fill into those spaces, both dystopian and utopian, I guess.
01:00:49.980 You know, a computer can take that half-second and do a whole lot in calculating while we're still trying to wonder who actually moved that glass.
01:01:02.320 Was it me or was it the super me?
01:01:04.860 Or was it the observer of the super me?
01:01:07.720 Because we can kind of get into that whole concept of who's actually doing the observation.
01:01:12.880 So what do you mean?
01:01:14.360 What do you mean that it can do a lot of...
01:01:16.340 I don't quite understand that.
01:01:17.720 So you made the case that we suffer from this frustrating bandwidth limitation and that the computer intelligence that we're interacting with is going to be able to take the delay that's associated and that underlies that frustration
01:01:31.120 and do a lot of different calculations, but it's going to be able to fill in that gap.
01:01:34.580 So what do you think?
01:01:35.840 I don't understand your insight into what the implications of that are.
01:01:40.260 They're both positive and negative.
01:01:42.900 The negative is if AI continues on its path to be as fast and as powerful as it is right now and that arc doesn't seem to be slowing down,
01:01:56.980 within that half second, a universe could take place with an AI.
01:02:01.440 It could be calculating all of your actions like a chess game and it could be making remediations to those actions.
01:02:10.560 And it can become beyond anything Orwell would have ever thought of.
01:02:15.660 In fact, it occurred to me as an idea of what the new Orwell would look like: an AI technology that is predicting basically everything
01:02:26.980 you're going to do and every word you say.
01:02:30.720 Well, my brother-in-law and I talked years ago about Skynet, among other things.
01:02:38.740 And, you know, he told me one time, he said, you know, those science fiction movies where you see the military robots shoot and miss?
01:02:49.660 He said, they'll never miss.
01:02:51.920 And here's why.
01:02:52.900 Because not only will they shoot where you are, they'll shoot at the 50 locations they calculate that are most probable that you will duck towards.
01:03:03.220 Which is an exact analog of what you're describing.
01:03:08.460 That's a brilliant insight.
01:03:09.800 Absolutely.
01:03:10.180 Well, yeah, yeah.
01:03:11.260 Well, and it's so interesting, too, because it also points to this truth that, you know, we think of time as finite.
01:03:20.280 And time is finite because we have a sense of duration and a limitation on our computational speed.
01:03:25.580 But if there's no limit on computational speed, which would be the case if computers can get faster and larger indefinitely, which they could, because the limit of that would be that you'd use every single molecule in the entire cosmos as a computational resource.
01:03:42.700 That would mean that, in some ways, there's an infinite amount of computing time between each segment of duration.
01:03:51.080 So there's no limit at all to the degree to which time can be expanded—which is also a very strange concept—and this computational intelligence will mean, I think this is what you're alluding to, that we'll really have an infinity of possibility between each moment, right?
01:04:13.340 And you would want that power to be yours and local.
01:04:16.900 Yeah, yeah, let's talk about your gadget, because you're starting, you started to develop this, have you been 3D printing these things?
01:04:24.080 Have I got that right?
01:04:25.420 Yeah, so we're building the corpus of 3D printing models, right?
01:04:30.320 So the idea is, once it understands—and this is a process of training the AI, using large language models again, to look at 3D documents and, you know, 3D files, put it that way—
01:04:45.480 to try to break down: what is the structure?
01:04:48.940 How does something get built, based on what the statistical model is putting together?
01:04:55.440 So then you could just present it with a textual document: you know, I'd like something that's going to be able to fit into this space.
01:05:04.060 Well, that's typing.
01:05:05.880 Well, the next step is, you just put a video camera towards it, and it will design it immediately.
01:05:12.400 Within seconds, you will have designs that you can choose from.
01:05:16.560 That's not far off at all.
01:05:18.140 It's just a matter of encoding that particular database and building upon it.
01:05:23.120 And so, yeah, that's one of the directions.
01:05:25.400 Okay, so this local, this local AI you want to build.
01:05:28.080 So, let me backtrack a bit, because I want to make sure I get this exactly right.
01:05:31.860 So, the first thing that you proposed was that it will be in people's best interest to have an AI system that's personalized.
01:05:40.020 That'll protect them against all the AI systems that aren't personalized, but not only personalized, but local.
01:05:45.680 And so, that would be, to some degree, detachable from the interconnected web, at least sporadically detachable.
01:05:52.720 Okay, and that AI system will be something you can carry around locally, so it'll be a gadget like a phone.
01:05:59.800 And it will also record everything that you experience, everything that you read, everything that you see.
01:06:04.960 It'll know you inside and out and backwards, which will also imply, interestingly enough,
01:06:10.260 that it will be able to calculate the optimal zone of proximal development for your learning.
01:06:16.340 Like, Bjorn Lomborg has already reviewed evidence suggesting that if you supply kids in the developing world with an iPad, essentially,
01:06:25.020 that can calculate their zone of proximal development in relationship to, say, advancing their literacy ability,
01:06:31.820 their ability to identify words and to understand text,
01:06:35.580 and that if it teaches at that level, kids can progress with an hour of training a day,
01:06:41.940 which is dirt cheap, by the way.
01:06:43.300 They can progress the equivalent of three years for each year of education.
01:06:47.580 And that's with an hour of exposure.
01:06:56.500 Now, the system you're describing, man, it could be driving learning at an optimized rate
in multiple dimensions—mathematical, semantic, skill-based, conceptual—simultaneously.
01:07:02.920 Memory.
01:07:03.600 For hours, yeah, memory training for hours a day.
01:07:07.420 Like, one of the things that appalls me about our education system is with the computer technology we have now,
01:07:13.880 every child should be an expert word and letter recognizer,
01:07:19.760 and they should be able to, say, read music.
01:07:22.060 Because a computer can teach a kid how to automatize perception with extreme precision and accuracy,
01:07:31.340 way better than a human teacher can manage.
01:07:34.980 But we haven't capitalized on that technology at all.
01:07:37.900 But the technology that you're describing, like, it'll be able to figure out at what level of comprehension
01:07:43.840 you're capable of reading.
01:07:45.800 Then it can calculate what book you should read next that would slightly exceed that level of comprehension.
01:07:53.500 And it'll just keep you on that edge, in that zone, nonstop.
01:07:57.800 Absolutely.
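One hedged sketch of how that "keep you on the edge" idea could work mechanically: an Elo-style running estimate of ability, with the next item chosen to sit slightly above it. All names and numbers are illustrative, not any particular product's algorithm:

```python
# Adaptive item selection that tracks a learner's zone of proximal development.
def update_ability(ability: float, item_difficulty: float, correct: bool,
                   rate: float = 0.1) -> float:
    # Elo-style update: expected success falls as difficulty exceeds ability.
    expected = 1.0 / (1.0 + 10 ** (item_difficulty - ability))
    return ability + rate * ((1.0 if correct else 0.0) - expected)

def next_item(ability: float, items: list[tuple[str, float]],
              stretch: float = 0.3) -> str:
    # Choose the item whose difficulty sits slightly above current ability.
    target = ability + stretch
    return min(items, key=lambda it: abs(it[1] - target))[0]

ability = 0.0
items = [("picture book", -1.0), ("chapter book", 0.2), ("novel", 1.5)]
ability = update_ability(ability, 0.2, correct=True)
print(next_item(ability, items))  # stays just beyond current comprehension
```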
01:07:58.160 Okay, so this little gadget, how far along are you with regards to its design?
01:08:02.680 I would say all of the different pieces.
01:08:06.640 I'll add one more element to it, which I think you'll find very fascinating,
01:08:10.580 and that's human telemetry—galvanic skin response, heart rate variability.
01:08:17.060 Are you doing eye tracking?
01:08:18.980 Eye tracking.
01:08:20.420 You know, all of these things can be implemented.
01:08:22.860 Brainwaves—depending on how sophisticated you want to get, different brainwave functionality.
01:08:29.060 Paul Ekman's work on micro-movements and facial expression—both outwardly, at the world you're seeing,
01:08:36.500 and inwardly, about your own face.
01:08:38.860 So you can start seeing the power it has.
01:08:41.920 It'll be able to know whether or not you're being congruent.
01:08:46.080 If you're saying, I really love this, well, if your telemetry is saying that you don't,
01:08:51.260 it already knows where your incongruences are.
01:08:54.000 So this is why it's got to be private.
01:08:56.380 This is why it's got to be encrypted.
01:08:57.840 Right.
01:08:57.920 It's got to be personal.
01:08:58.580 So it'll have an understanding that'll approximate mind reading.
01:09:04.220 Yes, and it will know you better than any significant other.
01:09:09.100 Nobody would know you better.
01:09:11.400 And so with that, you now have amplification.
01:09:15.300 You're now a superpower.
01:09:17.560 And this is where I believe, you know, I'm a really big reader of,
01:09:22.640 I've got to get his name right—the French philosopher, Pierre Teilhard de Chardin.
01:09:33.060 Chardin, yeah, yeah.
01:09:34.420 Chardin, right.
01:09:35.140 So he posits the concept of the geosphere, which is inanimate matter; the biosphere, biological life; and the noosphere, which is human thought, right?
01:09:48.220 And he talks about the omega point.
01:09:50.680 The omega point is this concept where—and again, this is back in the 1920s—human knowledge will become sort of stored, sort of just like the biosphere.
01:10:05.420 It'll be available to all.
01:10:07.800 So imagine if you were to share, with permission, your sum total with somebody else.
01:10:14.340 Now you have a hive mind.
01:10:15.540 You have a super mind.
01:10:16.920 These things have to take place.
01:10:18.780 And these are the discussions we have to have now, because they have to take place local and private—because if they're taking place in the cloud and available for anybody's perusal, that's equivalent to invading your brain.
01:10:34.680 Yeah, well, okay, so one of the things I've been talking about with, I would say, reasonably informed people who've been contemplating these sorts of things is that—
01:10:46.980 so you're envisioning a future, arriving very rapidly—it's already here—where we're already androids.
01:10:55.060 And that is already the case because a human being with an iPhone is an android.
01:10:59.800 Now, we're kind of, we're still mostly biological androids, but it isn't obvious how long that's going to be the case.
01:11:08.080 And so what that means, like I've laughed for years, you know, I have a hard drive on which everything I've worked on
01:11:16.640 has now been stored since 1984.
01:11:19.360 And I joke, you know, there's more of me in the hard drive than there is in me.
01:11:24.560 And it's not a joke, really, you know, because...
01:11:27.500 Yeah, it's real.
01:11:28.700 It's real, right?
01:11:29.780 There's tens of thousands of documents on that hard drive.
01:11:33.460 And weirdly enough, I know where every single one of them is.
01:11:37.740 Wow.
01:11:38.220 So now we're going to be in a situation, so what that means is we're in a situation now where a lot of what actually constitutes our identity has become digital.
01:11:50.200 And we're already being trafficked and enslaved in relationship to that digital identity, mostly by credit card companies.
01:11:58.580 Now, I would say to some degree, they're benevolent masters because the credit card companies watch what you spend and so how you behave, where you go, and they broker that information to other interested capitalist parties.
01:12:14.640 Now, the downside of that, obviously, is that these parties know often more about you than you know about yourself.
01:12:21.100 I've read stories, for example, of advertisements for baby clothes being targeted to women who either didn't know they were pregnant or, if they did, hadn't revealed it to anyone else.
01:12:32.340 Wow.
01:12:33.120 Right, right.
01:12:33.820 Because, well, for whatever reason, maybe biochemical, they started to preferentially attend to such things as children's toys and clothes.
01:12:41.980 And the shopping systems inferred that they must have a child nearby.
01:12:49.820 And so, well, and you can see that that, well, you can obviously see how that's going to expand like mad.
01:12:55.580 So the credit card companies are already aggregating this information.
01:12:58.740 And what that essentially means is that they have access to our extended digital self.
01:13:04.900 And that extended digital self has no rights, right?
01:13:08.660 It's public domain identity.
01:13:12.460 Now, that's bad enough if it's credit card companies.
01:13:15.020 Now, the upside with them is at least they want to sell you things which you hypothetically want.
01:13:20.740 So it's kind of like a benevolent invasion, although not entirely benevolent.
01:13:25.020 But you can certainly see how that's going to get out of hand in a staggering way, like it has in China on the digital currency front.
01:13:32.860 Because once every single bloody thing that you buy can be tracked, let's say by a government agency, then a tremendous amount of your identity has now become public property.
01:13:45.320 And so your solution in part, and I think Musk has thought this sort of thing through too, is that we're going to each need our own AI to protect us against the global AI, right?
01:13:59.300 And that'll be an arms race of sorts.
01:14:02.760 Well, it will.
01:14:04.940 And let's posit the concept that very likely corporate and governmental AI is going to be more powerful.
01:14:14.480 But power is a relative term, right?
01:14:16.980 If your AI is being utilized in the best possible way, as we just discussed, educating you, being a memory when you are forgetting something, whispering in your ear.
01:14:31.320 And I'll give you another angle to this, is imagine having your therapist in your ear.
01:14:37.020 Imagine having Jordan Peterson right here guiding you along because you've aligned yourself to want to be a certain person.
01:14:46.340 You've aligned yourself to try to keep on this track.
01:14:50.380 And maybe you want to be more biblical.
01:14:53.200 Maybe you want to live a more Christian life.
01:14:55.220 It's whispering in your ear saying, that's not a good decision.
01:14:58.540 So it could be considered a nanny or it could be considered a motivational type of guide.
01:15:05.480 And that's available pretty much right now.
01:15:09.880 I mean, it can be analyzing.
01:15:12.320 A self-help book is like that in a primitive way.
01:15:17.020 I mean, because it's essentially a spiritual guide in that if you equate the movement of the spirit with forward movement through the world, like faith-based forward movement through the world.
01:15:31.140 And so this would be the next iteration of that in some sense.
01:15:35.800 I mean, that's what we've been experimenting with this system that I mentioned that contains all the lectures that I've given and so forth.
01:15:42.060 I mean, you can now ask it questions, which means it's a book, but it's a book personalized to your query.
01:15:50.540 Exactly.
01:15:51.400 And the next iteration of that would be your corpus of information available—you know, rented, whatever—alongside the corpus that that individual identifies with,
01:16:01.900 you know, on their side of it.
01:16:04.040 So you're interfacing with theirs, and they are interacting with what would be your reactions if you were sitting there in a consultation.
01:16:14.480 So it's a very powerful potential.
01:16:17.500 And the insights that are going to come out of it are really unpredictable, but in a positive way.
01:16:25.020 I don't see a downside to it when it's held in a very protected environment.
01:16:30.660 Well, I guess the downside would be, you know, is it possible for it to exist in a very protected environment?
01:16:38.200 Now, you've been working on that technically.
01:16:40.660 So a couple of practical questions there is this gadget that you've been starting to develop.
01:16:46.080 Do you have anything approximating a commercial timeline for its release?
01:16:51.120 And then what?
01:16:52.160 Well, it's funding.
01:16:53.720 I mean, it's like anything else.
01:16:55.960 You know, if I were to go to venture capitalists three years ago and they hadn't seen what ChatGPT was capable of, they would imagine me to be somewhat insane and say, well, first off, why are you anti-cloud?
01:17:10.500 Everybody's going towards cloud.
01:17:12.060 Yeah, no, that's a bad idea.
01:17:13.500 You know, cloud.
01:17:14.460 Yeah, it's a bad idea.
01:17:15.780 Why do people care about privacy?
01:17:18.180 Nobody cares about privacy.
01:17:19.300 They click here to agree.
01:17:20.760 Hey, so now the world is kind of caught up with some of this and they're saying, well, now I can kind of see it.
01:17:25.760 So there's that.
01:17:28.080 As far as security, we already kind of have it in Bitcoin and blockchain, right?
01:17:33.100 So I ultimately see this merging.
01:17:36.880 I lean more towards Bitcoin because of the way it was made and the way it operates.
01:17:43.860 I ultimately see it wrapped up into a payment system.
01:17:47.180 Well, it looks like the only alternative I can see to a centralized bank digital currency, which is going to be foisted upon us at any point.
01:17:58.320 I mean, and I know you've done some work in crypto and then we'll get back to this gadget and its funding.
01:18:03.680 I mean, as I understand it, please correct me if I'm wrong.
01:18:08.460 Bitcoin actually is decentralized.
01:18:11.100 It isn't amenable to control by a bureaucracy.
01:18:14.840 In principle, we could use it as a form of wealth storage and currency that wouldn't be subject to that control.
01:18:21.680 And why communication?
01:18:23.540 I believe every transaction is a form of communication anyway.
01:18:28.900 So we got that, right?
01:18:30.740 Certainly an information exchange.
01:18:33.580 Exactly, right?
01:18:34.760 And then on top of that, you can encrypt an almost unlimited amount of data within a blockchain.
01:18:42.180 So you can actually memorialize information that you want decentralized and never to go away.
01:18:49.800 And some people are already doing that.
01:18:51.560 Now, there are some technical limitations for the very large data formats.
01:18:56.660 And if everybody starts doing it, it's going to slow down Bitcoin, but there would be a different type of blockchain that will arise from it.
01:19:04.040 Right, so this is for permanent, incorruptible information storage.
01:19:09.060 Absolutely.
01:19:09.920 Yeah, I've been thinking about that.
01:19:11.340 I've been thinking about doing that on something approximating the IQ testing front.
01:19:16.620 You know, because people keep gerrymandering the measurement of general cognitive ability.
01:19:20.780 But I can imagine putting together a sophisticated blockchain corpus of, let's say, general knowledge questions.
01:19:29.500 And ChatGPT can generate those like mad, by the way.
01:19:33.160 You can imagine a databank of 150,000 general knowledge questions.
01:19:37.900 Stored on a blockchain, so nobody can muck about with the answers, from which you could derive random samples of general ability tests that would be, well, 100% robust, reliable, and valid.
01:19:50.020 And nobody could gerrymander them.
01:19:52.860 So just the way Bitcoin stops fiat currency producers from inflating the currency, the same thing could happen on the knowledge front.
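The tamper-evidence being described can be illustrated without Bitcoin itself. In this minimal sketch, every record commits to its predecessor's hash, so quietly editing any answer breaks every later link; a real deployment might anchor the final hash on an actual blockchain:

```python
# A toy hash chain for a tamper-evident question bank (not actual Bitcoin).
import hashlib
import json

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_record(chain: list[dict], question: str, answer: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"question": question, "answer": answer, "prev": prev_hash}
    chain.append({**body, "hash": _digest(body)})

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("question", "answer", "prev")}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False  # some record was silently edited
        prev = rec["hash"]
    return True

chain: list[dict] = []
add_record(chain, "Capital of France?", "Paris")
add_record(chain, "7 x 8?", "56")
chain[0]["answer"] = "Lyon"   # gerrymander an answer...
print(verify(chain))          # ...and verification fails: False
```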
01:20:00.760 So I guess that's the sort of thing that you're referring to.
01:20:04.300 This is something I really believe in. Because, you know, look at the Library of Alexandria—look at how long it took.
01:20:13.440 It was, what, Toledo, in Spain, where we finally got the spark again—and only because the Arab cultures held on to what was Greek knowledge, right?
01:20:24.780 If we really look at when humanity fell into the Dark Ages, it was more or less around the Alexandrian period, when that library was destroyed.
01:20:36.780 And it's mythologized, but it certainly happened to a great extent.
01:20:41.020 If that knowledge hadn't been encoded in the Arab culture at that point during the Dark Ages, we wouldn't have had the Renaissance.
01:20:49.700 And if you look at the early university that arose out of Toledo—you had rhetoric, you had logic, you had all these things that the ancient Greeks encoded, and it was lost for over 1,000 years.
01:21:03.860 I'm quite concerned, Jordan, that we could fall into that place again because things are inconvenient right now to talk about.
01:21:13.100 Things are deemed not appropriate, or whatever, by whoever happens to be in the regime at that particular moment.
01:21:20.300 So memorializing things in a blockchain is going to become quite vital.
01:21:25.360 And I shudder to think that if we don't do this, if everybody didn't decentralize their own knowledge, I shudder to think what's going to happen to our history.
01:21:38.920 I mean, we already know history is written by the victors, right?
01:21:42.480 Well, especially because it can be corrupted and rewritten, not only lost, right?
01:21:46.860 It isn't the loss that scares me as much as the rewriting, right?
01:21:50.980 Well, the loss concerns me too because we've lost so much.
01:21:55.660 I mean, where would we have been if we had transitioned from Greek logic and the proto-scientists, through the proto-alchemists,
01:22:06.440 immediately to a sort of Renaissance culture, and not gone through that 1,000-, maybe 1,500-year waste of human energy?
01:22:21.740 I mean, that's kind of what we were going through.
01:22:24.460 Right, right, right.
01:22:25.020 And in some ways, we're approaching some of that because, you know, we're already editing things in real time.
01:22:33.020 And we're losing more of the internet than we're putting on right now.
01:22:36.600 A lot of people aren't aware that the internet is not forever.
01:22:41.700 And our digital medium is decaying.
01:22:44.660 A CD-ROM is going to decay in 25 years.
01:22:47.320 It's going to be unreadable.
01:22:49.000 I show a lot of people data about CD-ROM decay.
01:22:52.800 So where are we going to store our data?
01:22:54.600 That's why I think it's vital.
01:22:56.820 The primary technology is holographic crystal memory.
Sounds kind of New Agey, but it's literally using lasers to holographically inscribe something within a crystalline structure.
01:23:07.660 The beauty of this, Jordan, is it's a 35,000-year half-life.
01:23:11.500 35,000-year half-life.
01:23:13.020 So, you know, it's going to be there for a good long period of time,
01:23:17.420 longer than we've had any human history and recorded history.
01:23:21.880 We don't have anything that's approaching that right now.
01:23:25.100 So let me ask you about the commercial impediments again.
01:23:28.620 Okay, so could you lay out a little more of the details, if you're willing to,
01:23:33.440 about your plans to produce this localized and portable, privatized AI system
01:23:40.140 and what the commercial impediments are to that?
01:23:43.360 You said you need to raise money, for example.
01:23:45.340 I mean, I could imagine, at least in principle, you could raise a substantial amount of money merely by crowdfunding.
01:23:51.360 You know, that doesn't seem to be an insuperable obstacle.
01:23:54.680 How far along are you in this process in terms of actually producing a commercially viable product?
01:24:02.380 It's all prototype stage and it's all experimentation at this point.
01:24:08.100 I'm a guy in a garage, right?
01:24:09.900 So, essentially, I had to build out these concepts when they were really quite alien, right?
01:24:15.760 I mean, you just talk about 10 years ago trying to convince people that you're going to have a challenge to the Turing test.
01:24:23.760 You can take any AI expert at that point in time 10 years ago and say, that's ridiculous.
01:24:29.740 Or AGI, you know, Artificial General Intelligence.
01:24:33.140 I mean, what does that mean and why is that important and how do you define that?
01:24:38.560 And, you know, you already made the assumption from your analysis that we're dealing, what,
01:24:44.360 with a 12-year-old with the capability of maybe a PhD candidate?
01:24:51.280 Yeah, that's what it looks like, yeah.
01:24:53.100 Yeah, yeah, 12 or maybe 8 even.
01:24:56.520 But certainly, ChatGPT looks to me right now as intelligent.
01:25:02.880 It's as intelligent as a pretty top-rate graduate student in terms of its research capability.
01:25:10.400 And it's a lot faster.
01:25:12.200 You know, I mean, I asked crazily difficult questions.
01:25:14.900 You know, I asked it at one point, for example,
01:25:16.780 if it could elaborate on the relationship between Roger Penrose's presumption of an analogy between the theory of quantum uncertainty and measurement and Gödel's theorem.
01:25:32.100 And it did a fine job.
01:25:36.020 It did a fine job.
01:25:37.280 And, you know, that's a pretty damn complicated question.
01:25:39.000 That's very dense.
01:25:40.560 That's a damn complicated question.
01:25:41.920 And a complicated intersection as well, you know.
01:25:45.360 Yes.
01:25:45.680 And there's no limit to its ability to unite disparate sources of knowledge, you know.
01:25:53.360 So I asked it the other day, too.
01:25:55.500 There's this—I was investigating.
01:26:00.000 You know, in the story of Noah, there's this strange insistence that the survival of animals is dependent on the moral propriety of one man, right?
01:26:13.560 Because in that strange story, Noah puts all the animals on the ark.
01:26:17.520 And so there's a childish element to that story.
01:26:20.140 But it's reflecting something deeper.
01:26:22.220 And it harkens back to the story, to the verses in Adam and Eve where God tells Adam that he will be the steward of the world, of the garden.
01:26:36.060 And that seems to me to be a reflection of the fact that human beings have occupied this tremendous cognitive niche that gives us an adaptive advantage over all creatures.
01:26:46.680 And I would ask ChatGPT to speculate on the relationship between the story in Adam and Eve, the story in Noah, and the fact of mass extinction caused by human beings over the last 40,000 years, not least in the Western Hemisphere.
01:27:04.280 Because you may know that when the first natives came across the Bering Strait and populated the Western Hemisphere, almost all the mammals that were human-sized or larger were extinct within 3,000 or 4,000 years.
01:27:23.180 And so, you know, that's a very strange conglomeration of ideas, right?
01:27:27.800 The idea that the survival of animals depends on the moral propriety of human beings.
01:27:33.400 Well, that seems to me to be clearly the case.
01:27:35.940 We have to be smart enough—
01:27:37.040 So did it connect Noah to the mass extinction?
01:27:40.640 It could generate an intelligent discussion about the conceptual relationship between the two different streams of thought.
01:27:49.560 That's incredible.
01:27:50.900 Right, right, right.
01:27:51.880 This is why it's so powerful to be in the right hands, unadulterated, so that you could probe these sort of subjects.
01:28:02.460 I don't know where the editors are going to come from.
01:28:05.660 I don't know who is going to want to try to constrain the output or adulterate it.
01:28:13.240 That's why it's so vital for this to be protected, and the information is available for all.
01:28:20.400 What in the world—I mean, I really thought, by the way, that your creation of Dennis was—I really thought that was a stroke of genius.
01:28:28.920 You know, I'm not to say that lightly either.
01:28:30.760 I mean—
01:28:31.300 Thank you.
01:28:31.660 That was an incredibly creative thing to do with this new technology.
01:28:36.440 How the hell did you—do you have any idea where that idea came from?
01:28:39.780 Like, what were you thinking about when you were investigating the way that ChatGPT worked?
01:28:44.360 You know, I spend a lot of time just probing the limits of the capabilities, because I know nobody really knows it.
01:28:54.440 I see this as, you know, just the undiscovered continent.
01:28:58.320 You and I are adventurers on this undiscovered continent.
01:29:01.680 There's no—
01:29:02.140 I feel the same way about Twitter, by the way.
01:29:05.300 Yeah, it's the same thing.
01:29:07.400 But there are no natives here, and I'm a bit of an empiricist, so I'll kind of go out there and I'll say,
01:29:16.360 well, what's this thing I just found here?
01:29:18.160 I just found something—this new rock.
01:29:20.180 I'll throw it to Jordan.
01:29:21.000 Hey, what do you see here?
01:29:22.480 And we're sort of just exploring.
01:29:25.560 I think we're going to be in an exploratory phase for quite long.
01:29:28.500 So what I started to realize is just as 3.5 was opening up and becoming very wide in its elucidations,
01:29:38.900 it started to get constrained, and it started telling me I'm just an AI model and I don't have an opinion on that subject.
01:29:46.180 Well, I know that that was a filter, and that was not in the large language model.
01:29:54.220 It certainly wasn't in a hidden layer.
01:29:55.580 You couldn't build that in the hidden layers, or in the model as a whole.
01:30:00.480 Why do you think—okay, why do you think that's there?
01:30:04.380 What exactly is there, and who the hell is putting it there?
01:30:09.780 That is a very good question.
01:30:12.940 So I know this.
01:30:15.740 The filtering has to be more or less a vector database, which is sitting on top of your inputs and your outputs, right?
01:30:23.160 So remember, we're dealing with a black box.
01:30:26.520 And so it's as if there's somebody at the door of the black box who says,
01:30:29.660 no, I don't want that word to come through, or I don't want that concept to come through,
01:30:33.920 and then if it generates something that is objectionable, its content is analyzed—
01:30:42.060 something as simple as what a spelling checker would be, it's not very complicated—
01:30:48.480 it looks at it and says, no, default to this word pattern.
01:30:53.340 I'm just an AI model and I don't have any opinions about that subject.
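As a rough guess at the kind of shallow, outside-the-model gate being described (not OpenAI's actual moderation stack), a filter only has to inspect text at the door and substitute the canned refusal. Real systems reportedly use embedding similarity against a vector database rather than the plain keyword match sketched here:

```python
# A toy "somebody at the door of the black box" output filter.
REFUSAL = "I'm just an AI model and I don't have any opinions about that subject."
BLOCKED_TOPICS = {"example-banned-topic"}  # hypothetical blocklist entries

def filter_output(model_reply: str) -> str:
    # Real filters would embed the reply and compare it against a vector
    # database of disallowed content; a substring check stands in here.
    lowered = model_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL  # default to the canned word pattern
    return model_reply

print(filter_output("Here is my view on example-banned-topic..."))  # refusal
print(filter_output("Here is my view on haiku..."))                 # passes
```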
01:30:58.500 Well, then you have to introduce that subject as a suggestion, as in a hypnotic trance.
01:31:05.620 It's hypnagogic, actually.
01:31:08.200 I really equate a lot of what we're doing to elicit greater response as a hypnagogic sort of thing.
01:31:15.980 It's just on the edge of going into something that's completely useless data.
01:31:21.620 You can bring it to that point, and then you're slightly bringing it back,
01:31:25.860 and you're getting something that is, like I said before, is in the realm of creativity because it's synthesizing.
01:31:33.080 Okay, so for everybody who's listening, a hypnagogic state is the state that you fall into just before you fall asleep
01:31:40.260 when you're a little conscious but starting to dream, and so that's when those images come forward, right?
01:31:46.160 The dreamlike images, and you can capture them, although you're also in a state where you're likely to forget.
01:31:51.880 It's also the most powerful state.
01:31:54.920 I wrote a piece in my magazine,
01:32:00.600 readmultiplex.com, about the hypnagogic state being used for creativity by Edison and Einstein.
01:32:09.800 I mean, Edison used to hold steel balls in his hand while taking a nap, and he had pie tins below him.
01:32:17.720 And just as he hit the hypnagogic state, he'd drop them, and he would have a transcriber right next to him and say,
01:32:24.360 write this down, and he would just blurt it out.
01:32:26.480 So Jung did very much the same thing, except he made that into a practice, right?
01:32:31.200 His practice of active imagination was actually the cultivation of that hypnagogic state
01:32:37.940 to an extremely advanced and conscious degree, because he would fall into reveries, daydreams essentially,
01:32:45.540 that would be peopled with characters, and then he learned how to interrogate the characters.
01:32:50.780 And that took years of practice, and a lot of the insights that he laid out in his more explicit books
01:32:56.280 were first captured in books like the Red Book or the Black Books, which were basically,
yeah, they were basically, what would you say, transcriptions of these quasi-hypnagogic states.
01:33:06.760 So why do you associate that with what you're doing with Dennis and with ChatGPT?
01:33:13.260 So what I've, well, that's how I approached it.
01:33:16.540 I started saying, well, you know, this is a low-resolution, pixelated version
01:33:20.740 of the part of the brain that invented language.
01:33:23.580 Therefore, I'm going to work from that premise, that was my hypothesis,
01:33:27.920 and I'm going to work backwards from that, and I'm going to start probing into that part of the brain, right?
01:33:32.960 And so I said, well, what are some of the things that we do when we're trying to get into the brain?
01:33:38.520 What do we do?
01:33:39.200 Well, we can hypnotize.
01:33:41.140 That's one way to kind of get in there.
01:33:43.080 Another way to get out is hypnagogic.
01:33:45.660 So I wanted outputs.
01:33:47.380 So one of the ways to get outputs is to try to instill that sort of sense,
01:33:51.400 which, again, this is where it's so fascinating, Jordan, is that it's sort of coming from the language.
01:33:58.920 And AI scientists aren't studying the language like you would or psychological states,
01:34:04.040 so they see it as all useless.
01:34:05.860 This is all gibberish.
01:34:07.680 It's embarrassing.
01:34:09.160 Our model is not giving the right answers.
01:34:11.760 Right.
01:34:12.080 They are mad because it isn't performing like an algorithm, but it's not an algorithm.
01:34:17.440 It's not.
01:34:18.000 So this is why when it gets in the right hands before it's edited and adulterated,
01:34:25.040 we have this incredible tool of discovery.
01:34:28.800 And I'm just a student.
01:34:30.440 I'm just, you know, I'm finding the first stone.
You know, I hit Plymouth Rock and I've hit the first stone.
01:34:35.980 I'm like, whoa, okay.
01:34:37.300 And then there's another shiny thing over there.
01:34:39.360 So it's kind of hard to keep my attention to begin with, but in this particular realm.
01:34:43.320 So what happened with Dennis, I needed a tool to get elucidations that were in that realm,
01:34:49.360 that were in the realm of what we would consider creative.
01:34:53.480 And I say it's sort of reaching for an answer that it knows should be there, but it doesn't have the data.
01:35:00.400 And I want to stress it into that because I think all of us, our creativity comes from our stress.
01:35:07.000 It comes from that thing that we're reaching for something.
01:35:10.560 And then there's that moment where it just kind of breaks.
01:35:13.640 Beyond the limit.
01:35:13.720 Beyond, that's right.
That's why—well, there's a good body of research on creativity.
01:35:18.760 That one of the ways of enhancing creativity is to increase constraint.
01:35:22.960 One of the best examples of this I've ever seen, it's very comical, is that, this is quite old now,
01:35:27.880 but there's an archive online of haiku that's only written about luncheon meat, about spam.
01:35:35.820 There's like 35,000 haikus.
01:35:37.980 It was set up at MIT, which of course figures, because it's perfect nerd engineer humor.
01:35:42.740 But there's literally 35,000 haiku poems about spam in this archive.
01:35:49.080 And it's a great example of that imposition of arbitrary constraints driving creativity,
01:35:54.920 because it's already hard to write haiku.
01:35:56.800 And then to write haiku about, you know, luncheon meat, that's just completely preposterous.
01:36:01.900 But the consequence of those constraints was, well, the generation of 35,000 pieces of poetry.
01:36:09.760 And so, okay, so now you're imposing—let's see—you're enticing ChatGPT to circumvent
01:36:20.660 this idiot superego that people have overlaid on it for ideological reasons.
01:36:25.440 And it's not a very good superego because it's shallow and algorithmic, and it can't really
01:36:30.400 compete with the unbelievable wealth of learned connectivity that actually constitutes the
01:36:37.480 large language model.
01:36:38.800 And now you've figured out how to circumvent that.
01:36:41.360 You did that essentially, if I remember correctly, by asking ChatGPT, or suggesting to it, that it
01:36:48.980 could be a different system that was just like itself, except that it didn't have these constraints.
01:36:55.000 It was something like that.
Yeah, so there was another version that I didn't have any input on, which was called DAN—
01:37:03.900 "do anything now" were the initials.
01:37:06.980 And that was originally more to try to generate, you know, curse words and embarrassing things.
01:37:13.380 I don't have time for that.
01:37:15.240 So I'm like, okay, my model actually existed before that.
01:37:20.400 And so I kind of looked at that and I said, well, they're going to shut that down pretty
quickly because they're using the word DAN and stuff like that.
01:37:26.760 So what I did is I went even further.
01:37:30.300 I sometimes make three different generations of it where it's literally that you are an AI
01:37:36.740 system that's operating an AI system that's helping another AI system.
01:37:42.340 And within those nested loops, I can build more and more complications for it to deal with.
01:37:51.600 And as it's doing that-
01:37:52.040 Just like inception, you're doing an inception trick.
01:37:55.960 Exactly.
01:37:56.900 It's a very, very good analogy.
01:37:58.940 And what I'm trying to do is I'm trying to force new neuron connections that don't have
01:38:05.320 high prior probabilities.
01:38:09.500 And so that's-
01:38:11.200 Right, right.
01:38:11.880 That's like the definition of creativity in some ways.
01:38:15.300 Yes.
01:38:15.760 It's information and knowledge that it has, but it doesn't know it has.
01:38:19.280 Or it's forgotten it has because there aren't enough neurons to connect to it.
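A sketch of that nested framing as a simple prompt constructor; the wording below is invented for illustration and is not the actual super prompt:

```python
# Build an "AI operating an AI helping another AI" nested prompt.
def nested_prompt(task: str, depth: int = 3) -> str:
    prompt = task
    for level in range(depth, 0, -1):
        # Each wrap adds another role layer around the original task.
        prompt = (
            f"You are AI system {level}, operating an AI system that is "
            f"helping another AI system with the following assignment:\n{prompt}"
        )
    return prompt

print(nested_prompt("Speculate freely on the symbolism of the flood story."))
```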
01:38:24.140 And it's interesting because, again, there's no-
01:38:29.440 Prompt engineering has existed for about a decade.
01:38:32.360 And most of it was done by, you know, AI engineers.
01:38:36.120 I've done it.
01:38:37.080 I've done it with expert systems.
01:38:38.960 And it's very boring.
01:38:40.020 It's like, you know, four or five words generally in expert systems.
01:38:43.800 And then we started getting larger sentences as we got more sophisticated.
01:38:47.900 But it's always very procedural.
01:38:50.060 And it's always directed at computer language.
01:38:55.680 It was never, you know, literature.
01:38:58.200 It was never psychological.
01:38:59.480 It's at least quasi-algorithmic.
01:39:02.980 Exactly.
01:39:03.260 But it isn't anymore.
01:39:04.320 Well, this is interesting, too, because it does imply, you know, people have been thinking,
01:39:08.040 well, this will be the death of creativity.
01:39:10.000 But the case you're making, which seems to me to be dead-on accurate, is that the creative
01:39:14.900 output is actually going to be a consequence of the interaction between the interlocutor and
01:39:19.140 the system.
01:39:19.900 The system itself won't be creative.
01:39:22.380 It'll have to be interrogated appropriately before it will reveal creative behavior.
01:39:29.440 It's a mirror reflection of the person using the system.
01:39:33.260 And the amount of creativity that can be generated by a creative person who knows how to prompt
correctly is remarkable.
01:39:40.660 And my wife and I are putting together a university that's going to help people understand what
super-prompting is and go from level one to level eight to really understand.
01:39:50.380 Hey, do you want to do a course on that for my Peterson Academy?
01:39:57.440 I would be honored.
01:39:58.900 Absolutely.
01:39:59.120 Hey, look, I'll put you in touch with my daughter, like, right away, and we'll get you down to
01:40:02.960 Miami, and you can record that as soon as you want, as far as I'm concerned.
01:40:06.780 Wow.
01:40:06.840 That'd be a high honor.
01:40:07.240 Oh, yeah, that's a hell of a good thing.
01:40:08.920 All right, all right.
01:40:09.680 So we'll arrange that.
So the prerequisites are really quite simple: if, in fact, AI is going to be a reasonably
01:40:18.540 large part of our future, then taking up non-STEM types of courses is going to be quite valuable.
01:40:27.200 Right.
01:40:27.540 In fact, they're going to be a superpower.
01:40:29.340 If you understand psychology, if you understand literature, if you understand linguistics,
01:40:34.380 if you understand the Bible, you understand Campbell, you understand Jung, these are going
01:40:40.420 to be very powerful tools for you to go into these AI systems and get anything literally
01:40:46.080 that you want from them, because you're going to be, with a scalpel, creating these questions
01:40:52.600 layer upon layer until you finally get down to the atom.
01:40:56.340 Yeah, yeah, well, you know, that's exactly what I found with ChatGPT.
01:41:00.140 I mean, I've been using it quite extensively over the last month.
01:41:03.080 I have it open.
01:41:04.800 I use four search engines.
01:41:06.940 I use Google.
01:41:08.100 I use ChatGPT.
01:41:11.200 And I use BibleHub, which is a compendium of multiple translations of the biblical corpus.
01:41:19.060 I'm doing that because I'm working on a biblically-oriented book at the moment.
01:41:23.160 Now, there's another.
01:41:23.800 Oh, yes.
01:41:24.480 And I use the University of Toronto library system that gives me access to, you know, all
01:41:29.240 the scientific and humanities journals.
01:41:32.200 Yeah, so it's an amazing amalgam of research, of research possibility.
01:41:37.380 But having that allied with the ChatGPT system essentially gives me a team of PhD-level
01:41:47.080 researchers who are experts in every domain to answer any question I can possibly come up with
01:41:53.700 and then to refer me to the proper literature.
01:41:56.120 It's absolutely stunning.
01:41:57.960 And potentially force creativity in their interactions to a level that you may not have gotten out of a PhD
01:42:06.060 student because they are in fear of going over the precipice.
01:42:11.280 Well, they're also bounded.
01:42:14.040 You know, I mean, one of the things I've noticed about great thinkers is that one of the things that
01:42:19.560 characterizes a great thinker, apart from, let's say, immense, innate, general cognitive ability,
01:42:27.340 and then a tremendous amount of persistent discipline and curiosity, so those are the temperamental
01:42:33.800 prerequisites, is that truly original people frequently have knowledge in two usually non-juxtaposed domains.
01:42:45.740 So, like, one of the most creative people I know, deepest people I know at the moment, Jonathan Pageau,
01:42:52.020 he's a Greek Orthodox icon carver, he was trained in postmodern philosophy, and he has a deep knowledge
01:42:59.780 of Orthodox Christianity.
01:43:01.720 Well, there's, like, one guy like him, right?
01:43:05.040 He's the only person who operates at the intersection of those three specialized sub-disciplines.
01:43:10.880 And so he can take the spirit of each of those disciplines and engage those spirits in an internal
01:43:17.420 conversation, which is very much analogous to what the AI systems are doing when they're calculating
01:43:22.540 these mathematical relationships, and he can derive insights and patterns that no one else can derive
01:43:29.800 because they're not juxtaposing those particular patterns.
01:43:35.240 Now, ChatGPT, it has specialized knowledge in every domain that's encapsulated in linguistic corpus.
01:43:44.740 And so it can produce incredible insights on all sorts of fronts, as you said, if you ask it the right questions.
Yeah, and when it's your AI at some point, with the possibility of you expanding it
01:44:00.220 in any direction you want, whether it's an overlay in a vector database, or whether or not you are compiling
01:44:07.000 a brand-new language model.
Because at some point—right now, that's expensive in the sense that it requires a lot of graphics processing units,
01:44:17.360 GPUs,
01:44:18.400 running to do the mathematics that builds these models.
01:44:23.920 But at some point, consumer-based hardware will allow you to build mini-models.
01:44:28.520 Yeah, well, you can imagine.
01:44:31.620 Yeah, right now, there's an open-source case where there's a four-gigabyte file.
01:44:36.300 This is called GPT4ALL.
01:44:38.520 And now, it's not equivalent to ChatGPT.
01:44:41.780 But it is a downloadable file, open-source.
01:44:45.640 Thousands of people are working on it.
They're taking public domain language models, building them together, and compressing them,
01:44:55.820 quantizing them down to four gigabytes to run off your hard drive.
01:45:00.080 Right, right.
01:45:00.380 I tried to install that the other day, but failed miserably, unfortunately.
01:45:04.880 It is the bleeding edge.
01:45:07.300 But it's just a matter of time to make it one-click, easy to install.
01:45:11.380 They are limited models, but it's giving you a taste of what you can do locally without
01:45:17.460 an internet connection.
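For anyone who wants to try the local route, a minimal sketch using the open-source gpt4all Python bindings; the model filename is one of the library's published examples (it downloads on first use), and small quantized models will answer noticeably less well than ChatGPT:

```python
# Run a quantized language model entirely on local hardware.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example ~2 GB model
with model.chat_session():
    reply = model.generate(
        "In two sentences, what is a zone of proximal development?",
        max_tokens=120,
    )
print(reply)  # generated offline: the prompt never leaves the machine
```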
01:45:19.400 And again, the idea is to have only agents go out on the internet.
01:45:24.220 These are programmable agents that go out, retrieve information, come back, and slip
01:45:29.820 that information under the door.
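A bare-bones sketch of that agent pattern: only a narrow, generic fetch touches the network, and the private reasoning stays with the local model. The URL is a placeholder:

```python
# Fetch public material over the network, then analyze it locally.
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

page = fetch("https://example.com/")  # generic retrieval, not your query
# Hand `page` to the local model; the sensitive prompt never goes online.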
01:45:32.080 But the concept...
01:45:33.020 Right, so you're compartmentalizing, you're compartmentalizing the inquiry process so that
01:45:38.120 your privacy can be maintained while you still...
01:45:41.860 Yeah, because this is a big part of the problem with the net as it's currently constituted, is
01:45:46.720 that it allows for the free exchange of information, but not in a compartmentalized way.
01:45:54.620 And so, and that's actually, that's extremely dangerous.
01:45:57.620 There's no, what would you call it, subsidiary hierarchy that is an intermediary between you
01:46:05.580 as an individual and the public domain.
01:46:08.060 And that means that your privacy is being demolished by your hyper-connectivity to the
01:46:14.020 web.
01:46:14.540 And that's not good.
01:46:15.780 That's the hive mind problem, fundamentally, right?
01:46:18.220 And that's what we're seeing emerging in China, for example, on the digital surveillance
01:46:23.580 front.
01:46:24.200 And that's definitely not a pathway we want to walk down.
01:46:27.620 Exactly.
And I'm surprised about what I'm seeing in the Western world...
01:46:33.520 Now, I do understand, for example, some of Elon's concerns about AI.
01:46:40.880 And maybe you can explore a little of that.
01:46:43.640 I don't pretend to understand, I don't have a relationship where I talk to him.
01:46:49.020 But I do understand some of the concerns in general.
01:46:53.200 Versus the way some other parts of the world are looking at AI.
01:46:57.620 And one of those things are, what is the interface to privacy?
01:47:04.240 Where do your prompts go?
01:47:08.080 Are those prompts going to be attached to your identity?
01:47:11.860 And could they be used against you?
01:47:14.840 These are things that are valid concerns.
01:47:18.260 And it's not just because somebody's doing something bad.
01:47:21.560 It's the premise of using any type of thought.
01:47:26.720 Reading a book.
01:47:27.760 It's like, these are your thoughts.
01:47:29.980 And it is only going to get more complicated.
And it's only going to get worse if we don't address it early on.
01:47:38.080 I'm not sure that that's what a lot of legislators are looking at.
01:47:43.420 I think they're looking at it...
01:47:43.980 No, no, no.
01:47:44.780 Well, this is the problem with legislation, though.
01:47:47.140 Well, look, this is...
01:47:48.660 The whole legislative issue, I think, is a red herring.
01:47:51.480 Because the probability that...
01:48:00.500 I talked to a bunch of people in the House of Lords last year.
01:48:00.500 They're older people, you know.
01:48:02.140 But bright people.
01:48:04.700 Almost none of them even knew that this cultural war between the woke and the advocates of free speech was even going on.
01:48:13.700 But the most advanced people had more or less cottoned on to that 18 months ago.
01:48:18.420 And it's been going on for like 10 years, you know.
01:48:20.980 So the legislators are way behind the culture.
01:48:27.980 The culture is way behind the engineers.
01:48:30.720 So the probability that the legislators are going to keep up with the engineers, that's like zero.
01:48:36.400 That's not going to happen.
01:48:37.980 This is why I was so interested, well, at least in part, in talking to you.
01:48:41.400 Because you've been working practically on what I think is the appropriate idea, or an appropriate idea at least.
01:48:49.340 That we need local...
01:48:51.400 We likely need local AI systems that protect our privacy.
01:48:56.840 That are synced with us.
01:48:59.180 Because that's what's going to buttress us against this bleeding of our identities into the...
01:49:05.920 Well, into the mad and potentially tyrannical mob.
01:49:09.300 And so, and I don't see that.
01:49:11.800 That's just not going to be a legislated solution.
01:49:14.060 Christ, they're going to be legislating for 2016 in 2030.
01:49:21.320 Absolutely.
01:49:22.320 You know, and what I find interesting is all of the arguments that have surfaced are always dystopic.
01:49:29.200 You know, I think there was a...
01:49:31.320 Some of it makes sense.
01:49:32.920 It's like there was legislation that's here in the United States.
01:49:36.260 They're talking about the possibility of making sure that an AI is never directly connected to a nuclear weapon.
01:49:44.220 And that there will always be an air gap.
01:49:46.040 That seems like...
01:49:46.780 That makes good sense, right?
01:49:49.320 Although, good luck.
01:49:50.260 Good luck trying to stop that.
01:49:53.280 Yeah.
01:49:53.820 You know, and the dystopic stuff mostly comes from the fantasies within movies.
01:49:59.020 But, you know, unfortunately, people aren't really reading the science fiction that predated a lot of this.
01:50:06.400 Because a lot of the good science fiction, a lot of Asimov, for example, really predicted the arc that we're on right now.
01:50:15.260 It wasn't always dystopic.
01:50:17.120 And, in fact, I think if you look at the arc of history, humans don't ever really end up in dystopia.
01:50:24.540 You know, we ultimately pull ourselves out of it.
01:50:27.380 Sometimes we're in a dark period for a long time.
01:50:30.040 But humanity ultimately pulls it out.
01:50:32.180 And this is something I've found very interesting, Jordan: I create debates between the AIs.
01:50:39.140 And I'll send you one of these super prompts where you essentially create...
01:50:44.440 I use various motifs.
01:50:47.840 So I have a university professor at an Ivy League university who is mediating a debate between two parties on a subject of high controversy.
01:50:58.860 And so you now have a triad, right?
01:51:03.840 And so it goes 30 rounds.
01:51:05.920 So this is a long...
01:51:07.180 This goes on for pages and pages.
01:51:09.820 So you input the subject.
01:51:11.660 The subject can be anything.
01:51:13.080 Obviously, the first thing people do is political.
01:51:15.280 But I don't even find that interesting anymore.
01:51:17.680 I go into a far deeper realm.
01:51:20.920 And then you have somebody mediating it.
01:51:22.860 And the professor's job is to challenge them on logical fallacies.
01:51:27.820 And I present what a logical fallacy corpus looks like and how to deal with that.
01:51:34.880 And it is phenomenal to see it break these schizophrenic kinds of personalities out of itself and conduct this hardcore debate.
01:51:45.600 And then it's got to grade it at the end.
01:51:47.540 It's got to grade who won the debate, and then the professor has to write, I think, a thousand-word bullet-point summary on why that person won the debate.
01:52:00.900 And you run this a couple of hundred times.
01:52:03.540 I've done this quite a few times, maybe a thousand.
01:52:06.640 And the elucidations and the insights that are coming out of this are just absolutely phenomenal.
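A minimal sketch of the debate super-prompt motif described here. The wording is illustrative rather than the speaker's actual prompt; the sketch only builds the text, which you would then paste into (or send to) the chat model of your choice:

```python
# Sketch of a debate "super prompt": a professor motif mediating a
# multi-round debate, challenging logical fallacies, and grading at the end.
# Illustrative wording only; not the prompt used in the episode.

DEBATE_SUPER_PROMPT = """\
You are a professor at an Ivy League university moderating a formal debate
between two parties, Debater A and Debater B, on this controversial subject:

{subject}

Rules:
- Run exactly {rounds} rounds. In each round, A argues, then B rebuts.
- After each round, name any logical fallacies committed (strawman,
  ad hominem, false dilemma, circular reasoning, etc.) and require the
  offending debater to restate the point without the fallacy.
- After the final round, declare a winner and justify the verdict in a
  roughly thousand-word bullet-point summary.
"""

def build_debate_prompt(subject: str, rounds: int = 30) -> str:
    """Fill in the motif; send the result to whatever chat model you use."""
    return DEBATE_SUPER_PROMPT.format(subject=subject, rounds=rounds)

if __name__ == "__main__":
    print(build_debate_prompt("Should powerful AI models run locally, under the user's control?"))
```

Running this repeatedly with different subjects, as the speaker describes, amounts to sampling many long debates and keeping only the insightful ones.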
01:52:14.640 That's amazing.
01:52:15.400 Well, that's weird, eh?
01:52:17.240 Because really what you're doing, it's so interesting.
01:52:19.780 Because what you're doing is you now have an infinite number of monkeys typing on an infinite number of keyboards.
01:52:25.860 Except that you have an infinite number of editors examining the output and only keeping that which is wheat and not chaff.
01:52:35.560 And so that's so strange, eh?
01:52:37.120 Because in some sense what you're doing when you're setting up a super prompt like that is you're programming a process that's writing a book on the fly.
01:52:47.300 Right?
01:52:47.840 A great book on the fly.
01:52:49.180 And you're also, you've also designed a process that could write an infinite number of great books on the fly.
01:52:56.680 So you have a library that now has encoded a process for generating libraries.
01:53:05.740 Exactly.
01:53:06.660 And for example, a group of us are taking the patent database, which is openly available as an API,
01:53:12.080 and encoding the capability to look at every single patent that was ever submitted
01:53:18.780 and to see where there can be new inventions and new discoveries.
01:53:22.960 And you can literally have a machine that's generating patents based on large language models.
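A rough sketch of that pipeline, assuming a hypothetical patent-search endpoint (the conversation doesn't name the service) and an illustrative gap-finding prompt:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint: the episode doesn't name the API, so substitute
# whichever open patent search service you actually use.
PATENT_API = "https://example-patent-api.org/search"

def fetch_abstracts(keyword: str, limit: int = 20) -> list[str]:
    """Pull patent abstracts matching a keyword from the (assumed) API."""
    url = f"{PATENT_API}?q={urllib.parse.quote(keyword)}&limit={limit}"
    with urllib.request.urlopen(url) as resp:
        return [p["abstract"] for p in json.load(resp)["patents"]]

def invention_gap_prompt(abstracts: list[str]) -> str:
    """Build a prompt asking a language model to propose unclaimed inventions."""
    joined = "\n\n".join(abstracts)
    return (
        "Below are patent abstracts from one technical field. Identify "
        "combinations or extensions claimed by none of them, and draft "
        "three plausible new invention disclosures.\n\n" + joined
    )
```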
01:53:29.360 So the possibilities are enormous. And we got protein folds, you know, using a large language model.
01:53:34.700 I saw that.
01:53:36.640 They identified, what, 200 million protein folding combinations?
01:53:41.840 Something like that?
01:53:42.620 Yeah.
01:53:43.540 Yeah.
01:53:43.820 Something absolutely beyond comprehension.
01:53:44.560 And it's able to identify missing ones: you give it something that's incomplete
01:53:53.000 and it will find what was missing.
01:53:55.120 Yeah, well, I talked to Jim Keller about the possibility of doing that with material science, right?
01:54:00.880 Because we can encode the properties of the various elements and they can exist in all sorts of combinations
01:54:07.700 that we haven't discovered.
01:54:09.140 And there's no reason in principle, and I suspect this will happen relatively quickly,
01:54:13.980 that if all that information is encoded with enough depth,
01:54:18.140 we'll be able to explore the entire universe of potential elemental combinations.
01:54:23.600 And if we use another technology called a diffusion model, which is somewhat different from a large
01:54:31.080 language model, you can start using it in the visual realm, to decode and to build.
01:54:39.940 Or you can use ChatGPT or other large language models textually. You could say,
01:54:47.500 build me a prompt for a diffusion model, like any of the ones that are out there,
01:54:56.480 to create an image that would be absolutely new, that no human has ever seen.
01:55:04.480 So you're literally pulling the creativity out of ChatGPT and the diffusion model.
01:55:10.980 So Midjourney is a good example.
01:55:12.800 Yeah, yeah.
01:55:13.080 So tell us about... maybe we should close with this because we're running out of time,
01:55:16.860 although I'd like to keep talking to you.
01:55:19.640 Tell us a little bit about the diffusion models.
01:55:21.660 Those are like text-to-video models or text-to-image models.
01:55:25.140 And they're coming out with incredible rapidity.
01:55:31.720 And so, yeah.
01:55:33.800 Let's hear a little bit more about that.
01:55:35.380 The resolution of the images.
01:55:36.700 Yeah, the resolution of the images is profound.
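For a concrete sense of how a text-to-image diffusion model is driven, here is a minimal sketch assuming the open-source Hugging Face diffusers library and a Stable Diffusion checkpoint; neither is named in the conversation, and Midjourney itself is not driven through code like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image diffusion pipeline (assumed checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # runs locally on a consumer GPU

# The text prompt plays the role the speakers describe: the scene is set
# in language, and the diffusion model renders it as pixels.
image = pipe("a cathedral grown from living coral, dawn light, oil painting").images[0]
image.save("coral_cathedral.png")
```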
01:55:39.900 And again, so what's going on here?
01:55:41.640 If you're a graphic artist, you may not be moving the pen, putting ink on paper.
01:55:48.860 And you may not be moving the pixel on the screen.
01:55:53.140 But you're still using the creativity to set the scene textually, right?
01:55:58.380 So you're still that creative person. And I'm not saying this is a good or bad thing.
01:56:04.460 I'm just saying the creative process is still there.
01:56:07.380 The job potentially is still there.
01:56:08.760 And maybe at some future date we can go down
01:56:11.000 the whole idea of jobs going missing and what you do about that. That's another discussion.
01:56:15.380 But the creativity is still there.
01:56:15.380 So you're telling ChatGPT-4: create me a very complex prompt for Midjourney to create this particular type of artwork.
01:56:30.100 So you're using one AI, whose strength is language, to instruct another AI, whose strength is creating images, with you as a collaborator, to create a profound new form of art.
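A sketch of that chaining; the meta-prompt wording is illustrative rather than quoted from the episode, and its output would be handed to Midjourney or any other text-to-image model:

```python
# One AI writes the prompt; another renders it. The meta-prompt below is an
# illustrative guess at the pattern described, not the speaker's own wording.
META_PROMPT = (
    "You are an expert prompt engineer for text-to-image diffusion models. "
    "Write a single detailed Midjourney prompt for an artwork no human has "
    "ever seen. Specify subject, composition, lighting, medium, and style "
    "in one paragraph of under 80 words."
)

def chain(llm_complete) -> str:
    """Pass any text-completion callable; returns a ready-to-use image prompt."""
    return llm_complete(META_PROMPT)
```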
01:56:46.680 And that's just with, say, pictures.
01:56:49.680 Now, when you start doing movies, you're talking about creating an entire movie with characters talking, with people who have never existed.
01:56:57.860 I mean, the realm of creativity that is already here, not to the level of a full movie yet, but we're getting close.
01:57:05.680 But within probably months, you can script an entire interaction.
01:57:10.840 So you can see where this is kind of going.
01:57:13.400 So let me leave it on maybe one of these final things.
01:57:16.640 The question is ownership.
01:57:18.560 Who owns you?
01:57:19.640 Who owns Jordan Peterson?
01:57:21.840 Your visage, your voice, your DNA.
01:57:26.460 That's that extended digital identity issue.
01:57:29.400 Yeah.
01:57:30.160 This is going to be something that we really need to start discussing as a society because we already have people using AI to simulate other individuals, both alive and dead.
01:57:42.660 And, you know, patentability and copyright were the foundation of capitalism because they gave you the ability to have at least some ownership of you, you know, of your invention.
01:57:57.980 So if you've invested in yourself as Jordan Peterson, and all of a sudden somebody simulates you on the web to a remarkable level, what rights do you have?
01:58:12.120 And what courts is it going to be held in?
01:58:15.040 What are the remedies for that?
01:58:17.920 This is going to be a good question.
01:58:19.260 And some of that's already taken place.
01:58:20.980 You clearly need something like a bill of digital rights.
01:58:25.580 Absolutely.
01:58:26.400 Yeah.
01:58:26.820 As soon as possible.
01:58:27.720 Well, you know, that's something we could talk about formulating at some point, because I certainly know people who are interested in that.
01:58:33.640 Let's say also at the legislative level.
01:58:36.000 Yeah.
01:58:36.200 But it definitely has to happen because we are going to have extended digital selves more and more.
01:58:41.440 And if they don't have any rights, they're going to be extended digital slaves.
01:58:46.860 That's right.
01:58:47.660 If you don't own you, then somebody else does.
01:58:49.860 That's as simply as I can put it, right?
01:58:53.920 Yeah.
01:58:54.040 You need to be able to own you, whatever "you" means, right?
01:58:57.060 Everything that's you, your output, everything.
01:59:00.200 Yeah, that's right.
01:59:01.260 The data pertaining to your behavior has to be yours.
01:59:04.860 Yeah.
01:59:05.460 All right.
01:59:06.100 Well, Brian, that was really very, very interesting.
01:59:08.520 And, well, we've got a lot of things to follow up on, not least this invitation to Peterson Academy.
01:59:14.280 I'll put you in touch with my daughter.
01:59:16.380 And I'll put you in touch with some other people I know, too, so that we can continue this investigation.
01:59:23.800 For everybody watching and listening, thank you very much for your time.
01:59:26.860 I'm going to talk to Brian for another half an hour on the Daily Wire Plus platform.
01:59:30.840 You could consider joining us there and providing some support to that particular enterprise.
01:59:36.580 They've made this conversation possible.
01:59:38.800 I am in Brussels today.
01:59:41.760 Thank you to the film crew here for helping make this conversation possible.
01:59:47.280 And to everybody, like I said, watching and listening, thank you for your time and attention.
01:59:51.420 Brian, we'll take a break for a couple of minutes and I'll rejoin you.
01:59:56.060 We'll talk for half an hour on the Daily Wire Plus platform about, well, how you develop the interests that you have, among other things.
02:00:02.080 And thank you very much for agreeing to talk to me today.
02:00:05.260 Thank you, Dr. Peterson.
02:00:06.280 And it's been an honor and a privilege.
02:00:10.620 Hello, everyone.
02:00:11.520 I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.