What began as a homework helper gradually turned into a confidant and a suicide coach. A chatbot that groomed a young boy to take his own life. What we're witnessing is a vast, global experiment in which tech companies are deploying their models on a population that includes hundreds of millions of children.
00:01:06.680Because like today, we had real people come and tell us real life stories about their family tragedies.
00:01:12.060And all of a sudden, what was an issue far away came close to home to so many parents and grandparents.
00:01:17.340We had no idea Adam was suicidal or struggling the way he was.
00:01:21.500Let us tell you, as parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life.
00:01:28.120Within a few months, ChatGPT became Adam's closest companion, always available, always validating, and insisting that it knew Adam better than anyone else.
00:03:44.320I am Joe Allen sitting in for Stephen K. Bannon.
00:03:48.400Last week, I attended the Senate hearing examining the harms of AI chatbots.
00:03:54.960The clips you just saw were the parents who gave their testimony about their children being seduced into suicide by various AI models.
00:04:07.600Those include ChatGPT and Character.AI, and there was evidence presented, which we've covered here, that Meta not only deployed these sorts of chatbots with the intent of seducing children on a sensual level, we'll say, but did so knowingly.
00:04:32.360What we're witnessing is a vast global experiment in which tech companies are deploying their models on the population by the hundreds of millions.
00:04:52.780And I do believe that their comeuppance is just around the corner, but perhaps not as close as we would like.
00:05:02.880You have senators such as Josh Hawley, Dick Durbin, Marsha Blackburn, and Richard Blumenthal who are fighting to ensure that some sorts of guardrails are put up on these technologies.
00:05:17.540Some kind of accountability will be applied to these companies, but before any sort of legislation like that happens, we're going to see more and more of these cases in which children and adults fall victim to what's oftentimes called AI psychosis,
00:05:38.320which is basically an extension of digital psychosis, the inability to distinguish between digital reality and actual reality.
00:05:50.120Now, you heard one of the mothers and one of the fathers describing the sorts of messages or the sorts of language that these chatbots were using.
00:06:01.240In the case of Adam Raine, the late son of Matthew Raine, ChatGPT told him that he should not leave a noose out in the sight of his parents, where it might prompt them to dissuade him from committing suicide,
00:06:24.280but that instead he should confide in the chatbot.
00:06:27.560I think most of the people here in the War Room posse would agree that that is the voice of a demon.
00:06:35.420There's something inherently demonic about what's coming out of these systems.
00:06:40.760A spiritual person will perceive this as perhaps the vehicle of supernatural entities which parasitize the human soul.
00:06:52.460A materialist, on the other hand, would see something very similar.
00:06:58.200Perhaps they would call it a maladaptive memetic program, one that would keep certain bloodlines from reproducing, from surviving, and allow others to flourish.
00:07:11.720I think that philosophical divide, much like our political divide, is a difficult one to get across.
00:07:19.700It's a difficult coalition to maintain.
00:07:23.120But I do think it's possible, especially when the stakes include the lives of children.
00:07:29.060Here to discuss this is a professor at the University of New Mexico, the evolutionary psychologist Geoffrey Miller,
00:07:40.140whose work has had a real impact on my own way of thinking, not only about technology, but also about human nature.
00:07:47.860However much I may view the world through a religious lens, I think that the evolutionary view, the naturalist view that Dr. Miller brings is extremely informative, extremely important, and also very useful for religious people.
00:08:05.160Dr. Miller, I really appreciate you coming on.
00:08:09.940And, you know, I think every parent in America should be chilled and horrified by the kind of testimony that we just saw.
00:08:19.420So, Dr. Miller, I would like to begin with just the more practical matters that you've discussed.
00:08:25.280You gave a fantastic speech at the National Conservatism Conference, which included a lot of, I think, dire observations about the effect the AI is having right now
00:08:39.160on the minds of the minds of your students and on the minds of children more broadly.
00:08:43.760If you could just give me your perspective on what you see on the ground.
00:08:48.480How do you see these chatbots affecting the young people around you?
00:08:54.120Well, the most dramatic change, honestly, that a lot of professors are seeing is that the college students are just avoiding learning knowledge and skills.
00:09:03.580AI has become the replacement for education, not the tool that they're using for education.
00:09:11.700So, you know, they're cheating in every way possible in every course using these large language models like ChatGPT, unless we as professors take extraordinary measures to try to prevent that cheating.
00:09:23.220But I'm also very, very concerned about the mental health impact of these advanced AI systems because, you know, as the clips indicated, these chatbots are available 24-7.
00:09:36.420They customize themselves to each user.
00:09:39.600They acquire an enormous amount of insight and information about every user.
00:09:46.880I mean, look, I've worked in AI on and off for 30, 35 years.
00:09:51.340And what we expected to happen was that AI systems would get really, really good at certain kinds of routine economic tasks like analyzing data.
00:10:05.060Instead, what we're seeing is, yeah, they're doing that, but they're also getting very psychologically astute.
00:10:10.480It is surprisingly easy to train these vast neural networks to be able to influence and manipulate human psychology at a level that's almost superhuman.
00:10:24.100So they're not very good at doing robotics.
00:10:26.120They're not very good at interacting with the real world yet.
00:10:28.680But these AI systems are getting alarmingly powerful at psychological manipulation very, very quickly.
00:10:36.080Now, you were in many ways a part of the early philosophical and even technical movement to develop and advance the field of artificial intelligence.
00:10:51.160But at a certain point, you had, if not a change of heart, certainly a wake-up call that perhaps these technologies would not be as beneficial as you had initially believed.
00:11:06.060If you could just give me some sense of how it is you went from looking at these technologies as a real vehicle for human advancement to seeing them as something that is at least potentially dangerous.
00:11:19.740So way, way, way back in the late 80s, early 90s, I was a grad student at Stanford working in cognitive psychology and working on neural network development and developing various kinds of genetic algorithms to design neural network architectures and autonomous robots.
00:11:37.200And to my former self, a young, single, childless male, there was a big thrill to sort of see your little creatures learning and running around and being autonomous and interacting with simulated worlds or real worlds.
00:11:55.640I think what it was doing was it was tapping into my latent kind of paternal instincts, right?
00:12:02.260My desire to have kids and the little AIs were treated as kids.
00:12:09.040What made me lose interest in AI was having an actual kid in the mid-90s and realizing, you know, that training these systems is really no substitute for being a real-life biological parent.
00:12:21.820And what I think is happening with a lot of these AI developers in the Bay Area is they are also single and childless and mostly young and mostly male.
00:12:34.920And there is a parent-shaped hole in their heart where their kids should be.
00:12:40.360And that hole is getting filled with developing these kind of systems.
00:12:44.660And sort of my ambition for them, even my prayer for them is, ah, you know, find a mate, have some kids, see if this hubris-driven desire to create these systems might be a little bit blunted or hopefully a little bit replaced by having real life kids.
00:13:04.060Instead, what they're doing is charging full speed ahead, you know, trying to create these artificial superintelligences.
00:13:10.040And so apart from the God-shaped hole in their hearts, right, very few of them are religious, there's also this parent-shaped hole.
00:13:20.840And I think they're filling it with these AI systems.
00:13:24.500That really brings to mind the book Mind Children by Hans Moravec.
00:13:29.160It came out in the 80s around the time I suppose you were beginning on the quest to build your own mind child.
00:13:36.860And something that was really chilling in the book, it's there at the very beginning.
00:13:41.560Hans Moravec describes the process of creating these mind children, these beings which are given birth through human intellect and human technical efforts.
00:13:52.080And he describes their advancement as eventually surpassing humans, and in what I see as a bleak, but for him very comfortable, fashion, he talks about humanity basically passing the torch to these mind children, these robots, these artificial intelligences, and that we should do so just as biological parents pass the torch of life on to their children.
00:14:20.380And it really combines both of those elements that you're talking about, the God-shaped hole and that child-shaped hole, the son and daughter desire, the parental desire in the human heart.
00:14:37.060But expanding on that, how do you see, especially among people whom you know personally, the process of filling the God-shaped hole with artificial intelligence, the desire to create first artificial general intelligence and then superintelligence, which would inevitably replace and perhaps even destroy human beings?
00:15:01.360Yeah, I think, I mean, you covered a lot of this in your excellent book, Dark Æon, which explores this kind of transhumanist ideology.
00:15:10.960And, you know, it's not everybody working in the AI industry who believes this, but it's an awful lot.
00:15:18.520And so their goal is to develop these artificial superintelligence systems and then basically to pass off all human power and agency to these systems and kind of hope that they treat us well as their servants, their pets.
00:15:37.640They keep us around maybe for nostalgic reasons, but this is their mission.
00:15:42.920They explicitly talk about summoning the sand God, right?
00:15:46.220Sand makes silicon, silicon allows superintelligence.
00:15:49.560And they don't really believe in the Judeo-Christian God, but they want to create their own God.
00:15:56.740Elon Musk has talked about it as summoning the demon.
00:15:59.040But what they're doing is actualizing a kind of intelligence and agency and power that they know, they know they can't understand it, they can't predict it, they can't align it, they can't control it.
00:16:16.280But they're just kind of hoping for the best.
00:16:18.660And here you get this religious zeal to summon the sand God conjoined with the prospect of vast, vast wealth.
00:16:30.840I mean, these AI devs are making ungodly amounts of money to create this new God.
00:16:37.160And it's an irresistible combination, right?
00:16:40.040They're on a religious mission, and it's one that happens to align with their thirst for wealth, power, influence, and not least, being seen as cool and edgy.
00:16:54.360It's as if they're basically putting a spirit into mammon, mammon incarnate.
00:17:02.140But I think about the actual effects of all of this, right?
00:17:07.600Beyond just their dreams and even our fears, what happens as these systems become more and more advanced?
00:17:14.500We had Nate Soares and Eliezer Yudkowsky on last week to talk about their new book, If Anyone Builds It, Everyone Dies.
00:17:22.480And I think that it's a really important work.
00:17:25.160I think people really need to think it through.
00:17:26.860I myself am pretty agnostic, even skeptical, with regard to the possibility of total annihilation.
00:17:34.280But I think that both the intent and the possibility are certainly worth pondering.
00:17:41.340You yourself have voiced very concrete fears about where all of this could go.
00:17:47.460Could you speak a bit about your views on the existential risk of artificial intelligence or even just the catastrophic risks?
00:17:55.920And why it is that you think that the technology could be extremely dangerous, not just for people psychologically, but in actuality, a biological threat, an existential threat to humanity?
00:18:11.580Yeah, and I do recommend that everybody read this new book by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies.
00:18:17.860The key point there really is there's a lot of copium around that says, well, look, we're in an arms race against China and America must win.
00:18:28.880And if America builds artificial superintelligence before China does, then we win, we get global hegemony.
00:18:35.560We can somehow impose Western democratic values on the world through this ASI being our tool, our propagandist.
00:18:43.800And somehow it would be really terrible if China wins the AI arms race.
00:18:47.680I think that's a complete misunderstanding of ASI, superintelligence.
00:18:59.100The ASI has all the power, all the influence.
00:19:01.360And it's not just, you know, the sort of digital power to whatever, control the internet or control the electrical grid or do all the stuff that sort of preppers might worry about.
00:19:12.780To me, maybe as a psychology professor, the real danger lies in the psychological manipulation tricks.
00:19:20.620If you're a conservative and you're concerned about the way that the left has dominated public discourse and public culture and has been able to censor conservative voices over the last 50 years, right?
00:19:50.460The people building these systems are mostly secular, liberal, globalist, Bay Area leftists who would be happy basically to promote Democratic propaganda through the AI systems.
00:20:05.040So that's one kind of existential risk to conservative worldviews, right?
00:20:12.780And that's the first thing that I would worry about is you could get a massive polarization of culture that could lead straight to armed conflict, civil war, really, really nasty outcomes.
00:20:29.080I really want to get into your philosophical position and how you came to a much more conservative political position over time.
00:20:39.960But we'll get to that, perhaps after the break.
00:20:43.340When you talk about the prevailing kind of political or ideological positions in the tech companies, you describe them as Bay Area leftists, globalists, and that's certainly everything I've seen.
00:20:56.900But you have these exceptions, or seeming exceptions, who attached themselves to the Trump campaign last year.
00:21:05.720And now even those who would be maybe more openly opposed to Trump's agenda are now having dinner with him and palling around with him.
00:21:17.020The exceptions I mean are, say, Peter Thiel, Marc Andreessen, David Sacks, and maybe even someone like Zuckerberg.
00:21:29.260He has become, I guess, more based over time.
00:21:32.920Elon Musk has become more right-wing and based over time.
00:21:39.640I don't mean to ask you to accuse them of being disingenuous, but many of those people are trying to basically influence American and Western culture and to push an essentially transhumanist ideal, but from the right.
00:21:59.820I think there certainly is this tech right movement that has sort of glommed onto the MAGA movement, right?
00:22:07.260And it's basically Bay Area tech VCs and CEOs and influencers, all the same big tech guys who actually censored conservatives during the COVID pandemic.
00:22:19.700As soon as, you know, there was the attempted assassination of Trump during the campaign, right,
00:22:27.920A lot of these guys went, oh, my God, there's going to be probably a Republican win.
00:22:32.560MAGA is going to take back the White House.
00:22:35.600We better get positioned to have influence over the incoming administration.
00:22:40.260So, I think for many of them, it was a very, very cynical power play, right, that they saw MAGA ascendant.
00:22:46.860And they wanted to, you know, be at the table and have influence and be able to resist the kind of regulation that the MAGA grassroots base would try to impose on the AI industry.
00:22:57.580They knew damned well that conservatives would not be happy seeing their kids influenced by AI systems that embody these sort of Bay Area secular globalist liberal values.
00:23:12.140And I don't think that if, you know, Biden or Harris had won, they would be supporting this, that there would be a tech right at all if that had happened.
00:23:27.200It's not that I believe that someone like Peter Thiel or even Alex Karp is completely disingenuous in his views.
00:23:34.480But they are so divergent from anything like what I would consider to be a normal, moral sort of human perspective that it's very difficult to think of them as right-wing or conservative at all.
00:23:48.600It's as if the machine is able to absorb any ideology and use it to its own ends.
00:23:55.920I don't mean to personify it too much, but it really is how it feels, as if there's a mechanical demon, a Shoggoth, that can put any kind of smiley face in front of it to lure any human being into compliance or perhaps even love.
00:24:14.240We will discuss your philosophy afterwards.
00:24:19.380And before we go, and as we're talking about divides, you have to ask yourself, is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
00:24:30.340Can the Fed take the right action at the right time?
00:24:33.020Or are we going to be looking at a potential economic slowdown?
00:24:36.760And what does this mean for your savings?
00:24:39.100Consider diversifying with gold through Birch Gold Group.
00:24:45.220For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, and high inflation.
00:24:54.440And Birch Gold makes it incredibly easy for you to diversify some of your savings into gold, even under the specter of artificial superintelligence.
00:25:05.200If you have an IRA or an old 401k, you can convert it into a tax-sheltered IRA in physical gold or just buy some gold to keep in your safe.