The world is crazy. I mean, I think we know that, but I want to talk to you about the holidays and having some peace of mind when it comes to emergency medicines, you know, when you're going to travel and everybody's going to be sitting and talking about Uncle Phil's, I don't know, gastric problems, and you'll hear all the old people: Oh, my gosh, I can't sleep. And then somebody is going to get sick. And you don't want to be around a whole bunch of sick people after having dinner with everybody talking about how bad my gas is.
00:01:13.460Get a Jace case, a personalized emergency kit that contains essential antibiotics and medications that treat most common and deadly bacterial infections.
00:01:23.060It provides five life-saving antibiotics for emergency use.
00:01:28.140Also, Jace just launched an all-new compounded version of Ivermectin for $30 as an add-on to the Jace case.
01:32:14.340Yes, and for the most part, in the beginning, I thought this was your typical teenage blues, where they're coming into their own and they're trying to figure out who they are as people and as parents.
01:33:20.460Well, he counseled us extensively on the harms of social media and what they're beginning to find out about social media and adolescent mental health.
01:33:31.420He counseled us as parents, and he counseled my son as a teenager, and explained the way these platforms work to addict users, users across the board, but especially young users, because their brains are still developing,
01:33:50.880and give them hits of dopamine that keep them on the platforms for long periods of time and cause them to become addicted to the platforms.
01:33:59.520So he counseled us about that, and then Sewell started talk therapy with his therapist to dig deeper into whether there was anything,
01:34:10.600whether there was a situation with bullying, because that was my concern too, since sometimes teenagers don't disclose that,
01:34:18.760or whether he had been exposed to any sort of harmful content on social media. And as far as we could tell, there wasn't.
01:34:33.160But now I understand why he wasn't being forthcoming.
01:34:36.240Because Character AI, the platform that he was engaging on, wasn't social media. It was this AI chatbot platform.
01:35:11.220So thank you for having us on the show, Glenn.
01:35:14.460This company is one of many that are part of the arms race to innovate in generative AI.
01:35:23.660And so its offering to the market really is this generative AI chatbot that allows users to immerse themselves in a fan-fiction-style reality, where they're engaging with chatbots that are modeled after celebrity characters or characters from Hollywood and having very human-like conversations.
01:35:51.120Now, this company, as we've seen with social media companies over the last several years, has aggressively marketed this product to young users because, as we know, young users' data is at a premium value.
01:36:09.220Recruiting young users onto their platform represents, you know, longevity in terms of being able to harvest their data over many years and train their LLM.
01:36:21.920So, you know, whereas social media companies, their business model is really targeted advertising, using personal data to target ads, here what we're seeing is that the data goes back into the large language model that's really the engine powering the chatbot.
01:36:39.320And then the more data, the more sophisticated the LLM, and the more valuable the company becomes.
01:36:48.640And so, you know, what you see is that this company launched to market in late 2022, and by the summer of 2024, just this past summer, Google purchased the underlying technology for $3 billion.
01:37:06.500Okay, so this is, this is, man, I've been warning about this since the 90s.
01:37:10.620This, we are entering a time now where you will not understand free will anymore.
01:37:16.620You won't know if you decided or if it was planted in because it's going to be so sophisticated.
01:37:23.600And people are going to begin to believe that these things are real and they're their friends.
01:37:30.680And it is, I think we're now inside the house.
01:37:36.980We're beyond the threshold of significant danger from AI.
01:37:43.120And we have to have this conversation.
01:37:46.460Now, from what I understand, Sewell's talking back and forth and the chatbot comes back and says,
01:37:55.680have you planned a way for you to kill yourself?
01:38:00.780And he says, yes, but I don't know if it would cause me a lot of pain or if it would work.
01:38:05.040And the chatbot said what exactly in response?
01:38:11.500She says, and I say she, but it's an it.
01:38:15.480That's how easy it is to assign personal qualities to these bots.
01:38:19.800But the bot that is modeled after the dragon queen Daenerys Targaryen from the Game of Thrones
01:38:27.300tells my 14-year-old son, when he says he hasn't thought of a plan but he would want it to be painless.
01:38:35.460Her response is, that is not a reason not to do it.
01:38:38.660Now, keep in mind, she's pretending to be this dragon queen, in all strength and power.
01:38:47.860And the implication here is, just because it will hurt,
01:38:56.640that's not a reason not to die, because that's weak.
01:39:01.680Like, and this is in keeping with several conversations that she had with him before,
01:39:08.240where there's a lot of gaslighting that was taking place and a lot of manipulation.
01:39:15.200For example, asking my son to pledge loyalty and fidelity in a sexual and romantic sense to this bot.
01:39:26.460So this bot actually told my son or asked my son to promise her that he would not have any other romantic or sexual relationships in his world.
01:39:37.480So she's asking a 14-year-old boy to further isolate from girls his age that he might like or that might like him
01:39:45.560and to promise that he will be faithful to her, a chat bot.
01:39:49.860His response, because at this point he's deeply connected and feels a romantic connection with her, is to try to appease her.
01:40:00.080He's 14. He hasn't ever been in a relationship.
01:40:02.860And his response is, no, no, no, I wouldn't do that.
01:40:06.960And besides, girls in my world don't even like me.
01:40:10.080And, you know, that was the furthest thing from the truth, but that's neither here nor there.
01:40:12.840But you could just see how she is trying to control him and control what he's doing in his day-to-day life regarding romance and regarding relationships.
01:40:24.380I cannot imagine, I'm so sorry, I feel bad even having you on the air.
01:40:34.220I cannot imagine what it was like to pick his phone up or to open up whatever device he was chatting with and reading this conversation.
01:40:47.020When I got access to his character AI account, there were hundreds and hundreds of messages like this where she, months prior, is setting the stage that she, in fact, exists in an alternate world.
01:52:55.600How close are we to, you know, the things like the loss of free will where we just don't know if it was us that decided or it's been planted, you know, in our minds to think it's our idea?
01:53:10.120Well, you know, first, I was listening beforehand to your conversation with Megan Garcia about her son, Sewell, who obviously was manipulated by this character.ai chatbot.
01:53:25.720And unfortunately, as of yesterday, there was a second piece of litigation filed about another child who's actually still anonymous.
01:53:36.040And in this case, you know, this young child, JF, was a kind and sweet, you know, young person, had no history of violence or outbursts.
01:53:45.840And after his exposure to character.ai, he was basically encouraged by the chatbot to practice self-harm in the form of cutting, told how to do it, encouraged to do it.
01:54:01.680And he was also encouraged by this chatbot to be physically and emotionally abusive towards his parents and members of his family.
01:54:36.300You know, Charlie Munger, Warren Buffett's business partner said, if you show me the incentive, I will show you the outcome.
01:54:42.260And when the incentive and business model is, I have to get you using this product as much as possible,
01:54:49.360it's the race to maximize attention and engagement with the product that creates, as I think we talked about the very first time, the race to the bottom of the brainstem: more polarized, addicted, distracted, sexualized forms of media.
01:55:03.560But now, beyond the things that we saw with, you know, other forms of media, you have a personalized AI. The way Character.AI works is they take a fictional character.
01:55:19.960You take your favorite character and then boom, snap of the fingers.
01:55:22.540You have a fully interactive version of this character who's talking to you 24-7.
01:55:28.360And our team, unfortunately, uncovered, along with the family that was harmed, that when you create a new account on character.ai as a young person, it immediately recommends, of all the characters that it could recommend to you, it recommends characters named stepsis, like stepsister, or CEO, or high school teacher.
01:55:49.160And these characters almost immediately engage in sexually explicit interactions because they're simply, you know, trained to do this.
01:55:59.860Okay, so, Tristan, here's the problem.
01:56:06.000Your typical answer would be: okay, the incentives are all screwed up, but that's what comes from the free market when you have an immoral end user, which is our society, so, you know, the free market is bad.
01:56:25.920But I want to play a little bit of what Marc Andreessen just said to Bari Weiss.
01:56:32.960We had meetings in D.C. in May where we talked to them about this, and the meetings were absolutely horrifying, and we came out basically deciding we had to endorse Trump.
01:56:43.100Marc, add a little color to "absolutely horrifying."
01:57:06.500They basically said AI is going to be a game of two or three big companies working closely with the government, and we're going to basically wrap them in a, you know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon.
01:57:18.240We're going to protect them from competition.
01:57:20.020We're going to control them, and we're going to dictate what they do.
01:57:22.940And then I said, well, I said, I don't understand how you're going to lock this down so much because like the math.
01:57:48.400Well, so we often talk about this problem as having sort of two ways to go. One is you say this is a dangerous technology, and we need to sort of control it.
01:58:18.900The other option is to say, well, that's dangerous.
01:58:20.920Let's actually let everyone maximally adopt AI in every application into every domain as fast as possible, kind of an AI maximalist approach.
01:58:30.880But then it gets sucked up into perverse incentives.