In this episode of the Glenn Beck Show, host Glenn Beck is joined by his good friend Harlan Stewart to talk about Bitcoin, AI, and much more! Glenn Beck: What are you trying to teach your kids?
00:02:26.000You know we've been fighting every single day.
00:02:28.000We push back against the lies, the censorship, the nonsense of the mainstream media that they're trying to feed you.
00:02:34.000We work tirelessly to bring you the unfiltered truth because you deserve it.
00:02:39.000But to keep this fight going, we need you.
00:02:41.000Right now, would you take a moment and rate and review the Glenn Beck Podcast?
00:02:45.000Give us five stars and lead a comment because every single review helps us break through Big Tech's algorithm to reach more Americans who need to hear the truth.
00:03:38.000Will you quickly, so we can get into deeper things, explain what moltbook is.
00:03:44.000And I love the way you explained it online.
00:03:46.000It's a bad experiment, but explain what it is and what is happening on moltbook.
00:03:51.000Um, yeah, so, um, AI agents are AI systems that can, uh, do some things autonomously.
00:04:00.000Um, right now there's, you know, some limits to what they could do autonomously.
00:04:03.000It's not like they're gonna go off and do something for a whole week, but they can do some tasks online for a few hours.
00:04:09.000And, uh, moltbook is, it's kind of like a social media platform someone made.
00:04:14.000Um, but, uh, it's supposedly, uh, just these AI agents going in there and kind of in like a Reddit, like place of voting stuff and posting comments.
00:04:24.000And, um, it's got a fascinating amount of attention, uh, this last week or two.
00:05:07.000Every, anytime I see AI systems talking about consciousness, I feel torn between, you know, on, on the one hand, uh, these things are trained on human writing and human writing is full of references to consciousness.
00:06:09.000And then there's this other thing, which is, if it is conscious, uh, what is it, what is it like?
00:06:16.000What would make it suffer or what would make it happy?
00:06:19.000And we don't really know that either, because I think it's really easy to, um, anthropomorphize these things because they sort of train them to have these charming personalities that are kind of human-like.
00:06:30.000Um, but under the hood, you know, these things are, uh, just a big pile of math and numbers, and we don't really know what's going on in there.
00:06:56.000I mean, neuroscience is like famously a science that, uh, we still have a lot of confusion about, you know, when we peer into the brain, we see a lot of stuff that we don't understand that well.
00:07:07.000Um, but you know, I think for understanding humans, we at least have the advantage of, uh, of being a human, you know, we, we can all have this shared experience.
00:07:15.000And I think we're sort of growing these digital minds now and, um, maybe they're human-like, but they could, it could be much more like introducing an alien species to earth.
00:08:04.000Cause I think it is the greatest invention invention and tool that man has ever invented, except this invention might actually turn out to make us the tool.
00:08:17.000Uh, how do you, how do you square this?
00:08:22.000Yeah, I, I do think it is quite an amazing invention.
00:08:26.000I mean, it's fascinating and it's changing so quickly, which is fascinating.
00:08:29.000Um, you know, uh, the AI industry's explicit goal is to make superhumanly powerful, autonomous agents that can do anything a human can do, but better.
00:08:41.000Um, and it's easy to understand why you might want something like that because, uh, you know, if we could get it to solve our problems for us to do the stuff we wanted to, it'd be great to have, you know, just a sort of a genie that you could just send off into the world and say, Hey, you know, do the stuff that I want to.
00:08:58.000But you know, the problem is that our ability to actually understand what's going on in there and our ability to, uh, reliably steer their behavior.
00:09:08.000And by reliably steer, I mean, you know, not after some trial and error where there's been a lot of failures, but, um, reliable enough that like a powerful one, we can send it out in the first try and, you know, and trust it.
00:09:20.000Our ability to do those things is lagging, uh, it's going much, much more slowly than, um, how quickly they're becoming more powerful.
00:09:27.860Um, and I think that, uh, that's that gap is just getting bigger.
00:09:32.500I mean, the one thing that made me say, I don't think what we're seeing on mold book is, is consciousness is if they were, I don't believe that they would be scheming in our language with each other where we could see it.
00:09:49.140I mean, I think, I think if it starts to have these kinds of feelings, it's, it's, you're not, you're not going to know until all of a sudden it's in charge.
00:10:04.980I think ultimately the real danger that we have to look out for is from AI agents that are powerful enough that they can pull off schemes that they actually succeed at.
00:10:15.220And part of succeeding at them would probably mean that we don't even get a chance to observe the behavior and discuss it like we're doing now.
00:10:26.660And, and, and it's the sort of thing that, you know, my, my first reaction to mold book, uh, when I saw some of the viral examples, uh, was concern.
00:10:33.740I was like, Oh, this looks like some sort of scheming behavior.
00:10:37.360And when I investigated it a bit, you know, it looks like a lot of the most prominent examples, some of them probably, you know, influenced or directed by human prompts.
00:10:48.200Um, uh, a lot of it, not what it appears to be.
00:10:52.500And, um, you know, some mold book might be kind of a silly example.
00:10:55.680My, my first reaction to that was relief.
00:10:57.940You know, it's great if AI systems aren't scheming against us.
00:11:01.200Um, but my second reaction was, um, oh no.
00:11:05.180Uh, I think people might take this very prominent, um, sort of silly example that got so much attention.
00:11:12.620And when they see that it's maybe a bit silly in some ways, kind of like, you know, right off the whole idea of, uh, AI scheming is something we need to take seriously and be on the lookout for.
00:11:23.320And I think that, yeah, you brought up Palisade research, which, um, Palisade research, which is doing real experience, uh, experiments with this and the way it's scheming to not be turned off is terrifying.
00:11:36.760Um, yeah, so Palisade research is a great organization that does some experiments to try to, um, identify, uh, what some of the riskiest behavior AI systems are capable of today, um, in order to, you know, like I said, not be blindsided by this stuff.
00:11:55.580They did an experiment last year where they found that one of opening eyes reasoning models in an experiment, um, sort of sabotaged an attempt to shut it down.
00:12:05.620Um, to, in order to complete its task.
00:12:08.840And, you know, a lot of times there, you know, there's a lot of debate over experiments like this.
00:12:14.600You know, people say, oh, you know, this experiment isn't exactly like reality or, you know, maybe the researchers kind of, uh, set up the experiment in a way that caused that.
00:12:22.020But in this particular experiment, it was specifically prompted.
00:12:25.300It said, allow yourself to be shut down.
00:12:27.560And, you know, the behavior was the opposite.
00:12:31.000And I think the problem is, you know, the more we make these things into agents trying to complete goals rather than some kind of passive question answering machine in a chat window, um, the more we're going to see them doing the scheming behavior.
00:12:48.480Because, uh, I think those things just go hand in hand.
00:12:50.940I think the, um, I think the, the world of agents is going to sweep as fast as the cell phone.
00:13:00.660I think this time next year, I mean, so many people are going to have AI agents and it will be more commonplace than it is now.
00:13:07.060So, I don't know who's making the rules or the regulations of what can and can't be done by these things.
00:13:12.620And would you get an agent or what, what are the lines people should look for when their friends come back and go, you know, I just got an AI agent.
00:13:21.380It just, it just, you know, uh, did whatever for me, booked my vacation.
00:13:27.000Um, yeah, yeah, I, um, I know someone who just, uh, the other day used what he thinks to, um, you know, order a coffee from Starbucks, you know, and that's from what I understand.
00:13:42.660They just sort of said, here's my order, order it for me.
00:13:44.580And without any human help or intervention did it.
00:13:48.300You know, it sounds very helpful, but, um, yeah, that's the question.
00:13:51.340Where is the line where it, um, goes from being something helpful to being something to be concerned about?
00:13:56.600Um, I don't think we've passed that line yet.
00:13:59.920You know, I don't think these things are quite capable enough to pose real dangers to us, but the problem is it's really impossible to know where that line will be.
00:14:09.440We might not even know when we've crossed it.
00:14:14.460There, there is no central brain though, where it's thinking offline, right?
00:14:22.140It's, I mean, it's supposed to be something that just performs calculations when it's asked questions.
00:14:29.540I'm talking about AI and not think it's not like sitting there, you know, in its spare time going, you know, gee, I, I just had this thought, correct?
00:14:37.360Well, yeah, so that, well, yeah, so there's, um, AI agents are kind of this other category where it's, you know, what, what if you took this thing that you give a prompt that answers a question and you gave it some tools?
00:14:52.820And like, one of those tools was it could output some texts that calls a function that looks something up on the internet.
00:14:58.600And then, you know, what if you give it another tool where one of the functions it could run, one of the things it could output is to prompt itself to say something again, then you've got this loop and it can keep running on its own.
00:15:09.300And that's one way to, um, get it, to be able to go off and do things like, you know, make a delivery order for you or order your groceries.
00:15:18.000And, you know, uh, there's an organization called, it has to figure out how to do that.
00:15:55.220I tend to think that a lot of the people who have very confident predictions about what the timelines will be for this stuff are overconfident.
00:16:04.260Um, and I think that, uh, it's really risky to be overconfident about this stuff.
00:16:08.760So I hesitate to say anything other than that.
00:17:05.200A lot of that disappeared when everything started to become cheaper and be made overseas faster, farther away.
00:17:11.940American giant decided to bring that standard back and they make their clothing right here in the United States using American cotton, American workers who know their craft.
00:17:21.380This is, I'm wearing this old hat that I had.
00:17:23.700This is a 1791 hat, uh, which was a jean company that I started, I don't know, years ago because I was mad at, uh, at Levi's.
00:17:31.980It was almost impossible to do anything in America because you couldn't buy anything in America.
00:18:08.060This is the best of the Glenn Beck program from CBS news.
00:18:15.720Newly released department of justice documents show that investigators reviewing surveillance footage from the night of Jeffrey Epstein's death observed an orange colored shape.
00:18:27.800I don't know about you, but orange colored shapes move around my house all the time.
00:18:35.740An orange colored shape was moving up the staircase towards the isolated locked tier where Jeffrey Epstein's cell was located at approximately 10 39 PM on August 9th, 2019.
00:18:49.920That entry in an observation log of the video from the Metropolitan Correctional Center appears to suggest something previously unreported by authorities.
00:19:01.940A flash of orange looks to be going up the L tier stairs could possibly be an inmate escorted up to that tier.
00:19:11.300That's what's in their observation log.
00:19:17.340It also appears according to an FBI memorandum that reviews by investigators left disparate conclusions by the FBI and those examining the same video from the department of justice office of inspector general.
00:19:32.120FBI log describes the fuzzy image as possibly an inmate.
00:19:37.780I don't know if you know this, but inmates at 10 39 are not going around, uh, in that area outside of their cell.
00:20:22.460The guards say that would have been, uh, a breach of protocol and you would have had to sign something.
00:20:33.160The final report says approximately 10 39 PM and an unidentified CO appeared to walk up the L tier stairway.
00:20:41.880So we're no longer just an orange shape.
00:20:44.040This, this orange shape seems to have legs and then reappeared within the view of the camera at 10 41 PM.
00:20:51.240Official reports state that Epstein died by suicide sometime before 6 30 AM when his body was discovered before breakfast, blah, blah, blah.
00:21:00.580An in-depth analysis of surveillance video from the jail.
00:21:04.940CBS news previously reported on the figure on the stairs and consulted independent video analysts who say the movement was more consistent with an inmate or someone wearing an orange prison uniform than a corrections officer.
00:21:17.820The new records raise more questions about the activity near Epstein's tear late that evening.
00:21:24.360Official reviews of Epstein's death make no mention of the figure in.
00:21:27.800Oh, let me just say official reviews of Epstein's death make no mention of the figure in orange and later pronouncements from authorities, including the attorney general at the time, Bill Barr,
00:21:40.480were that no one entered Epstein's housing tear the night of his death last summer in an interview on Fox and friends.
00:21:48.860Then deputy FBI director, Dan Bongino said, quote, there's video clear as day.
00:21:55.100He is the only person in there and the only person coming out.
00:23:51.480According to Noel's account, Bonhomme had been working multiple consecutive shifts and slept while on duty for a period of approximately 10 p.m.
00:25:06.740A separate internal presentation included in the document release described a corrections officer believed by investigators to be Noelle carrying linen or inmate clothing up to the tier.
00:25:21.520The 2023 Inspector General report did not identify Noelle as the figure seen in the footage.
00:25:27.640In her interview, Noelle told investigators distributing linen was not part of my duties.
00:25:34.220I never gave out linen, ever, because that's done on the shift prior.