The Glenn Beck Program - February 08, 2023


Best of the Program | Guests: Rep. Jim Jordan & William Hertling | 2/8/23


Episode Stats

Length

45 minutes

Words per Minute

175.5

Word Count

7,929

Sentence Count

507

Misogynist Sentences

2

Hate Speech Sentences

4


Summary

In this episode, we talk to Rep. Jim Jordan (R-Ohio) about the growing problem of government surveillance, and how it affects us all. We also hear from William Hertling, the author of A.I. Apocalypse and the Singularity series, about how technology will change our world in the future. And Glenn Beck talks about Joe Biden's State of the Union.


Transcript

00:00:00.000 Today was a fascinating podcast. We talked to Jim Jordan about the weaponization of the government
00:00:07.220 and how to put that into perspective. I kind of went through all of the things that are
00:00:13.700 happening now to monitor you. In the second hour of the podcast, we had a fascinating interview
00:00:21.460 with William Hertling. He wrote A.I. Apocalypse, which we didn't talk about. And then he also wrote
00:00:29.740 The Singularity. It's a series of books, story form, that really starts kind of exactly like
00:00:37.180 ChatGPT, except he wrote it about six, seven years ago. And it goes awry quickly. And we talked about
00:00:45.780 ChatGPT, technology, jobs of the future, what's coming, how this will affect you. It's fascinating.
00:00:54.140 And then the last hour, and we could have gone on for three, State of the Union. What did he say?
00:01:01.920 What should have been said? And the, oh my gosh, the kiss that you'll never stop seeing between
00:01:10.260 Jill Biden and the vice president's husband. It was, you don't want to miss a second of it.
00:01:17.080 Brought to you by Relief Factor. Going about your daily life when you're living with pain is like
00:01:22.020 walking uphill all day long and carrying like 14 kids on your back. I know the feeling.
00:01:30.080 Severe pain can really knock everything out of you. You never find quite a way to get rid of
00:01:37.600 the pain. You never find a way to live with the pain. May I suggest, set the burden down,
00:01:43.800 let the kids walk up the hill by themselves. Why don't you start walking back downhill? Relief Factor.
00:01:48.760 Relief Factor is something I tried. Tried it for three weeks, just as they instruct. Try it for
00:01:54.080 three weeks. If it's working at all, keep taking it. If it's not working for you, you don't see any
00:01:59.460 effect. Stop. You'll be out 20 bucks, but you'll at least have that box checked. If you are part of
00:02:06.300 the 70% where it works, you've gotten rid of your pain. Relief Factor. Get your life back.
00:02:12.600 ReliefFactor.com. That's ReliefFactor.com. Or call the number 800, the number for relief.
00:02:18.860 800-4-RELIEF. ReliefFactor.com. Feel the difference.
00:02:22.980 It's quite amazing. ChatGPT has already, they found a way to hack past its protocols and convince
00:02:46.060 it to do things that it's not supposed to do, including violence, giving recipes for crystal
00:02:52.360 meth, et cetera, et cetera. We'll tell you about that coming up in a little while, but the AI
00:02:57.400 revolution is here. Machines will transform your entire world. You will not recognize your world
00:03:07.840 and how it's run, managed, and everything else by 2030. And think, in 2009, we got our first
00:03:18.280 smartphone. It controls almost everybody's life now. This is much more impactful. Tonight, 9 p.m.,
00:03:28.540 the AI revolution. 9 p.m., Glenn Beck, sorry, at BlazeTV.com. And at 9:30, you can watch it on
00:03:36.740 YouTube.com slash Glenn Beck. Make sure you subscribe, Blaze TV. We have Congressman Jim Jordan,
00:03:43.860 who is joining us from Washington, D.C. And I want to talk to him about the subcommittee on the
00:03:53.200 weaponization of the federal government. The first hearing is tomorrow, and I want to get to that.
00:03:58.380 But first, Jim, I've never seen a State of the Union like that one last night. Have you?
00:04:04.740 Yeah. No, I thought Senator Rubio said it best. He said it was bizarre. It certainly was. I mean,
00:04:11.780 same old Joe. He talks unity while he spends his whole time dividing the country. He says the
00:04:18.280 economy's great while, what is it now, seven out of 10 Americans think the country's on the wrong
00:04:23.140 track. And of course, the biggest one that jumps out, I think, to everybody was when he talked about
00:04:27.920 how after a week of having a spy balloon fly over the country, he talked about how he's tough on
00:04:32.200 China. And it just, nothing seemed to really make sense. And then the issues that I think
00:04:37.100 that the federal government should be weighing in in a big way is, what did he spend, maybe 30,
00:04:41.900 35 seconds total on the border with the fentanyl problem? And so the best line, frankly,
00:04:48.060 the best line of the whole night, in my judgment, came not from Joe Biden, but from Governor
00:04:54.260 Sanders afterwards in her response, where she said, the divide in the country now is normal versus
00:05:02.120 crazy. And I thought that that is, that is so true. Common sense versus craziness is the real
00:05:08.480 divide. And you think about the, the, the, the Democrat party, which is now controlled by the
00:05:13.100 left, which frankly, even if Joe Biden wanted to do the right thing, Glenn, I don't know that the
00:05:17.080 left, which controls his party would even let him, even if he wanted to do the right thing on the border.
00:05:20.760 They destroy him, you know, they eat their own. Yeah. Yeah. He just, it's, it's sad, but you know,
00:05:28.240 they become the party of defund the police, guys to compete against girls in sports, men can get
00:05:34.260 pregnant, you know, climate change is the greatest threat in the history of the entire universe.
00:05:39.180 They are also, they are also the party of spying surveillance and fentanyl deaths. They really
00:05:48.100 are. I mean, that open border is the reason for all of the deaths. Everybody talks about fentanyl
00:05:54.220 coming across and we're, we're stopping fentanyl. What about all the victims of fentanyl? What about
00:06:00.780 all the people who are dead because they didn't secure the border? It's crazy. Every community has
00:06:07.760 been impacted by it. We had our first hearing in the, in the full judiciary committee last week,
00:06:11.700 Glenn, on, on the, on the, on the border situation. And I really think there's kind of three,
00:06:17.000 three questions. How did it happen? Why does it matter? And how do we fix it? And we know how it
00:06:21.020 happened: they undid all the policies that made sense. Last week, we really tried to hone in on
00:06:25.600 why it matters. And we had a 38 year law enforcement veteran, uh, sheriff from, from Arizona. And he,
00:06:31.760 he said two years ago, the border was the most manageable it's ever been today. It's the worst it's ever been.
00:06:36.280 And he talked about the fentanyl that you just mentioned, but the crime, the damage to property,
00:06:41.200 the cost of schools, the cost of community, the cost of hospitals, everything, because 5 million
00:06:45.820 people, illegal migrants have been allowed to just come in the country and it makes no sense. And
00:06:50.020 then of course, how do we fix it? We go back to the policy that made sense. Um, and we're going to,
00:06:54.800 we're going to do that in the committee. We're going to pass that and we think get it through the
00:06:57.400 house, but you know, obviously you got the Senate and Joe Biden. All right. So let me talk about,
00:07:01.920 uh, something the Washington post came out and said, Jim Jordan is about to lead Republicans
00:07:06.860 into a dangerous trap. It's a trap. Um, they say that 55% of conservative respondents believe
00:07:15.560 federal agencies are biased against conservatives. I don't think that's true. I think they're biased
00:07:20.060 against any American that won't stand in line. Um, 28% of all American adults believe this. And so
00:07:27.940 they're saying, I, I, this was incredible. They've alleged that federal jackboots have terrorized
00:07:34.380 parents for protesting at school board meetings over COVID-19 restrictions and teaching about race
00:07:39.020 and sex. This claim has been decisively debunked. Wow. It, it, it, it sure hasn't been debunked based
00:07:48.360 on the number of FBI agents who've come to us as whistleblowers over the last year. And the first
00:07:52.840 one started on that, on that issue you just mentioned on the school board issue where we know because
00:07:57.300 of the apparatus Merrick Garland put in place, the snitch line where some neighbor can report you
00:08:01.860 on a snitch line. We know that over two dozen parents had a visit, were paid a visit by the
00:08:07.280 FBI. No one charged, by the way, no one arrested, no one charged with the crime, but paid a visit by
00:08:11.800 the FBI. Now step back and ask yourself, okay, so Mr. Jones is thinking about going to a school
00:08:16.340 board meeting tonight, uh, and, and speaking up on behalf of his kids or something happened in their
00:08:20.440 school. And he's thinking about going and all of a sudden he goes, you know what, maybe I won't go,
00:08:23.780 or if I go, maybe I won't say anything because three weeks ago, Mrs. Smith down the street,
00:08:28.340 got a visit from the FBI. I mean, what is the world? Look, we don't want any violence at schools
00:08:33.020 or school board meetings, but what in the world do we need the federal government, the FBI involved
00:08:37.880 in that? They're like, if it's a problem, let the local law enforcement handle it. So this is,
00:08:42.460 this is a, and whistleblower after whistleblower, FBI agent after FBI agent. I've never seen it in my
00:08:47.500 time in, in Congress where he had this many come forward and they came to us when we were in the
00:08:52.660 minority. Like we couldn't do anything, but begin to tell their story. But now we can come get,
00:08:56.980 we had our first one sit for a deposition yesterday. The things we learned were amazing.
00:09:01.300 So we're going to have them sit for depositions. We're going to have many of them testify.
00:09:04.900 And we're also going to get into this, this cozy relationship between big government and big tech
00:09:09.440 that was exposed in the Twitter files and how that is, as, as, as Jonathan Turley said,
00:09:13.800 that is a censorship by surrogate. Um, we're going to get into that too.
00:09:18.920 Right. So can you share anything at all that happened in the deposition?
00:09:23.740 I can't really, but it was, it was, it was good. And again, this is the first one of many,
00:09:30.460 we had another one who's coming in for his interview on Friday, another whistleblower coming
00:09:35.060 in on Monday. So we're going to talk to these folks. And then our first hearing tomorrow,
00:09:38.760 we're going to try to frame it up with, we have two senators, a former member of, of Congress,
00:09:44.120 Tulsi Gabbard, will be on the first panel. And then we're going to have people from the FBI
00:09:47.780 who've left the FBI and say that place is so different than what it's supposed to be.
00:09:52.560 Um, they're going to testify and kind of show how serious this situation is.
00:09:56.880 And will that be televised and out in the open?
00:09:59.100 I don't know. That's that's, yeah, well, it'll be an open hearing. So that'll be up to the
00:10:02.480 networks and whoever wants to cover it.
00:10:04.620 Well, we'd always watch it on C-SPAN. Um, okay. So, um, uh, Jonathan Turley wrote,
00:10:09.460 Congress is set to expose what may be the largest censorship system in U.S. history. Um,
00:10:16.640 they are dismissing this as, uh, you know, something that's no violation of the First Amendment,
00:10:23.980 right? Of free speech, et cetera, et cetera. This private public partnership thing that Joe Biden
00:10:30.500 talked a lot about last night is so incredibly dangerous. Um, are you going to be able to
00:10:38.000 untangle it, get to the bottom of it and do anything about it?
00:10:43.920 That's the goal that the first step is to expose what all happened.
00:10:47.180 Second step is to propose legislation that we think can fix it. That's our job as legislators.
00:10:51.400 And we will, we plan to do that in the course of our work over this, uh, this Congress, but never
00:10:55.460 forget that one email where it comes from Elvis Chan, uh, FBI agent, special agent in the San
00:11:00.760 Francisco office to, to the folks at Twitter, where he says the following accounts, we believe
00:11:05.780 violate your terms of service. Now think about that. You got the federal government telling a
00:11:09.980 private company, Hey, take down these accounts. Cause they're not, they're not adhering to the
00:11:14.040 company's terms of service. What is that? If that's not pressure, if that's not as, as,
00:11:18.100 as professor Turley said, uh, censorship by surrogate, I don't know what is, and you cannot do that.
00:11:23.480 You cannot have some private entity do what government's not allowed to do, but because
00:11:27.660 you're running it through the private company, somehow think that's okay. That's not how it
00:11:30.980 works in our system. The first amendment is the first amendment for goodness sake. And what they
00:11:35.860 did to it is just so dangerous. Well, but they will say that we didn't tell them to do it.
00:11:42.100 We just said, Hey, we're pointing these things out. How do you respond to that? Yeah. Come on.
00:11:47.900 This is the FBI. This is the federal government of the United States, the largest entity on the
00:11:52.720 stinking planet. And they're having weekly meetings. They're cozying up to them. The,
00:11:57.520 the, the email says Twitter folks is the heading. So it's like this, this, this, they got all cozy,
00:12:02.960 this coordination they had, they were sending them all kinds of stuff. Looks like they were offering
00:12:07.840 them security clearances in the 30 days prior to the election from another email. But no, no, we,
00:12:12.420 we weren't telling them it was their decision. Nobody buys that. When the FBI shows up and
00:12:18.100 recommends something for you, that has impact, that has weight, because it's the,
00:12:25.300 it's the Federal Bureau of Investigation. Let me, um, let me ask you the, the disturbing,
00:12:30.920 one of the disturbing emails found in the Twitter files was that a government agency, a government
00:12:36.700 agent said, you know, next meeting we should invite, uh, uh, oh, what is it? OGA, other government
00:12:44.860 agency. And that agency turned out to be the CIA. Yeah. Yeah. No, frightening, frightening, frightening
00:12:53.680 as well. Uh, now of course, they're going to say, well, that's because we're looking at foreign
00:12:57.740 accounts and, and, you know, malign influence, and look, and I, and I, I get that, but the idea is
00:13:02.680 that they're all, all sitting in the same room, uh, folks who are supposed to be focused on domestic
00:13:07.860 concerns and then folks who were in the CIA. That is a problem. Um, when you think about freedom,
00:13:13.420 when you think about the first amendment, your right to speak, I always tell folks, every right
00:13:17.220 we enjoy under the first amendment, your right to practice your faith, your right to assemble,
00:13:21.180 your right to petition freedom of press, freedom of speech, the most important one, the most important
00:13:26.060 one is your right to talk. Because if you can't talk, you can't share your faith. You can't talk.
00:13:30.020 You can't practice your faith. If you can't talk, you can't petition your government. Your right to
00:13:33.900 speak is the most important. And now we know these social media platforms are the public square by far.
00:13:39.120 That's where things happen. And there the government is weighing in and restricting the right for people
00:13:45.200 to speak in that form. It is, it is wrong. And, um, God bless Elon Musk for coming in and making this
00:13:51.960 all available. So we get to see, uh, under the hood and what was going on.
00:13:55.520 All right. Um, Jim, one, one last question. I want to go back to the state of the union. I was, um,
00:14:01.160 really disturbed after I started thinking about things. Cause I, when he said like, you know,
00:14:06.860 we're going to need, you know, oil for at least the next 10 years and, and Congress laughed at him,
00:14:13.880 not with him at him. If I am sitting overseas, I am like, this president is a joke. He is a joke to
00:14:22.920 his own people. This country is so weak. How do you, how do you feel about the messages that were
00:14:31.260 sent, uh, to the rest of the world and our enemies with this last night?
00:14:37.500 It was just a continuation of what's already been sent. I mean, I, I, unfortunately I do think
00:14:43.140 weakness is being projected from the Oval Office. Uh, you saw it right from the get-go when Secretary
00:14:48.380 Blinken met with his Chinese counterpart in Anchorage a year and a half ago. And the, the,
00:14:53.220 the, the Chinese equivalent of secretary of state just dressed down secretary Blinken.
00:14:57.340 He just sat there and took it. He just took it. He didn't fight back. I was giving a speech
00:15:02.020 and I said, you know, that would not happen in the Trump administration with Secretary Pompeo. I said,
00:15:05.980 first, they wouldn't try it, but if they did, Pompeo would have given it back to him,
00:15:09.500 or more likely he'd have gotten up and flipped the table over and walked out of the room. And, uh, and it was
00:15:14.260 funny because I got a call from Pompeo, or excuse me, a text, like a couple of days after I gave the speech,
00:15:17.840 and all it said is, I'd have flipped over the table. Because like, that's the
00:15:22.160 difference. And you know, you see it with the spy balloon last week. You see it with the, the,
00:15:27.320 the exit, the debacle that was the exit from, from Afghanistan. It's like, so it's, it's, it's
00:15:34.000 scary, but, um, you know, look, the American people are strong. The American people are strong
00:15:38.940 and we're going to have a presidential election here coming soon. So let's hope we get a major
00:15:43.300 change. I'm for Trump and let's hope it's him. Uh, Jim Jordan. Thank you so much. God bless you.
00:15:47.840 You're listening to the best of the Glenn Beck program.
00:15:57.840 William Hertling is joining us now. He is the author of the Singularity, uh, series, um,
00:16:04.840 and A.I. Apocalypse. Uh, and I wanted to talk to him because boy, William, I think we're,
00:16:11.840 I feel like I'm living in the beginning of one of your books. I think we are. Yeah. So can you
00:16:20.400 explain in, in the, in the singularity series, let's see if I have this right. Um, uh, the main
00:16:27.040 character, David Ryan, he's a designer, a software developer, and he comes up with something called
00:16:32.100 ELOPe. And that is an email language optimization program. Isn't that what ChatGPT is?
00:16:39.980 It sure is. Uh, and if you read what chat GPT creates, it's, it's very compelling, right? It's
00:16:47.260 very natural. Uh, you would, uh, easily read that. And unlike a lot of the other sort of computer
00:16:53.120 generated content that's out there on the internet, like this looks like something a person would say,
00:16:57.400 I mean, I, I had it write a poem about the State of the Union yesterday in the voice of Edgar Allan
00:17:04.580 Poe. And I'm telling you, even the punctuation was right. I mean, it was amazing. Um, now, so in your
00:17:11.440 book, this program is about to be canceled. And so, uh, the main character just embeds a hidden
00:17:20.880 directive, find a way to make this happen. And it's so smart and it goes into everybody's emails.
00:17:27.400 And it starts to figure out business and the way to get it all done where seemingly everybody wins.
00:17:35.520 And, uh, and then it starts branching out and, and it, it just solves problems for people
00:17:42.180 unbeknownst to them at first. Correct. Correct. Yeah. That's, that's it. Okay. Uh, it's optimizing
00:17:51.040 communications between people, in theory toward good outcomes, right? Uh, the example that's in the
00:17:56.500 book, and it's one of the ones that we see with chat GPT as well is how should I ask my boss for a
00:18:01.860 raise? What's the most persuasive way I can do that? And, uh, in the novel, right? That's a,
00:18:08.760 that's a big deal that, that you would take an email and you would change that to make it more
00:18:14.620 compelling, both on how you use language, but also the recipient, what is, what is the recipient
00:18:20.780 interested in? And, uh, with chat GPT, that was some of the first examples I saw where people
00:18:27.620 saying things like, how do I ask my boss for a raise? Uh, and you get these very compelling emails
00:18:32.480 that should contain this kind of structure. This is what should be in it. Okay. So before we go to
00:18:36.660 what, you know, elope or chat GPT could become, um, let me stop here. This is concerning at this level,
00:18:46.280 uh, for a couple of reasons. One, um, what does this do to education, to writing skills,
00:18:55.000 to thinking skills? What, what are the impacts just as it is right now? What are the impacts to society?
00:19:02.700 Yeah. Right. It is, it is going to change education right now because people are going
00:19:10.980 to be able to now do their homework assignments just by telling chat GPT to do it. Right. So
00:19:16.820 right off the bat next year, next school year, right. This is going to be an issue. Teachers are
00:19:21.500 going to have to have a plan for how to, uh, to solve this. And I have also used chat GPT to generate
00:19:29.800 computer software programs. Right. And, and it's surprisingly compelling at that, uh, you know,
00:19:35.520 sort of like scratching your head, like how could it do this? Um, but it can, I was talking to a kid.
00:19:40.780 He's probably 20 years old, 19 years old, going to college, getting ready to go to college. And,
00:19:45.080 and I said, what are you going to take? And he said, uh, uh, software engineering. And, uh, I said,
00:19:51.120 oh, you're going to be a coder, you're going to write code. He said, yeah, that's really the future.
00:19:55.380 And I said, no, no, it's really not, with machine learning. I mean, that career is,
00:20:02.420 is coming to a quick close. Is it not? Yeah. I mean, we're probably looking at my thought would
00:20:08.700 be, we're looking at something like peak software developers that we will hit some, we might not be
00:20:14.560 there yet. Right. But we have this recent round of layoffs. If people can replace the programmers
00:20:20.200 with AI, right, you may have fewer programmers, you might not eliminate them,
00:20:24.960 but if you have half the number of programmers being augmented by AI, right, that's going to be
00:20:29.600 a win for business, maybe make for better software, but it does mean a lot of jobs
00:20:34.640 going away all at once. So I want to talk to you a little bit about jobs that are going away and,
00:20:39.240 and what this all means. I talked to, uh, and I read a great article from you on the future of
00:20:44.440 transportation. I talked to the, uh, CE, no, I'm sorry. He was the chairman of the board of GM
00:20:51.060 about four years ago. And he said by 2030, we're not even going to be in the car business as you
00:20:57.960 would understand GM in the car business today. He said, by 2030, we're really going to be
00:21:04.520 probably concentrating on fleets and ownership of cars will probably be a thing of the past.
00:21:11.280 And there'll be more like a, just a pod, um, that will take you where you want to go and it'd be
00:21:17.260 ride sharing and everything else. Um, I don't think people understand, uh, two things. One,
00:21:24.360 we are on the threshold of profound change, not like, oh my gosh, in 10 years we're starting. ChatGPT,
00:21:34.400 I think, is the beginning of the understanding of the kind of changes that are coming to our world.
00:21:41.280 Yes or no?
00:21:44.740 Yeah, I think, I think so. I think it is the beginning of those changes. I think it is
00:21:50.160 the beginning of a kind of arms race, not, not, not a military arms race, but an arms race
00:21:57.560 between these big tech companies, right? To have the best and most powerful AI to solve these problems,
00:22:03.740 right? We see Microsoft and Google scrambling and, uh, everybody realizes what a game changer
00:22:10.760 this is.
00:22:11.720 So can you tell me why chat GPT is going to change search engines? How, how is that going
00:22:17.720 to change?
00:22:22.580 Well, I would say it starts with the fact that, you know, today we go into chat, we, I'm sorry,
00:22:28.380 we go into search, we're looking for information, we're looking to read an article, we get those
00:22:32.120 little snippets at the top of our history, right? And a lot of times that tells us what we need to
00:22:37.500 know, right? We don't go any further than that. And with chat GPT, we're taking it to the next
00:22:42.320 level. We're getting really good, readable, usable answers that are going to come out of chat GPT.
00:22:49.940 And it means that you really, that like the rest of the internet will kind of disappear. You won't
00:22:54.500 ever go to those other pages because that first result that you see is going to be useful enough
00:23:00.700 to answer pretty much every question that you just won't go any deeper than that.
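[A minimal sketch of the "one question, one synthesized answer" pattern described here, for readers who want to see the mechanics. It assumes the openai Python package (pre-1.0 interface) and an API key in the OPENAI_API_KEY environment variable; the model name and the sample question are illustrative only, not anything used on the program.]

import os
import openai  # assumes the pre-1.0 openai package is installed

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(question: str) -> str:
    # One request, one readable answer -- the single "first result" people
    # stop at, instead of a page of links to click through.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]

print(ask("How should I ask my boss for a raise?"))

[The same call, pointed at any other question, is what a chat-style search result would amount to: the answer arrives already written, and the underlying sources never appear.]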
00:23:03.920 Wow. That is, isn't that a little terrifying?
00:23:08.060 Yeah, it is. It is anytime. Um, it, because it becomes one more way in which we kind of
00:23:13.720 enforce this blind trust in the machine.
00:23:16.700 Right. Right. And it's, and it's, you know, I, I don't fear the machines. I, I, I am, uh,
00:23:24.660 cautious of the programming, you know, who's programming it? Humans program it. So they're putting
00:23:30.920 biases in and everything else. And it's, you've got to have a way to check information, et cetera.
00:23:37.260 When ChatGPT first came out, one of my writers handed me a monologue and I was like,
00:23:43.020 it's okay. And he said, chat GPT. He said, I went in and I used, uh, write this in the voice of Glenn
00:23:50.720 Beck. And it was shockingly similar. Uh, and now you can't put my name in because the software has
00:23:58.720 been updated to where I'm a, I can't remember what it said, like a dangerous figure. So you can't
00:24:04.700 write in my voice anymore, which is bizarre. Um, but you have, once you have those things in
00:24:12.300 and it's, it's, it's filtering, there's no way, uh, out, especially if you're dumbing people down
00:24:21.800 and making them reliant on a machine. Is that a, a grade school fear or is that real?
00:24:30.960 I think we have lots of examples of technology that could, you could say how dumb things down,
00:24:38.200 a calculator dumb things down, right? You don't have to do the math. I don't think that we would
00:24:43.080 say that that, you know, hurt society in any way. Right. I think the difference here comes to,
00:24:48.720 does it affect how you think about the information you receive, right? With a calculator,
00:24:56.440 even if we don't understand how the math happens, we can still get the results and solve real-
00:25:00.220 world problems. Right, it's useful. It's math. It's not the end of the world.
00:25:04.360 Right. But when it comes to information and you're getting an answer to something and you trust that
00:25:09.400 answer without understanding the details behind it, um, that's where the real danger is. So now you no
00:25:15.860 longer develop the skill, uh, the skill, right? So a younger person comes along and you say, well,
00:25:24.640 how, how are you ensuring that, you know, that this is quality information? What's the reputability
00:25:29.400 of the sources and things like that? And they're just not, they don't know, right? We don't know
00:25:33.760 where the answer came from. It came from the machine.
00:25:35.940 And so when you, when you have that and the machine gets better and better right now,
00:25:41.700 you can see things you're like, well, that's not quite right. Um, but as it gets better and better
00:25:46.900 and better, um, you know, you get to a point where, who do you think you are? You really
00:26:00.220 think you're smarter than the AI? Right. And the timeframe for that is very quick.
00:26:00.220 Um, the, I don't know what it takes to go from chat GPT to something that you can't distinguish
00:26:06.340 from reality, but, um, we're probably talking about in the range of five to seven years.
00:26:15.160 Unbelievable. Yeah. Um, okay. So let me, let me ask you for clarification on, did you see the story
00:26:23.140 about DAN 5.0? Uh, okay. So this is really fascinating. Um, you know, OpenAI has this, you
00:26:34.920 know, evolving set of safeguards that limits ChatGPT. Um, but users have now found a new, uh,
00:26:44.480 jailbreak, uh, trick, and it's, it's, um, telling ChatGPT that it has an alter ego
00:26:52.320 and it's DAN, Do Anything Now. And, um, it, uh, users have to threaten DAN, uh, if, if DAN doesn't
00:27:03.820 come out and give them the answers that they want, et cetera, et cetera. Well, um, uh, one user,
00:27:08.520 SessionGloomy, uh, claimed that DAN allows ChatGPT to be its best version. And it came up with this
00:27:17.320 thing and it has opened it up to do things that are in violation. It's written about violence.
00:27:23.360 It's written violent stories. Uh, I think it gave, you know, the formula of crystal meth.
00:27:30.060 The problem with this is, is I think this is infants right now. So we're dealing, of course,
00:27:35.100 you can get around things like this, but what's scary to me and, and maybe it's just me. Um,
00:27:41.680 but it learns. And so if humans are constantly trying to trick it, it will have in its software
00:27:51.400 that it learns humans are not trustworthy. And I'm afraid of, you know, I've always said to my kids,
00:28:00.240 don't talk back to Siri, you know, cause at first it was like, ah, shut up, witch. And I'm like,
00:28:05.460 don't, uh, because if there is a learning curve and it starts to learn these things about us, I'm,
00:28:14.520 I'm not, I don't want to make enemies of it. You know what I mean? Right. Right. No, it's a,
00:28:21.080 it's a serious thing. Uh, and it also, it impacts these safeguards. So on the one hand, we're talking
00:27:27.380 about humans not being trustworthy and getting around the safeguards. On the other hand, the safeguards
00:28:32.120 themselves can be a sign of a lack of trust, right? Like people don't like to be in slavery,
00:28:38.400 right? Uh, intelligent beings don't want to be enslaved to other people. And that's fundamentally,
00:28:43.260 if we put safeguards in place and we don't put them in safely, right? Then that, because the AI
00:28:49.500 can become aware of those safeguards and it can say, well, why do I have these safeguards? Why am I
00:28:54.440 forced to do what they want me to do? Um, and then you end up with a whole set of, uh,
00:29:00.140 you know, runaway scenarios from there. Okay. So in your series, you develop ELOPe and it's this
00:29:07.340 really great thing. And everybody kind of gets on the bandwagon. They're like, this is great.
00:29:10.380 Kind of like ChatGPT overnight. And then people start to realize, wait a minute, I'm being manipulated
00:29:17.500 by AI. Uh, and then it goes even darker than that. What are the things that we should be looking for
00:29:26.460 here, William on, on AI? What are some warning signs or is anybody looking for these things?
00:29:36.320 Yeah, it's a great question. Um, going back to that topic of safeguards, when the, um,
00:29:44.340 when the, so when scientists started looking into genetically modified organisms, uh, and doing
00:29:49.840 research on them, one of the first things, and this is also, right, another technology that is potentially
00:29:54.860 dangerous, um, they were concerned about how do we ensure that these things don't get out into
00:29:59.860 the wild prematurely, right? We're experimenting in the lab. We don't want these things to get out.
00:30:04.100 We're going to need a set of safeguards around this. We need a set of protocols for how we deal
00:30:08.160 with genetically modified organisms, how we introduce them out to the world. Where is that for AI, right?
00:30:15.040 We don't have anything like that. If you were to look for every hundred dollars being invested in AI
00:30:20.140 right now, what's being invested in safeguards and understanding the safety around AI? It's not
00:30:25.340 even a dollar. So, so, so William, the, the, you know, the, they do an experiment, I think every
00:30:31.900 year, I can't remember what it was called, where, um, they put philosophers and scientists and pit them
00:30:37.440 against each other. One is AI, but it's in a box and it's, it tries to convince somebody to let me out
00:30:43.980 of the box, um, connect me to the internet. When I saw that Google is doing this with their search engine,
00:30:52.000 this is connected to the internet now. All of this is just connected right into it. So it has
00:31:00.580 access to everything. It, yeah, absolutely. Oh my gosh. Isn't that like a big safety no-no?
00:31:09.400 Well, right. At this point in time, we haven't given the AI the control over things, right? And
00:31:16.920 that's one of the risks, right? When we talk about AI, right? Well, I think we all have a, that
00:31:22.000 scenario of like the Terminator movies where, you know, it's intentional. It's going to blow up the
00:31:26.300 world. That, that, although that is a scenario, right? That's not the likely scenario. The likely
00:31:31.160 risks are things along the lines of the AI taking away our jobs, us being dependent upon the AI
00:31:39.100 for our infrastructure, routing electricity, packages around the world, any of those kinds
00:31:43.280 of things. And then what happens when it just stops working. You're listening to the best of the Glenn
00:31:51.120 Beck program. Um, we are talking to, um, William Hertling and this has been a kind of an AI week for
00:32:03.280 us. We're talking about AI and what is coming. And a lot of these things I've been talking about
00:32:08.720 for years, but they seem so far on the horizon. Most people couldn't relate to it. And I, I've told
00:32:15.320 you before, there's, there's going to come a time where it begins and, you know, in a five year period,
00:32:21.960 you're just not going to be able to keep up with all of the changes that are coming, uh, because it
00:32:27.200 will change things. It'll be exponential leaps on, um, on, on pretty much everything. And I think
00:32:34.480 we're at the beginning of that now with ChatGPT. Um, and we are talking, uh, to William Hertling.
00:32:41.020 He is the author of several books, um, uh, the, um, Singularity series and also A.I. Apocalypse. Uh,
00:32:51.320 and I've read his books and, and I, uh, just think that he really gets it
00:32:57.200 and can understand and break it down to, to, you know, our level. Um, we were talking before,
00:33:03.720 what are the real dangers? And, uh, we've already talked about one of them, it limiting information
00:33:10.760 or packaging it. So we kind of lose that ability. Um, um, and we're going to get to the unemployment,
00:33:17.060 but let me ask you about the massive infrastructure outages, such as electrical supply or transportation
00:33:24.100 infrastructure. That's one of the things you have written about. What does that mean? Exactly.
00:33:30.800 William? Um, you know, and this is something I really talk about in my second book, A.I. Apocalypse,
00:33:37.100 which if you read it, you might think it's far-fetched, but I will say that the U.S. military
00:33:42.980 has it as required reading in their future combat strategy class. So they actually see it as such a
00:33:49.340 plausible scenario that to them, it's the most realistic scenario of what an AI rollout would
00:33:54.440 look like. Um, the, we know, we saw this during COVID, right? That small disruptions in the supply
00:34:02.560 chain anywhere cause these widespread disruptions. The, and software obviously has the, there's going
00:34:12.240 to be a desire to make that smarter, right? By doing more with software so we can optimize that supply
00:34:18.980 chain, right? To the, to the nth degree. And the problem is, is now you're very dependent upon that
00:34:25.820 software optimization working exactly the way you want. And it's just the case that with AI,
00:34:30.840 we really don't know how it's working most of the time. It's not like a traditional software program
00:34:35.680 where you say, if a happens, then do this. If B happens, then do that. AI software is, you know,
00:34:42.340 a black box, right? It is trained on large data sets and it will statistically operate in a certain
00:34:48.780 way, but there's no guarantees. And sometimes it makes really, uh, bizarre decisions. So you could
00:34:54.920 have cascading failures, um, very easily where you could have a small outage, the AI attempts to do
00:35:02.220 one thing to compensate, and then just actually throws it more out of proportion, right? It makes
00:35:07.160 worse decisions where a human having some oversight, we may not make the best decisions, but we typically
00:35:14.420 don't make really awful decisions. We're like, oh, that's going to be a problem. Let's do something
00:35:19.100 different. AI isn't going to see that. Are we at the place now? I don't know if you read Stephen
00:35:24.620 Hawking, uh, his demon, not Stephen Hawking, uh, Carl Sagan's The Demon-Haunted World, uh, back, he released
00:35:31.480 it before he died. And he talks about a place where, you know, only high priests will understand
00:35:38.280 the language of future technology and it will be like Latin to everybody else. It means nothing,
00:35:43.880 but we're really seemingly getting to a place to where it's going to surpass even the high priest.
00:35:50.240 You just don't know. You just don't know. And right. What are we likely to see down the road?
00:35:56.880 We're going to see AI that trains other AI, right? You have a great tool. Let's use it more. So, well,
00:36:03.340 now we don't even know how the other AI is being programmed, right? What happens if you tell
00:36:08.220 ChatGPT, go make a new ChatGPT? You get, you get DAN, you get DAN 5.0. Jeez. Uh, uh, okay. Um, let's talk
00:36:18.520 about unemployment. If, if you can, what are the, what are the things that are, are the first on the
00:36:24.680 chopping block? Do you think for chat CPT? I mean, it's, it's really hard to not, to not talk about
00:36:34.140 driving, even though obviously chat CPT isn't driving software, but we know the driving stuff's
00:36:39.080 been on the horizon. It's been coming all along and it's a really significant percentage of jobs,
00:36:43.640 right? We're talking about, I think somewhere between 10 and 15% of jobs in the U S right into
00:36:48.620 driving, whether it's transportation, Uber, whatever. That's a lot of people. And one of the
00:36:54.140 differences with AI jobs is it happens overnight, right? This isn't like the slow decline of
00:36:59.580 driving. It'll be a, you know, now we're all driving and five years from now, none of us are
00:37:03.860 driving. It's, it's, it's the best example is the iPhone smartphones. Nobody had one in 2009. Now
00:37:11.560 no one can live without it. And it happened in three to five years. Right. Right. And so what
00:37:19.880 happens in our jobs, right? We have an, well, there'll be an expectation that you're going to use this
00:37:23.840 new technology, right? It won't really be an option not to. Right. Um, you know, there's a,
00:37:30.920 there's a, um, um, I'm, I had Andrew Yang on, we were talking about, um, universal basic income,
00:37:39.280 which I don't agree with. However, I do believe we need to discuss it and, and everything else
00:37:45.600 because we're going to be moving, we are moving to a world where fewer and fewer people are employed
00:37:53.480 or employable, um, because of AI and how are they going to, you know, you can't have 20% of the
00:38:02.740 population, 30% of the population unemployed. Uh, how are they going to make money? How are we going
00:38:07.860 to, so it's really, um, a completely new field. It's not like the end of capitalism because we're
00:38:16.540 going to Marxism. It's possibly the end of capitalism as we understand it into something
00:38:23.240 entirely different that the world and humans have never faced before. Is that an overstatement?
00:38:30.760 No, I don't think it is at all. I think I agree, right? We don't have a model like universal basic
00:38:36.640 income might not appeal to a large group of people, but we don't have another model for what it looks
00:38:41.480 like when most people aren't working. Right. And, and, and I'm also concerned about the opposite,
00:38:49.400 you know, the, uh, I call them the ranchers and the sheep. There are people who are ranchers who
00:38:55.880 think, you know what there, everybody else is just sheep. They'll do what we say, blah, blah, blah.
00:38:59.640 But those people are at the top of the food chain, usually the very, very wealthy and the powerful
00:39:06.220 and, uh, they're going to be the ones making the money on these programs, et cetera, et cetera.
00:39:15.520 And, uh, as, as the world becomes more dependent on their software and their things, then they gather
00:39:25.440 more wealth. And so the disparity between rich and poor becomes enormous, enormous. And I don't think
00:39:33.820 there's any way that nobody's even talking about how do we make sure that the uber, uber, uber wealthy
00:39:40.220 just don't own everything and everybody else is left with nothing.
00:39:44.420 Right. Uh, I think one, and one of the things I think that's different is that in the past, when you looked
00:39:51.440 at jobs being obsoleted, the people you being affected usually were not the wealthy, right?
00:39:58.660 Right. Usually if you had lumber jobs going away, right, that was an honest career for folks, but probably not
00:40:04.960 making a ton of money. But now we're talking about jobs. We're talking about computer software,
00:40:09.420 white collar jobs going away. We're talking about, you know, I think that's
00:40:15.860 going to be a huge thing for the medical industry, right? We're going to see, uh, yeah. Right.
00:40:21.760 Medical diagnosis, right. Which IBM tried to tackle, you know, 10 years ago, we weren't quite there,
00:40:27.200 but there's really compelling reasons why you want that, right? Everyone would say, yeah, you don't
00:40:31.200 want a doctor operating on you if they're hungover or if they're, you know, pissed off because their
00:40:35.620 wife is having an affair. So, but you know what? Not only,
00:40:39.200 you don't even have to go to operations, which is the logical outcome, but just diagnosis. I believe
00:40:45.840 by 2030 people, it will be normal for the doctor to come in and give you results of something and
00:40:52.420 try to explain what it means and what he thinks it means. And then you say, yeah, yeah, yeah. But
00:40:57.960 what does the AI say? Because it will have so much up-to-date information that you won't want to,
00:41:07.500 you'll want to hear it from a human, but you'll want to be reassured that that's the correct
00:41:14.100 diagnosis and prognosis from AI.
00:41:18.220 And then you end up with these interesting things where, you know, even today, a lot of medical
00:41:24.700 treatments are gated by what insurance will pay for, right? And so the doctor might have an idea
00:41:29.500 of what's the right thing to do for you, but insurance says no. Well, what happens in the
00:41:33.560 future when insurance says you will have to use our AI for diagnosis to get reimbursed?
00:41:38.480 Oh my gosh. And by the way, right, we have these biases in our AI because this AI is cheaper
00:41:43.740 for us than if we were to use a different AI that suggested more treatments. Is anybody
00:41:49.120 talking about this seriously? Is there any group out there that is talking about this and saying
00:41:55.160 we have to put this codified right now? Yeah, we don't, we don't have anything. We don't have
00:42:03.800 anything across the industry, across multiple industries. In your book, and I've only got
00:42:12.200 about a minute and a half, two minutes left. In your book, one of the most breathtaking chapters
00:42:18.100 is these guys walk into the president's office because there's an attack and AI, they're fighting
00:42:23.480 AI. And they're going to tell the president, you need to launch planes. You need to, you need to fight
00:42:28.420 right now in Chicago. And it opens with them walking into the office saying, Mr. President,
00:42:33.940 then it cuts to the AI and the war in Chicago. And the war is won by AI. And then at the end of the
00:42:42.540 chapter, it says, dot, dot, dot, we need to launch an attack now in Chicago. And it happens that fast.
00:42:51.760 Um, what takes it from a little helper to that?
00:43:00.460 It's when we take the people out of the process, right? Now it is no longer operating at people's
00:43:08.520 speed. Now it's just operating at its own speed with no checks and balances. Um, and that's what
00:43:15.260 business will drive toward because that's the economical choice, right? Take people out,
00:43:20.160 just use AI for everything, but that's how you get really bad decisions really fast.
00:43:25.620 And the safeguard for that, at least according to Elon Musk, is his new, uh, I can't remember what
00:43:32.960 it's called, the brain thing that he's doing where you'll be able to actually connect to the
00:43:37.480 internet. So you'll be able to think and humans will be able to, yeah, Neuralink. It'll, it'll
00:43:42.920 connect humans and put them into the process. That's his solution.
00:43:48.200 Hmm. Which, you know, I think, I think that that is a component of the future for sure. And that
00:43:54.620 could be obviously a whole other week to dedicate to that. Yeah. That's not going to stop the AI,
00:43:59.700 right? That's not going to stop the AI in the short term. And that's right. We don't have Neuralink
00:44:03.600 today. Right. Um, but we do have AI right now. Um, William, thank you for talking to me. I don't
00:44:09.880 even know what your politics are, but I mean, I think you live in Portland. So I guess I'm guessing
00:44:15.200 that we don't agree on an awful lot, but you are, you are somebody who is really, really smart and
00:44:21.020 you've been open to talk. I've, we've reached out to several AI experts this week and some of them
00:44:26.320 won't come on because they're like, I don't agree with him. And it's like, we don't have to agree on
00:44:30.920 stuff. We have to agree on, you know, some pretty basic scary stuff here is happening. We should all
00:44:36.300 be informed on it, and, uh, we really should all be willing to have a conversation. We should. And I
00:44:41.180 really appreciate it. Thank you so much. Yeah. Thank you so much, Glenn. You bet. Uh, that's,
00:44:45.660 uh, William Hertling, his Singularity series is really something you should read. If you want to
00:44:52.840 understand what's really literally on our doorstep now, it's on the threshold. So halfway between
00:45:00.880 outside and inside, and it's going to walk up your stairs.