The Charlie Kirk Show - April 11, 2023


The A.I. Revolution and the Future of Humanity with Joe Allen


Episode Stats

Length

37 minutes

Words per Minute

155.3

Word Count

5,841

Sentence Count

344


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcripts from "The Charlie Kirk Show" are sourced from the Knowledge Fight Interactive Search Tool.
00:00:00.000 Hey, everybody.
00:00:00.000 Today on the Charlie Kirk show, Joe Allen for a full hour on artificial intelligence.
00:00:04.000 What is it?
00:00:04.000 Where does it come from?
00:00:05.000 What is the threat?
00:00:05.000 What are some positives?
00:00:07.000 That and more.
00:00:08.000 Email us your thoughts as always, freedom at charliekirk.com.
00:00:11.000 Subscribe to our podcast.
00:00:12.000 Open up your podcast app and type in Charlie Kirk Show.
00:00:15.000 Get involved with TurningPointUSA at tpusa.com.
00:00:18.000 If you enjoy conversations like this, consider supporting our show directly at charliekirk.com/support.
00:00:25.000 That is charliekirk.com/support.
00:00:27.000 Thank you, Lauren from Washington, Sharon from Minnesota, Alma from California, Heather from Kentucky, Elena from California, and Laurel from Oklahoma, charliekirk.com/support.
00:00:39.000 Buckle up, everybody.
00:00:40.000 Here we go.
00:00:41.000 Charlie, what you've done is incredible here.
00:00:43.000 Maybe Charlie Kirk is on the college campus.
00:00:45.000 I want you to know we are lucky to have Charlie Kirk.
00:00:49.000 Charlie Kirk's running the White House, folks.
00:00:52.000 I want to thank Charlie.
00:00:53.000 He's an incredible guy.
00:00:54.000 His spirit, his love of this country.
00:00:56.000 He's done an amazing job building one of the most powerful youth organizations ever created, Turning Point USA.
00:01:02.000 We will not embrace the ideas that have destroyed countries, destroyed lives, and we are going to fight for freedom on campuses across the country.
00:01:11.000 That's why we are here.
00:01:14.000 Brought to you by the Loan Experts I Trust, Andrew and Todd at Sierra Pacific Mortgage at andrewandTodd.com.
00:01:24.000 Welcome back, everybody.
00:01:26.000 The entire hour we're going to talk about artificial intelligence.
00:01:30.000 One of the most important stories happening in regards to humanity, let alone politically.
00:01:35.000 And Joe Allen, who is an expert in it, is a tech writer for War Room, and he's here to answer our questions.
00:01:42.000 He's also a writer at the Federalist, and we'll also talk about some of these stories.
00:01:45.000 Joe, welcome to the program.
00:01:47.000 Welcome back, I should say.
00:01:48.000 We have a full hour, which I think is necessary considering the topic and how deep it is.
00:01:54.000 So, Joe, let's just start with layman terms.
00:01:57.000 What is artificial intelligence?
00:02:00.000 And for how long has it been in, let's just say, in use by corporations or businesses?
00:02:07.000 Well, Charlie, very good to be here.
00:02:10.000 The definition of artificial intelligence, as it was originally stated in 1956 by John McCarthy, is a computer system that thinks like a human being.
00:02:24.000 Now, that's a very high bar, one that arguably has not been met in any real way given the complexity and richness of human thought.
00:02:34.000 But right now, there's an enormous hype and, and I think justifiably so, around GPT technology because it does approach human-level intelligence in so many ways on the level of language.
00:02:52.000 And it's able to pass all of these different cognitive tests, right?
00:02:56.000 The sorts of tests that judge whether or not a human being is intelligent, from the bar exam to the LSAT to the USA Biology Olympiad.
00:03:07.000 And so, to answer the second question about how long have these systems been in use, they've actually been in use for quite some time.
00:03:16.000 Certainly, for the last two decades, machine learning techniques have been applied to finance, they've been applied to medicine, they've been applied to biology overall, to social networking analysis, and so forth.
00:03:33.000 But really, it's the advances in artificial neural networks that have made all the difference.
00:03:40.000 There are a lot of different ways that artificial intelligence can be organized, but GPT is an artificial neural network, and a number of other advanced systems rely on that model.
00:03:53.000 What that is, going back to the definition, what that is, an artificial neural network basically replicates the way a human brain processes information.
00:04:05.000 So the human brain is composed of neurons.
00:04:08.000 Each neuron has about a thousand connections to other neurons.
00:04:12.000 There are some 86 billion of them in the human brain.
00:04:16.000 With artificial intelligence, with an artificial neural network, the processing is done by nodes, and those nodes are connected to each other over layers in much the same way that the human brain is composed.
00:04:35.000 And so, what that means ultimately, Charlie, is that instead of just having the sort of typical rules-based processing that you get, you know, all throughout computer programming since its inception, also in the late 40s, early 50s, what you get is the sort of fuzzy logic.
00:04:54.000 You get a non-deterministic output.
00:04:57.000 It's all about statistics rather than direct input and output.
00:05:03.000 And so, what that means ultimately, and why so many people are now alarmed by the advance in artificial intelligence, especially in regard to GPT and its sort of mirrors in other corporations, is that you get all of these emergent capabilities.
00:05:22.000 And you also get this richness of information output and this unpredictability in the information output, which really does mirror a lot of the predictions that have been made by futurists and transhumanists that artificial intelligence will reach human intelligence, a sort of general human intelligence, that it will surpass human intelligence.
00:05:48.000 So, I want to definitely build out the dystopian and the negative here, but let's just take a pause and actually do the opposite.
00:05:56.000 What is exciting or is tempting about this technology?
00:06:01.000 Because we hear a lot about the negatives, and I want to certainly build that out.
00:06:04.000 But, why is so much money getting poured into this?
00:06:06.000 For example, I saw one story that said with artificial intelligence, you'll be able to diagnose health issues more quickly, and then potentially life-saving drugs could be developed within 30 minutes.
00:06:16.000 And that didn't seem as if it was that unrealistic considering the processing power.
00:06:20.000 Can you just build out what some of the alluring potential positives are here?
00:06:24.000 And then, obviously, we'll get into the remarkably dystopian wrinkles involved.
00:06:30.000 There are really, you know, a lot of positives.
00:06:34.000 And I actually, as negative as I tend to be about the ultimate trajectory of this, there's no way I could deny the benefits.
00:06:42.000 So, starting with the biological level, you have the ability now to create new drugs, and basically you can test them in silico using machine learning techniques.
00:06:56.000 And what that means is that the machine is able to simulate the biological system to such a high degree and able to simulate chemical compounds or enzymes or genetic mutations to such a high degree that you can basically test new drugs, or for instance, and this is a very, very popular thing that's going to be emerging very, very soon on the market: new mRNA vaccines.
00:07:25.000 And you can test them in silico before they ever go into the biolab.
00:07:29.000 And that just means that the development of these drugs, the development of these different biological systems, the sort of genetic drugs, is able to happen that much faster.
00:07:41.000 And then, you know, I'm in close contact with a radiologist, and he speaks about this all the time.
00:07:46.000 And I read this in the literature all the time: the analysis of x-rays and other medical visualizations, that is, without a doubt, for a long time now, machine learning and visual recognition AI has been used to locate different abnormalities in an x-ray, just to stick with that.
00:08:06.000 And so, it has allowed radiologists to identify cancers much more readily.
00:08:11.000 The AI visual learning systems are much better than radiologists on the whole at identifying very small anomalies before they become serious.
00:08:21.000 And so, this is a huge advantage, not only to people who potentially have cancer or other problems, but also to the corporations who use this AI.
00:08:31.000 The biomedical corporations, they still need radiologists to sign off on it.
00:08:36.000 But the same radiologists who would be able to go over, say, just a handful of x-rays in one day.
00:08:43.000 They can now do it by the dozens.
00:08:45.000 Yeah, I mean, or yeah, eventually.
00:08:46.000 So, one piece of information I found interesting: artificial intelligence can now tell the difference between male and female retinas with extremely high accuracy.
00:08:55.000 But we, as humans, have not yet discovered any differences between male and female retinas.
00:09:00.000 Is that just pattern detection that they're able to do?
00:09:03.000 I mean, that's one example where all of a sudden the artificial intelligence machine itself is actually getting to a level of, you know, breakthrough or pattern recognition that we humans have not yet been able to see.
00:09:16.000 So, the main power that artificial intelligence confers is pattern recognition.
00:09:24.000 And in all of its narrow domains, it exceeds human pattern recognition by orders of magnitude.
00:09:31.000 And so, the identification, I've actually not seen that study, very interesting, but just to imagine the identification of differences between male and female retinas, the fact that an AI was able to detect that beyond any sort of human observation doesn't surprise me at all.
00:09:48.000 That's true in pretty much everything that I'm talking about here.
00:09:52.000 And that extends out of the medical industry into finance, that extends into the criminal justice system, that extends out to military applications with battlefield simulation or battlefield surveillance or communication surveillance or the detection of cyber attacks.
00:10:13.000 So, in all of these domains, even though the AI can only do what it's trained to do in that domain, it exceeds human output in every case.
00:10:24.000 Even GPT, even though GPT produces really bad poems and tends to hallucinate all these false answers, on the whole, GPT is able to draw from a corpus of language far larger than any human.
00:10:38.000 Stay right there, Joe Allen, who is the nationwide expert, in my opinion, on artificial intelligence.
00:10:43.000 Okay, so we've gotten the positive, and then we're going to get deep into the dark elements because that's honestly where we're headed.
00:10:49.000 Dark clouds are imminent.
00:10:54.000 Look, Americans have had it.
00:10:56.000 They're done supporting companies that rake in hundreds of millions of dollars, sometimes billions of dollars, while trashing the country that made their success possible.
00:11:04.000 Until recently, we had to take it, but companies like Patriot Mobile are building a whole new economy, one which embraces the values that made America the greatest nation on earth.
00:11:14.000 Look, Patriot Mobile is America's only Christian conservative wireless provider.
00:11:19.000 Look, they offer dependable coverage for all three major networks, and they offer you a performance coverage guarantee.
00:11:25.000 If you're not happy with your coverage, you could switch to a different network for free without changing carriers.
00:11:30.000 All this, plus the knowledge that you're supporting free speech, the sanctity of life, Second Amendment, and our military first responder heroes.
00:11:38.000 Their 100% U.S.-based customer service team makes switching awfully easy.
00:11:43.000 Just go to patriotmobile.com/Charlie or call them today at 878-PATRIOT.
00:11:48.000 Make the switch.
00:11:49.000 It's no new money out of your budget.
00:11:50.000 In fact, it will save you money.
00:11:50.000 Go to patriotmobile.com/Charlie.
00:11:54.000 That is patriotmobile.com/Charlie.
00:11:57.000 Get free activation today with the offer code Charlie.
00:12:00.000 We need to stand together and support companies that share our values.
00:12:04.000 Patriotmobile.com/Charlie or call 878-PAT, patriotmobile.com/Charlie.
00:12:13.000 So, Joe, we see, let me read a quote here from Elon Musk.
00:12:18.000 He says, AI is one of the biggest risks to civilization.
00:12:22.000 Why does he believe that?
00:12:24.000 Primarily, Charlie, he believes that because of the work of Nick Bostrom, who is an Oxford philosopher, transhumanist, co-founder of the World Transhumanist Association.
00:12:35.000 And Nick Bostrom published a book in 2014 called Superintelligence.
00:12:41.000 And Superintelligence basically lays out all of the different paths that an artificial intelligence system or series of systems could take to superhuman intelligence and then how those systems could destroy all of humanity or at least some significant portion of it.
00:13:01.000 So, Musk has taken this up really since 2014; that's when you really start hearing him speak out about this, mainly because of that book.
00:13:11.000 And Nick Bostrom, incidentally, was also very much influenced by Eliezer Yudkowsky, who is at the Machine Intelligence Research Institute.
00:13:24.000 And Yudkowsky is the one who really has stirred up all of this controversy about whether or not artificial intelligence poses an existential threat, especially because of a Time magazine op-ed that he published around the same time that Elon Musk and company signed their open letter for an AI moratorium.
00:13:49.000 Yudkowsky argued not that AI is a danger or some distant existential risk, maybe in the future.
00:13:58.000 He just flat out says if these systems are allowed to get above where they're at now, and maybe if they're allowed to remain where they're at now, they will inevitably kill us.
00:14:09.000 So, of course, that's like the far, far, far end of the kind of doomer spectrum.
00:14:16.000 But I do think that even if you don't believe that AI poses some kind of existential risk, as those guys do, there are so many other really dramatic downsides to this technology that those at the very least need to be taken seriously and accounted for.
00:14:33.000 Well, let's get technical then.
00:14:35.000 So, where are the threats going to emerge, right?
00:14:38.000 Because I hear from Elon and others, you know, this very charged language: threat to civilization, et cetera.
00:14:43.000 Okay, okay, great.
00:14:44.000 So, let's go through in the next month, six months, year, three years, right, Joe?
00:14:49.000 So, like, build it out.
00:14:50.000 Where is that technology going to manifest in a way that will threaten our humanity first, second, third?
00:14:56.000 Like, what are the immediate threats?
00:14:58.000 And then we'll get to what we could do about it.
00:14:59.000 But give us kind of a threat analysis.
00:15:01.000 Give us the landscape.
00:15:04.000 Well, so, first off, I just want to state my own concerns.
00:15:08.000 My concerns are three: one, you have this intense development of a human-AI relationship that I think is very unhealthy.
00:15:17.000 Two, you have an enormous threat to jobs, especially white-collar jobs.
00:15:22.000 I'm not all that sympathetic to the white-collar, but at least we have to admit that this is going to be a big, big problem insofar as social structure.
00:15:31.000 And three, you've got this push to put AI in education, and that I believe will be undoubtedly one of the most intense brainwashing tools that has ever been unleashed.
00:15:43.000 On humanity.
00:15:43.000 Yep.
00:15:44.000 But if you look at what they're talking about, they're talking about AI systems that kill us all or disrupt society and civilization so much that everyone has to unplug their computers and throw them away, right?
00:15:56.000 Basically ending the industrial era as we know it.
00:16:00.000 And so oftentimes Yudkowsky and sometimes Bostrom and definitely Musk, they're criticized because they don't lay out these definite paths.
00:16:11.000 They just say.
00:16:11.000 An intelligence explosion could happen.
00:16:15.000 And if that intelligence explosion is not aligned with human interests or human existence, then that intelligence explosion could end the world is how they put it over and over again.
00:16:28.000 But even despite that sort of dramatic projection of this very abstract future, there are those, including Yudkowsky and Bostrom, who have actually laid out specific paths which would lead to that destruction.
00:16:44.000 Running short on time, maybe we hit that in the next segment, but I will say this.
00:16:52.000 That AI, these direct paths, they are also of major concerns.
00:16:58.000 But I really think that the most important thing that people need to focus on are the immediate effects of artificial intelligence on psychology and society.
00:17:10.000 Are you feeling burned out and a little tired?
00:17:12.000 Look, I want to tell you about something that I've become a big believer in.
00:17:15.000 And if you do not know about it, you got to research it.
00:17:17.000 You can fact-check me.
00:17:18.000 It's NAD.
00:17:20.000 NAD is a precursor for your body to be able to create ATP, which is basically the life force of everything that you do.
00:17:28.000 And look, there's a lot of people out there that are promising energy and doing all this, but go do some research on NAD and go see actually how incredibly important it is for high performance to be able to go actually get it to the next level.
00:17:41.000 And so what does NAD stand for?
00:17:43.000 Well, try to take a note here.
00:17:45.000 It is nicotinamide adenine dinucleotide.
00:17:50.000 I did that pretty well, don't you think?
00:17:51.000 NAD.
00:17:52.000 It's a coenzyme that is central to metabolism.
00:17:55.000 Again, don't take my word for it.
00:17:57.000 Go watch a YouTube video or two or three or four and go fact-check me on it.
00:18:00.000 I've been taking NAD for quite some time.
00:18:02.000 And people say, Charlie, how do you travel 2,700 days in a decade?
00:18:07.000 How do you do the 300 days a year?
00:18:08.000 How do you do that?
00:18:09.000 Look, it's not only because of this.
00:18:10.000 I eat well and do other things as well.
00:18:13.000 But if you look at NADH, especially when it combines with CoQ10 and marine collagen, it boosts your body's cellular function.
00:18:20.000 I would never tell you guys to go do something I myself did not do.
00:18:24.000 And Strong Cell has been able to put together a scientific breakthrough in cellular health replenishment that combines NADH, CoQ10, and marine collagen.
00:18:33.000 And when you combine them together, you get mental clarity.
00:18:35.000 And that's a must for me.
00:18:36.000 It's not just that.
00:18:37.000 It's for vitality.
00:18:39.000 It helps your immune system.
00:18:41.000 It's all good stuff.
00:18:42.000 So go to strongcell.com/Charlie today and see for yourself.
00:18:47.000 It's not a stimulant.
00:18:47.000 It doesn't contain any caffeine.
00:18:49.000 I'm talking about overall health from the cellular level.
00:18:52.000 NADH has been called the anti-aging enzyme that helps with so many issues like brain fog, short-term memory loss, blood pressure, heart disease, blood sugar retention, and so much more.
00:19:02.000 And look, it's not a magic pill.
00:19:03.000 It's like, oh, I'm going to start taking this and I'm going to be super smart.
00:19:06.000 No, no, it's an additive, an amplifier on people that want to get better.
00:19:11.000 But I can tell you, it makes a big difference.
00:19:13.000 I've personally seen undeniable benefits from taking Strong Cell and engaging with NAD every day.
00:19:18.000 So I had to partner with them.
00:19:20.000 I vetted them.
00:19:20.000 I checked out their ingredient profile.
00:19:22.000 And do yourself a favor and give Strong Cell a try.
00:19:25.000 Visit strongcell.com/Charlie today and use promo code Charlie.
00:19:29.000 It's just a quick shot every morning.
00:19:31.000 And you get a special 20% discount on your order.
00:19:34.000 Again, that's strongcell.com/Charlie.
00:19:37.000 NAD is your body's ability to create ATP.
00:19:43.000 Don't believe me, go to WebMD, go to ScienceDirect, go to Nature Journal, NIH, YouTube.
00:19:47.000 It's all natural.
00:19:48.000 It's naturally occurring, and you're giving your body more of what it already needs.
00:19:51.000 Use promo code Charlie.
00:19:53.000 Again, that's strongcell.com/Charlie.
00:19:55.000 Don't forget your 20% discount by using promo code Charlie at checkout, strongcell.com/Charlie.
00:20:05.000 So, Joe, let's continue.
00:20:06.000 Is this quote correct from Yuval Noah Harari, who I can't stand, who seems to be a transhumanist fan?
00:20:13.000 Is it true that he's saying he calls for AI labs to immediately pause for at least six months?
00:20:19.000 What is this all about?
00:20:21.000 So, to lay this story out, this all unfolded about two weeks ago.
00:20:26.000 You had the Future of Life Institute, which is ultimately composed of mostly transhumanist-leaning individuals, release an open letter calling for a six-month moratorium on any AI system above the level of GPT-4.
00:20:43.000 Now, the signatories include Max Tegmark, author of Life 3.0 and one of the co-founders, Stuart Russell, a computer scientist, and then, of course, Elon Musk and Yuval Noah Harari, along with I think now up to 2,000 other AI experts.
00:20:59.000 And the dangers that they point out are the media and the internet environment being flooded with disinformation, which I think is a very real danger.
00:21:10.000 They point out the loss of jobs, mass loss of jobs, including fulfilling jobs, which I think is a very significant danger.
00:21:18.000 Goldman Sachs just put out a report where they estimate 300 million jobs will be lost worldwide due to AI, for instance.
00:21:27.000 And then finally, they worry that human beings will lose control of civilization.
00:21:32.000 And I think that even if artificial intelligence doesn't go any further than it is now, what you will end up with, where we're already going, is that most of us are losing control of our civilization.
00:21:46.000 And that power, the power of the direction of our civilization, lies in the hands of technocrats, tech corporations, government, and military institutions that have little to no regard for our wants, our will, and our needs.
00:22:02.000 You saw that with the pandemic.
00:22:04.000 You see that in so many different levels.
00:22:06.000 So Yuval Noah Harari signing on to that really doesn't surprise me.
00:22:10.000 I have a much more positive view of Yuval Noah Harari than most, even if I disagree with him profoundly on the very basics of what reality is.
00:22:19.000 I think that he is oftentimes, his warnings are oftentimes ignored in favor of his sort of provocative and especially the sort of vicious anti-religious rhetoric that he puts out.
00:22:32.000 But it doesn't surprise me that he signed that.
00:22:34.000 And it was in response to that letter that very same day that Eliezer Yudkowsky from the Machine Intelligence Research Institute published that Time magazine op-ed saying it's not enough.
00:22:46.000 The machines will kill us all.
00:22:49.000 And if you'd like, Charlie, I can go into what those specific paths are.
00:22:54.000 Please do.
00:22:55.000 Or we can move on.
00:22:56.000 So please do.
00:22:57.000 Keep going.
00:22:59.000 So Nick Bostrom's Superintelligence lays this out in the greatest detail, but Yudkowsky has as well, in a number of articles, mostly published at LessWrong and other outlets that are mainly in that sort of existential risk community.
00:23:16.000 What they argue basically is that artificial intelligence, especially an artificial general intelligence.
00:23:23.000 So artificial narrow intelligence being all the systems we have now dedicated to language, to battlefield simulation, or to biological simulation.
00:23:33.000 Artificial general intelligence is a system that has basically overlapping narrow intelligences, so multiple cognitive modules like the human brain and can move flexibly between them.
00:23:47.000 And also, all of them run simultaneously, right?
00:23:50.000 And so, what you're talking about then is at least in a kind of alien form, a human-like intelligence, but it goes much faster.
00:23:58.000 It has an infinite memory, basically, and it's able to look at much larger amounts of data.
00:24:06.000 And so, the risk, the existential risk to humanity, as these guys put it forward, is that that system will be programmed with or will emerge, it will develop a will of its own.
00:24:20.000 It will have a sort of the desire for self-perpetuation, and because of that, it will be an evolutionary competitor with human beings.
00:24:32.000 Or, worse, if there are multiple such systems, they will be evolutionary competitors with human beings.
00:24:39.000 And because it's an artificial intelligence sitting on a server or sitting in the cloud, potentially it could be replicated indefinitely so that you end up with thousands, millions, billions of these super intelligent AIs.
00:24:55.000 And if they are not aligned with human interests and human values, or if they're not aligned with the necessity of human existence, they argue they would just simply manipulate the infrastructure that they have access to to destroy us.
00:25:13.000 So, the two major bits of the infrastructure that are pointed out are weapon systems, especially nuclear weapon systems, if they were able to hack into them, or biological systems like biolabs,
00:25:28.000 so that an AI would basically create in silico some sort of deadly virus and then order it on the sly from one of the many biofoundries that exist across the world that create mutant microbes to order and would then unleash this on the world.
00:25:49.000 Now, we're in serious sci-fi territory there.
00:25:52.000 That's what they talk about.
00:25:53.000 And then, the third possibility is that maybe these AIs or this one major AI would not have access to any of those systems, but it would have access to human beings who are in control of those systems.
00:26:08.000 And so, the AI would then manipulate human beings to either launch nuclear warheads or launch any sort of weapon at other human beings or to create and release a virus or to bring planes out of the sky or to create a system situation in society in which we go to war.
00:26:29.000 Maybe it targets specific leaders and convinces a leader, such as Putin, let's say, or Joe Biden, if you consider him to be in control, to start World War III, whatever, right?
00:26:42.000 That is the sort of vision, those are the realistic pathways.
00:26:45.000 And they go on and on and on to much more, I think, implausible sorts of scenarios.
00:26:51.000 But that's basically what they're talking about when they're talking about AI systems that could destroy humanity.
00:26:58.000 So, what needs to be done politically or otherwise to ensure that situation does not occur?
00:27:06.000 You know, I'm fairly, I don't have any real answer to that.
00:27:13.000 Nobody really does.
00:27:14.000 So, the moratorium would be basically voluntary, right?
00:27:18.000 Tech corporations voluntarily stop.
00:27:20.000 That is not going to happen.
00:27:22.000 Yann LeCun from Meta is just one example of that sort of resistance.
00:27:28.000 He just completely dismisses all of the dangers, including the lesser dangers that I talked about earlier, the psychological damage and social damage.
00:27:36.000 He just dismisses it all.
00:27:37.000 It's full steam ahead.
00:27:39.000 And that sort of accelerationist mentality really does pervade Silicon Valley.
00:27:44.000 It pervades China.
00:27:46.000 It's really, there's no stopping it on a voluntary level.
00:27:50.000 No one's going to stop, especially military organizations, the U.S. military, Chinese military.
00:27:56.000 They're not going to stop.
00:27:57.000 So the second option brought forward by Yudkowsky: the U.S. puts a hard stop and basically shuts down all large GPU clusters in the U.S. and then signals to China that if they don't stop, perhaps there's going to be a major problem.
00:28:17.000 And he goes as far as to say that once this sort of ban is in effect, if intelligence has any sort of indication that an advanced AI is being trained on foreign soil, it should launch an airstrike, even if that means nuclear war, because in his mind, artificial intelligence is more dangerous.
00:28:38.000 So politically, I think there's no immediate solution that I can see.
00:28:44.000 You could do something really foolish, like what Yudkowsky is talking about and start World War III, or something that looks like the RESTRICT Act, where all of a sudden all of these civil liberties are in danger of being squashed by the president or the Commerce Department, right, and the U.S. government.
00:29:03.000 And so politically, I don't think there's really much to do.
00:29:08.000 I think that the best thing that...
00:29:10.000 Yeah, go ahead.
00:29:11.000 No finish, please.
00:29:12.000 I think the best way to think about this is that all of us, human beings, are under threat from these systems, not necessarily because the systems are going to kill all of humanity, but because these are systems of technocratic control.
00:29:29.000 And so first, identifying that problem, which I think is pretty well identified, and second, organizing among ourselves and in the institutions, the sort of low-level or mid-level institutions that really do have power and influence, and figuring out how it is that we can do without these systems.
00:29:48.000 Or for those who think that it's best to adopt them as weapons against the larger structure, to adopt them within limits in order to survive what is inevitably, and I think this is part of this sort of inevitability that I foresee, major economic downturn coupled with real technological advances so that the upper crust becomes more powerful and the populace below becomes less powerful.
00:30:16.000 We've seen that over and over again.
00:30:18.000 The 2008 crash is a great example, but there are many, many others.
00:30:21.000 The COVID crisis is another.
00:30:24.000 And I think we should brace ourselves for something like that with these technologies, but we at least have the chance to come up with strategies of how to remain outside of those systems so your kids aren't being raised by AI bots, so that your job is not under threat by those bots.
00:30:43.000 And you maybe as an employer make the decision, I am not going to replace humans with artificial intelligence.
00:30:50.000 And in general, that we as a community and we as individuals are not going to become part of and participate in this human-AI symbiosis pushed by people like Kurzweil and Musk.
00:31:04.000 So how much, really quick, and I'm asking for a reason, how much does ChatGPT cost to develop?
00:31:08.000 Like if we were to develop a natural law AI, because I mean, based on what you're telling me, the only logical solution is we have to create our own super weapon to be able to deter what they're going to do.
00:31:19.000 We just can't hope it's going to get better.
00:31:20.000 Politicians will do nothing.
00:31:22.000 So why don't we just develop our own with the proper human controls to do good, but anchor it in reality?
00:31:30.000 Is that a crazy idea?
00:31:31.000 It's a very common one.
00:31:33.000 It's one that I fear has all of the temptations that, as much as I hate to go there, the Ring in The Lord of the Rings has, all of those temptations.
00:31:44.000 But I do think that it's a reasonable response.
00:31:46.000 And a lot of people on our side will be doing that.
00:31:50.000 And so the sort of logical pathways towards that should also be on the table, in my opinion.
00:31:56.000 Although for me, Charlie, I will admit, I'm a Luddite by instinct.
00:32:00.000 And I do think that the more we distance ourselves from this while not losing total...
00:32:06.000 I agree.
00:32:06.000 No, I mean, I think we should ban this stuff completely.
00:32:09.000 It just seems scary.
00:32:10.000 At the same time, it's exciting to be able to use pattern recognition to find tumors in kids that are dying unnecessarily right now in children's hospitals.
00:32:18.000 Like that actually appeals to me, right?
00:32:20.000 If you could all of a sudden run the blood work of 100,000 kids that are dying of sickle cell disorders or leukemia, you might be able to give them life.
00:32:30.000 But Joe, I mean, if there's not a political solution and it's not realistic to ban it and we can only avoid it so much, I mean, isn't it logical to have some sort of a check and balance?
00:32:39.000 I agree.
00:32:39.000 It's like the Ring from The Lord of the Rings, but our founding fathers demonstrated that there is a way to develop a structure, a system.
00:32:45.000 I mean, you could almost take constitutional principles, checks and balances, separation of powers, and put that into an AI type format where we reluctantly say, okay, we don't love the fact we have to do this, but we got to be able to compete in the AI space or else the bad guys, the wokeys, are going to use it for total evil, right?
00:33:07.000 Because there is a lot of good to be done.
00:33:08.000 I'll give you a great example, right?
00:33:10.000 I mean, what if they use AI as a social credit score system and say that, hey, you've been too outspoken against the regime.
00:33:18.000 We're not going to administer you medicine.
00:33:20.000 I know that might sound insane to people, but that's how sinister these people are.
00:33:24.000 Is this outlandish or realistic?
00:33:25.000 What are your thoughts?
00:33:28.000 At this point, Charlie, I don't think any sort of doomsayer scenario is totally off the table, you know, within degrees.
00:33:37.000 I think that as far as people on our side are concerned, however one conceives of it, let's just say the populist right, working Americans, legacy Americans.
00:33:50.000 I think that one of the most important things young people can be doing right now is learning about these technologies and learning how to use them, whether it's AI programming or just simply using an AI program, or Bitcoin and other blockchain technologies, because these are going to be the kids that we're relying on, or older people, but mainly it's going to be kids that are able to do it.
00:34:18.000 And we're going to need that expertise going forward, if only to defend our little enclaves.
00:34:25.000 As far as creating anything on the scale of GPT, I don't know what the initial investment was.
00:34:32.000 I know Elon Musk initially invested $100 million along with other investors.
00:34:39.000 I believe that was 2015.
00:34:42.000 And then in 2019, Microsoft gave them $1 billion to really advance their GPT technology, the generative pre-trained transformers, the language technology.
00:34:55.000 And then earlier this year or late last year, Microsoft put in an additional $10 billion.
00:35:05.000 And so the power, the real power of GPT isn't necessarily in the architecture or in the programming.
00:35:12.000 The major advances come from scaling.
00:35:15.000 And they just make these artificial brains bigger and bigger and bigger.
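The point about scaling can be sketched with a rough back-of-the-envelope parameter count for GPT-style transformers. The formula and the two example configurations below are illustrative assumptions based on publicly reported figures, not official OpenAI specifications:

```python
# Rough parameter-count estimate for a GPT-style transformer.
# The ~12 * d_model^2 per-layer figure (attention + MLP weights)
# is a standard approximation, not an exact spec.

def transformer_params(n_layers: int, d_model: int, vocab: int = 50257) -> int:
    """Approximate total parameters: per-layer weights plus token embeddings."""
    per_layer = 12 * d_model ** 2       # attention + feed-forward matrices
    embeddings = vocab * d_model        # token embedding table
    return n_layers * per_layer + embeddings

# GPT-2-small-like vs GPT-3-like configurations (publicly reported shapes)
small = transformer_params(n_layers=12, d_model=768)     # on the order of 100M
large = transformer_params(n_layers=96, d_model=12288)   # on the order of 175B
print(f"{small / 1e6:.0f}M parameters vs {large / 1e9:.0f}B parameters")
```

The architecture in both cases is essentially the same; only the depth and width change, which is what "just make these artificial brains bigger" amounts to in practice, and why the cost is dominated by compute rather than by new ideas.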
00:35:19.000 So it's very difficult for me to imagine how right-wing populists or any of the major financial backers would be able to come up with something that would compete with that.
00:35:30.000 Now, smaller systems could definitely defend against it, but I think that to a certain extent, as far as just raw power is concerned, there's a certain degree of tragedy in all this.
00:35:45.000 You know, there's a certain degree of resignation that these major tech corporations have the resources and the pre-existing expertise and infrastructure to create systems that we will never be able to compete with in the coming decade at the very least.
00:36:00.000 You have guys like Peter Thiel, who, you know, ostensibly are on our side, and Palantir is definitely a very powerful system in its domains.
00:36:09.000 But again, I think that what we're talking about in this AI arms race is something much more akin to just a spiritual descent into lower and lower realms than any kind of normal worldly competition for power that we would have known previously in history.
00:36:27.000 Other than that, Mrs. Lincoln, how was the play?
00:36:30.000 So that sounds great.
00:36:32.000 Final thoughts, Joe?
00:36:33.000 We got a minute remaining.
00:36:36.000 I do think despite all of my kind of doomerism, I don't think that people have to worry about artificial intelligence turning us all into paperclips anytime soon.
00:36:44.000 And if we do, it's already too late.
00:36:46.000 So, you know, enjoy your day.
00:36:48.000 As far as how to think about this going forward, education, number one, educate yourself on it.
00:36:55.000 And resistance, number two, remember what it's like to be human and maintain that humanity in the face of a dramatic shift in the culture going forward.
00:37:06.000 Joe Allen, extremely interesting.
00:37:08.000 I hope solutions will start to emerge.
00:37:10.000 We should start to pray on that, because there is a God and we are not him, regardless of how hard the transhumanists attempt or try.
00:37:20.000 Thank you so much, Joe.
00:37:21.000 Appreciate it.
00:37:22.000 Thank you very much, Charlie.
00:37:23.000 Thanks so much for listening, everybody.
00:37:24.000 Email us your thoughts as always, freedom at charliekirk.com.
00:37:27.000 Thanks so much for listening and God bless.
00:37:32.000 For more on many of these stories and news you can trust, go to CharlieKirk.com.