00:00:12.000Open up your podcast app and type in Charlie Kirk Show.
00:00:15.000Get involved with TurningPointUSA at tpusa.com.
00:00:18.000If you enjoy conversations like this, consider supporting our show directly at charliekirk.com/support.
00:00:25.000That is charliekirk.com/support.
00:00:27.000Thank you, Lauren from Washington, Sharon from Minnesota, Alma from California, Heather from Kentucky, Elena from California, and Laurel from Oklahoma, charliekirk.com/support.
00:00:56.000He's done an amazing job building one of the most powerful youth organizations ever created, Turning Point USA.
00:01:02.000We will not embrace the ideas that have destroyed countries, destroyed lives, and we are going to fight for freedom on campuses across the country.
00:02:10.000The definition of artificial intelligence, as it was originally stated, 1956 by John McCarthy, is a computer system that thinks like a human being.
00:02:24.000Now, that's a very high bar, one that arguably has not been met in any real way given the complexity and richness of human thought.
00:02:34.000But right now, there's enormous hype, and I think justifiably so, around GPT technology because it does approach human-level intelligence in so many ways on the level of language.
00:02:52.000And it's able to pass all of these different cognitive tests, right?
00:02:56.000The sorts of tests that judge whether or not a human being is intelligent from the bar to the LSAT to the U.S. Biology Olympiads.
00:03:07.000And so, to answer the second question about how long have these systems been in use, they've actually been in use for quite some time.
00:03:16.000Certainly, for the last two decades, machine learning techniques have been applied to finance, they've been applied to medicine, they've been applied to biology overall, to social networking analysis, and so forth.
00:03:33.000But really, it's the advances in artificial neural networks that have made all the difference.
00:03:40.000There are a lot of different ways that artificial intelligence can be organized, but GPT is an artificial neural network, and a number of other advanced systems rely on that model.
00:03:53.000Going back to the definition, an artificial neural network basically replicates the way a human brain processes information.
00:04:05.000So the human brain is composed of neurons.
00:04:08.000Each neuron has about a thousand connections to other neurons.
00:04:12.000There are some 86 billion of them in the human brain.
00:04:16.000With artificial intelligence, with an artificial neural network, the processing is done by nodes, and those nodes are connected to each other over layers in much the same way that the human brain is composed.
00:04:35.000And so, what that means ultimately, Charlie, is that instead of the typical rules-based processing that you've had all throughout computer programming since its inception in the late 40s and early 50s, what you get is a sort of fuzzy logic.
00:04:57.000It's all about statistics rather than direct input and output.
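The node-and-layer picture described here can be sketched in a few lines of Python. This is a toy illustration only; the weights are hand-picked for demonstration, not learned from data:

```python
import math

def sigmoid(x):
    # Squash any value into (0, 1): a "fuzzy," statistical output
    # rather than a hard rules-based yes/no.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node sums its weighted connections to the input
    # nodes, loosely mirroring a neuron integrating signals.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output node does the same over the hidden layer.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Hand-picked weights for demonstration: 2 inputs -> 2 hidden nodes -> 1 output.
hidden_weights = [[2.0, -1.0], [-1.5, 2.5]]
output_weights = [1.0, 1.0]

score = forward([1.0, 0.5], hidden_weights, output_weights)
print(round(score, 3))  # a probability-like score strictly between 0 and 1
```

The point is the shape of the computation: no explicit rules anywhere, just weighted connections across layers producing a statistical score.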
00:05:03.000And so, what that means ultimately, and why so many people are now alarmed by the advance in artificial intelligence, especially in regard to GPT and its sort of mirrors in other corporations, is that you get all of these emergent capabilities.
00:05:22.000And you also get this richness of information output and this unpredictability in the information output, which really does mirror a lot of the predictions that have been made by futurists and transhumanists that artificial intelligence will reach human intelligence, a sort of general human intelligence, that it will surpass human intelligence.
00:05:48.000So, I want to definitely build out the dystopian and the negative here, but let's just take a pause and actually do the opposite.
00:05:56.000What is exciting or is tempting about this technology?
00:06:01.000Because we hear a lot about the negatives, and I want to certainly build that out.
00:06:04.000But, why is so much money getting poured into this?
00:06:06.000For example, I saw one story that said with artificial intelligence, you'll be able to diagnose health issues more quickly, and then potentially life-saving drugs could be developed within 30 minutes.
00:06:16.000And that didn't seem as if it was that unrealistic considering the processing power.
00:06:20.000Can you just build out some of the alluring potential positives here?
00:06:24.000And then, obviously, we'll get into the remarkably dystopian wrinkles involved.
00:06:30.000There are really, you know, a lot of positives.
00:06:34.000And I actually, as negative as I tend to be about the ultimate trajectory of this, there's no way I could deny the benefits.
00:06:42.000So, starting with the biological level, you have the ability now to create new drugs, and basically you can test them in silico using machine learning techniques.
00:06:56.000And what that means is that the machine is able to simulate the biological system to such a high degree and able to simulate chemical compounds or enzymes or genetic mutations to such a high degree that you can basically test new drugs, or for instance, and this is a very, very popular thing that's going to be emerging very, very soon on the market: new mRNA vaccines.
00:07:25.000And you can test them in silico before they ever go into the biolab.
00:07:29.000And that just means that the development of these drugs, the development of these different biological systems, the sort of genetic drugs, is able to happen that much faster.
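A heavily simplified caricature of that in silico screening loop in Python; the candidate compounds, descriptors, and scoring weights below are all invented for illustration, standing in for a model trained on real assay data:

```python
# All names and numbers below are invented for illustration.
# Each candidate: (name, [descriptor values, e.g. size, polarity, charge])
candidates = [
    ("cand-A", [0.2, 0.9, 0.4]),
    ("cand-B", [0.8, 0.1, 0.7]),
    ("cand-C", [0.5, 0.6, 0.6]),
]

# Stand-in for a trained surrogate model: a fixed linear scoring function.
weights = [0.5, 0.3, 0.2]

def score(descriptors):
    # Higher score = predicted to be a more promising compound.
    return sum(w * d for w, d in zip(weights, descriptors))

# Rank everything computationally; only the top scorers go on to the biolab.
ranked = sorted(candidates, key=lambda c: score(c[1]), reverse=True)
shortlist = [name for name, _ in ranked[:2]]
print(shortlist)  # → ['cand-B', 'cand-C']
```

The speed-up comes from that filter: millions of candidates can be scored computationally before a single one is synthesized in a lab.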
00:07:41.000And then, you know, I'm in close contact with a radiologist, and he speaks about this all the time.
00:07:46.000And I read this in the literature all the time: in the analysis of x-rays and other medical visualizations, machine learning and visual recognition AI has, without a doubt, been used for a long time now to locate different abnormalities in an x-ray, just to stick with that example.
00:08:06.000And so, it has allowed radiologists to identify cancers much more readily.
00:08:11.000The AI visual learning systems are much better than radiologists on the whole at identifying very small anomalies before they become serious.
00:08:21.000And so, this is a huge advantage, not only to people who potentially have cancer or other problems, but also to the corporations who use this AI.
00:08:31.000The biomedical corporations still need radiologists to sign off on it.
00:08:36.000But the same radiologist who would only be able to go over, say, a handful of x-rays in one day can now review far more with the AI doing the first pass.
00:08:46.000So, one piece of information I found interesting: artificial intelligence can now tell the difference between male and female retinas with extremely high accuracy.
00:08:55.000But we, as humans, have not yet discovered any differences between male and female retinas.
00:09:00.000Is that just pattern detection that they're able to do?
00:09:03.000I mean, that's one example where all of a sudden the artificial intelligence machine itself is actually getting to a level of, you know, breakthrough or pattern recognition that we humans have not yet been able to see.
00:09:16.000So, the main power that artificial intelligence confers is pattern recognition.
00:09:24.000And in all of its narrow domains, it exceeds human pattern recognition by orders of magnitude.
00:09:31.000And so, the identification, I've actually not seen that study, very interesting, but just to imagine the identification of differences between male and female retinas, the fact that an AI was able to detect that beyond any sort of human observation doesn't surprise me at all.
00:09:48.000That's true in pretty much everything that I'm talking about here.
00:09:52.000And that extends out of the medical industry into finance, that extends into the criminal justice system, that extends out to military applications with battlefield simulation or battlefield surveillance or communication surveillance or the detection of cyber attacks.
00:10:13.000So, in all of these domains, even though the AI can only do what it's trained to do in that domain, it exceeds human output in every case.
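A minimal sketch of that kind of statistical pattern detection, assuming entirely synthetic data: a nearest-centroid classifier separating two classes whose feature averages differ too subtly to spot in any individual sample:

```python
import random

def centroid(rows):
    # Average each feature column: the aggregate "pattern" the machine extracts.
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, centroids):
    # Assign the sample to whichever class pattern it sits closest to.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Synthetic stand-in data: two classes whose feature means differ only
# slightly, too subtle to spot by eyeballing individual samples.
random.seed(0)
data = {
    "A": [[random.gauss(0.50, 0.05), random.gauss(0.30, 0.05)] for _ in range(200)],
    "B": [[random.gauss(0.52, 0.05), random.gauss(0.28, 0.05)] for _ in range(200)],
}
centroids = {label: centroid(rows) for label, rows in data.items()}

# Score accuracy on the same samples (illustration only, not proper evaluation).
correct = sum(classify(s, centroids) == label
              for label, rows in data.items() for s in rows)
accuracy = correct / 400
print(round(accuracy, 2))
```

Aggregated over hundreds of samples, the tiny shift in the averages becomes a usable signal, which is roughly how a classifier can pick up a difference no human observer has named.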
00:10:24.000Even GPT, even though GPT produces really bad poems and tends to hallucinate all these false answers, on the whole, GPT is able to draw from a corpus of language far larger than any human.
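The statistical core of a language model can be caricatured with a toy bigram counter, a vanishingly small cousin of the statistics GPT learns over its enormous corpus:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count which word follows which: the crudest possible statistical
    # language model, a stand-in for GPT's vastly larger learned statistics.
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    # Return the statistically most likely continuation, if any.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = ("the model predicts the next word "
          "the model learns patterns from text")
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → model ("model" follows "the" twice, "next" once)
```

Hallucination has the same root: the model emits whatever is statistically likely given its counts, with no check on whether it is true.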
00:10:38.000Stay right there, Joe Allen, who is the nationwide expert, in my opinion, on artificial intelligence.
00:10:43.000Okay, so we've gotten the positive, and then we're going to get deep into the dark elements because that's honestly where we're headed.
00:10:56.000They're done supporting companies that rake in hundreds of millions of dollars, sometimes billions of dollars, while trashing the country that made their success possible.
00:11:04.000Until recently, we had to take it, but companies like Patriot Mobile are building a whole new economy, one which embraces the values that made America the greatest nation on earth.
00:11:14.000Look, Patriot Mobile is America's only Christian conservative wireless provider.
00:11:19.000Look, they offer dependable coverage for all three major networks, and they offer you a performance coverage guarantee.
00:11:25.000If you're not happy with your coverage, you could switch to a different network for free without changing carriers.
00:11:30.000All this, plus the knowledge that you're supporting free speech, the sanctity of life, Second Amendment, and our military first responder heroes.
00:11:38.000Their 100% U.S.-based customer service team makes switching awfully easy.
00:11:43.000Just go to patriotmobile.com/charlie or call them today at 878-PATER.
00:12:24.000Primarily, Charlie, he believes that because of the work of Nick Bostrom, who is an Oxford philosopher, transhumanist, co-founder of the World Transhumanist Association.
00:12:35.000And Nick Bostrom published a book in 2014 called Superintelligence.
00:12:41.000And Superintelligence basically lays out all of the different paths that an artificial intelligence system or series of systems could take to superhuman intelligence, and then how those systems could destroy all of humanity, or at least some significant portion of it.
00:13:01.000So, Musk has taken this up really since 2014, which is when you really start hearing him speak out about this, mainly because of that book.
00:13:11.000And Nick Bostrom, incidentally, was also very much influenced by Eliezer Yudkowsky, who is at the Machine Intelligence Research Institute.
00:13:24.000And Yudkowsky is the one who really has stirred up all of this controversy about whether or not artificial intelligence poses an existential threat, especially because of a Time magazine op-ed that he published around the same time that Elon Musk and company signed their open letter for an AI moratorium.
00:13:49.000Yudkowsky argued not that AI is a danger or some distant existential risk, maybe in the future.
00:13:58.000He just flat out says if these systems are allowed to get above where they're at now, and maybe if they're allowed to remain where they're at now, they will inevitably kill us.
00:14:09.000So, of course, that's like the far, far, far end of the kind of doomer spectrum.
00:14:16.000But I do think that even if you don't believe that AI poses some kind of existential risk, as those guys do, there are so many other really dramatic downsides to this technology that those at the very least need to be taken seriously and accounted for.
00:15:04.000Well, so, first off, I just want to state my own concerns.
00:15:08.000My concerns are three: one, you have this intense development of a human-AI relationship that I think is very unhealthy.
00:15:17.000Two, you have an enormous threat to jobs, especially white-collar jobs.
00:15:22.000I'm not all that sympathetic to the white-collar, but at least we have to admit that this is going to be a big, big problem insofar as social structure.
00:15:31.000And three, you've got this push to put AI in education, and that I believe will be undoubtedly one of the most intense brainwashing tools that has ever been unleashed.
00:15:44.000But if you look at what they're talking about, they're talking about AI systems that kill us all or disrupt society and civilization so much that everyone has to unplug their computers and throw them away, right?
00:15:56.000Basically ending the industrial era as we know it.
00:16:00.000And so oftentimes Yudkowsky and sometimes Bostrom and definitely Musk, they're criticized because they don't lay out these definite paths.
00:16:11.000An intelligence explosion could happen.
00:16:15.000And if that intelligence explosion is not aligned with human interests or human existence, then that intelligence explosion could end the world, as they put it over and over again.
00:16:28.000But even despite that sort of dramatic projection of this very abstract future, there are those, including Yudkowsky and Bostrom, who have actually laid out specific paths which would lead to that destruction.
00:16:44.000Running short on time, maybe we hit that in the next segment, but I will say this.
00:16:52.000That AI, these direct paths, they are also of major concern.
00:16:58.000But I really think that the most important thing that people need to focus on are the immediate effects of artificial intelligence on psychology and society.
00:17:10.000Are you feeling burned out and a little tired?
00:17:12.000Look, I want to tell you about something that I've become a big believer in.
00:17:15.000And if you do not know about it, you got to research it.
00:17:20.000NAD is a precursor for your body to be able to create ATP, which is basically the life force of everything that you do.
00:17:28.000And look, there's a lot of people out there that are promising energy and doing all this, but go do some research on NAD and go see actually how incredibly important it is for high performance to be able to go actually get it to the next level.
00:18:10.000I eat well and do other things as well.
00:18:13.000But if you look at NADH, especially when it combines with CoQ10 and marine collagen, it boosts your body's cellular function.
00:18:20.000I would never tell you guys to go do something I myself did not do.
00:18:24.000And Strong Cell has been able to put together a scientific breakthrough in cellular health replenishment that combines NADH, CoQ10, and marine collagen.
00:18:33.000And when you combine them together, you get mental clarity.
00:18:49.000I'm talking about overall health from the cellular level.
00:18:52.000NADH has been called the anti-aging enzyme that helps with so many issues like brain fog, short-term memory loss, blood pressure, heart disease, blood sugar retention, and so much more.
00:20:21.000So, to lay this story out, this all unfolded about two weeks ago.
00:20:26.000You had the Future of Life Institute, which is ultimately composed of mostly transhumanist-leaning individuals, release an open letter calling for a six-month moratorium on any AI system above the level of GPT-4.
00:20:43.000Now, the signatories include Max Tegmark, author of Life 3.0 and one of the co-founders, Stuart Russell, a computer scientist, and then, of course, Elon Musk and Yuval Noah Harari, along with I think now up to 2,000 other AI experts.
00:20:59.000And the dangers that they point out are the media and the internet environment being flooded with disinformation, which I think is a very real danger.
00:21:10.000They point out the loss of jobs, mass loss of jobs, including fulfilling jobs, which I think is a very significant danger.
00:21:18.000Goldman Sachs just put out a report where they estimate 300 million jobs will be lost worldwide due to AI, for instance.
00:21:27.000And then finally, they worry that human beings will lose control of civilization.
00:21:32.000And I think that even if artificial intelligence doesn't go any further than it is now, what you will end up with, and where we're already going, is that most of us are losing control of our civilization.
00:21:46.000And that power, the power of the direction of our civilization, lies in the hands of technocrats, tech corporations, government, and military institutions that have little to no regard for our wants, our will, and our needs.
00:22:04.000You see that in so many different levels.
00:22:06.000So Yuval Noah Harari signing on to that really doesn't surprise me.
00:22:10.000I have a much more positive view of Yuval Noah Harari than most, even if I disagree with him profoundly on the very basics of what reality is.
00:22:19.000I think his warnings are oftentimes ignored in favor of the sort of provocative and especially vicious anti-religious rhetoric that he puts out.
00:22:32.000But it doesn't surprise me that he signed that.
00:22:34.000And it was in response to that letter that very same day that Eliezer Yudkowsky from the Machine Intelligence Research Institute published that Time magazine op-ed saying it's not enough.
00:22:59.000So Nick Bostrom's Superintelligence lays this out in the greatest detail, but Yudkowsky has laid it out in a number of articles, mostly published at LessWrong and other outlets that are mainly in that sort of existential risk community.
00:23:16.000What they argue basically is that artificial intelligence, especially an artificial general intelligence.
00:23:23.000So artificial narrow intelligence being all the systems we have now dedicated to language, to battlefield simulation, or to biological simulation.
00:23:33.000Artificial general intelligence is a system that has basically overlapping narrow intelligences, so multiple cognitive modules like the human brain and can move flexibly between them.
00:23:47.000And also, all of them run simultaneously, right?
00:23:50.000And so, what you're talking about then is at least in a kind of alien form, a human-like intelligence, but it goes much faster.
00:23:58.000It has an infinite memory, basically, and it's able to look at much larger amounts of data.
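That "overlapping narrow modules" picture can be caricatured as a simple dispatcher in Python; the router and both toy modules here are invented purely for illustration:

```python
# The router and both toy "modules" below are invented for illustration.

def language_module(task):
    # Pretend narrow language competence.
    return f"parsed: {task}"

def math_module(task):
    # Pretend narrow arithmetic competence.
    return sum(int(n) for n in task.split("+"))

MODULES = {"language": language_module, "math": math_module}

def dispatch(kind, task):
    # "Moving flexibly between narrow intelligences" = picking a module.
    return MODULES[kind](task)

print(dispatch("math", "2+3"))        # → 5
print(dispatch("language", "hello"))  # → parsed: hello
```

The hard, unsolved part of AGI is everything this sketch hides: where the routing decision itself comes from, and how the modules share what they learn.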
00:24:06.000And so, the risk, the existential risk to humanity, as these guys put it forward, is that that system will be programmed with, or will spontaneously develop, a will of its own.
00:24:20.000It will have a sort of desire for self-perpetuation, and because of that, it will be an evolutionary competitor with human beings.
00:24:32.000Or, worse, if there are multiple such systems, they will be evolutionary competitors with human beings.
00:24:39.000And because it's an artificial intelligence sitting on a server or sitting in the cloud, potentially it could be replicated indefinitely so that you end up with thousands, millions, billions of these super intelligent AIs.
00:24:55.000And if they are not aligned with human interests and human values, or if they're not aligned with the necessity of human existence, they argue they would just simply manipulate the infrastructure that they have access to to destroy us.
00:25:13.000So, the two major bits of the infrastructure that are pointed out are weapon systems, especially nuclear weapon systems, if they were able to hack into them, or biological systems like biolabs,
00:25:28.000so that an AI would basically create in silico some sort of deadly virus and then order it on the sly from one of the many biofoundries that exist across the world that create mutant microbes to order and would then unleash this on the world.
00:25:49.000Now, we're in serious sci-fi territory there.
00:25:53.000And then, the third possibility is that maybe these AIs or this one major AI would not have access to any of those systems, but it would have access to human beings who are in control of those systems.
00:26:08.000And so, the AI would then manipulate human beings to either launch nuclear warheads or launch any sort of weapon at other human beings, or to create and release a virus, or to bring planes out of the sky, or to create a situation in society in which we go to war.
00:26:29.000Maybe it targets specific leaders and convinces a leader, such as Putin, let's say, or Joe Biden, if you consider him to be in control, to start World War III, whatever, right?
00:26:42.000That is the sort of vision, those are the realistic pathways.
00:26:45.000And they go on and on and on to much more, I think, implausible sorts of scenarios.
00:26:51.000But that's basically what they're talking about when they're talking about AI systems that could destroy humanity.
00:26:58.000So, what needs to be done politically or otherwise to ensure that situation does not occur?
00:27:06.000You know, frankly, I don't have any real answer to that.
00:27:22.000Yann LeCun from Meta is just one example of that sort of resistance.
00:27:28.000He just completely dismisses all of the dangers, including the lesser dangers that I talked about earlier, the psychological damage and social damage.
00:27:57.000So the second option brought forward by Yudkowsky: the U.S. puts a hard stop and basically shuts down all large GPU clusters in the U.S., and then signals to China that if they don't stop, perhaps there's going to be a major problem.
00:28:17.000And he goes as far as to say that once this sort of ban is in effect, if intelligence has any sort of indication that an advanced AI is being trained on foreign soil, the U.S. should launch an airstrike, even if that means nuclear war, because in his mind, artificial intelligence is more dangerous.
00:28:38.000So politically, I think there's no immediate solution that I can see.
00:28:44.000You could do something really foolish, like what Yudkowsky is talking about, and start World War III, or something that looks like the RESTRICT Act, where all of a sudden all of these civil liberties are in danger of being squashed by the president or the Commerce Department, right, and the U.S. government.
00:29:03.000And so politically, I don't think there's really much to do.
00:29:12.000I think the best way to think about this is that all of us, human beings, are under threat from these systems, not necessarily because the systems are going to kill all of humanity, but because these are systems of technocratic control.
00:29:29.000And so first, identifying that problem, which I think is pretty well identified, and second, organizing among ourselves and in the institutions, the sort of low-level or mid-level institutions that really do have power and influence, and figuring out how it is that we can do without these systems.
00:29:48.000Or for those who think that it's best to adopt them as weapons against the larger structure, to adopt them within limits in order to survive what is inevitably, and I think this is part of this sort of inevitability that I foresee, major economic downturn coupled with real technological advances so that the upper crust becomes more powerful and the populace below becomes less powerful.
00:30:24.000And I think we should brace ourselves for something like that with these technologies, but we at least have the chance to come up with strategies of how to remain outside of those systems so your kids aren't being raised by AI bots, so that your job is not under threat by those bots.
00:30:43.000And you maybe as an employer make the decision, I am not going to replace humans with artificial intelligence.
00:30:50.000And in general, that we as a community and we as individuals are not going to become part of and participate in this human-AI symbiosis pushed by people like Kurzweil and Musk.
00:31:04.000So how much, really quick, and I'm asking for a reason, how much does ChatGPT cost to develop?
00:31:08.000Like if we were to develop a natural law AI, because I mean, based on what you're telling me, the only logical solution is we have to create our own super weapon to be able to deter what they're going to do.
00:31:19.000We just can't hope it's going to get better.
00:31:33.000It's one that I fear has all of the temptations of, as much as I hate to go there, the Ring in The Lord of the Rings.
00:31:44.000But I do think that it's a reasonable response.
00:31:46.000And a lot of people on our side will be doing that.
00:31:50.000And so the sort of logical pathways towards that should also be on the table, in my opinion.
00:31:56.000Although for me, Charlie, I will admit, I'm a Luddite by instinct.
00:32:00.000And I do think that the more we distance ourselves from this, while not losing sight of it totally, the better.
00:32:10.000At the same time, it's exciting to be able to use pattern recognition to find tumors in kids that are dying unnecessarily right now in children's hospitals.
00:32:18.000Like that actually appeals to me, right?
00:32:20.000If you could all of a sudden run the blood work of 100,000 kids that are dying of, you know, sickle cell disorders or leukemia, you might be able to give them life.
00:32:30.000But Joe, I mean, if there's not a political solution and it's not realistic to ban it and we can only avoid it so much, I mean, isn't it logical to have some sort of a check and balance?
00:32:39.000It's like the Ring from The Lord of the Rings, but our founding fathers demonstrated that there is a way to develop a structure, a system.
00:32:45.000I mean, you could almost take constitutional principles, checks and balances, separation of powers, and put that into an AI type format where we reluctantly say, okay, we don't love the fact we have to do this, but we got to be able to compete in the AI space or else the bad guys, the wokeys, are going to use it for total evil, right?
00:33:07.000Because there is a lot of good to be done.
00:33:28.000At this point, Charlie, I don't think any sort of doomsayer scenario is totally off the table, you know, within degrees.
00:33:37.000I think that as far as people on our side are concerned, however one conceives of it, let's just say the populist right, working Americans, legacy Americans.
00:33:50.000I think that one of the most important things young people can be doing right now is learning about these technologies and learning how to use them, whether it's AI programming or just simply using an AI program, or Bitcoin or other blockchain technologies, because these are going to be the kids that we're relying on, or older people too, but mainly it's going to be kids that are able to do it.
00:34:18.000And we're going to need that expertise going forward, if only to defend our little enclaves.
00:34:25.000As far as creating anything on the scale of GPT, I don't know what the initial investment was.
00:34:32.000I know Elon Musk initially invested $100 million along with other investors.
00:34:42.000And then in 2019, Microsoft gave them $1 billion to really advance their GPT technology, the generative pre-trained transformers, the language technology.
00:34:55.000And then late last year or earlier this year, Microsoft put in an additional $10 billion.
00:35:05.000And so the power, the real power of GPT isn't necessarily in the architecture or in the programming, but in the sheer scale of data and computing power.
00:35:15.000And they just make these artificial brains bigger and bigger and bigger.
00:35:19.000So it's very difficult for me to imagine how right-wing populists or any of the major financial backers would be able to come up with something that would compete with that.
00:35:30.000Now, smaller systems could definitely defend against it, but I think that to a certain extent, as far as just raw power is concerned, there's a certain degree of tragedy in all this.
00:35:45.000You know, there's a certain degree of resignation that these major tech corporations have the resources and the pre-existing expertise and infrastructure to create systems that we will never be able to compete with in the coming decade at the very least.
00:36:00.000You have guys like Peter Thiel, who, you know, ostensibly are on our side, and Palantir is definitely a very powerful system in its domains.
00:36:09.000But again, I think that what we're talking about in this AI arms race is something much more akin to just a spiritual descent into lower and lower realms than any kind of normal worldly competition for power that we would have known previously in history.
00:36:27.000Other than that, Mrs. Lincoln, how was the play?
00:36:36.000I do think despite all of my kind of doomerism, I don't think that people have to worry about artificial intelligence turning us all into paperclips anytime soon.
00:36:48.000As far as how to think about this going forward, education, number one, educate yourself on it.
00:36:55.000And resistance, number two, remember what it's like to be human and maintain that humanity in the face of a dramatic shift in the culture going forward.
00:37:08.000I hope solutions will start to emerge.
00:37:10.000We should start to pray on that, because there is a God and we are not him, regardless of how hard the transhumanists try.