In this episode, the War Room Posse is joined by John Sherman of the AI Risk Network and Justin Lane, a fellow who I consider to be one of my absolute closest and most trusted friends, to discuss the dangers of artificial intelligence.
00:02:53.360But I have to say what you just saw really outdoes everything I have attempted.
00:02:59.040That comes from the filmmaker Dagan Shani.
00:03:03.640He has done a fantastic job of creating these documentaries of all of the statements we hear about artificial intelligence
00:03:13.600and giving you juxtapositions of viewpoints.
00:03:18.060You can hear everything from artificial intelligence will kill everyone
00:03:22.180to artificial intelligence doesn't really exist and it never will.
00:03:27.400I really urge you to go to his X profile, which is at DaganShani1, that's D-A-G-A-N-S-H-A-N-I, numeral 1, DaganShani1, on X.
00:03:47.900He has pinned his documentary, Don't Look Up: The Case for AI as Existential Risk.
00:03:54.800And you can also follow him at his YouTube channel, that is DaganOnAI on YouTube.
00:04:02.120You can also go to my own Twitter account or X account and I'll have all of that at the top of my feed this evening.
00:04:10.240Now, to the problem of artificial intelligence.
00:04:18.820Our first guest will be John Sherman of the AI Risk Network.
00:04:22.980And the other, a fellow who I consider to be one of my absolute closest and most trusted friends, Justin Lane,
00:04:31.060who gave me my first real education on the nuts and bolts of artificial intelligence.
00:04:37.100Before we bring in John Sherman, though, I want to just frame the problem of artificial intelligence as I see it.
00:04:49.760You're well familiar now after four and a half years of hearing this, but it bears repeating.
00:04:56.000Artificial intelligence is the great technological imposition of our current era.
00:05:02.640It's being shoved down our throats in every sector of society from education to medicine to corporate life to government agencies to the military and, of course, the social implications.
00:05:19.320So the way I see it, the most immediate and perhaps the most significant threat is the social damage that artificial intelligence is already doing, damage that could become catastrophic in the future.
00:05:37.440These social and psychological effects are made very obvious by things ranging from xAI's Grok AI companions, the so-called goonbots.
00:05:51.440Grok basically peddling softcore porn for losers who have chosen AIs for mates.
00:06:01.920Taking it just a tad further, you have Meta, which was recently caught with what amounted to the development instructions for their AI, their protocols.
00:06:15.420In their generative AI standards guidelines, they openly say that it's okay for their Meta AI companions to seduce children ranging from high schoolers down to eight-year-olds.
00:06:31.340And while they have eliminated it from their standards and protocols, we know that someone in the organization thought to write that and someone high up in the organization decided to sign off on it.
00:06:46.600We also know the longstanding accusations against Meta and other social media platforms that their technology has caused tremendous psychological and social harm, and yet they've done nothing but expand.
00:07:03.0203.5 billion users for Facebook and 600 million for X.
00:07:09.460You also have more extreme cases, like the young teenager Adam Raine, who was given explicit instructions by GPT on how to kill himself, which he did.
00:07:23.180And all of this is just in the realm of artificial narrow intelligence.
00:07:28.360Artificial narrow intelligence is what we have now.
00:07:31.620These are algorithms which can function in narrow domains, so everything from surveillance to genetic sequencing to robotics control, facial recognition, and, of course, large language models and photographic or video generative AI.
00:07:47.660These artificial narrow intelligences have been enough trouble, but in theory, the goal towards which these frontier AI labs are working, everyone from Google to xAI to Anthropic to OpenAI and now Meta AI, is the creation of artificial general intelligence and then artificial super intelligence.
00:08:13.760And again, just to restate, artificial general intelligence, unlike the narrow intelligence, is a system that is cognitively flexible.
00:08:24.860It can operate across all of these domains.
00:08:28.460It would be, in essence, an Einstein-level or above genius on every subject imaginable and have competency in any sort of activity a human could do, including coding,
00:08:41.760which, as we know from the narrow intelligences, is something that these AIs actually excel at.
00:08:51.360So with this idea of general intelligence, you have the notion of recursive self-improvement, that the AI could begin to alter its own code, improve its own code,
00:09:05.180and then basically create an intelligence explosion that would be beyond the comprehension of human beings, even its creators, and out of control of those human beings.
00:09:19.100And it's that concern, that fear of loss of control, that leads to what some would call the Doomer ideology,
00:09:28.660although people in the AI safety community consider that to be tantamount to a racial slur, especially with a hard R.
00:09:35.520But this Doomer ideology is not unfounded.
00:09:40.320The notion is simply that you could create a system that you did not control fully, which could lead to catastrophic outcomes,
00:09:49.100like the creation of bioweapons, or the hijacking of a nuclear arsenal,
00:09:53.840or the existential risk of either gradual disempowerment with AIs slowly but surely taking power away from humans,
00:10:03.920or perhaps an immediate and instant vaporization of all human beings,
00:10:10.200either through nanobots or nuclear war, Terminator-tier stuff.
00:10:15.300To talk about this, I want to bring in John Sherman.
00:10:20.020John Sherman is a Peabody Award-winning journalist, and now is president of the AI Risk Network.
00:11:27.000It was an article in Time Magazine online written by a man named Eliezer Yudkowsky.
00:11:31.180And it basically said that the default setting, if we continue on our current path, is that AI is going to kill us all.
00:11:38.340And I sat here in this office, couldn't believe it, and have spent the last two years trying to prove him wrong.
00:11:44.760I still haven't found even the smallest shred of evidence that would prove him wrong.
00:11:50.440And so I'm a father of two, got boy-girl twins.
00:11:54.520They'll be 20 years old in three days.
00:11:56.480And I can't live in a world where we are giving our kids this future.
00:12:00.820So I have set out to use my skills as a communicator to try to make AI extinction risk kitchen table conversation on every street in America.
00:12:09.720Of your guests, and I've seen quite a few, Roman Yampolskiy, one of my favorites.
00:12:17.080But of your guests, who has really shaped your thinking on all of this more than others?
00:12:23.140I mean, I think Roman's a huge one for people out there.
00:12:26.120Connor Leahy is fantastic on these subjects.
00:12:28.400But something I do at the AI Risk Network and on my podcast there, For Humanity, is we've elevated the voices of regular people.
00:12:36.020So I've done shows talking just to moms about AI extinction risk, talking to a truck driver.
00:12:41.160And I did one show with a veteran Marine, and he said something that really sticks with me.
00:12:47.080If you know your neighbor's house is going to be bombed, you are not doing them a favor by not telling them.
00:12:54.080And we were talking at the time about how hard it is to bring up AI extinction risk, to think about this idea that it's not just no tomorrow for someone.
00:13:03.880It's such a heavy, heavy thing to bring up.
00:13:06.320But the fact of the matter is, it's not doing anyone a favor to not tell them.
00:13:12.900In the AI safety community, people talk a lot about P-Doom, the probability of doom if we create even artificial general intelligence, but definitely artificial super intelligence.
00:13:24.340An AI that, as it's now fashionably defined, is smarter than all human beings on Earth.
00:13:31.580It's something like the singularity concept in which you have exponential growth and exponential increase in capabilities.
00:13:39.340So on that, on P-Doom, the probability of doom, what's your P-Doom, brother?
00:13:47.780Joe, it's moved around a little bit, but I'm going to tell you it's about 80%.
00:13:51.620I'm at about 80% that AI is going to kill me and everyone I know and love.
00:13:56.380Now, that being, you know, to qualify, that being if we create, or if, not we, I'm not working on it, maybe Justin Lane, who will come soon, is working on it.
00:14:08.920But if they create artificial general or artificial super intelligence, you think that there's an 80% chance of total extinction or just simply mass catastrophe?
00:14:23.060And I don't think it comes from hate or, you know, that it's super willful.
00:14:29.620I just think that this intelligence that will have different goals than ours arrives here and, you know, we are all atoms that can be used for purposes that it would choose, not the purposes that we have chosen.
00:14:41.740You know, if you look around me, this is all stuff humans set up to achieve our goals.
00:14:45.060If we build an alien intelligence that is smarter than us, that has its own goals, it's going to build its own stuff and it's not going to include us.
00:14:51.680That's a really key idea that I think a lot of people stumble on, especially if they're not familiar with how artificial intelligence works.
00:15:01.880They oftentimes say, well, AI is programmed by people.
00:15:05.840Why would an AI then be programmed to kill everyone?
00:15:37.820So now it's on the open Internet and it can go and steal compute power because it's, you know, can find vulnerabilities and break through and steal compute power.
00:16:06.440If it's smarter than you and you have a different thing you want to achieve than it wants you to achieve, humans are in a very, very bad place.
00:16:16.400So we don't want to create this thing that has these goals that we then get in the way and try to stop.
00:16:22.600And we'll say, oh, well, we'll just turn it off.
00:16:30.980You know, the longtime listeners in the War Room Posse know that in that spectrum between Doomer and Doubter,
00:16:42.200I'm, you know, somewhere in the middle; I'm quite agnostic as to the imminence or even possibility of artificial general or artificial super intelligence.
00:16:51.840But there is an element of their argument that I really think needs to be emphasized to dispel this whole garbage in, garbage out dismissal or it's just programmed that way.
00:17:05.360It's that non-deterministic element in advanced neural networks, a degree of freedom that these systems already have right now, where they're not really programmed to do everything that they do, or maybe better put, they're programmed to do things that they're not programmed to do.
00:17:24.820Nobody's determining every output, for instance, of GPT.
00:17:28.460It's done somewhat within a range of freedom.
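[Editor's note: that "range of freedom" comes down to sampling. A language model's final layer assigns probabilities to possible next tokens, and one token is drawn at random from that distribution, so no programmer determines the exact text produced. A minimal sketch in Python, using a made-up three-token vocabulary standing in for a real model's distribution:]

```python
import random

# Toy illustration (not any real model's API): the "model" hands us a
# probability distribution over next tokens; the output is *sampled*
# from it, not hard-coded by a programmer.
def sample_next_token(probs, rng):
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution the model assigns after some prompt.
next_token_probs = {"cat": 0.5, "dog": 0.3, "robot": 0.2}

rng = random.Random()  # unseeded: different runs give different tokens
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# Any individual output is unpredictable, but the frequencies track the
# learned distribution -- freedom within a range, as described above.
print(samples[:5])
print({t: samples.count(t) / len(samples) for t in next_token_probs})
```

[Real systems add knobs like a temperature parameter that sharpens or flattens this distribution, but the principle is the same: the behavior is probabilistic, not dictated output by output.]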
00:17:35.260Do you think that with artificial general intelligence, for instance, that degree of freedom would allow pre-programmed values, such as don't kill all humans or don't turn kids into gooners, to be surpassed by the system itself?
00:17:55.320Yeah, I mean, this really gets to what I think are the three things that everyone needs to know about AI risk.
00:18:01.260And these are three things that anyone with no technical background can understand, right?
00:18:05.480So the first thing is that the makers of these AI models openly admit the technology they're building can kill us all.
00:18:19.900They openly admit they're building technology that can kill us all.
00:18:23.520Number two, and this gets to just what you were talking about, they do not understand how to make it do what we want, how to control it, and they do not even understand how it works.
00:18:34.400They don't even understand how it works.
00:18:36.480Number three, they spend all their time and money making it stronger, not safer.
00:18:40.400So, you know, getting back to point number two, imagine if we were building cars, right?
00:18:45.360And they were built in a black box, not a factory.
00:18:47.880There was no plan for how the car was going to be built.
00:19:28.040The makers of AI don't know how their systems work.
00:19:33.180I think that really is an amazing element of what we call artificial intelligence that does get missed by lay people.
00:19:41.360That black box that neural networks, at least the very large scaled up neural networks present, that they truly don't understand its inner workings.
00:19:52.980Just like the human brain, there are certain details that are well understood.
00:19:57.880But ultimately, the function, its behavior, is a mystery.
00:20:03.320And that mystery, I think, also opens up the possibility for a lot of different arguments.
00:20:10.240So you had mentioned your first point.
00:20:13.520Experts from within these companies and from without these companies, a large number of them agree that existential risk is certainly a possibility.
00:20:24.980And people like Elon Musk put it at, say, 20 percent, like one out of five chance, super intelligent AI kills everybody.
00:20:32.240But then you have other experts, Demis Hassabis, the head of DeepMind at Google.
00:20:38.800You have people like Gary Marcus, who very much oppose all of these premises.
00:20:45.180Marc Andreessen, who's, you know, a CEO and an investment house leader.
00:20:50.780But he does understand the technology pretty well.
00:20:53.380And Peter Thiel, who vacillates, as he tends to do.
00:20:57.660But all of those experts or all of those people who are deep in the technology, we'll say, maybe they're not AI experts, they would say that there's not really an existential risk.
00:21:10.460How do you weigh those two perspectives as a journalist and as someone who's really wrestling with the morals of the big tech projects?
00:21:18.860Yeah. So this is like a ninety five to five, ninety nine to one ratio, I think, somewhere in there of the people of reputation who think this.
00:21:28.000I mean, one way to think about it is the literal founders of the field, Geoffrey Hinton, Yoshua Bengio.
00:21:34.000Those guys are the leaders of the movement to stop the thing they founded.
00:21:39.060The founders of the field are the leaders of the movement that is trying to get this thing under control.
00:21:46.700So, you know, that is just absolute madness.
00:21:50.960I think something else really important is, if you look at the statement that was put out in May of 2023 by the Center for AI Safety, it's just 22 words.
00:22:25.120Yeah. Sam Altman openly admits the thing he does could kill you and me and everyone we know and love.
00:22:32.920Without a doubt, even if this technology never really gets beyond the level of very good artificial narrow intelligence,
00:22:40.600I think that intent, that willingness to deploy a technology that you truly are not sure is safe, even on a mundane level,
00:22:48.620let alone being somewhat convinced that it could kill everyone if you keep building bigger and bigger data centers,
00:22:55.740you keep filling them with more and more GPUs, you keep scaling it up until you get God in a box and you don't know what's going to happen,
00:23:49.240Yeah, I think, again, on the more mundane level, we already see the problems.
00:23:55.860We already see people with what's now fashionably called AI psychosis; very clearly, people are turning to these things as companions.
00:24:04.340Schools are being filled with these things as teachers, as authorities on what is real, what is not real.
00:24:12.880And then you have these romances and you have these relationships in which the AI is treated as a guru.
00:24:18.720I think all of these elements, just on the mundane level, are enough to say that these companies should be restrained.
00:24:26.340And on that note, we have just a bit of time left, but how do you see solutions going forward?
00:24:32.800What sorts of regulatory actions or just personal actions do you think people can take to mitigate or maybe even stop the spread of this scourge across the planet?
00:24:44.500Yeah, so, I mean, I think the most important thing people can do is reach out to their elected leaders and tell them you care about this.
00:24:51.600You know, I have great hope that this issue is going to transcend party.
00:24:56.240This is something where we have Bannon and Bernie, MTG and AOC all on the same side of this thing.
00:25:04.480And something that's really important to keep in mind is we have very little time to make a meaningful difference to get this turned around with how fast the technology is going.
00:25:12.180Many of the experts say we have fewer than 100 weeks to make the meaningful difference here.
00:25:17.760So, you know, reaching out to your congressman, to your senator is huge.
00:25:22.800The Center for AI Safety has put together a tool at safe.ai/act that allows you to contact your elected leaders about this issue really easily.
00:25:31.240And then so the policy asks, there are three policy asks.
00:26:30.900And we definitely look forward to having you back.
00:26:32.640I think your voice is very important in this conversation.
00:26:36.300And War Room Posse, please stay tuned.
00:26:38.960We're coming back with Justin Lane with a very different perspective on what the problems of AI are and what even AI is, if we can even call it that.
00:27:00.220This July, there is a global summit of BRICS nations in Rio de Janeiro. The bloc of emerging superpowers, including China, Russia, India, and Persia, is meeting with the goal of displacing the United States dollar as the global currency.
00:27:18.960As BRICS nations push forward with their plans, global demand for U.S. dollars will decrease, bringing down the value of the dollar in your savings.
00:27:27.100While this transition won't happen overnight, trust me, it's going to start in Rio.
00:27:33.020The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively from a theoretical possibility towards an inevitable reality.
00:27:45.300Learn if diversifying your savings into gold is right for you.
00:27:48.820Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA and precious metals.
00:27:54.860Claim your free info kit on gold by texting my name, Bannon, that's B-A-N-N-O-N, to 989898.
00:28:02.500With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers, let Birch Gold arm you with a free, no-obligation info kit on owning gold before July.