In this episode of The War Room, guest host Joe Allen sits down with Nate Soares, co-author of the book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," to discuss the problem of artificial superintelligence.
00:00:29.000I wish in my soul, I wish that any of these people had a conscience.
00:00:34.000Ask yourself, what is my task and what is my purpose?
00:00:38.000If that answer is to save my country, this country will be saved.
00:00:45.000War Room. Here's your host, Stephen K. Bannon.
00:00:50.000Good evening. I am Joe Allen, sitting in for Stephen K. Bannon.
00:01:00.000Many of you in the War Room Posse, if not most of you, are familiar with the concept of artificial super intelligence.
00:01:07.000A machine that outpaces human beings in thinking ability, in memory, in data collection, and if given access to the outside world through robotics or even manipulated human brains, would be able to outperform human beings in the real world.
00:01:26.000Now, you also know I'm quite skeptical of the claim that this is imminent or even possible.
00:01:37.000I'll tell you briefly about an experience I had at a forum featuring a presentation by Jaime Sevilla of Epoch AI.
00:01:48.000Epoch AI does evaluations on AI systems, testing them to see how good they really are.
00:01:56.000And what Jaime Sevilla presented was complicated.
00:02:03.000On the one hand, we all know that AIs are extremely fallible, but what he showed was that for some number of runs, GPT-5 could do mathematical calculations at the level of a PhD mathematician.
00:02:21.000Now, if you aren't a mathematician, perhaps you're not all that concerned about it, but what it shows is that objectively, without any question, this artificial mind can perform a specific cognitive task better than most human beings on Earth.
00:02:43.000Now, this was hosted by the Foundation for American Innovation, and Ari Kagan of FAI pointed out something very, very important, that on the one hand, we see that GPT-5 is incapable, oftentimes, of creating an accurate map or even answering a mundane question accurately or counting fingers.
00:03:09.000On the other hand, for some number of runs, GPT-5 can outperform the vast majority of human beings on Earth in complex mathematics.
00:03:21.000And even more interesting, it often does so by way of alien pathways, non-human pathways.
00:03:31.000It chooses routes through the information that no human being would and arrives accurately at its destination.
00:03:39.000So on the one hand, you have a top-performing artificial brain, and on the other, you have a mechanical idiot.
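One way to square those two facts, offered here as a standard framing rather than anything from Epoch AI's presentation: if a model solves a given problem on any single run with probability p, then its chance of at least one success across k independent runs is

$$\Pr[\text{at least one success in } k \text{ runs}] = 1 - (1 - p)^k, \qquad \text{e.g. } p = 0.2,\ k = 10:\ 1 - 0.8^{10} \approx 0.89,$$

where p = 0.2 and k = 10 are purely illustrative numbers. So a system can fumble most individual passes and still, "for some number of runs," land among the top performers.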
00:03:51.000How do we approach this problem as Americans, as human beings, as Christians, as naturalists?
00:04:03.000Joining us this evening is Nate Soares, co-author of the admittedly fantastic book,
00:04:11.000If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
00:04:19.000Co-authored with Eliezer Yudkowsky, well-known on The War Room for many reasons, but perhaps most for his advocacy of bombing data centers in any country that decides to build superintelligence in violation of any possible international treaty.
00:04:39.000Nate Soares, we really appreciate you coming on. Thank you so much for joining us.
00:05:19.000Why would everyone die if anyone built superintelligence?
00:05:24.000Yeah, so the very basic point is we're trying to build machines that are smarter than any human, that could outmaneuver us at every turn.
00:05:36.000That's the sort of thing where, just on its face, from a very basic perspective on its face, if you make machines that are much smarter than humans, that's at least kind of dicey.
00:05:51.000If you further don't know what you're doing while you're building these machines.
00:05:55.000If these machines are grown rather than crafted.
00:05:57.000If these machines have lots of warning signs when they're small, of ways that they aren't doing what anyone wanted or what anyone asked for.
00:06:05.000It just doesn't go well to build things much smarter than you without any ability to point them in some direction that you want them to go.
00:06:17.000And with modern AIs, we see that modern AIs are grown rather than crafted.
00:06:22.000You know, these are not traditional pieces of software where, when they do something you don't like, the engineers can look inside them, go through every line of code, and find the line of code that says, you know, oh, it was driving a teen to suicide today.
00:06:35.000I'll find the drive teens to suicide line and switch that line from true to false.
00:06:40.000I don't know who set that line to true.
00:07:02.000We could go a little bit into how they're created.
00:07:06.000Actually, yeah, though without going too much into technical detail, because I do want lay people in the audience to really clearly grasp what you're talking about.
00:07:17.000But this is a really important point, and I agree with you completely.
00:07:23.000This idea that the frontier AIs and even more primitive AIs from years past are grown, not crafted, or another way of putting it perhaps is that they're trained, not programmed.
00:07:38.000This is something that a lot of people get hung up on, even software engineers who are stuck in the 80s and 90s.
00:07:44.000Could you just explain to the audience what that means that these AIs are grown and how is it that you can get something out of the AI that you didn't train it for?
00:07:55.000Yeah, so modern AIs, you know, the field of AI has, in some sense, tried to understand intelligence since, you know, 1954.
00:08:05.000And in some sense, that field never really made progress in understanding really in depth how to craft intelligence by hand.
00:08:15.000You know, there were many cases over time where programmers were like, maybe, you know, it's a little bit like this or a little bit like that.
00:08:21.000And they tried to sort of handcraft some intelligent machine that could think well.
00:08:27.000It sort of never went anywhere. When AI started working, it started working because we found a way to train computers that works empirically, where humans understand the training process, but humans don't understand what comes out.
00:08:42.000It's a little bit like, you know, breeding cows, where you can take some traits you like and you can get out some traits you like, but you don't have precise control over what's going on.
00:08:54.000So the way it works is you have basically a huge amount of computing power, you have a huge amount of data, and there's a process for combining the data with the computing power to shape the computing power to be a little bit better at predicting the data.
00:09:12.000And humans understand the process that does the shaping, but they don't understand what comes out, what gets shaped.
00:09:19.000And it turns out if you take a really staggering amount of computing power, we're talking, you know, highly specialized computer chips in enormous data centers that take an amount of electricity that could power a small city, and you run them for a year on almost all of the text that you can possibly dig up that humans have ever written.
00:09:40.000You shape the computing power that much, the machines start talking.
00:09:45.000We understand the shaping process, we don't understand why the machines are talking.
00:09:49.000I mean, we understand why in the sense that, well, we trained them and they started working, but we couldn't look inside them and debug what's going on in there.
00:09:57.000And when they act in a way we don't like, you know, all we can really do is instruct them, stop doing that.
00:10:02.000And then sometimes they stop and sometimes they don't.
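To make "grown rather than crafted" concrete, here is a minimal sketch in Python of the kind of shaping loop being described. It is a toy stand-in, not any lab's actual code: the model, data, and update rule below are placeholders for networks with billions of weights trained on internet-scale text.

```python
# A toy version of the "shaping" loop: humans write every line of this,
# but nobody reads meaning off the numbers it produces.
import random

def predict(weights, context):
    # Stand-in for a neural network scoring the next token.
    return sum(w * x for w, x in zip(weights, context))

def loss(weights, context, target):
    # How badly the current weights predict this piece of data.
    return (predict(weights, context) - target) ** 2

def nudge(weights, context, target, lr=0.01, eps=1e-5):
    # Shift each weight slightly in whatever direction reduces the loss
    # (a crude finite-difference stand-in for gradient descent).
    base = loss(weights, context, target)
    new_weights = []
    for i, w in enumerate(weights):
        trial = weights[:i] + [w + eps] + weights[i + 1:]
        slope = (loss(trial, context, target) - base) / eps
        new_weights.append(w - lr * slope)
    return new_weights

weights = [random.uniform(-1, 1) for _ in range(4)]   # the part that gets "grown"
data = [([1, 0, 1, 0], 1.0), ([0, 1, 0, 1], -1.0)]    # stand-in for training text

for step in range(1000):   # in the real case: months on specialized chips
    context, target = random.choice(data)
    weights = nudge(weights, context, target)
# The loop above is fully understood; why the resulting weights behave
# the way they do is the part nobody can read off.
```

Every step of the process is legible; the artifact it produces is not, which is the asymmetry the next exchange turns on.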
00:10:06.000That idea, that black box, as it's oftentimes described, that we don't really know what's going on inside these machines.
00:10:15.000Not just we as in laypeople, the top experts don't really know how these machines are arriving at oftentimes coherent and accurate statements.
00:10:24.000And I think one analogy, two analogies actually, that you bring up in your book are really great for understanding that: scientists know more about DNA and how it results in an organism, and more about the human brain and how it results in thought and behavior, than they do about large neural networks and their outputs.
00:11:45.000Yeah, I think the thing where it changes the tests again, but hides it better,
00:11:50.000indicates that it knew what the programmer wanted in some sense.
00:11:54.000You know, it doesn't say sorry, I'll fix it, and then make the same mistake but hide it, without in some sense having somewhere in there something like an understanding of what the programmer wanted.
00:12:08.000But nobody at Anthropic, the company that made the A.I. where you can sort of see this test, set out to make a cheater. Other companies, you know, have similar cases.
00:12:21.000The user didn't want the A.I. to cheat.
00:12:26.000So we've got this concept and you explain it very clearly in the book.
00:12:31.000It's excellent that A.I.s are grown, not crafted.
00:12:35.000And to the extent they're given degrees of freedom, which is really the key to their power, they don't always do what they're trained for.
00:12:43.000And you also are very clear that you don't want to anthropomorphize these machines.
00:12:50.000You don't want to think of them like you would a human when you discuss their wants or their preferences.
00:12:57.000At the same time, it does seem like what you're describing is a machine with a will of its own to some extent.
00:13:07.000Yes, it's dependent on the infrastructure and on the humans who prompt it, but it has a will of its own, without luring you into anthropomorphization.
00:13:19.000Would you say that that is something that people should wrap their heads around that these machines are not essentially under human control?
00:13:29.000Yeah. So, you know, it can be tricky to think about machines here because they're a different sort of thing than we're used to.
00:13:35.000You know, the common reply in the field of A.I. is, people ask, well, can a machine really have a will?
00:13:41.000Can a machine really think? And the sort of standard answer is, can a submarine really swim?
00:13:47.000Right. A submarine moves through the water at speed. It can get from point A to point B.
00:13:53.000Is it really swimming? I mean, this word swimming was sort of designed in a world where we were only seeing animals that did swimming.
00:14:00.000And so when a machine starts moving through the water from point A to point B, people could debate all day.
00:14:05.000Is it really swimming? You know, does it count as swimming if you don't have flippers you can kick or arms you can wave?
00:14:12.000But at the end of the day, it moves through the water at speed. Right.
00:14:17.000With an A.I., you know, even back in the old days of A.I., when we look at Deep Blue, which is the chess A.I. that beat Garry Kasparov.
00:14:24.000You know, Deep Blue was an A.I. when A.I.'s were crafted.
00:14:29.000We can look at every line of code in there and tell you what it means.
00:14:32.000You could pause it at any time and figure out every bit and byte inside that computer and know exactly what it was doing.
00:14:38.000And Deep Blue was able to beat the world champion at chess.
00:16:14.000You know, in the wake of AlphaGo, there were many humans who said, well, this AlphaGo was trained on so much human data from centuries and centuries of human knowledge about Go.
00:16:26.000Maybe it's not really an A.I. victory because it's, you know, absorbing all of this human data.
00:16:31.000And so AlphaZero trained on no human data.
00:16:34.000I don't remember the stats off the top of my head, but I think it was trained for a relatively short time.
00:16:44.000And I believe it passed through the human regime.
00:16:47.000You know, it entered the human amateur regime and exited the human pro regime in some series of hours, again, without any human data.
00:16:55.000And one thing to remember about A.I. when we're talking about the AlphaGo example is that A.I. is a technology that improves by leaps and bounds.
00:17:12.000AlphaGo was much better at playing Go than the previous A.I.s, and the AlphaGo and AlphaZero series went even further.
00:17:28.000AlphaGo and AlphaZero and that series of A.I.s could play chess and Go and whatever other game you threw at them decently well.
00:18:17.000Let's give the audience the real meat.
00:18:20.000If you have computers that can overcome human beings at these small games, perhaps you could have computers that could beat us at war, at psychological manipulation.
00:18:35.000You talk about how it could possibly move through phases, from its initial realization into vast expansion and acceleration, the intelligence explosion.
00:18:45.000But I also really appreciate the way that you talk about this in terms of probabilities.
00:18:51.000You're not making definite predictions.
00:19:06.000So in predicting the future, there's an art to predicting only the things that are very easy to call.
00:19:12.000So if you're playing against a very good chess player, if you played chess against Magnus Carlsen, the best human in the world at chess,
00:19:23.000it would be hard for me to predict exactly what moves either of you were going to make.
00:19:29.000It would be easy for me to predict the winner.
00:19:31.000So with AI, you know, it's hard to predict exactly how it will get there.
00:19:36.000It's easy to predict that at the end of the road, the smarter thing has won.
00:19:43.000I mean, even the most likely scenarios are very hard to call there.
00:19:46.000That's a little bit like asking someone from the year 1800 to predict war in the year 2000, right?
00:19:52.000Like when we're talking about facing down a super intelligence, we're talking about facing down things that can think 10,000 times smarter than you.
00:20:00.000Or sorry, can think 10,000 times faster than you, that can think qualitatively better.
00:20:04.000You know, it's like a million copies of Einstein that can all think 10,000 times faster, that never need to sleep, that never need to eat, that can copy themselves and share knowledge and experiences between them.
00:20:15.000You know, the sort of technology that those could cook up, you know, it's not literally 10,000 times faster because there are bottlenecks that aren't just thinking things up.
00:20:25.000But, you know, constructing viruses probably would not be that hard.
00:20:31.000Physical viruses, you mean biological viruses.
00:20:34.000Yeah, there are already places on the internet where you can, you know, send some money and an RNA sequence and say, you know, please synthesize this for me and mail it to thus and such an address.
00:21:09.000You know, if I was a person in 1800 trying to predict what weapons they would have in the year 2000, I could make some guesses.
00:21:19.000And those guesses are all going to be lower bounds.
00:21:21.000You know, in the year 1800, I could say, well, artillery is getting more powerful and more powerful.
00:21:26.000And I know some of the physics, I know the physical limits say that you can make artillery that's at least 10 times as strong.
00:21:45.000But modern weapons are actually quite a bit more than 10 times as strong.
00:21:48.000So, you know, I could tell you stories about AIs that think really hard, figure out a lot of what's going on inside DNA, how that works, and how to make a sequence that will fool humans into thinking it's beneficial when actually it's not.
00:22:03.000And then, you know, find some way to get it made. These days, there's not very good monitoring on biological synthesis laboratories.
00:22:15.000Some people are trying to set it up a little bit, which is great.
00:22:18.000But these days, you know, you have the wrong DNA sequence, you mail it to the wrong people, you mail them some money, which you can send electronically, and you could probably be synthesizing these viruses.
00:22:28.000And, you know, even if that pathway is cut off or turns out to be hard, there's wrapping humans around your finger somehow and getting humans to do something that, you know, leads to the creation of some virus like this.
00:22:38.000This is a little bit like the artillery shell that's 10 times stronger than one in 1800.
00:22:43.000It's not really what happens. What really happens is something that seems more fantastical, something you're less sure how it could have happened.
00:22:51.000But it's really not hard for very, very smart entities with access to the whole internet to take humanity in a fight if they're trying.
00:23:02.000Really, the reason the answer is not just, you know, they make a virus and kill us, is that the difficult part from the perspective of an AI is getting its own automated infrastructure that isn't full of, you know, fallible primate monkeys.
00:23:18.000That's the part that takes some steps. Killing the humans once you have the infrastructure, if you're really trying to make a virus that can kill everybody, that doesn't seem that hard.
00:23:29.000Well, we only have just a few moments before we go to the break.
00:23:34.000And I would really like to discuss your proposed solutions to this on the other side, and a few other maybe challenging questions. But just in a minute or two, before we go to break, why would these AIs do this?
00:23:49.000You've kind of described how they could. What would the motive, so to speak, be?
00:23:56.000Yeah, this is one of those things that's easy to predict at the endpoint, even though it's hard to predict the pathway.
00:24:02.000So it's actually very hard to predict what AIs will want, because as we said, they're grown, not crafted.
00:24:09.000They pursue all sorts of drives that are not what anyone asked for or what anyone intended.
00:24:16.000And probably these AIs would pursue all sorts of weird stuff, you know, maybe something a little bit like flattery, maybe, you know, making things that are to humans what dogs are to wolves, some sort of weird thing that they're pursuing.
00:24:34.000The reason that this kills us is that almost any goal the AI could be pursuing can be better pursued with more resources.
00:24:44.000And we humans were using those resources for something else.
00:24:50.000It's that the AI, you know, builds its own infrastructure, builds out infrastructure that transforms the world, that captures all the sunlight for whatever purpose it's pursuing.
00:25:11.000What you're describing sounds like alchemy to me.
00:25:14.000You've described in your book, actually, this process as alchemy, turning lead into gold.
00:25:18.000And speaking of gold, go to birchgold.com slash Bannon.
00:25:24.000Is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
00:25:29.000Can the Fed take the right action at the right time?
00:25:31.000Are we going to be looking at a potential economic slowdown?
00:25:35.000And what does this mean for your savings?
00:25:38.000Consider diversifying with gold through Birch Gold Group.
00:25:42.000For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, high inflation, and super intelligence that will kill everyone you know.
00:25:55.000Birch Gold makes it incredibly easy for you to diversify some of your savings into gold.
00:26:02.000If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical gold.
00:26:10.000Not even robots will know where you hide it.
00:26:13.000Or just buy some gold to keep it in your safe.
00:29:56.000That's why doctors created Field of Greens.
00:29:58.000A delicious glass of Field of Greens daily is like nutritional armor for your body.
00:30:04.000Each fruit and each vegetable was doctor selected for a specific health benefit.
00:30:10.000There's a heart health group, lungs and kidney groups, metabolism, even healthy weight.
00:30:16.000I love the energy boost I get with Field of Greens.
00:30:19.000But most of all, I love the confidence that even if I have a cheat day or, wait for it, a burger, I can enjoy it guilt-free because of Field of Greens.
00:30:28.000It's the nutrition my body needs daily.
00:30:31.000And only Field of Greens makes you this better health promise.
00:30:35.000Your doctor will notice your improved health or your money back.
00:33:02.000Nate, we've talked about some of these basic principles.
00:33:06.000AI is trained, not programmed or grown, not crafted.
00:33:11.000AI is not always going to do what it's trained to do.
00:33:16.000Advanced AI will have what we could loosely describe as something like human preferences.
00:33:23.000And as it progresses from general intelligence, theoretical for now, and improves itself,
00:33:31.000it could lead to an intelligence explosion resulting in a super intelligence that not only could kill everyone on Earth,
00:33:43.000but you say most likely would kill everyone on Earth.
00:33:46.000Before we get to your concrete proposals on what people should do about this theoretical problem,
00:33:53.000I would just like to give you the floor to wrap up the idea, to cinch up your argument,
00:33:59.000how and why artificial superintelligence would be an existential threat to humanity.
00:34:05.000Yeah, so almost any goal it could pursue.
00:34:10.000Humans, happy, healthy, free people, are not the most efficient way to get that goal.
00:34:16.000It could get more of that goal by using more resources for other things.
00:34:20.000Whatever else it's trying to get, you know, probably more computing resources could help it get more of it.
00:34:26.000Probably creating more energy could help it get more of it.
00:34:29.000Probably capturing more sunlight could help it get more of it.
00:34:32.000You have, if you have automated minds that are able to, that are smart in the manner of humans,
00:34:38.000that are able to build their own technological civilization, that are able to build their own infrastructure,
00:34:43.000what that leads to, if they don't care about us, is us dying as a side effect,
00:34:48.000in the same way that ants die as a side effect as we build our skyscrapers.
00:34:51.000It's not that they hate us, it's that there's a bunch of resources they can take for their own ends.
00:34:57.000And so if we want this to go well, we either need to figure out how to make the AIs actually care about us,
00:35:03.000or we need to not build things that are so smart and powerful that they transform the world like humanity has transformed the world,
00:35:10.000except we're the ones dying as a side effect this time, as opposed to, you know, a bunch of the animals.
00:35:16.000There was a fantastic open letter issued, if I'm not mistaken, in 2023 from the Future of Life Institute that argued that AI development should be capped at GPT-4.
00:35:31.000We've blown past that, and some of the signatories, including Elon Musk, are among those who continued building no matter what.
00:35:39.000You also have a very brief statement on existential risk from the Center for AI Safety.
00:35:45.000And they make a very similar argument. It's just not worth it, at least not now.
00:35:51.000What are your and Eliezer Yudkowsky's arguments as to what citizens and governments should do to avoid this catastrophe?
00:36:02.000So what the world needs is a global ban on research and development towards superintelligence.
00:36:08.000That, you know, training these new AIs, like I mentioned, it takes highly specialized chips and extremely large data centers that take huge amounts of electricity.
00:36:18.000This is not a sort of ban on development that would affect the average person.
00:36:23.000It would be relatively easy to find all these locations where it's possible to train even smarter AIs and monitor them, put a stop to them, make sure they're not making AIs smarter, right?
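A rough sense of the scale being described, using illustrative back-of-envelope figures of my own rather than numbers cited in the episode: suppose a frontier training run takes on the order of $10^{26}$ floating-point operations, spread across $10^4$ specialized chips each sustaining about $10^{15}$ FLOP/s at 40% utilization. Then

$$t \approx \frac{10^{26}\ \text{FLOP}}{10^{4}\ \text{chips} \times 10^{15}\ \text{FLOP/s} \times 0.4} = 2.5 \times 10^{7}\ \text{s} \approx 290\ \text{days},$$

with each chip drawing on the order of a kilowatt, so the facility pulls tens of megawatts once cooling and overhead are included. Hardware at that scale is hard to hide, which is the monitorability point.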
00:36:35.000You know, this isn't really about the chatbots.
00:36:40.000The chatbots are a stepping stone towards superintelligence by these companies.
00:36:43.000These companies do not set out to make cool chatbots.
00:36:46.000They set out to make superintelligences, and we can't keep letting them plow away.
00:36:51.000The superintelligence is a different ballgame.
00:36:53.000If we get to that ballgame, if we get AIs that sort of go over some cliff edge and become much smarter than humans, that's lethal for everybody.
00:37:05.000Most of the world doesn't seem to understand yet that superintelligence is a different ballgame than the AI we're currently working with and don't seem to understand that we're racing towards the brink of a cliff.
00:37:16.000It seems to me that once people understand that nobody has any interest in going over that cliff edge, there's a possibility to coordinate and say, despite all our other differences, we're not going to rush ahead on this one.
00:37:29.000Much like, you know, the U.S. and the Soviets in the Cold War.
00:37:32.000Many differences. We could agree not to proliferate the nukes.
00:37:36.000We've heard this from Elon Musk for years, although he's continued to move forward with the development of Grok and other AI systems.
00:37:47.000In fact, their founding mission was to create artificial general intelligence in a safe manner.
00:37:53.000Who do you see as the companies or institutions who are most in alignment with your goal of banning superintelligent AI, both either on a national level or through international treaties?
00:38:07.000You know, none of them are advocating for it openly, which, I mean, I guess there's people who are a little bit more and less clear with the public about where they see the risks, where they see the dangers.
00:38:24.000You know, it's not necessarily irrational for somebody like Elon to hop in this race if the race gets to keep going.
00:38:29.000And I laud Elon for saying, you know, this has a serious risk of killing us all, and saying things to the effect of, you know, I originally didn't want to get in the race, but it's going to happen anyway.
00:38:40.000I want to be in it. Right. That's not a totally insane picture if everyone else is racing.
00:38:46.000I think many of these folks running these companies are deluded as to their chances of getting this right.
00:38:56.000So in that sense, I think they should all just be stopping immediately.
00:39:01.000But I can empathize with the view of thinking that they can do it better than the next guy.
00:39:08.000And in that case, what all these companies should be saying is this is an extremely dangerous technology.
00:39:13.000We're racing towards a cliff edge and the world would be better off if we shut down all of it, including us.
00:39:18.000That's implied by many of the statements they're saying.
00:39:20.000When someone says and, you know, the heads of some of these companies have said, I think this has, you know, 5, 10, 20, 25 percent chance of killing every man, woman and child on the planet.
00:39:30.000If you think that, it doesn't necessarily mean you should stop if everyone else is racing, but it does mean you should say to the world plainly, we should not be doing this.
00:39:39.000Everybody, including me, should be stopped.
00:39:43.000P-Doom, the infamous P-Doom, the probability of doom should super intelligence be created.
00:39:48.000I take it that I don't expect you to speak for Yudkowsky, but your P-Doom is quite high. Can you give us a number, sir?
00:39:55.000I think the whole idea of this number is ill-founded.
00:40:02.000There's a big difference between someone who thinks that we are in big danger because humanity can't do anything and somebody who thinks we're in big danger because humanity won't do anything.
00:40:11.000If you're just predicting, you know, what are the chances that we die from this?
00:40:14.000You're mixing together what we can do and what we will do.
00:40:19.000My answer, first and foremost, is that we can do something.
00:40:23.000This has not been built yet. Humanity has backed off from brinks before.
00:40:27.000If you ask, suppose we just charge ahead, suppose we do nothing, suppose we rush into making machines that are smarter than every human,
00:40:35.000that can outmaneuver us at every turn, that can think 10,000 times faster, that never need to sleep, never need to eat, can copy themselves,
00:40:43.000and that are pursuing goals no one asked for and no one wanted.
00:40:48.000What's the chance we survive that? The chance we survive that is roughly negligible.
00:40:52.000But that's not the question that matters. The question that matters is, what are we going to do?
00:40:56.000And can we do something? And the answer to can we do something is yes.
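One way to state the distinction Soares is drawing, as my formalization rather than his wording: by the law of total probability,

$$P(\text{doom}) = P(\text{doom} \mid \text{we stop})\,P(\text{we stop}) + P(\text{doom} \mid \text{we race ahead})\,P(\text{we race ahead}).$$

A single "P-Doom" number bakes in a guess about $P(\text{we stop})$, which on his view is not a fixed parameter to estimate but a choice still open to us; his claim is only that the second conditional probability is close to 1.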
00:41:01.000You know, personally, I'm more of a P-Gloom kind of guy.
00:41:05.000I think the probability of gloom is much higher than doom, meaning that the real risk is that the AIs become so annoying,
00:41:14.000so grotesque, as it was put to me by a friend, that we would be better off extinct.
00:41:21.000But your goal to ban superintelligence, to cap it, I'm completely amenable to that.
00:41:28.000I don't want chatbots. I don't think anything but the most essential medical or military AI should even necessarily be pursued.
00:41:36.000But whether it is imminent or whether it's even possible, if we have a ban on artificial superintelligence, I get what I want, right?
00:41:46.000Like, if it was possible, then we don't get it. If it was never possible, well, at least we showed due diligence.
00:41:54.000But there are arguments that the enforcement of this could go out of control, that the enforcement would be the real problem, especially global treaties, global governance.
00:42:05.000So you're well familiar with Peter Thiel's argument that the concern about artificial intelligence, general, super, whatever, that AI killing everyone is less of an immediate concern than the global governance it would require to keep that at bay.
00:42:26.000And this falls in line with a lot of patterns we see in history, from the drug wars, right?
00:42:31.000You have the danger of drugs and the control mechanism of the war against drugs or with terrorism.
00:42:37.000You have the danger of terrorism, the control mechanism of the Patriot Act and the rest of the global surveillance state.
00:42:44.000And even on a mundane level, right? Right now, there's a big push for age gating to make sure that children can't access pornography or malicious AIs.
00:42:55.000But then on the other side of that, you have the danger of required bio digital identity in order to use the Internet.
00:43:02.000So how do you respond to those concerns that global governance or any overreaching governmental structure would be more of a danger than theoretical super intelligent AI, sir?
00:43:16.000So I think that's largely the sort of argument made by someone who does not really believe in this possibility.
00:43:24.000And, you know, I would sort of prefer to have the argument about, is this possible? Could it come quickly?
00:43:29.000I would also say, you know, people say this, I think often rightly, about things like the war on drugs, the war on terrorism, where there was a lot more, you know, power being aggregated than was maybe worth what we got from it.
00:43:48.000But no one says that about nuclear arms treaties. Right.
00:43:51.000And that's because, in some sense, A, people believe in nukes. B, making a nuclear weapon takes a huge amount of resources, it's easily monitorable, and it doesn't really affect the individual consumer.
00:44:07.000Right. You don't need something like the TSA to be checking everybody's bags for fissile material. Right.
00:44:13.000And modern AI is much like this.
00:44:17.000You know, it's not like you need to restrict consumer hardware.
00:44:21.000Modern AIs are trained on extremely specialized chips that can be made in extremely few places in the world that are housed in extremely large data centers that, again, run on, you know, electricity comparable to a small city.
00:44:32.000This is not the sort of monitoring regime that would be more invasive than monitoring for, you know, nuclear arms treaties.
00:44:42.000The difference really is that people are uncertain about whether, like you say, superintelligence is possible, and whether it is possible relatively soon.
00:44:53.000That's where I would prefer to debate someone who thinks now is not the time for that kind of treaty.
00:45:00.000On that note, I think about this in terms of the technical limits, not just the will to create it, but the technical limits.
00:45:07.000You argue that it's quite possible within the realm of physics and mechanics to create a super intelligent A.I.
00:45:17.000I think about one example in particular: supersonic jets.
00:45:48.000I'd be lying if I said I didn't have to look at my notes.
00:45:50.000The NASA X-43. It's a bit faster than the 1959 version, but not that much faster.
00:45:57.000Isn't it possible then that we will run into technical limitations that would keep anything like general or super intelligence from arising?
00:46:07.000So it very likely is an S-shaped curve.
00:46:09.000The question is, well, there are two questions.
00:46:12.000One is, are there multiple different S-shaped curves?
00:46:15.000The other question is, where does the sort of last S-shaped curve fall off?
00:46:20.000So to the question of multiple S shaped curves, you can imagine someone after AlphaGo, which we discussed, saying, you know, I know that these AIs are more general than any AIs that came before.
00:46:31.000You know, Deep Blue could play only one game, whereas the AlphaGo series of AIs can play multiple games.
00:46:35.000But I just don't see them going all the way.
00:46:38.000I don't see the AlphaGo Monte Carlo tree search, value network, policy network type architecture, which is what those things were called, more or less.
00:46:45.000I don't see those AIs, you know, ever talking.
00:46:48.000There's maybe an S-shaped curve for these game-playing AIs.
00:46:53.000But ChatGPT is not a bigger version of AlphaGo.
00:46:58.000There was a new advancement that unlocked qualitatively better AIs that can do qualitatively more things across a wider range of options in better ways.
00:47:09.000You know, maybe it's the case that ChatGPT will hit a plateau along that S-shaped curve.
00:47:15.000But the question is, you know, when will this field come up with some other insight like the one that unlocked ChatGPT?
00:48:31.000Rhetorically, I can't say how much I admire the way that your book is written, the cleverness of the turns of phrase and the formulations, the title especially.
00:48:42.000We have only just a few minutes remaining.
00:48:44.000But as well as we can, I would just like to talk really briefly about alignment.
00:48:50.000You argue, and Eliezer Yudkowsky has long argued, that these systems need to be aligned to human values.
00:48:59.000And their stochasticity or non-deterministic elements would preclude that, perhaps.
00:49:08.000You're speaking to a largely Christian, largely conservative audience.
00:49:13.000And without presuming too much, you know that the San Francisco culture is significantly different.
00:49:23.000Whose values would such an AI be aligned to?
00:49:28.000That is a very important question for humanity to ask itself and a question I wish we could be asking now.
00:49:34.000But unfortunately, the problem we face is even worse than that.
00:49:37.000The problem we face is that we are nowhere near the ability to align an AI to any person's values.
00:49:46.000You know, at the Machine Intelligence Research Institute, for ten years I've been studying this question on the technical side.
00:49:55.000No offense, but I prefer working on whiteboards to talking to anyone.
00:49:58.000We were trying to figure out how to get to the point where you could even ask, whose values are we aligning it to?
00:50:06.000Right now we're not at the point where anyone could aim it.
00:50:09.000Right now we're at the point where, you know, the people in these labs in San Francisco are trying to get it to do one thing, and it does a different thing.
00:50:15.000And they specifically say, stop doing that, do this instead.
00:50:18.000And then it does some other third totally weird thing, right?
00:50:21.000The place where I spend my work is trying to make it so that somebody in charge could point the AI somewhere successfully.
00:50:28.000There's then a huge question of where should we point the AI?
00:51:03.000Thank you very much, sir, for coming on.
00:51:05.000Look forward to talking to you again next week.
00:51:07.000And when inflation jumps, when you hear the national debt is over $37 trillion, do you ever think maybe now would be a good time to buy some gold?
00:51:19.000You need to go to birchgold.com slash Bannon.
00:51:23.000That's birchgold.com slash Bannon for your free guide to buying physical gold or text Bannon to 989-898.
00:51:37.000And you never thought it would get this far.
00:51:41.000Maybe you missed the last IRS deadline or you haven't filed taxes in a while.
00:52:10.000There's a lot of talk about government debt, but after four years of inflation, the real crisis is personal debt.
00:52:17.000Seriously, you're working harder than ever, and you're still drowning in credit card debt and overdue bills.
00:52:24.000You need Done With Debt, and here's why you need it.
00:52:27.000The credit system is rigged to keep you trapped.
00:52:31.000Done With Debt has unique and, frankly, brilliant escape strategies to help end your debt fast, so you keep more of your hard-earned money.
00:52:40.000Done With Debt doesn't try to sell you a loan, and they don't try to sell you a bankruptcy.
00:52:46.000They're tough negotiators that go one-on-one with your credit card and loan companies with one goal, to drastically reduce your bills and eliminate interest and erase penalties.
00:52:56.000Most clients end up with more money in their pocket month one, and they don't stop until they break you free from debt permanently.
00:53:05.000Look, take a couple of minutes and visit donewithdebt.com.