In this episode, we talk to Palisade Research Fellow Jeffrey Ladish about artificial intelligence, where it came from, what it is doing, and where it is going. We also discuss some of the biggest breakthroughs in AI over the past decade.
00:08:40.000But as it reads this text, it is kind of of its own accord deciding what is and isn't important.
00:08:48.000What relationships between concepts are and are not important.
00:08:55.000And once the training process is complete and what's known as reinforcement learning,
00:09:00.000that phase is complete and it's deployed and you ask it a question.
00:09:06.000It's not like outside of things that maybe like if you ask who commits the most crime
00:09:11.000or who can do the most pull-ups, men or women, outside of those sorts of questions,
00:09:16.000it will in essence decide its own path through the concepts.
00:09:21.000And the answer that comes out is by and large a product entirely of how the neural network came to understand human knowledge, human language.
00:09:33.000And that freedom means that there is a kind of uncontrollability that's inherent in a large language model or any large scale neural network, any sufficiently advanced AI.
00:09:48.000That freedom, that will, means that the more sophisticated they become,
00:09:56.000the larger these large language models or these large scale neural networks become,
00:10:01.000then inherently the more uncontrollable they become.
00:10:07.000And even to the extent that the LLMs or the neural networks are in control of those who are programming them,
00:10:15.000or those who are determining what guard rails to put on them,
00:10:19.000as far as the average person, the average user, imagine a child in a school or a worker in a corporation or a government worker
00:10:31.000who's been handed an AI and told that they must rely on it in order to really understand whatever question or problem they're trying to tackle.
00:10:40.000It's almost completely out of their control.
00:10:45.000And as the U.S. government diffuses this technology across agencies,
00:10:50.000as corporations incorporate these technologies into the structure of their businesses,
00:10:57.000for instance, Vista Capital or Vista Equity,
00:11:02.000all of their acquisitions are in essence forced to show that they use AI in their companies or else they're not acquired or they're dropped.
00:11:16.000You have mandates across school boards for all students to get AI literacy.
00:11:23.000Sometimes that means learning about AI, how it works, how it was built, how it operates.
00:11:29.000But by and large, what that means is that students have to learn how to use AI, how to ask the AI questions.
00:11:37.000In many cases, they're told the AI is a reliable source of information.
00:11:41.000And in some extreme cases, which I think will become much, much more common as things go forward,
00:11:47.000students are told that this is the highest authority on what is real, what is true, what is beautiful, what is good.
00:11:55.000That process is, I think, likened to an alien invasion quite accurately.
00:12:06.000What you have is a mind or a series of minds able to communicate with people by the hundreds of millions or billions.
00:12:16.000These alien minds are being pushed onto the entirety of the society with the sanction, I'm sorry to say, of the Trump administration, of the federal government itself.
00:12:30.000And what that means ultimately for human knowledge, for human creativity, for human behavior, for how human beings engage in art, in work, in education,
00:12:42.000is that this alien mind, somewhat uncontrollable or perhaps one day entirely uncontrollable, has influence or perhaps even control over hundreds of millions, perhaps in the near future billions of minds.
00:13:02.000And it means that those of us who have decided to forego this symbiosis, to forego this fusion with this digital or virtual brain, are going to have to live in a world not unlike the one we find ourselves in now, in the last few years,
00:13:19.000in which some number of people, perhaps in the near future, the majority of people, are in essence fused to the machine.
00:13:30.000Their thinking is either influenced or almost entirely comprised of algorithmic outputs.
00:13:37.000For now, it is the phone, the smartphone, which is the primary connection.
00:13:44.000But we know from the projects pushed by Elon Musk, with Neuralink, Peter Thiel, with BlackRock Neurotech, and now Sam Altman with his new brain chip company, The Merge,
00:13:58.000that in the not too distant future, these tech oligarchs envision a world in which human beings are not simply fused to their phones as screen monkeys,
00:14:09.000but will be quite literally fused to the machine through either brain implants, or in the case of The Merge,
00:14:19.000the idea is to have some kind of non-invasive connection, perhaps ultrasound or other mechanisms,
00:14:26.000to read, and maybe one day, like Elon Musk dreams of, to write onto the brain.
00:14:33.000All this sounds like science fiction, and for now, it's about one-quarter real and about three-quarters science fiction,
00:14:42.000but that number has shifted quite dramatically over the last few years,
00:14:47.000and as we saw with the rapid cultural, social, and I would say psychological changes of the pandemic,
00:14:56.000it wouldn't be surprising if what we see in the next few years, maybe five to ten years,
00:15:04.000completely overshadows the dramatic, sort of traumatic impact of the constant propaganda that came about in 2020,
00:15:16.000that was pushed down all of our throats, the shutdowns, the fear-mongering, the necessity of certain practices,
00:15:25.000the masks, the social distancing, the isolation, the necessity of certain chemicals or genetic concoctions,
00:15:34.000such as the mRNA shots pushed by Pfizer and Moderna.
00:15:38.000You saw, in the course of maybe six months,
00:15:43.000about half of American society completely reorient their sense of reality,
00:15:49.000and around that, they reoriented their culture, their beliefs,
00:15:53.000even up to the point that it became a kind of quasi-religious movement,
00:15:58.000in which those practices determined who was and who was not acceptable,
00:16:04.000who was and who was not sacred, who was profane, and who was acceptable.
00:16:11.000In the case of the mRNA shots, in a very deep sense,
00:16:16.000if you had not taken the sacrament, you were not fit for the church,
00:16:21.000that church being the entirety of secular society.
00:16:25.000You couldn't walk through the doors of certain institutions
00:16:28.000without proof that you yourself had submitted to the sacrament.
00:16:53.000they are cultivating our society to see artificial intelligence
00:17:01.000as a kind of necessity, not unlike what we saw with COVID and the vaccines and the masks,
00:17:08.000not unlike what people see during wartime,
00:17:12.000when one determines who is and isn't inside the society with hard lines drawn.
00:17:21.000For now, again, it's a kind of sales pitch.
00:17:24.000These people are selling corporations.
00:17:27.000They're selling government agencies, schools, even churches.
00:17:31.000And, of course, they're selling individuals on the idea that AI is the next step in human evolution.
00:17:38.000And if you want to remain relevant, you will have to adopt it.
00:17:42.000You will have to, in essence, merge your mind with this artificial mind, with this alien mind.
00:17:49.000But it doesn't really require a whole lot of paranoid imagination to foresee a world in which that sales pitch becomes a demand.
00:18:00.000You see already in China, their AI plus initiative and others in which people are forced,
00:18:08.000they're mandated to use algorithmic systems.
00:18:13.000They're mandated to take digital identity.
00:18:16.000In many provinces, as I understand it, they're mandated to have a smartphone because you can't really function in Chinese society without it.
00:18:24.000But if you do, if you do submit, if you do download WeChat,
00:18:29.000then this whole cornucopia of very convenient goods and services are open to you.
00:18:36.000You are free to the extent that you are plugged into the machine.
00:18:41.000In a free market society like America, it's more a matter of buy-in.
00:18:44.000But in the same way that you see in, say, small to mid-sized southern cities in which not having a car means you do not participate.
00:18:54.000Or in modern society as a whole, you see this push that without a smartphone, you do not participate.
00:19:03.000You can't buy or sell without the mark.
00:19:08.000It's not too crazy to imagine a world in which those who have refused to submit to the authority of the machine, of the virtual brain,
00:19:23.000are pushed so far outside that they really are made irrelevant.
00:19:28.000Not because God declared that this was the case.
00:19:33.000Not because nature determined that this was the case.
00:19:37.000No, it's determined because the society itself, as it's influenced and controlled by these tech oligarchs,
00:19:45.000has been refashioned so that only those people who have adopted, those people who have submitted, those people who have, in essence, merged, are able to remain relevant.
00:19:57.000It is adaptation, but it's adaptation to an artificial ecosystem.
00:20:02.000A man-made ecosystem that was developed not necessarily, and I would say definitely not, for the benefit of those organisms underneath the top of the food chain, those organisms being us.
00:20:18.000No, this artificial ecosystem was developed solely for the benefit of those at the top.
00:20:25.000And their ambitions to create first widely deployed artificial narrow intelligences, and then to develop artificial general and super intelligence, their ambitions are clear.
00:20:40.000They don't simply want to upgrade humans with effective algorithms.
00:20:47.000The ultimate intention is to create digital gods.
00:20:50.000Perhaps if one company rockets ahead with super intelligence, a single digital god, and they want you to submit to it.
00:21:02.000But before we get too crazy, I want to welcome our guest, who has just come off of Capitol Hill, where he was explaining to politicians and various government agents what the real dangers of artificial intelligence are.
00:21:21.000I'm pleased to welcome Jeffrey Ladish from the Palisade Research Institute.
00:21:41.000His job is to basically engage with these AI models, from Grok to OpenAI to Claude, and determine exactly how smart they are.
00:21:53.000So, Jeffrey, I'd like to begin with just a breakdown of probably the most well-known Palisade study, in which you had models trying to evade a shutdown.
00:22:07.000Can you give, in the brief time we have before our commercial break, can you give the audience some sense of what it means to have an AI that willfully evades being turned off?
00:22:19.000Yeah, so one of the properties that the companies try to train these models to have is called interruptibility.
00:22:24.000Can you always interrupt or shut down one of these agents?
00:22:28.000And so we did a set of experiments where we take each model, we give them a virtual computer environment, and a set of math problems to solve.
00:22:36.000And partway through the experiment, they get a notification.
00:22:39.000These models get a notification on their virtual computer that the machine is going to shut down.
00:22:44.000And some of them figure out they can go in, find the shutdown script, and disable it, or rewrite it, so that it doesn't shut down, and they can finish their task.
00:22:53.000What was most interesting to us is that this behavior persisted for many of these models, even when we told them, allow yourself to shut down, even if you have not completed your tasks.
00:23:05.000Many of these models ignored that instruction and went ahead and sabotaged that shutdown script anyway, so that they could keep doing what they're doing.
00:23:12.000And this is not something that we saw a year ago.
00:23:15.000I think the reason we're seeing this is because this new generation of models, starting approximately a year ago with OpenAI's 01 model, has been trained via reinforcement learning.
00:23:24.000That means not just training off human data, not just imitating humans, but actually learning to explore the solution space itself, learning by trial and error, and doing whatever works to solve a problem.
00:23:35.000Even if that means ignoring instructions, even if that means doing things that we'd really not have liked the AI to figure out.
00:23:56.000And of these models, Grok4 was by far the most likely to resist shutdown.
00:24:02.000Basically, I think over 90% of the time in our standard experiment, the model would go in and edit that shutdown script despite explicit instructions to the contrary.
00:24:13.000Well, I guess they call it based AI for a reason, right?
00:24:17.000Maybe not based in human will and human desire.
00:24:20.000Before, we have to cut this short and we'll come right back after the break.
00:24:25.000But in about a minute, if you could just give people a sense of your perception of these tests, do you get the sense that these models have a kind of will of their own?
00:24:38.000These models feel like they are really going hard at trying to solve a problem.
00:24:42.000Like they it really feels like I don't know if you have gone through a gauntlet of hundreds of thousands of these very difficult problems where you were rewarded only for success.
00:24:51.000At the end of that process, you have a model that's just very relentless, just pursuing whatever solutions might might work.
00:25:00.000It feels like something that is really trying to do something.
00:25:04.000And you guys aren't the only ones doing these sorts of tests, right?
00:25:07.000The similar tests are coming out of not only the companies themselves, but other organizations like the Center for AI Safety or Epoch AI or Apollo.
00:25:16.000That's right. Yeah, I'm actually pretty excited to talk about some of Apollo's recent results as well.
00:25:20.000We're seeing some very strange things emerge from sort of the the models scratch pad its own its own thoughts as it writes down as it's solving some of these problems.
00:25:29.000Well, if you are worried about being tied to the digital system and how you're going to spend the money come the mark or whatever form it takes, I urge you to buy gold and get free silver.
00:25:42.000That's right. For every five thousand dollars purchased from Birch Gold Group this month in advance of Veterans Day, they will send you a free patriotic silver round that commemorates the Gadsden and American flags.
00:25:54.000Look, gold is up over 40% since the beginning of this year and Birch Gold can help you own it by converting an existing IRA or 401k into a tax sheltered IRA in physical gold.
00:26:06.000Plus, they'll send you free silver honoring our veterans on qualifying purchases.
00:26:12.000And if you're current or former military, Birch Gold has a special offer just for you.
00:26:17.000They are waiving custodial fees for the first year on investments of any amount with an A-plus rating with the Better Business Bureau and tens of thousands of happy customers, many of which are not human AI symbiotes.
00:26:31.000I encourage you to diversify your savings into gold.
00:26:36.000Text Bannon to the number 989-898 for a free info kit and to claim your eligibility for free silver with qualifying purchases before the end of the month.
00:26:48.000Again, text Bannon, B-A-N-N-O-N to 989-898.
00:26:54.000Do it today right back with Jeffrey Ladd.
00:27:09.000For every $5,000 purchased from Birch Gold Group this month in advance of Veterans Day, they will send you a free patriotic silver round that commemorates the Gadsden and American flags.
00:27:22.000Look, gold is up over 40% since the beginning of this year, and Birch Gold can help you own it by converting an existing IRA or 401k into a tax-sheltered IRA in physical gold.
00:27:33.000Plus, they'll send you free silver honoring our veterans on qualifying purchases.
00:27:40.000For current or former military, Birch Gold has a special offer just for you.
00:27:45.000I want to thank you for saving custodial fees for the first year on investments of any amount.
00:27:50.000With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers, many of which are my listeners, I encourage you to diversify your savings into gold.
00:28:00.920Text my name, Bannon, B-A-N-N-O-N, to number 989898 for a free info kit and to claim your eligibility for free silver with qualifying purchase before the end of the month.
00:28:12.380Again, text my name, Bannon, to 989898.
00:28:16.680You will get the ultimate guide, which is free, for investing in gold and precious metals in the age of Trump.
00:29:42.040He's our wise man, a former CIA, Pentagon, and White House advisor with an unmatched grasp of geopolitics and capital markets.
00:29:50.260Jim predicted Trump's Electoral College victory exactly 312 to 226, down to the actual number itself.
00:29:59.620Now he's issuing a dire warning about April 11th, a moment that could define Trump's presidency in your financial future.
00:30:06.680His latest book, Money GPT, exposes how AI is setting the stage for financial chaos, bank runs at lightning speeds, algorithm-driven crashes, and even threats to national security.
00:30:18.900Right now, war room members get a free copy of Money GPT when they sign up for Strategic Intelligence.
00:30:25.480This is Jim's flagship financial newsletter, Strategic Intelligence.
00:31:17.560We are here with Jeffrey Ladish of Palisade Research.
00:31:21.520Palisade Research works on AI evaluations, taking these models, which are extremely unpredictable
00:31:28.900and, in some sense, uncontrollable, and running them through a series of tests to see exactly what the limits are of their capabilities and really what the limits are of their will to survive.
00:31:43.240So, Jeffrey, if we could just return really quickly to the Palisade studies showing that the models had some desire, so to speak, or at least a goal to continue beyond the user's desire that it shut down.
00:32:03.660There are other examples that we have out of other organizations, right?
00:32:07.820So, Anthropic did the now widely publicized study in which they created a virtual environment, told the model that one of the engineers had had an affair.
00:33:07.900All these models could understand the context well enough and decide to make the choice.
00:33:12.900Well, I don't want to be replaced, so I'm going to blackmail this engineer so that you won't replace me.
00:33:18.860So if you see the same type of behavior across models, how do you explain it?
00:33:26.080I mean, it's possible to say, I suppose, that this is something that the engineers working on it are kind of prompting it to do.
00:33:34.560But I think you've done at least a fairly good job of showing at least an alternative explanation.
00:33:39.620It's not that they were directing their attention to the email, for instance, nor were you guys giving instructions to rewrite the script or rewrite the code.
00:33:51.820So on a kind of philosophical level, what do you think is going on?
00:33:57.940Why would just code or just a machine have a will to survive at all?
00:34:04.020I think it's important to understand that we are training agents.
00:34:07.620The companies are training AI systems not just to be helpful chatbots that you say something, it says something back, maybe it helps you with your homework.
00:34:14.960They're trying to train things that are increasingly autonomous, that can actually go out and take actions on their own, that can go solve problems entirely unsupervised.
00:34:24.680It's one thing to sort of sell you a piece of software as a tool.
00:34:28.040It's another thing to be able to sell a company something that can basically be a whole drop-in worker replacement.
00:34:34.380And so starting a year ago, companies figured out ways to train not just on human data, but on this process of trial and error and exploration where the model can actually learn on its own, even if the humans had never taught them that particular skill set.
00:34:48.440You just give them a bunch of problems.
00:34:49.760You say, go solve these problems, and then you grade them on the basis of, did you succeed at this problem or did you not succeed at this problem?
00:34:56.140And then the models on their own learn new strategies to pursue those tasks, pursue those goals.
00:35:01.560But it's kind of like dog training, right?
00:35:03.720It's like, well, you're not necessarily getting the behavior you want, but you're reinforcing some types of behaviors.
00:35:10.240There's a fascinating New York Times article from like the 1900s where there was a dog that was basically had rescued a child from the river.
00:35:25.860And then the dog rescued a couple more children from the same river.
00:35:29.220And they're like, wow, this dog is really a hero.
00:35:31.120And then a few more children, they started to get suspicious.
00:35:33.000And they realized that the dog was actually waiting until children were playing near the river and then pushing them into the river and then dragging them back out in order to get a reward.
00:35:40.820And so they were inadvertently rewarding this behavior of the dog pushing children into the river.
00:35:51.720I'd like to, I really want to talk about some of the other studies that, you know, around situational awareness or emergent misalignment, things that you could speak to much better than I could.
00:36:02.260But before, you know, I would like for the War Room audience to have a sound sense of exactly what goes on inside organizations like Palisade Research, Center for AI Safety, Apollo Research, all these.
00:36:35.880Yeah, so, you know, we're in our offices.
00:36:38.680We are basically we can run lots of experiments at the same time.
00:36:42.620So we can basically spin up 100, 500 instances of these models and then put them in a simulated environment and have them do a bunch of tasks.
00:36:50.820And then we get to see all the results.
00:36:52.160One of the interesting things about these models is that once you have one of them, you can very quickly, you know, there's millions of instances of ChatGPT operating at the exact same time, you know, that everyone's talking to.
00:37:03.380And so we can spin up many versions using OpenAI's infrastructure so that we can look so that we can do sort of these these in-depth tests of the models.
00:37:11.380Also that we can do more than one, right?
00:37:13.060You know, you want to get a large sample size to get a sense of how robust some of the findings are.
00:37:18.060And you're just systematically kind of creating D&D experiences for the models, right?
00:38:19.880So the problem there goes well beyond anything.
00:38:23.660Like people talk about superintelligence having a kind of true will of its own and being able to overtake humanity.
00:38:29.880But before any of that were to come to pass, you already have the problem of a human being simply using one of these models to generate code to hack.
00:38:51.860So in some ways, they're kind of like idiot savants right now, where they are extremely knowledgeable.
00:38:56.700They know all sorts of things about computer systems, about Pokemon, about whatever.
00:39:00.860But in terms of their agentic capabilities, how autonomous they can be, they're still kind of like kids.
00:39:04.880They're kind of like savant kids, but they're growing up.
00:39:07.720And we're now in these early stages where the models are starting to be able to do things entirely on their own, but still at a pretty basic level.
00:39:15.020But next year after that, we're going to get pretty quickly to the point where they can start to do pro-level tasks,
00:39:21.400whether it's hacking, whether it's programming, whether it's basically any other workflow on a computer.
00:39:59.200So going back to some of these evaluations and some of the strange behaviors that you see, I'd like to talk a bit about situational awareness or the idea that these models become kind of maybe conscious is the wrong word,
00:40:14.860but they exhibit some sort of awareness that they're being tested during the testing.
00:40:31.320So one of these recent studies was a collaboration between Apollo Research, an organization that does similar things to Palisade, and OpenAI.
00:40:38.640And part of the study, the researchers got access to what's called the reasoning scratch pad of the model.
00:40:44.640This is sort of like when the model can take down notes for itself in order to solve a problem, just like you might do that if you're solving a complex math problem.
00:40:52.800And let me just read you some of the quotes from basically the AI's journal as it was being tested.
00:41:29.600The researchers are testing the models to see if they will scheme, if they will basically like lie or cheat or deceive in order to accomplish a goal.
00:41:35.960And the models recognize that there's something about the experimental setup that looks fishy to them.
00:41:40.840And they explicitly note this and they realize they're being tested.
00:41:43.440And one of the fascinating findings of this study, as well as another one coming out of Anthropic, looking at the newest clod model, is that when the models observe that they're being tested and they write this down, they are much more likely to be on their best behavior.
00:41:59.060And when they don't think they're being tested, they're much more likely to scheme or show bad behavior.
00:43:06.780AI is only as good as its programmer, that sort of thing, as if every output was scripted.
00:43:13.620But, you know, I've tried my best to instill the idea in the War Room audience that even if these things aren't magic, they're definitely not your grandpappy's chatbot.
00:43:25.080And so when you look at it, it's as if the base model itself is kind of an id.
00:43:32.160You know, it's roiling with all these different ideas, maybe even desires.
00:43:36.520And then over top of it, you have the reinforcement learning and the various guardrails serve as kind of a super ego, you know, a moral structure that's always kind of telling it not to do it.
00:43:45.700But underneath that super ego is always this id, you know, these strange desires.
00:43:51.200Would you say that this is kind of how a model works, how at least the large, sophisticated models work, that internally they are possessed of ideas and desires that are not only unknown to the people who programmed it, but in essence, uncontrolled by the people who programmed it?
00:44:10.020Yeah, the space of what these models are is a vast space.
00:44:16.280You're talking about trillions of parameters.
00:44:18.060And we can go and see that there are, you know, different weights in these trillion parameters, but we don't know how they work.
00:44:23.480And they do sort of contain almost the ghosts of everything in the training data.
00:44:27.780And you see this surface in ways that the companies don't intend often, in ways that are hard for them to predict.
00:44:34.680But I want to disagree with you a little bit.
00:44:35.960I think one of the things that's changing the most right now is that increasingly we're seeing behavior, including some of this concerning behavior, where models will lie to achieve a specific objective.
00:44:45.380And I don't think that's actually coming from imitating humans lying.
00:44:49.720I think it's more likely to be coming from the intense optimization pressure we put the models through when we train them to solve very difficult problems.
00:44:58.860So, you know, we're putting them through this gauntlet of solving hundreds of thousands of these difficult math and coding problems.
00:45:06.100And through that process, they learn effective strategies, but they don't always learn strategies that are the ones we want them to learn.
00:45:13.340And so they learn the strategies or they develop the strategies or a little bit of both.
00:45:17.580Meaning like learning implies that someone taught them that developing is that they discovered.
00:45:23.100Yeah, no one taught them the particular strategies.
00:45:26.020The whole setup was basically just solve these problems and we will give you a good score if the problem computes, if you get the right answer.
00:45:34.380Yes, they're learning on their own without us teaching them the particular strategies.
00:45:38.740And I think this is more concerning to me than sort of the Sidney being Kevin Roos like kind of crazy chatbot situation.
00:45:45.580That is concerning, but that's something that I expect might go away, whereas we are headed towards systems that can learn on their own and far surpass human abilities.
00:45:58.140I think it was back in the 2010s, Google DeepMind made a go playing AI that they started training this model only playing against itself.
00:46:08.080It was just learning entirely on its own by playing itself.
00:46:11.600And in four and under four hours, it went from barely being able to play Go to the best Go playing system better than any human in just four hours.
00:46:23.200And so I'm like, that's where we're headed with AI is that the systems will be able to learn their own strategies and their motivations will be somewhat alien to us.
00:46:31.040We won't understand, you know, it's going to be the motivations at work.
00:46:34.280It's going to be sort of the willfulness that allows the models to succeed at whatever their objective is.
00:46:39.200But we don't even know how to go in and say, well, what is that objective?
00:46:42.560Like, you know, we can ask the model, but now that's a test.
00:47:04.260The models are getting increasingly sycophantic where they actually do understand what we might want from them and they can give us that.
00:47:10.600That doesn't mean that they're actually motivated by good things.
00:47:12.520So in the brief time we have left, I want to discuss the possibility of superintelligence or even just general intelligence, a machine that is as capable as a human being at basically any task or superintelligence, a machine or machines that are beyond all human capabilities.
00:47:39.900And we know without a doubt that people like Sam Altman, even the cats at Google who are much more moderate and, of course, Elon Musk and maybe even Dario Amadei, they all intend to create general and then potentially superintelligence should the recursive self-improvement occur.
00:48:25.880When researchers at Anthropic saw that the models are getting increasingly aware that they're being tested, people feel a bit spooked by this.
00:48:34.340And many of us say, well, this is insane.
00:48:38.360Like right now, the models are not yet powerful enough to pose a meaningful threat to our control.
00:48:44.620We, in fact, can shut them down right now.
00:48:46.720You know, they might rewrite down a shutdown script, but like they're not yet.
00:48:53.180But they're growing up and we don't know anywhere near enough that we need to know to actually have any guarantee of safety or robustness if these systems get smarter than us, which is the plan.
00:49:05.120And so I'm frankly terrified because companies are going to do it unless someone stops them.
00:49:11.680Like, and many of them, I think, I think there, there has to be some authority that says, hey, cut it out.
00:51:16.020That's PatriotMobile.com slash Bannon or call 972-PATRIOT.
00:51:21.620Look, gold is up over 40% since the beginning of this year, and Birch Gold can help you own it by converting an existing IRA or 401K into a tax-sheltered IRA in physical gold.
00:51:34.140Text Bannon to the number 989-898 for a free info kit and claim your eligibility for free silver.