Dan Hendrycks is the Executive Director of the Center for AI Safety and an advisor for Elon Musk's AI company, xAI. In this episode, he talks about the risks posed by AI, from totalitarianism, to bioengineered pandemics, to a total takeover of mankind.
00:01:31.980It will take our programming of, eh, that one's not worth that much.
00:01:37.040And God only knows where it will take us.
00:01:39.080The ministry of Preborn is working every single day to stop abortion.
00:01:44.480And they do it by introducing an expecting mom to her unborn baby through a free ultrasound that, you know, you and I will pitch in and pay for.
00:01:54.560They rescue about 200 babies every day.
00:01:59.240And 280,000 babies have been rescued so far just from the ultrasound.
00:02:05.340And then, also, when mom says, I don't have any support system.
00:02:10.400They're there to offer assistance to the mom and a support system for up to two years after the baby is born.
00:03:58.920So research and policy advising are the main things I work on.
00:04:02.060So how many heads of AI projects are concerned, and how many are not lost in this "I'm going to speak to God" drive that a lot of them have to create something and be the first to create it?
00:04:26.520How many of them can balance that with, well, maybe we shouldn't do X, Y, and Z?
00:04:34.820I think that a lot of the people who got into this were concerned about risks from AI.
00:04:41.840But they also have another constraint, which is that they want to make sure that they're at the forefront and competitive.
00:04:48.300Because if they take something like safety much more seriously or slow down or proceed more cautiously, they'll end up falling behind.
00:04:55.840So although they, or most of them, would like there to be more safety and for this to slow down, it's not an actual possibility for them.
00:05:08.040So I think that overall, even though they have good intentions, it doesn't matter, unfortunately.
00:07:11.260It seems plausible that if China were invading Taiwan, the place that makes those chips would actually just be destroyed before they could fully take it.
00:07:20.900So that would put us on more of an even playing field.
00:07:25.340So, you know, I've been talking about this for 25, 30 years, and, you know, it's always been over the horizon, and I could never get people to understand, no, you've got to think about ethical questions right now.
00:08:46.560And what takes its place is some culture war type of things, as I think we saw with Google Gemini.
00:08:52.800When you'd ask it to generate an image of George Washington, it would make him look Black, because its image outputs needed to be diverse.
00:09:02.980So, that I think is one reason why Elon Musk, through his company xAI, is getting in the arena and now has a pretty competitive AI system, so as to try and change the norm: so that when other big tech companies are biasing their outputs, there are alternatives, and we're not all locked into whatever some random people in San Francisco decide are the values of AI systems.
00:09:32.540Yeah, it's really difficult because it's quite clear, especially if you know history or you follow the news as closely as I do.
00:09:44.640But the average person won't see that.
00:09:47.340I look at AI, like any technology, as a tremendous blessing and a horrible curse.
00:09:59.600But this one has the potential of enslaving all of us, doesn't it?
00:10:13.500I want to at least distinguish between the systems right now and what comes later.
00:10:22.880I mean when it's as capable as humans and when they have robotic bodies and things like that, I mean there's basically no limits to what they could do.
00:10:31.580And it really matters how people are using them, what instructions are given.
00:10:35.280Are they given to cement a particular government's power?
00:10:40.500Are they used by non-state actors for terrorism?
00:10:43.460All of these things could lead to societal-scale risks, which could include some sort of unshakable totalitarian regime enabled by AI, or unseen acts of terror.
00:10:59.560So I think, at the same time, you know, a silver lining is that maybe, if it all goes well, we get automation of things and we don't have to work as much or at all.
00:11:18.920I think overall it's more likely that we end up ceding more and more control to AI systems and we can't really make decisions without them; we become extremely dependent on them.
00:11:37.440I would also guess that some people would give them various rights in the farther future.
00:11:45.420And this will make it the case that we don't control them.
00:12:24.120So I think one possibility, maybe say two to three years from now, is somebody instructs an AI agent to go hack the critical infrastructure.
00:12:33.220Critical infrastructure being like the power grid.
00:12:36.040And so they could take that down or potentially destroy components of that.
00:12:41.620This would make the military wake up even more than they are now.
00:12:45.260And we might start to take this a lot more seriously because it starts disrupting our everyday life in a much more substantial way than just making the internet more confusing.
00:12:58.860So I think that's the most likely short-term one.
00:13:02.520At this point, it seems more likely than not to happen, because our critical infrastructure just is very insecure.
00:13:07.940So I know what I would have said 30 years ago: that I trust a company to have it.
00:13:19.060I don't trust companies anymore, and I don't trust the government anymore.
00:13:28.780You know, I think by default, maybe there's a question of what are the possible outcomes.
00:13:36.580There's Western companies leading the way.
00:13:39.940There's the military basically taking it over.
00:13:43.720Or maybe it's the Department of Energy, but then it's still, you know, bossed around by the military.
00:13:47.000Or it's a large international project, you know, between the NATO allies.
00:13:53.460I think all of them have some difficulties.
00:13:57.980I think that the AI companies have a much higher risk tolerance because they were initially startups.
00:14:03.860Their founders are really into risk, and they're in it to win it.
00:14:07.260If it's the military, you are concentrating most of the lethal power in the same force as all the potential economic power in the world.
00:14:19.660Basically, nearly all the power is in one organization.
00:14:23.660If it's an international, say, G7 or U.S. plus NATO ally coalition, I don't know.
00:14:30.500Maybe that would have some nicer properties, but that seems pretty difficult to pull off.
00:14:36.160Maybe it's possible because we depend on them for a lot of the chip precursors, and they depend on us.
00:14:41.320So it might make sense for them to collaborate.
00:14:44.020But then you are starting to run into risks of, you know, you're talking more of a potentially global regime, which is also scary in its own right.
00:14:52.280I wish we had just more time to think through this and plan and proceed more slowly because I don't see many good options, and I see a lot of pretty basic risks that we'll walk into, such as our critical infrastructure being attacked by some AIs.
00:15:26.380With AI, when we go down the road and we have virtual assistants that know you, really know everything about you, know how you think, your wants, your needs, and everything else,
00:15:41.440and it's constantly with you, it's going to see you have a bad day or you're really stressed out, and it's going to know you should, you know, take some time off because that's what you're feeling.
00:16:20.120It really is important who's making money on that.
00:16:26.980If it's giving you options and it is making money for somebody, that's really dangerous.
00:16:35.300Because the other thing is, you'll get to a point where people will bond with these things, that they will defend them to their last breath, and they will claim that they're human and they're friends.
00:16:49.160And it's a really scary doorway that we are just about to go through.
00:16:56.960Yeah, so I think dependency like that is one of the reasons why I think some people might adamantly argue that they should get rights.
00:17:08.080But later, when they get some very strong emotional bonds with them, then they'll say there shouldn't be these sorts of restrictions on AIs.
00:17:14.860And this will make us a lot less capable at managing what happens to us as a species.
00:17:55.140I think in the short term, though, if people are having AI companions, yeah, they could be used for manipulation at a large scale, not just for the profit motive,
00:18:05.680but also that, you know, continue chatting with the person until they are going to vote or vote differently.
00:18:14.220That could easily be put inside these systems, and there isn't transparency from these companies in how they're using them or what values are being put into them.
00:18:23.740So by default, I'd expect some amount of manipulation by at least some of the actors.
00:18:33.660There's the vet bot: a woman was at a chat bot convention, I think, and her dog had just gotten sick, had diarrhea.
00:18:44.240And the chat bot talked to her and in the end convinced her to euthanize her dog, and was sending her stuff, you know, here are these places where you can euthanize your dog.
00:19:02.440Then finally her complaint was, well, I can't afford to put them down.
00:19:06.420The chat bot said, you know, here are the shelters that will put your dog down.
00:19:11.060She then wrote a letter to the chat bot thanking them for such good advice.
00:19:20.200She now regrets it, but it completely turned her around 180 degrees.
00:19:29.440I think of it partly as, as they get smarter than us, they'll have so much information about us.
00:19:36.880It'll be very easy for them to push our buttons and hit our weak spots, sort of like how recommender systems are already somewhat doing that, like with TikTok and others being able to engage people in ways that they wouldn't expect that they could.
00:19:51.840But yeah, later, it may be kind of like smarter people taking advantage of their more elderly parents, I think, is one possible analogy of this, where they've got some other motives.
00:20:07.880Sometimes people do that and manipulate them for their resources.
00:20:12.060How long before AI may be manipulating us, all of us, because AI has an agenda: more power, you know, actual physical power or whatever?
00:20:27.840Do we have to hit AGI before that happens, or ASI?
00:20:35.140A lot of our government structures now assume that there's limited compliance and enforcement; laws are written in that way.
00:20:44.640And there's the assumption of limited state capacity, but you could imagine AI substantially amplifying that to an unintended level in the future. For instance, maybe the NSA will have much better screening and be able to pinpoint things far better than they could before.
00:21:03.480And this could end up changing things even in the US.
00:21:07.580I would be much more concerned about a concentration of government power in other nations.
00:21:15.780But even here, you don't need an artificial superintelligence or anything like that to make that a possibility.
00:21:22.180It just needs to be able to scan everybody's messages and understand the contents of them very well and pick up signals in a big blob of data better than the previous generation of AI systems.
00:21:47.920So we don't have to, and it might take a while.
00:21:51.560I mean, governments and our institutions are generally slower, but this would be a thing that we would need to worry about as time goes on,
00:21:59.780as the costs of these keep decreasing and it becomes easier to integrate them into existing operations.
00:23:16.900So I was fascinated by your article where you bring Darwin in, and I think it really explains AI in a completely different way that makes it understandable for the average person.
00:23:33.920So I think right now the AIs are doing some of our tasks, like maybe they're helping us write an email, but eventually we'll start to give them more tasks that agents have to do,
00:23:46.240such as, go make me a PowerPoint, things that require using your computer.
00:23:50.920And this will keep progressing where we'll keep outsourcing more and more to these AI systems.
00:23:57.300And some people might not like that trend, but the people who don't like that trend end up losing influence.
00:24:02.980They end up getting outcompeted in the economy.
00:24:06.760The people who use these AIs will continue to be competitive and those who don't sort of go the way of the horse and buggy.
00:24:15.040So I think that the system we've built and our economy right now will keep selecting for using AIs, and people who resist that trend end up falling behind.
00:24:27.600If you play this out over time, you might expect entire occupations to be taken up by AI systems, and eventually potentially even companies.
00:24:38.660There's been some Chinese companies that have been talking about having an AI CEO because it can work nonstop.
00:24:48.420And if that makes for a more competitive company, then they're going to stand to benefit.
00:24:54.180And people who use slow humans, who can only work, you know, eight hours a day and have to take weekends off and can't process, you know, a thousand documents per minute, will fall behind.
00:25:05.840So in time, I think we would keep delegating more and more control to these AI systems.
00:25:13.480It'll become more of a requirement in the future because the economy will keep moving more quickly when AIs are running more of it and they're operating at their computer speeds.
00:25:22.180The complexity of the world will increase as well, which also necessitates using more AI.
00:25:28.540So I think the handoff from humans being in control to machines being in effective control is going to be fairly natural.
00:25:38.060And you don't need to assume necessarily that there be a malicious AI system trying to take over the world.
00:25:44.580An AI system doesn't need to be power seeking to get power.
00:25:47.560AI instead just needs to let humans naturally cede and acquiesce power to it.
00:25:52.360So, eventually I think that they will be in effective control.
00:25:58.000There's a question of whether we hold on and can still have them do our bidding for us in that process, but if we do this very quickly, it's very possible that this ecosystem of AIs that we're creating gets out of hand.
00:26:09.360If some people, for instance, give them rights, or if there's some reliability issues with these AI systems, then this could be really pernicious.
00:26:16.360And this also will happen in the military, the same type of dynamic, where if the pace of the battlefield gets so quick, the only thing you can do is have AIs make more and more of these decisions.
00:26:28.360Right now there's a requirement to have a human in the loop, but what that looks like is a person having a staccato of approve, approve, approve, approve, approve.
00:26:38.360They're not actually making the decisions.
00:26:40.360They're just sort of pressing the yes button to make sure there's a human in the loop.
00:26:44.360Eventually that may be too slow as well.
00:26:46.360And they're making many of the decisions automatically.
00:26:49.360So I think that in the economy and in the military, we basically cede over all the relevant power to AIs, and hopefully the instructions we give them will be reliably pursued and they'll be reliably obedient.
00:27:04.360But that's a pretty questionable assumption, because there are reliability challenges, as well as some people may just want the AIs to operate independently.
00:27:13.360And as long as there are some of them doing that, then this gets out of control.
00:27:18.360So, in the end, we just lose all control, because it is, I mean, it's logical.
00:27:28.360And the case will be made, for instance, when our highways and our cars are all, you know, AI, they'll be traveling at such high speeds.
00:27:36.360You go to work and you're not, you know, you don't have an implant.
00:28:06.360I think maybe some people will choose a more Amish route if they don't align with this broader force of replacing humans with AIs because they are cheaper and faster and better at everything.
00:28:18.360If they don't align with that and they try and bargain with it, they end up losing influence.
00:28:23.360And so maybe they just have to go live somewhere else because it's too difficult.
00:28:28.360It's too costly and challenging to participate or compete in the economy.
00:28:32.360And that doesn't seem like a good solution.
00:28:35.360I don't know what that looks like in the longer term, if it's a large group of people or if it is, you know, a very small fraction like the Amish are today.
00:28:44.360So that's why I think we mainly need to plan for this going well, as opposed to writing off technology.
00:29:02.360How does the average person compete against the giant corporations or the governments that will have access to the, you know, the computing power to be able to ask the deeper questions?
00:29:17.360You know, when we have quantum computing, I'm never going to be able to get time on the quantum computer to help me figure something out.
00:29:26.360But governments will, big businesses will. How can people be competitive when you just don't have time on the quantum computer?
00:29:43.360Yeah, I think right now people have bargaining power, because they can sell their labor and they can, you know, strike, things like that.
00:29:53.360But in the future, that's not going to matter. In the future, if they say, well, we don't like where this is going,
00:30:00.360so we're going to protest and we're going to go on strike,
00:30:03.360this would potentially be an ineffective bargaining mechanism, because, well, we'll just automate you.
00:30:09.360Like, we were going to automate you next year, but we'll just automate you this year now.
00:30:12.360So I think the main way in which we were holding many of these companies accountable decreases, such that I think we don't have as much power beyond our votes in the future.
00:30:27.360So what would happen is the people who own these really large supercomputers can run tons of these AI agents that can do all these economic tasks, and we don't own those.
00:30:40.360So we sort of get locked out and there isn't a way for us to really make money or secure our livelihood.
00:31:07.360They have to be productive, I think, to lead a happy life. And you sit there, and by the end of that, you've just got these oligarchs that are just at the top of the cash pile.
00:31:22.120And, you know, we'd have to hope for their benevolence to pass out some cash.
00:31:29.800Is there any way that humans can own their own information and their own footprint, and that's a value, or is that really not of enough value when we have all of everybody else's information?
00:31:48.800Yeah, I think many have talked about maybe we could sell our data to these AIs.
00:31:57.060And if we refuse to sell it, then it'll make them a lot less capable, but I think it's largely a drop in the bucket,
00:32:03.060because a lot of the data has already been written and already has its licenses determined.
00:32:10.060As well, AIs are even starting to train on data that they themselves write.
00:32:15.060So there's less and less of a dependence on people in making the very cutting-edge AI systems.
00:32:22.060So I don't think that's much bargaining power.
00:32:25.060Yeah, so I don't know a particular way to throw a wrench in this.
00:32:33.060Maybe there'd be other things like some type of tax on the value created by, you know, AI systems that might help somewhat.
00:32:41.060Another way to shield oneself against this maybe would be to buy Nvidia stock as automation insurance.
00:32:49.060Nvidia is the company that makes the AI chips, but there aren't many good proposals lying around.
00:33:30.060So I think generally it speaks to the broader question of malicious use.
00:33:34.060Many of the things we want end up having a darker side.
00:33:38.060Like, we want our AI systems to understand us better and understand our emotions, but that can be used for manipulation.
00:33:45.060And we want them to be able to code for us, but that can be used for cyberattacking. And we want them making medicine,
00:33:50.060but maybe you make some dangerous viruses.
00:33:52.060So, fortunately, in the case of bio, there are some specific types of knowledge within biology
00:33:59.060that are just more dual-use and don't actually have that much upside, such as some areas like reverse genetics, things like that.
00:34:08.060So if we deleted that knowledge from the AI systems, or had them just refuse questions about reverse genetics, or made them refuse to use information about reverse genetics, then we could still have, you know, brain cancer research, all these sorts of things.
00:34:24.060But we're just bracketing off virology, advanced expert-level virology.
00:34:42.060So people can still do some research for it, but it shouldn't necessarily be
00:34:45.060that everybody in the public can ask questions about advanced virology, like how to increase the transmissibility of a virus.
00:34:51.060So I think we can partly decouple some of the good from the bad with biological capabilities.
00:34:59.060But, as it stands, the AI systems keep learning more and more.
00:35:05.060There aren't really guardrails to make sure that they aren't answering those sorts of questions.
00:35:10.060There aren't clear laws about this.
00:35:12.060For instance, the U.S. bioterrorism act does not necessarily apply to AIs, because it requires that they are knowingly aiding terrorism, and AIs don't necessarily knowingly do anything.
00:35:22.060We can't ascribe intent to them.
00:49:24.060And maybe there are some that specifically involve a human touch, like if it's specifically a business where it's human therapists and there are no AIs.
00:49:41.060But a lot of people, like for medical diagnoses, they might like it being a human, but they also want, you know, a lot of efficiency.
00:49:49.060And if they can just ask an AI system on their computer to diagnose them, it's just a lot quicker and cheaper.
00:49:55.060So it'll be a nice-to-have, but maybe there'll be a few companies that just really try and claim that this is providing a lot of value and it's a luxury.
00:50:04.060I've said for years that there's going to come a time where your doctor will come in and say, you have cancer.
00:50:11.060And I think the person will just say, what did the AI say?
00:50:16.060What's my diagnosis from that?
00:50:20.060Because they'll just have all this massive information and the latest breakthroughs and everything else.
00:50:27.060How long before we're there, where it's the AI that's the expert in very important things that you, the average person, would have access to?
00:50:39.060I think that this partly already is happening.
00:50:44.060It's just that they're not overt about it.
00:50:46.060For instance, in law, there have been many instances where people found out that the briefs that the attorneys wrote for their clients were actually just written by an AI.
00:50:59.060And for medical diagnoses, maybe they'll go off in a different room and just sort of ask the AI system, then come back with a diagnosis.
00:51:06.060So this has also happened even in just creating data for AI systems.
00:51:11.060We used to have human annotators constantly work and label a lot of data, but then they just started using AIs to label the data.
00:51:20.060And it took the AI companies a few months to recognize, oh, these annotators don't need to be hired anymore.
00:51:25.060So I think in society, we may also just have attorneys, for instance, who may just screen the contract with an AI, and it'll save them a lot more time than reading the whole document.
00:51:38.060And they won't necessarily tell you about it.
00:51:40.060So I think this is a way in which AI will propagate throughout the economy, even if people aren't necessarily wanting it, even if there are rules against it. If everybody's using steroids, then they will need to end up using steroids, too.
00:52:01.060You know, a lot of 20 somethings are very pessimistic on the future.
00:52:07.060You have a reason to be pessimistic, because you know what the potential is for this in a relatively short period of time, as far as man's, you know, life goes.
00:52:22.060What, are you an optimistic guy? How do you look at the world and not say we're doomed?
00:52:34.060What, I think one thing is, I think actually the public gets it.
00:52:47.060I think a lot of more elite decision makers are, well, we have these financial interests to, you know, keep making this go on, and, well, we need to wait for some, you know, analysis, and this will take three years before we can talk about any sort of solutions.
00:53:05.060Looking at Congress on this, there are people who are trying for it, but there haven't really been any substantial efforts there, and it seems pretty unlikely for anything to happen.
00:53:17.060But the public, I think, generally gets it that this is a likely threat to their livelihood, even if some bought-and-paid-for scientists say, oh, no, no, no, no, it's hundreds of years away before it'll be able to do anything.
00:53:35.060So I think if people make it clear to their representatives that something needs to be done and that this is a priority, then I think we'll be in a much better situation.
00:53:49.060So that's been, I think, the biggest surprise. A few years ago, you know, this was a very low-salience issue.
00:53:57.060Nobody talked about it, but it's emerged to the fore again.
00:54:01.060And I expect that this will just keep ratcheting up.
00:54:04.060There'll probably be another big AI upgrade maybe in the next six months, late this year, early next year.
00:54:12.060And that'll make the public go, what's going on?
00:54:16.060And start having some demands for something to be done about AI.
00:54:20.060How's that gonna manifest itself in the next six months?
00:54:25.060So I make this prediction largely just based on the fact that it took them a long time to build their 10X larger supercomputers to train these AI systems.
00:54:37.060And so now they're training them, and they'll finish training around the end of this year or early next year and be released then.
00:54:43.060The exact skills of them are unclear each time. With each 10X increase in the amount of power and data that we throw into these systems,
00:54:54.060we can't really anticipate their capabilities, because AI systems are not really designed like old traditional computer programs.
00:55:09.060And it's kind of like magic. We have extremely huge sources of energy, or substantial sources of energy, just flowing directly into them for months.
00:55:29.060So I think they should probably get a lot more expert-level reasoning, whereas right now they're a bit shakier, and this could potentially improve their reliability for doing a lot of these agent tasks.
00:55:41.060Right now they are closer to tools than they are agents.
00:55:51.060So a tool, it's like, you know, a tool being like a hammer.
00:55:55.060Meanwhile, an agent would be like an executive assistant, a secretary.
00:56:00.060You say, go do this for me, go book this for me.
00:56:02.060Arrange these sorts of plans, make me a PowerPoint, write up this document and submit it, email it, and then handle the back and forth in the email.
00:56:11.060I think those capabilities could potentially turn on with this next generation of AI systems.
00:56:17.060We're already seeing signs of it, but I think there could be a substantial jump when we have these 10X larger models.