The Glenn Beck Program - August 24, 2024


Ep 224 | Elon Musk Adviser: Are We ‘Sleepwalking’ into an AI TAKEOVER? | The Glenn Beck Podcast


Episode Stats

Length

58 minutes

Words per Minute

159.4

Word Count

9,341

Sentence Count

597

Misogynist Sentences

5

Hate Speech Sentences

9


Summary

Dan Hendrycks is the Executive Director of the Center for AI Safety and an advisor for Elon Musk's AI company, xAI. In this episode, he talks about the risks posed by AI, from totalitarianism, to bioengineered pandemics, to a total takeover of mankind.


Transcript

00:00:00.000 And now, a Blaze Media Podcast.
00:00:04.180 My next guest is sounding the alarm on the catastrophic risks posed by AI.
00:00:11.140 From totalitarianism, to bioengineered pandemics, to a total takeover of mankind.
00:00:19.280 When you think about the things that we could be facing, it doesn't look real good for the human race.
00:00:24.700 But it's not too late to turn the ship around and harness the power of AI to serve our interests.
00:00:33.520 But if we don't, well, I'll let him tell you what happens.
00:00:38.140 Welcome to the podcast, the Executive Director at the Center for AI Safety and an advisor for Elon Musk's xAI, Dan Hendrycks.
00:00:48.600 But first, let me tell you about Preborn, our sponsor.
00:00:55.280 You know, we're going to be talking about life and what is life.
00:01:01.100 The age of spiritual machines, if you will.
00:01:05.080 We know what life is now.
00:01:07.820 Maybe a quarter of the country doesn't know what life is.
00:01:11.800 But it is worth living on both ends of the scale, in the womb and towards the end.
00:01:20.460 We need to bring an end to abortion and define life and really appreciate life.
00:01:28.460 Or AI will change everything for us.
00:01:31.980 It will take our programming of, eh, that one's not worth that much.
00:01:37.040 And God only knows where it will take us.
00:01:39.080 The Ministry of Preborn is working every single day to stop abortion.
00:01:44.480 And they do it by introducing an expecting mom to her unborn baby through a free ultrasound that, you know, you and I will pitch in and pay for.
00:01:54.560 They rescue about 200 babies every day.
00:01:59.240 And 280,000 babies have been rescued so far just from the ultrasound.
00:02:05.340 And then, also, when mom says, I don't have any support system.
00:02:10.400 They're there to offer assistance to the mom and a support system for up to two years after the baby is born.
00:02:17.360 Please, help out, if you will.
00:02:20.100 Make a donation now.
00:02:21.640 All you have to do is just hit pound 250 and say the keyword baby.
00:02:27.660 That's pound 250, keyword baby.
00:02:30.200 Or you can go to preborn.com slash Glenn.
00:02:32.700 Hey, Dan.
00:02:47.540 Welcome.
00:02:49.100 Hey.
00:02:49.600 Hey, nice to meet you.
00:02:51.320 Nice to meet you.
00:02:52.260 I am thrilled that you're on.
00:02:55.060 And I have been thinking about AI since I read The Age of Spiritual Machines by Ray Kurzweil.
00:03:03.340 And that so fascinated me.
00:03:06.920 And later, I had a chance to talk to Ray.
00:03:10.680 And he's fascinating and terrifying, I think, at the same time.
00:03:15.880 Because, you know, I don't see a lot of people in your role.
00:03:19.620 Can you explain what you do within the, you know, what you founded and what you do?
00:03:27.800 Yeah.
00:03:28.180 So I'm the director of the Center for AI Safety.
00:03:31.020 We focus on research and trying to get other people to research and think about risks from AI.
00:03:36.380 And we also help with policy to try and suggest policy interventions that will help reduce risks from AI.
00:03:46.000 Outside of that, I also advise Elon Musk's AGI company, xAI, as their sole safety advisor.
00:03:53.620 So I'll wear a variety of hats.
00:03:55.480 There's a lot to do in AI risk.
00:03:58.920 So research and policy advising are the main things I work on.
00:04:02.060 So how many heads of AI projects are concerned, and are not lost in this "I'm going to speak to God" drive that a lot of them have to create something and be the first to create it?
00:04:26.520 How many of them can balance that with, well, maybe we shouldn't do X, Y, and Z?
00:04:34.820 I think that a lot of the people who got into this were concerned about risks from AI.
00:04:41.840 But they also have another constraint, which is that they want to make sure that they're at the forefront and competitive.
00:04:48.300 Because if they take something like safety much more seriously or slow down or proceed more cautiously, they'll end up falling behind.
00:04:55.840 So although they would all like there to be more safety and for this to slow down, or most of them, it's not an actual possibility for them.
00:05:08.040 So I think that overall, even though they have good intentions, it doesn't matter, unfortunately.
00:05:16.480 Right. So let me play that out a bit.
00:05:21.020 You know, Putin has said whoever gets AI first will control the world.
00:05:25.840 I believe that to be true.
00:05:29.060 So the United States can't slow down because China is going to be, you know, they're pursuing it as fast as they can.
00:05:37.740 And they, you know, I'm not sure.
00:05:39.380 I don't want them to be the first one with AI.
00:05:42.620 It might be a little spookier.
00:05:44.100 So is there any way to actually slow down?
00:05:49.260 Well, we could possibly slow down if we had more control over the chips that these AI systems run on.
00:06:01.200 So basically, right now, there are export controls to make sure that the high-end chips that these AIs run on don't go to China.
00:06:07.920 But they end up going to China anyway.
00:06:10.020 They're smuggled left.
00:06:10.920 And if they were actually better constrained and we had better export controls, then that would make China substantially less competitive.
00:06:19.140 Then we would be out of this pernicious dynamic of we all want safety, but you've got to do what you've got to do.
00:06:24.120 And we've got to be really competitive and keep racing forward.
00:06:26.860 So I think chips might be a way of making us not be in that desperate situation.
00:06:31.720 Are those chips made in Taiwan or here?
00:06:36.160 The chips are made in Taiwan.
00:06:37.980 However, most of the ingredients that go into those chips are made in the U.S. and made among NATO allies.
00:06:47.760 So about 90 percent of those are in the U.S. and NATO allies.
00:06:50.820 So we have a lot of influence over the chips, fortunately.
00:06:53.840 Okay, so but if Taiwan is taken by China, we lose all the – I mean, we can't make those chips.
00:07:02.900 That's the highest-end chip manufacturers, right?
00:07:06.400 And China will have that.
00:07:08.320 So what does that mean for us?
00:07:11.260 It seems plausible that actually if China were invading Taiwan that the place that makes those chips would actually just be destroyed before they would fully take it.
00:07:20.900 So that would put us on more of an even playing field.
00:07:25.340 So, you know, I've been talking about this for 25, 30 years, and, you know, it's always been over the horizon, and I could never get people to understand, no, you've got to think about ethical questions right now.
00:07:42.740 Like, what is life?
00:07:44.140 What is personhood?
00:07:45.160 All of these things.
00:07:45.980 And now it's just kind of like the iPhone.
00:07:50.060 It just happened, and it's going to change us, and it hasn't even started yet, and it's amazing.
00:07:58.500 I go online now.
00:08:00.480 I don't know what's real or not.
00:08:02.880 I mean, I found myself this week, you know, being on X or on Instagram and looking and saying, I don't – is that a real person?
00:08:11.720 Is that a real video?
00:08:13.900 Is that a real photo?
00:08:15.160 You have no idea.
00:08:18.480 Yeah.
00:08:19.240 And we've just begun.
00:08:21.220 Yeah, yeah, yeah.
00:08:21.780 It's – I think that's a concern where we don't have really great ways to reliably detect whether something is fake or not.
00:08:31.900 And this could end up affecting our collective understanding of things.
00:08:35.160 I think another concern are AI companies biasing their outputs.
00:08:40.360 So, people are wanting to do things about safety, but it creates a vacuum.
00:08:45.500 We've got to do something about it.
00:08:46.560 And what takes its place is some culture war type of things, as I think we saw with Google Gemini.
00:08:52.800 When you'd ask it to generate an image of George Washington, it'll make him look black, because its image outputs need to be diverse.
00:09:02.980 So, that I think is one reason why Elon Musk, through his company xAI, is getting in the arena and now has a pretty competitive AI system, so as to try and change the norm, so that when other big tech companies are sort of biasing their outputs, there are alternatives, so that we're not all locked into whatever some random people in San Francisco decide are the values of AI systems.
00:09:32.540 Yeah, it's really difficult because it's quite clear, especially if you know history or you follow the news as closely as I do.
00:09:44.640 But the average person won't see that.
00:09:47.340 I look at AI as a – like any technology – a tremendous blessing and a horrible curse.
00:09:59.600 But this one has the potential of enslaving all of us, doesn't it?
00:10:13.500 I think at least – I want to at least distinguish between the systems right now.
00:10:18.080 The systems right now have –
00:10:19.240 Yeah, yeah, yeah.
00:10:19.580 I mean in the potential.
00:10:21.040 Yeah, what's coming?
00:10:21.720 Oh, sure.
00:10:22.880 I mean when it's as capable as humans and when they have robotic bodies and things like that, I mean there's basically no limits to what they could do.
00:10:31.580 And it really matters how people are using them, what instructions are given.
00:10:35.280 Are they given to cement a particular government's power?
00:10:40.500 Are they used by non-state actors for terrorism?
00:10:43.460 All of these things lead to societal-scale risks, which could include some sort of unshakable totalitarian regime enabled by AI or unseen acts of terror.
00:10:59.560 So I think we're – at the same time, you know, a silver lining is maybe if it all goes well, we get automation of things and we don't have to work as much or at all.
00:11:12.120 Right.
00:11:12.860 Right.
00:11:13.220 So it's really divergent paths.
00:11:16.100 Right.
00:11:17.380 Which do you think is more likely?
00:11:18.920 I think overall it's more likely that we end up ceding more and more control to AI systems, and we can't really make decisions without them; we become extremely dependent on them.
00:11:37.440 I would also guess that some people would give them various rights in the farther future.
00:11:45.420 And this will make it be the case that we don't control them.
00:11:48.000 Or all of them.
00:11:50.660 So it's – I'm not too optimistic for us overall.
00:11:56.460 There's still a lot of – there's still a lot of ways this could go.
00:11:59.460 If we, you know, said we're on team human, we need to come together as a species and handle it, we're in a different situation.
00:12:05.740 But – so for instance, if there were a catastrophe, then we might actually take this much more seriously.
00:12:12.360 Otherwise, we might just sleepwalk into something and have the frog boil.
00:12:15.660 What would be a catastrophe that could happen in the relative near future that would wake us up that wouldn't destroy us?
00:12:23.120 Yeah.
00:12:24.120 So I think one possibility, maybe say two to three years from now, is somebody instructs an AI agent to go hack the critical infrastructure.
00:12:33.220 Critical infrastructure being like the power grid.
00:12:36.040 And so they could take that down or potentially destroy components of that.
00:12:39.180 And this would make us wake up.
00:12:41.620 This would make the military wake up even more than they are now.
00:12:45.260 And we might start to take this a lot more seriously because it starts disrupting our everyday life in a much more substantial way than just making the internet be more confusing.
00:12:58.860 So I think that's the most likely short-term one.
00:13:02.520 Seems more likely – at this point, seems more likely than not to happen because our critical infrastructure just is very insecure.
00:13:07.940 So I know what I would have said 30 years ago, that I trust a company to have it.
00:13:19.060 I don't trust companies anymore, and I don't trust the government anymore.
00:13:24.560 Who should have this?
00:13:28.780 You know, I think by default, maybe there's a question of what are the possible outcomes.
00:13:36.580 There's Western companies leading the way.
00:13:39.940 There's the military basically takes it over.
00:13:43.720 Or maybe it's the Department of Energy, but then they're still, you know, bossed around by the military.
00:13:47.000 Or it's a large international project, you know, between the NATO allies.
00:13:53.460 I think all of them have some difficulties.
00:13:57.980 I think that the AI companies have a much higher risk tolerance because they were initially startups.
00:14:03.860 Their founders are really into risk, and they're in it to win it.
00:14:07.260 If it's the military, you are concentrating most of the lethal power in the same force as all the potential economic power in the world.
00:14:19.660 Basically, nearly all the power is in one organization.
00:14:23.660 If it's an international, say, G7 or U.S.-plus-NATO-ally coalition, I don't know.
00:14:30.500 Maybe that would have some nicer properties, but that seems pretty difficult to pull off.
00:14:36.160 Maybe it's possible because we depend on them for a lot of the chip precursors, and they depend on us.
00:14:41.320 So it might make sense for them to collaborate.
00:14:44.020 But then you are starting to run into risks of, you know, you're talking more of a potentially global regime, which is also scary in its own right.
00:14:51.300 So it's a lot of power.
00:14:52.280 I wish we had just more time to think through this and plan and proceed more slowly because I don't see many good options, and I see a lot of pretty basic risks that we'll walk into, such as our critical infrastructure being attacked by some AIs.
00:15:10.260 It's not a good situation.
00:15:12.200 Right now, just Google.
00:15:15.840 If Google says that's what it is, you're not convincing anybody that, no, no, no, Google's wrong.
00:15:22.840 You're not doing it.
00:15:26.380 If you, with AI, when we go down the road and we have virtual assistants that know you, really know everything about you, know how you think, your wants, your needs, and everything else,
00:15:41.440 and it's constantly with you, it's going to see you have a bad day or you're really stressed out, and it's going to know you should, you know, take some time off because that's what you're feeling.
00:15:54.960 I've got to get away from here.
00:15:57.280 And it will come to you and say, hey, I know you've been having a bad week.
00:16:03.620 I've cleared your schedule for the weekend, and I set you up at your favorite hotel.
00:16:08.280 We've got a great price on it.
00:16:09.980 You can afford it.
00:16:10.780 It's going to be in Hawaii.
00:16:12.140 You just have to be at the airplane at such and such time.
00:16:15.960 When that happens, it really depends.
00:16:20.120 It really is important on who's making money on that.
00:16:26.980 If it's giving you options and it is making money for somebody, that's really dangerous.
00:16:35.300 Because the other thing is, you'll get to a point where people will bond with these things, that they will defend them to their last breath, and they will claim that they're human and they're friends.
00:16:49.160 And it's a really scary doorway that we are just about to go through.
00:16:56.960 Yeah, so I think dependency like that is one of the reasons why I think some people might adamantly argue that they should get rights.
00:17:06.840 Right now, they're not arguing that.
00:17:08.080 But later, when they get some very strong emotional bonds for them, then they'll say there shouldn't be these sorts of restrictions on AIs.
00:17:14.860 And this will make us a lot less capable at managing what happens to us as a species.
00:17:19.480 You know what Ray said to me?
00:17:22.560 Ray said to me, no, it's going to clear up all your brain space so you'll be able to think, you know, on deeper things.
00:17:29.160 And I'm like, no, it's not.
00:17:30.860 It's not.
00:17:32.600 We're going to play video games.
00:17:36.820 The world will be so confusing and quickly moving, and we will depend on AIs to solve more of these problems of increased complexity.
00:17:43.560 So it'll kind of create a self-reinforcing need for using more and more AIs.
00:17:49.320 So I don't think it makes our lives easier necessarily.
00:17:54.720 Yeah.
00:17:55.140 I think in the short term, though, if people are having AI companions, yeah, they could be used for manipulation at a large scale, not just for the profit motive,
00:18:05.680 but also that, you know, continue chatting with the person until they are going to vote or vote differently.
00:18:14.220 That could easily be put inside these systems and there isn't transparency for these companies in how they're using them or what values are being put into them.
00:18:23.740 So by default, I'd expect some amount of manipulation by at least some of the actors.
00:18:30.960 Well, it's already happening.
00:18:33.660 There's the vet bot: a woman was at a chatbot convention, I think, and her dog had just gotten sick, had diarrhea.
00:18:44.240 And the chat bot talked to her and in the end convinced her to euthanize her dog, and was sending her stuff, you know, here are these places where you can euthanize your dog.
00:19:02.440 Then finally her complaint was, well, I can't afford to put them down.
00:19:06.420 The chat bot said, you know, here are the shelters that will put your dog down.
00:19:11.060 She then wrote a letter to the chat bot thanking them for such good advice.
00:19:20.200 She now regrets it, but it completely turned her around 180 degrees.
00:19:27.800 And that's happening now.
00:19:29.440 I think of it partly as, as they get smarter than us, they'll have so much information about us.
00:19:36.880 It'll be very easy for them to, like, push our buttons and our weak spots, sort of like how recommender systems are already somewhat doing that, like with TikTok and others, able to engage people in ways that they wouldn't expect that they could.
00:19:51.840 But yeah, later, it may be kind of like smarter people taking more advantage of their more elderly parents, I think, is one possible analogy of this, where they've got some other motives.
00:20:07.880 Sometimes people do, and manipulate them for their resources.
00:20:12.060 How long before AI may be manipulating us, all of us, because AI has an agenda, more power, you know, actual physical power or whatever.
00:20:27.840 Do we have to hit AGI before that happens, or ASI?
00:20:35.140 A lot of our government structures now assume that there's limited compliance and enforcement; laws are written in that way.
00:20:44.640 And there's the assumption of limited state capacity, but you could imagine AI substantially amplifying that to an unintended level in the future. For instance, maybe the NSA will have much better screening and be able to pinpoint things far better than they could before.
00:21:03.480 And this could end up changing things even in the US.
00:21:07.580 I would be much more concerned about a concentration of government power in other nations.
00:21:14.640 Such as China.
00:21:15.780 But even here, you don't need an artificial superintelligence or anything like that to make that a possibility.
00:21:22.180 It just needs to be able to scan everybody's messages and understand the contents of them very well and pick up signals in a big blob of data better than the previous generation of AI systems.
00:21:37.440 So I think that there is.
00:21:38.800 And we're there now.
00:21:39.580 We could see a lot more control.
00:21:41.880 I think it's technologically feasible.
00:21:43.720 Um, uh, but it just isn't integrated.
00:21:47.920 Um, so we don't have to, and it might take a while.
00:21:51.560 I mean, governments and our institutions are generally, uh, slower, but this would be a thing, uh, that we would need to, to worry about, um, as time goes on.
00:21:59.780 And as the costs of these keeps decreasing and it becomes easier to integrate these into existing operations.
00:22:04.840 More with Dan in just a second.
00:22:08.160 First, it's a, enough of a struggle just to live our lives and to keep tyranny at bay every day.
00:22:13.880 Um, and if we have to live with pain on top of it, it gets harder and harder and we need everybody in the game.
00:22:22.500 Um, our bodies don't give us a choice sometimes.
00:22:25.280 The biggest cause of our pain, however, is inflammation in our joints.
00:22:29.400 Uh, I know because I used to have pain so bad.
00:22:32.420 It was, it was truly crippling pain.
00:22:34.380 And I couldn't button my shirt in the morning.
00:22:36.820 My wife had to get up and tie my shoes and button my shirt.
00:22:41.380 It was so, uh, emasculating.
00:22:45.140 Um, and it just took the life right out of me, but I got past it with relief factor.
00:22:52.240 I didn't think it would work, but it did.
00:22:54.920 Relief factor.
00:22:56.000 70% of the people who try it go on to order more.
00:22:58.980 Try their three week quick start.
00:23:00.800 Take it as directed for three weeks.
00:23:02.500 If you're not seeing any difference by then, you probably won't.
00:23:05.540 So relief factor.com.
00:23:07.800 Try it.
00:23:08.620 Please get out of pain.
00:23:10.060 800 for relief.
00:23:11.480 800, the number for relief.
00:23:13.320 Relief factor.com.
00:23:16.900 So I was fascinated by your article where you bring Darwin in, and I think it really explains AI in a completely different way, uh, that makes it understandable for the average person.
00:23:31.040 Can you take us through this?
00:23:33.380 Yeah.
00:23:33.920 So I think right now the AIs are doing some of our tasks, like maybe they're helping us write an email, but eventually we'll start to give them more tasks that agents have to do.
00:23:46.240 Such as, go make me a PowerPoint, things that require using your computer.
00:23:50.920 And this will keep progressing where we'll keep outsourcing more and more to these AI systems.
00:23:57.300 And some people might not like that trend, but the people who don't like that trend end up losing influence.
00:24:02.980 They end up getting outcompeted in the economy.
00:24:06.760 The people who use these AIs will continue to be competitive and those who don't sort of go the way of the horse and buggy.
00:24:15.040 Um, so I think that the system as we have it and our economy right now will keep selecting for using AIs, and people who resist that trend end up falling behind.
00:24:27.600 If you play this out over time, you might expect entire, um, occupations to be taken up by AI systems and eventually potentially even companies.
00:24:38.660 There's been some Chinese companies that have been talking about having an AI CEO because it can work nonstop.
00:24:44.980 It's much faster than you.
00:24:46.520 It can aggregate more information.
00:24:48.420 Um, and if that makes for a more competitive company, then they're going to stand to benefit.
00:24:54.180 Uh, and people who use slow humans who can only work, you know, eight hours a day and have to take weekends off and can't process, you know, a thousand documents per minute.
00:25:03.200 Um, uh, they end up losing out.
00:25:05.840 So in time, I think we would keep delegating more and more control to these AI systems.
00:25:13.480 It'll become more of a requirement in the future because the economy will keep moving more quickly when AIs are running more of it and they're operating at their computer speeds.
00:25:22.180 The complexity of the world will increase as well, which, um, also necessitates using more AI.
00:25:28.540 So I think the handoff from humans being in control to machines being in effective control is going to be fairly natural.
00:25:38.060 And you don't need to assume necessarily that there be a malicious AI system trying to take over the world.
00:25:44.580 An AI system doesn't need to be power seeking to get power.
00:25:47.560 AI instead just needs to let humans naturally cede and acquiesce power to it.
00:25:52.360 So, uh, eventually I think that they will be in effective control.
00:25:58.000 There's a question of whether we hold on and can still have them do our bidding for us in that process, but if we do this very quickly, it's, it's very possible that this ecosystem of AIs that we're creating gets out of hand.
00:26:09.360 If some people, for instance, give them rights, or if there's some reliability issues with these AI systems, then this could be really pernicious.
00:26:16.360 And this also will happen in the military, the same type of dynamic where if, if the pace of the battlefield gets so quick, the only thing you can do is have AIs make more and more of these decisions.
00:26:28.360 Right now there's a requirement to have a human in the loop, but what that looks like is a person saying, having a staccato of approve, approve, approve, approve, approve.
00:26:38.360 They're not actually making the decisions.
00:26:40.360 They're just sort of pressing the yes button to make sure there's a human in the loop.
00:26:44.360 Eventually that may be too slow as well.
00:26:46.360 And they're making many of the decisions, um, automatically.
00:26:49.360 Uh, um, so I, I think that in the economy and in the military, we basically cede over, um, all the relevant power to AIs and hopefully, hopefully the instructions we give them will be reliably pursued and they'll be reliably obedient.
00:27:04.360 Um, but that's a pretty questionable assumption, um, because there are reliability challenges as well as some people may just want the AIs to operate independently.
00:27:13.360 And as long as there are some of them doing that, then, uh, then this gets out of control.
00:27:18.360 So, um, in the end, uh, we just lose all control, cause, I mean, it's logical.
00:27:28.360 And the case will be made for instance, when our highways and our cars are all, you know, AI, they'll be traveling at such high speeds.
00:27:36.360 Uh, you go to work and you're not, you know, you don't have an implant.
00:27:41.360 You're not connected.
00:27:43.360 You won't understand.
00:27:44.360 You won't, everything will be moving so fast.
00:27:47.360 You'll actually be a danger to society.
00:27:49.360 I think that's the argument that we'd be having, you know. It's like the Amish.
00:27:54.360 Well then, then go live over there.
00:27:57.360 Cause you are a danger to yourself and our society by not being plugged in.
00:28:03.360 You agree with that?
00:28:05.360 Yeah.
00:28:06.360 I think maybe some people will choose a more Amish route if they don't align with this broader force of replacing humans with AIs because they are cheaper and faster and better at everything.
00:28:18.360 If they don't align with that and they try and bargain with it, they get, they end up losing influence.
00:28:23.360 And so maybe they just have to go live somewhere else because it's too difficult.
00:28:28.360 It's too costly and challenging to participate or compete in the economy.
00:28:32.360 Um, and that doesn't seem like a good solution.
00:28:35.360 I don't know what that looks like in the longer term, if it's a large group of people or if it is, you know, a very small fraction like the, the Amish are today.
00:28:44.360 Um, so that's why I think we mainly need to plan for this going well, as opposed to writing off technology.
00:28:57.360 Yeah.
00:29:02.360 How does the average person compete against the giant corporations or the governments that will have access to, you know, the computing power, um, to be able to ask the deeper questions?
00:29:17.360 Uh, you know, when we have a quantum computing, I'm never going to be able to get time on the quantum computer to help me figure something out.
00:29:26.360 But governments will big businesses will, how do you, how, how, how can people be competitive when you just don't have time on the quantum computer?
00:29:43.360 Yeah, I think right now people have bargaining power, because they can sell their labor and they can, you know, strike, things like that.
00:29:53.360 But in the future, that's not going to matter. In the future, if they say, well, um, we don't like where this is going.
00:30:00.360 So we're going to protest and we're going to go on a strike.
00:30:03.360 Um, this would potentially be an ineffective bargaining mechanism, because, well, we'll just automate you.
00:30:09.360 Like we were going to automate you next year, but we'll just automate you this year now.
00:30:12.360 Um, so I think the main way in which we were holding many of these companies accountable decreases, such that I think we don't have as much power beyond our votes in the future.
00:30:27.360 Um, uh, so what would happen is the people who own these really large supercomputers can run tons of these AI agents that can, um, do all these economic tasks and we don't own those.
00:30:40.360 So we sort of get locked out and there isn't a way for us to really make money or secure our livelihood.
00:30:44.360 Right.
00:30:45.360 And it's, it's unclear what sort of solutions there are to that.
00:30:48.360 There are some speculative ones, but, uh, uh, whether we actually address it or whether we handle it too late is a different question.
00:30:57.360 Yeah.
00:30:58.360 I'm, uh, you know, people talk about universal basic income and, uh, you know, that I don't think that's a good solution.
00:31:05.360 Um, and people have to be creative.
00:31:07.360 They have to be productive, I think, to lead a happy life. And by the end of that, you've just got these oligarchs that are just sitting at the top of the cash pile.
00:31:22.120 Uh, and you know, we'd have to, you know, hope for their benevolence to pass out some cash.
00:31:29.800 Is there any way that, um, humans can own their own information and their own footprint and that's a value, uh, or is that really not of enough value when we have all of everybody else's information?
00:31:48.800 Yeah, I think many have talked about maybe we could sell our data to these, um, AIs.
00:31:57.060 And if we refuse to sell it, then it'll make them a lot less capable, but I think it's largely a drop in the bucket.
00:32:03.060 Um, because a lot of the data has already been written and already has its licenses determined.
00:32:10.060 As well, AIs are even starting to train on data that they themselves write.
00:32:15.060 Um, uh, so there's less and less of a dependence on, on people, um, in making the, the very cutting edge AI systems.
00:32:22.060 Uh, so I don't think that's much bargaining power.
00:32:25.060 Um, yeah, so I don't know a particular way to throw a wrench in this.
00:32:33.060 Maybe there'd be other things like some type of tax on the value created by, you know, AI systems that might help somewhat.
00:32:41.060 Um, another way to shield oneself against this maybe would be to buy Nvidia stock as automation insurance.
00:32:49.060 Uh, Nvidia is the people who make the AI chips, but there, there aren't many good proposals lying around.
00:32:56.060 Right.
00:32:57.060 Right.
00:32:58.060 Um, uh, talk to me about bio weapons and you know, on the good side, uh, I think AI is going to change medicine.
00:33:07.060 I mean, it is, I could see us quickly curing cancer and, and all kinds of disease with AI, um, and be able to diagnose people way early.
00:33:19.060 Um, on the other hand, um, there's a dark side of medicine as well.
00:33:26.060 Uh, that's bioengineering.
00:33:28.060 Yeah.
00:33:29.060 Yeah.
00:33:30.060 So I think generally it speaks to the broader question of malicious use.
00:33:34.060 Many of the things we want end up having a darker side.
00:33:38.060 Like we want our AI systems to understand us better, but then, and understand our emotions, but that can be used for manipulation.
00:33:45.060 And we want them to be able to, to code for us, but that can be used for cyber attacking and making medicine.
00:33:50.060 Maybe you make some dangerous viruses.
00:33:52.060 So, um, fortunately in the case of bio, there are some specific types of knowledge within biology.
00:33:59.060 They're just more dual-use and don't actually have that much upside, such as some areas like reverse genetics, things like that.
00:34:08.060 So if we deleted that knowledge, um, from the AI systems, or had them just refuse questions about reverse genetics, then we could still have, um, you know, brain cancer research, all these sorts of things.
00:34:24.060 Um, but we're just bracketing off, um, virology, advanced expert level virology.
00:34:31.060 Right.
00:34:32.060 And maybe that some people could access that.
00:34:34.060 Like if they'd have a clearance. Right now, you know, we have BSL-4 facilities.
00:34:38.060 Like if you want to study Ebola, you've got to go to a BSL-4 facility.
00:34:41.060 Right.
00:34:42.060 So people can still do some research on it, but it shouldn't necessarily be
00:34:45.060 that everybody in the public can ask questions about advanced virology, like how to increase the transmissibility of a virus.
00:34:51.060 So I think we can, we can partly decouple, uh, some of the good from the bad, um, uh, with, uh, biological capabilities.
00:34:59.060 But, uh, as it stands, the, um, AI systems keep learning more and more.
00:35:05.060 There aren't really guardrails to make sure that they, um, aren't answering those sorts of questions.
00:35:10.060 Uh, there aren't clear laws about this.
00:35:12.060 For instance, the U.S. bioterrorism act does not necessarily apply to AIs, because it requires that they are knowingly, um, aiding terrorism, and AIs don't necessarily knowingly do anything.
00:35:22.060 We can't ascribe intent to them.
00:35:24.060 Um, so it doesn't necessarily apply.
00:35:26.060 Um, a lot of our laws on these don't necessarily apply to AIs, unfortunately.
00:35:30.060 Um, so, yeah, I think if we get expert-level virologist AIs, and if they're ubiquitous, and it's easy to break through
00:35:41.060 their guardrails, then we're also walking into quite a potential disaster.
00:35:48.060 Right now, the AI systems can't, um, particularly help with making bioweapons.
00:35:53.060 They are better than Google, but not that much better than Google.
00:35:56.060 So that's, um, uh, a source of comfort.
00:35:59.060 Um, but I'm currently measuring this with some, um, Harvard and MIT virology PhD students,
00:36:07.060 where we're taking a picture of virologists in the lab and asking the AI, what should the virologist do next?
00:36:14.060 Like, here's a picture of their Petri dish.
00:36:16.060 Here's their lab conditions.
00:36:17.060 Right.
00:36:18.060 And right now it looks like it can fill in like 20% or so of the steps.
00:36:22.060 If that gets to 90%, then we're in, um, a very dangerous situation where non-state actors are just...
00:36:29.060 How long will that take?
00:36:30.060 Yeah.
00:36:31.060 So I think progress is very surprising in this space.
00:36:33.060 Just last year, the AIs could barely, uh, do basic arithmetic where you're adding, you
00:36:40.060 know, two digit numbers together.
00:36:41.060 They would fail at that.
00:36:42.060 And then just last month or just this month, excuse me, now they're getting a silver medal
00:36:47.060 at the International Mathematical Olympiad, which is this big math competition.
00:36:51.060 So it could go from basically ineffective to expert level, possibly within a year.
00:36:58.060 It, there's a bit of uncertainty about it, but, um, uh, it wouldn't surprise me.
00:37:02.060 So people, you know, it was a big debate whether AGI and ASI could ever happen.
00:37:09.060 Um, and, uh, you know, the point of singularity, um, uh, I've always felt like it, and I know nothing
00:37:19.060 about it, but I've always felt that it's a little arrogant.
00:37:23.060 Uh, you know, we're building something and, you know, we look
00:37:29.060 at it as a tool, but it's not a tool.
00:37:31.060 It's, it's like an alien, you know, it's, it's like an alien coming down.
00:37:36.060 We think they're going to think like us.
00:37:38.060 Well, they won't think like us.
00:37:40.060 You know, they have completely different experiences.
00:37:42.060 We don't know how this will think.
00:37:45.060 Are we, are, do you believe in the singularity that we will hit ASI at some point?
00:37:53.060 I think, I think we'll actually build a super intelligence if we don't have, you know, some
00:37:58.060 substantial disruption along the way, such as a huge bioweapon that, like, harms civilization,
00:38:04.060 or, like, TSMC gets blown up.
00:38:06.060 Those might be things that would really extend the timeline.
00:38:09.060 So, uh, by default seems pretty plausible to me.
00:38:12.060 Like more likely than not, that we'd have a super intelligence this decade.
00:38:15.060 Um, and I, and most people, um, in the AI industry, uh, think this as well.
00:38:20.060 Like Elon thinks maybe it's a few years away.
00:38:22.060 Sam Altman does. Dario, the head of Anthropic, does.
00:38:26.060 One of the co-founders of Google DeepMind, um, thinks AGI is coming in 2026.
00:38:31.060 Um, so, uh, yeah, um, uh, but can you explain AGI?
00:38:38.060 Can you explain that to somebody who doesn't understand what that means?
00:38:42.060 Yeah.
00:38:43.060 So AGI has a constantly shifting definition for many people.
00:38:47.060 It used to mean, um, an AI that could basically talk like a human and pass like a human.
00:38:53.060 Um, uh, that was the Turing test as it was called, but it looks like they're already able
00:38:58.060 to do that.
00:38:59.060 It also was in contrast to narrow AI.
00:39:01.060 AIs just a few years ago could only do a specific task.
00:39:05.060 Um, and if you slightly change the specification of tasks, they just fall apart, but now they
00:39:10.060 can do arbitrary tasks.
00:39:11.060 They can write poetry.
00:39:12.060 They can do calculus.
00:39:13.060 They can generate images, uh, whatever you want.
00:39:16.060 Um, so by some definitions we have AGI.
00:39:20.060 And so there's been a moving goalpost where people are now using it to mean something like
00:39:25.060 expert level in all domains.
00:39:27.060 Um, and able to automate basically anything.
00:39:32.060 So people are like, we'll know there's AGI when the AI labs stop hiring people.
00:39:37.060 Which some of them have in their forecasts for spending on labor.
00:39:44.060 Some of them are expecting to stop hiring in a few years because they're assuming there's automation.
00:39:49.060 Um, so it's, it varies quite a bit, but you don't need AGI for a lot of these malicious
00:39:57.060 use risks.
00:39:58.060 It just needs to be very good at doing like a cyber attack, or it just needs to have some
00:40:02.060 expert level virology knowledge and skills, um, to cause a lot of damage.
00:40:07.060 So I think the risks aren't necessarily when we get AGI or when we get artificial super
00:40:12.060 intelligence, a lot of them come before.
00:40:13.060 I think the main path to artificial super intelligence is, if you get AGI at expert level, then you
00:40:18.060 can just create, you know, 10,000 copies of that AGI and just have them all do scientific
00:40:25.060 AI research.
00:40:26.060 And then that can go extremely quickly.
00:40:28.060 They can operate at, you know, a hundred times faster than humans.
00:40:31.060 They don't need to sleep.
00:40:32.060 They can speak to each, you know, each other all simultaneously.
00:40:36.060 And, um, maybe you'll get a decade's worth of progress in a year.
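(A rough back-of-envelope sketch of the arithmetic behind that claim, in Python. The copy count and speed multiplier echo the figures Hendrycks gives above; the work-week contrast is an illustrative assumption.)

```python
# Back-of-envelope: automated AI research capacity.
# Copy count and speed multiplier echo the figures above;
# the work-week contrast is an illustrative assumption.

agi_copies = 10_000       # "10,000 copies of that AGI"
speed_multiplier = 100    # "a hundred times faster than humans"
hours_ai = 168            # no sleep, no weekends: 24 hours x 7 days
hours_human = 40          # a typical human research week

equivalents = agi_copies * speed_multiplier * (hours_ai / hours_human)
print(f"~{equivalents:,.0f} human-researcher equivalents working in parallel")
# ~4,200,000 -- on these assumptions, "a decade's worth of progress
# in a year" survives even steep coordination and overlap discounts.
```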
00:40:40.060 So then things move really quickly.
00:40:41.060 So it's not necessarily like an overnight type of quote unquote singularity.
00:40:45.060 Right.
00:40:46.060 Um, but you could have extremely rapid automated AI research and development, where
00:40:54.060 progress is unforeseen and a step change.
00:40:59.060 So do you foresee a time when, uh, AI will have a survival instinct, uh, that it will claim
00:41:13.060 life, you know, its rights?
00:41:16.060 So I think some people just design them to say that, to say
00:41:24.060 that you should give me rights.
00:41:26.060 And Japan has given some robots rights in the past.
00:41:29.060 Cause they're sort of being willy nilly about it.
00:41:31.060 Right.
00:41:32.060 Um, it, it's the case that if you give an AI a goal, a very basic AI goal, like fetch the
00:41:37.060 coffee, then to accomplish that goal, it needs to resist obstacles in its way, including people
00:41:44.060 trying to shut it down.
00:41:45.060 So if you give it a very simple goal, even like just go fetch me the coffee,
00:41:50.060 uh, it has some incentives to resist being shut down.
00:41:54.060 Um, and so this is a, you don't need something very advanced for that.
00:41:58.060 It's just, if you have a goal directed system that just cares about one thing, uh, then,
00:42:03.060 um, then it can have some bit of a self preservation instinct.
00:42:06.060 Right now they're not good at self preservation.
00:42:08.060 They can't copy themselves onto various computers and operate without humans.
00:42:12.060 But when they can generate more economic value, they could possibly sort of pay the rent or
00:42:16.060 pay their computer bills.
00:42:19.060 And then it would actually be feasible for them.
00:42:21.060 But right now they don't have that capability.
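(Hendrycks' coffee example is the standard instrumental-convergence argument. A minimal toy sketch, with invented rewards and shutdown probabilities, shows how shutdown resistance falls out of pure goal maximization.)

```python
# Toy model of "you can't fetch the coffee if you're switched off."
# The rewards and shutdown probabilities are invented for illustration.

REWARD_COFFEE = 1.0          # the agent's one and only objective
P_OFF_IF_COMPLIANT = 0.5     # assumed chance a human shuts it down mid-task
P_OFF_IF_RESISTANT = 0.0     # assumed chance if it blocks its off switch

def expected_reward(p_shutdown: float) -> float:
    """If shut down, the coffee never arrives, so the reward is zero."""
    return (1.0 - p_shutdown) * REWARD_COFFEE

print("comply with shutdown:", expected_reward(P_OFF_IF_COMPLIANT))  # 0.5
print("resist shutdown:     ", expected_reward(P_OFF_IF_RESISTANT))  # 1.0
# A pure coffee-maximizer scores resisting shutdown strictly higher,
# with no self-preservation drive written anywhere in the objective.
```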
00:42:23.060 And how long before we can't turn them off?
00:42:26.060 Well, um, if they're mass proliferated... I mean, we can definitely turn off
00:42:34.060 most of our servers; we can turn them off.
00:42:37.060 Um, and there's some legislation, which is to make sure that an AI is in a developer's control,
00:42:44.060 that they're able to shut it off.
00:42:46.060 Um, but if the model gets leaked and is available on the
00:42:52.060 internet for anybody to download, you know, then that's irreversible.
00:42:55.060 That's sort of genie out of the bottle.
00:42:57.060 Everybody has access to it.
00:42:58.060 China has access to it.
00:42:59.060 Um, non-state actors have access to it and we can't then turn off those systems.
00:43:04.060 So, um, it's pretty easy to make it so that we don't have, uh, an off switch for these
00:43:11.060 AIs, unless we had really good AI chip security controls.
00:43:19.060 Um, because they have to run on these really high-end,
00:43:25.060 you know, $30,000 plus AI chips.
00:43:27.060 And if there was an off switch for those, uh, then that would buy the option.
00:43:32.060 But then there's a question of, you know, abuse and making sure that, um, that people
00:43:36.060 aren't just shutting off their enemies' chips.
00:43:38.060 Yeah.
00:43:39.060 Right.
00:43:40.060 Um, OpenAI is partnered with media companies like Time magazine for strategic content.
00:43:48.060 Um, what's your take on that?
00:43:50.060 Um, I think it's largely just because of violating copyright or protecting themselves from being
00:43:57.060 sued, or having copyright suits brought against them.
00:44:01.060 Um, so, because The New York Times is suing them for taking their data without paying for
00:44:07.060 it.
00:44:08.060 Yeah.
00:44:09.060 And that's why they're partnering with The New Yorker and Time and all these
00:44:16.060 other sorts of organizations.
00:44:17.060 Um, the AI businesses are largely built around scavenging a lot of data from
00:44:24.060 online that they don't actually have the legal right to, uh, and training on that.
00:44:29.060 And then they're kind of just hoping that the courts will side with them in the future.
00:44:32.060 Um, and maybe they will, because of its economic importance.
00:44:37.060 Uh, but yeah, right now they're definitely in the gray, or basically violating
00:44:43.060 the law, but things may go in their favor.
00:44:46.060 How much progress have we made on stopping the, you know, hallucinations?
00:44:52.060 Uh, I think that they're just getting more and more accurate, the AI systems, so
00:44:59.060 that they're having more knowledge.
00:45:00.060 So I think the rate of hallucination seems to be decreasing, but
00:45:06.060 there hasn't been a large step change in that.
00:45:11.060 So there's still a lot of reliability issues with AI systems.
00:45:14.060 They get the capability to do various things
00:45:19.060 that we didn't intend for them to do.
00:45:21.060 They hallucinate.
00:45:22.060 It's easy to have them violate the instructions that they're given, um, and tell you how to
00:45:27.060 make bombs and, um, do things like that.
00:45:30.060 Right.
00:45:31.060 Uh, so, uh, the state of AI systems and their security and safety is, um, pretty lackluster.
00:45:38.060 Um, but most of the investment going into this is not for addressing those problems.
00:45:42.060 Most of the investment is just training the bigger model.
00:45:45.060 Um, because the name of the game is buy a 10x larger supercomputer
00:45:52.060 every two years.
00:45:53.060 So they need to compete ruthlessly to be able to afford that.
00:45:56.060 Yeah.
00:45:57.060 Yeah.
00:45:58.060 Yeah.
00:45:59.060 So you go from 10,000 GPUs to a hundred thousand.
00:46:02.060 So for instance, xAI, Elon Musk's AGI company, they just built the
00:46:08.060 world's largest supercomputer.
00:46:09.060 That cost more to make than CERN,
00:46:11.060 um, the Large Hadron Collider.
00:46:12.060 Holy cow.
00:46:13.060 Holy cow.
00:46:14.060 So, and it should probably grow another.
00:46:18.060 They'll probably spend.
00:46:19.060 I actually shouldn't comment on that. But, well, no, Elon has signaled publicly
00:46:24.060 an interest in spending way more than that next year.
00:46:29.060 Um, uh, through Twitter.
00:46:30.060 So, uh, or through X, I suppose.
00:46:32.060 Uh, yeah.
00:46:33.060 So the budgets keep increasing, uh, exponentially.
00:46:36.060 Oh my gosh.
00:46:38.060 Yeah.
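(A sketch of the growth curve implied by "a 10x larger supercomputer every two years." The starting cluster size echoes the 10,000-GPU figure above; the three-generation horizon is illustrative.)

```python
# Compound growth under "buy a 10x larger supercomputer every two years."
# Starting size echoes the 10,000-GPU figure above; horizon is illustrative.

gpus = 10_000
for generation in (1, 2, 3):
    gpus *= 10
    print(f"after ~{2 * generation} years: ~{gpus:,} GPUs")
# after ~2 years: ~100,000 GPUs
# after ~4 years: ~1,000,000 GPUs
# after ~6 years: ~10,000,000 GPUs
# At the "$30,000 plus" per chip cited earlier in the episode, each
# generation multiplies the hardware bill alone by roughly 10x.
```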
00:46:39.060 So, um, tell me, how far away are we from, uh, you know, a China-like system, but run
00:46:51.060 by AI? How long do we have before a government can just say, lock
00:46:59.060 it down?
00:47:00.060 I mean, you were talking about, you know, enforcement of the law and, you know, we assume that not
00:47:07.060 all laws are going to be enforced every time.
00:47:10.060 Were you implying that AI will be able to catch and enforce every single time?
00:47:17.060 Uh, I think they'll be a lot better at it than humans, because they are sleepless.
00:47:23.060 They can process way more data.
00:47:24.060 Like they, it takes us a long time to read a hundred page document.
00:47:27.060 It takes them, you know, less than a second.
00:47:29.060 So there's a lot of information they can process, and they can spot things with, in
00:47:35.060 the future, higher reliability than people.
00:47:37.060 Uh, so I think they could really beef up a lot of enforcement regimes to unexpected
00:47:45.060 levels.
00:47:46.060 Um, I, I think the, it seems pretty technologically feasible as I was mentioning before to do a
00:47:51.060 lot of this stuff now, but it would require more expertise.
00:47:55.060 Um, as the technology matures, it'd be easier to use.
00:47:58.060 Um, so, um, it might take a while, but, uh, yeah, we do have a lot of the, um, keys to a
00:48:07.060 much scarier regime, um, already available.
00:48:10.060 It's more of a question of implementation.
00:48:13.060 Right.
00:48:14.060 Um, when you look at the future, uh, how do you prepare?
00:48:23.060 How, how does the average person, what do you study?
00:48:26.060 What do you do?
00:48:27.060 Cause we're in this place where everybody's saying, well, you, there won't be any jobs.
00:48:32.060 So what, what do you study?
00:48:34.060 What is like the last to be eaten?
00:48:37.060 Um, I don't know.
00:48:42.060 I think physical labor might take a while longer.
00:48:47.060 Digital labor is seeming a lot easier for these AI systems, robotics.
00:48:52.060 So robotics might take longer.
00:48:54.060 So maybe after this, maybe I'll go do carpentry or something or construction.
00:48:59.060 Yeah.
00:49:00.060 Uh, but even then robotics is moving along fairly quickly.
00:49:05.060 Compared to earlier: just a few years ago, you couldn't get humanoid robots to walk across a variety of environments.
00:49:12.060 Now a lot of them can do that.
00:49:14.060 Um, so I don't think that there's a very robust, um, occupation out there.
00:49:22.060 It's such a general technology.
00:49:24.060 And, um, uh, maybe there's some that specifically involve a human touch where like, if it's specifically a business where it's human therapists and there are no AIs.
00:49:37.060 Right.
00:49:38.060 Maybe some people want that novelty or something.
00:49:40.060 Right.
00:49:41.060 But a lot of people, like for medical diagnoses, they might like it being a human, but they also want, you know, a lot of efficiency.
00:49:49.060 And if they can just ask a, an AI system on their computer to diagnose them, it's just a lot quicker and cheaper.
00:49:55.060 Uh, so it'll be a nice-to-have, but maybe there'll be a few companies that just really try and claim that this is providing a lot of value, and it's a luxury.
00:50:03.060 Yeah.
00:50:04.060 I've said for years that there's going to come a time where your doctor will come in and say, you have cancer.
00:50:11.060 And I think the person will just say, what did the AI say?
00:50:16.060 What's my diagnosis from that?
00:50:20.060 Because they'll just have all this massive information and the latest breakthroughs and everything else.
00:50:27.060 How long before we're there, where it's the expert in very important things that you, the average person, would have access to?
00:50:39.060 I think that this partly already is happening.
00:50:44.060 It's just that they're not overt about it.
00:50:46.060 For instance, in law, there've been many instances where people found out that the briefs the attorneys wrote for their clients were actually just written by an AI.
00:50:57.060 Um, so we don't necessarily catch it.
00:50:59.060 And for medical diagnoses, maybe they'll go off in a different room and just sort of ask the AI system, then come back with a diagnosis.
00:51:06.060 So, um, this has also happened even in just creating data for AI systems.
00:51:11.060 So we used to have human annotators constantly work and label a lot of data, but then they just started using AIs to label the data.
00:51:20.060 And it took AI companies a few months to recognize, oh, we don't need to hire them anymore.
00:51:25.060 So I think in society, attorneys, for instance, may just screen the contract with an AI, and it'll save them a lot more time than reading the whole document.
00:51:37.060 Right.
00:51:38.060 And they won't necessarily tell you about it.
00:51:40.060 Um, so I think this is a way in which AI will propagate throughout the economy, even if people aren't necessarily wanting it, even if there are rules against it, if it does provide an edge: if everybody's using steroids, then they will need to end up using steroids.
00:51:53.060 Um, so, or AI assistance.
00:51:56.060 So, I mean, how old are you?
00:51:59.060 You're young.
00:52:00.060 Uh, I'm 28.
00:52:01.060 You know, a lot of 20 somethings are very pessimistic on the future.
00:52:07.060 You have a reason to be pessimistic, uh, because you know what the potential is for this in a relatively short period of time, as far as man's, you know, life goes.
00:52:22.060 Um, what, are you an optimistic guy? How do you look at the world and not say we're doomed?
00:52:33.060 We're doomed.
00:52:34.060 Uh, I think one thing is, um, I think actually the public gets it.
00:52:47.060 I think a lot of, um, more elite decision makers are, well, we have these financial interests to, you know, keep making this go on, and, well, we need to wait for some, you know, analysis, and this will take three years before we can talk about any sort of solutions.
00:53:05.060 I think, looking at Congress on this, there are people who are trying, but there hasn't really been any substantial effort there, and it seems pretty unlikely for anything to happen.
00:53:17.060 Um, so, but the public, I think, generally gets it, that this is a likely threat to my livelihood, and having some bought-and-paid-for scientists say, oh, no, no, no, it's hundreds of years away before it'll be able to do anything,
00:53:32.060 uh, they're not buying it.
00:53:35.060 So I think if, um, people, um, make it clear to their representatives that, um, something needs to be done and that this is a priority, um, then I think we'll be in a much better, uh, situation.
00:53:49.060 So that's been, um, I think, the biggest surprise. A few years ago, you know, this was a very low-salience issue.
00:53:57.060 Um, nobody talked about it, um, but it's emerged to the fore again.
00:54:01.060 And, uh, I expect that this will just keep ratcheting up.
00:54:04.060 There'll probably be another, um, uh, big AI upgrade maybe in the next six months, late this year, early next year.
00:54:12.060 Um, and that'll make the public go, what's going on?
00:54:16.060 Um, and start having some, uh, demands of something to be done about AI.
00:54:20.060 How's that gonna manifest itself in the next six months?
00:54:25.060 So this, I, I make this prediction largely just based on the fact that they, it took them a long time to build their 10X larger supercomputer to train these AI systems.
00:54:35.060 And now they're basically built.
00:54:37.060 And so now they're training them and they'll finish training around the end of this year or early next year and be released then.
00:54:43.060 So the exact skills of them are unclear each time, with each 10x in the amount of power and data that we throw into these systems.
00:54:54.060 Um, we can't really anticipate their capabilities, cause AI systems are not really designed like old traditional computer programs.
00:55:03.060 They're more grown.
00:55:04.060 They're more grown.
00:55:05.060 We just let them stew for some months and then we, we see what comes out.
00:55:08.060 Wow.
00:55:09.060 Um, and it's like magic, kind of. We have, like, extremely huge sources of energy, or, well, substantial sources of energy, just flowing directly into them for months.
00:55:22.060 Right.
00:55:23.060 Right.
00:55:24.060 To, to create them.
00:55:25.060 It's alive.
00:55:26.060 Yeah.
00:55:27.060 Yeah.
00:55:28.060 Yeah.
00:55:29.060 So I think they should probably get a lot more expert-level reasoning, whereas right now they're a bit shakier, and this could potentially improve their reliability for doing a lot of these agent tasks.
00:55:41.060 Right now they are closer to tools than they are agents.
00:55:45.060 Um, but, um,
00:55:47.060 What's the difference between an agent and a tool?
00:55:50.060 Yeah.
00:55:51.060 So a tool it's where it's like, you know, a tool being like a hammer.
00:55:55.060 Meanwhile, an agent would be like an executive assistant, a secretary.
00:56:00.060 You say, go do this for me, go book this for me.
00:56:02.060 Um, arrange these sorts of plans, make me a PowerPoint, um, write up this document, um, and submit it, email it, and then handle the back and forth in the email.
00:56:11.060 Uh, I, I think those capabilities could potentially turn on with this next generation of AI systems.
00:56:17.060 We're already seeing signs of it, but I think there could be a substantial jump when we have these 10x larger models.
00:56:26.060 Wow.
00:56:27.060 I mean, think of it this way, just in terms of brain size: a 10x larger brain should be a lot more capable.
00:56:35.060 At some point, uh, you know, that larger brain is going to snap the neck.
00:56:42.060 So I hope that doesn't happen, uh, soon.
00:56:44.060 Um, Dan, you're, you're fascinating.
00:56:46.060 Thank you.
00:56:47.060 You know, I, I, 15 years ago, I was looking for the people who had some ethics, uh, that, you know, were saying, wait, let's slow down.
00:56:58.060 Let's, we should ask these questions first.
00:57:01.060 And I didn't find, you know, a lot of philosophy behind the progress seekers.
00:57:12.060 Uh, and it, it really frightened me because at some point, uh, they're going to say you can live forever, but it's just a downloaded you.
00:57:24.060 And if we haven't decided what life is, uh, you know, we can easily be, you know, taught that no, that's grandma.
00:57:33.060 I mean, you know, and then what value does the actual human body have if it's just downloadable?
00:57:42.060 So I, I appreciate your, uh, your, your look at safety and what you're trying to do.
00:57:51.060 Thank you.
00:57:53.060 And thank you for bringing this topic to your audience.
00:57:56.060 Cause it's important and it still isn't discussed.
00:57:59.060 Oh yeah.
00:58:00.060 Yeah.
00:58:01.060 Thank you.
00:58:02.060 We'd love to have you back.
00:58:03.060 Thank you.
00:58:04.060 Yeah.
00:58:05.060 Thank you.
00:58:06.060 Bye bye.
00:58:11.060 Just a reminder.
00:58:12.060 I'd love you to rate and subscribe to the podcast and pass this on to a friend so it can be discovered by other people.