Making Sense - Sam Harris - August 12, 2024


#379 — Regulating Artificial Intelligence


Episode Stats

Length

34 minutes

Words per Minute

161.4

Word Count

5,526

Sentence Count

282


Summary

Sam Harris speaks with California State Senator Scott Wiener and AI pioneer Yoshua Bengio about SB 1047, a California bill that aims to reduce the risks of frontier AI models, models bigger than any that currently exist, by requiring reasonable safety testing before they are released. They discuss where Bengio sits on the spectrum of concern about AI risk, why equally well-informed researchers disagree about the timeline to human-level AI, what the bill would and would not require of companies, the liability it creates, how it treats open-source models, and why this effort is happening at the state rather than the federal level. Making Sense is ad-free and made possible entirely by listener support; if you enjoy the podcast, please consider becoming a subscriber at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if
00:00:11.640 you're hearing this, you're not currently on our subscriber feed, and will only be
00:00:15.580 hearing the first part of this conversation. In order to access full episodes of the Making
00:00:19.840 Sense Podcast, you'll need to subscribe at samharris.org. There you'll also find our
00:00:24.960 scholarship program, where we offer free accounts to anyone who can't afford one.
00:00:28.340 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:32.860 of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
00:00:44.700 Well, I've been on the road. I just did a short retreat with my friends Joseph Goldstein and Dan
00:00:51.080 Harris, where we did some meditation but also recorded some conversations. Those will eventually
00:00:57.120 be available over at Waking Up, and I am still traveling, so I'll not be doing a long housekeeping
00:01:02.680 here. I am resisting the tractor beam pull of politics at the moment. No doubt it will soon
00:01:10.060 be all-encompassing. But today I am focused on artificial intelligence and its attendant risks
00:01:18.580 and the growing effort to regulate it, which remains controversial. Today I'm speaking with Scott
00:01:25.980 Wiener and Yoshua Bengio. Scott is a member of the California State Senate, and he has introduced a
00:01:32.580 bill, SB 1047, which aims to reduce the risks of the frontier models of AI, models bigger than any which
00:01:42.640 currently exist. And if it passes, it will be an important piece of legislation. The bill has already
00:01:50.120 passed the California Senate, and it's approaching an assembly floor vote later this month. And joining
00:01:56.240 Scott is Yoshua Bengio, who's one of the leading lights of artificial intelligence. He's known for
00:02:02.920 breakthroughs in deep learning and other relevant technology. He won the Turing Award in 2018,
00:02:09.740 which has been described as the Nobel Prize for computer science. And he's a full professor
00:02:15.900 at the University of Montreal. He's also a fellow of the Royal Society of London and Canada, a knight of
00:02:23.940 the Legion of Honor of France, and has other distinctions too numerous to name here. One thing he is not
00:02:30.780 is someone who is uninformed about the current state of the technology, as well as the prospects of making
00:02:37.260 surprising progress toward artificial general intelligence. So we talk about AI risk, the
00:02:45.900 strange assumptions of certain people, some of whom I've spoken with on this podcast, who seem to think
00:02:52.660 there's really no serious risk to worry about, and who view any concept of regulation as premature and
00:03:01.160 economically injurious. Anyway, fascinating topic, all too consequential. And now I bring you Scott Wiener and
00:03:09.520 Yoshua Bengio.
00:03:15.840 I am here with Scott Wiener and Yoshua Bengio. Scott, Yoshua, thanks for joining me.
00:03:21.800 Pleasure.
00:03:22.560 Thanks for having us.
00:03:23.440 So I will have introduced you properly in my housekeeping, but perhaps you can each tell me,
00:03:30.780 I'll start with you, Scott. How have you come to focus on the issue of AI safety, which is what
00:03:37.180 we're going to talk about?
00:03:38.820 Sure. And again, thanks for having us today and for talking about this issue. So I have the great
00:03:45.360 honor and privilege of representing the great city of San Francisco, which is really the beating heart of
00:03:53.260 AI innovation, no offense to other parts of the world. And I'm really proud of that. And so I am,
00:04:00.240 you know, immersed in the tech world in terms of just people who are around me in the community,
00:04:06.560 ranging from high level executives at large tech companies to startup founders to just frontline
00:04:13.640 technologists, academics, investors, just the entire gamut. And about a year and a half,
00:04:23.060 or so ago, folks in the community in the AI space started talking to me about the issue of the
00:04:30.700 safety of large language models and started looking into it more, had a series of dinners and salons
00:04:39.660 and meetings, started reaching out to a number of people and realized that this was an area that we
00:04:44.960 should be addressing. And that's how it all started.
00:04:48.880 Nice. Nice. Yoshua?
00:04:50.740 Yeah. After four decades of research in AI and contributing to the exciting advances in deep
00:05:05.040 learning that have produced generative AI as we see it today, I came to realize with the advent of
00:05:05.040 ChatGPT that things were moving a lot faster than I and almost everyone in the field anticipated.
00:05:11.580 And I started thinking about what this could mean for humans, for society, for democracy,
00:05:17.260 if we continued on that trajectory towards human level or AGI. And I thought, well,
00:05:26.040 society is not prepared for that. And we need to start thinking right now about mitigating the risks.
00:05:31.420 And we're going to talk about a bill that is in the process of finding its way toward possibly
00:05:39.200 becoming law, SB 1047. But before we jump into the details of regulating AI and why we might want
00:05:48.600 to do that and how, Yoshua, I thought you and I were talking offline about kind of where you stand
00:05:54.860 on this continuum of concern. I mean, you're one of the leaders in this field. And I've spoken to many
00:05:59.980 people, you know, both inside the field and, you know, at some part of its periphery who have
00:06:06.020 focused on this question of AI safety. And there's a wide range of attitudes here. And, you know, I
00:06:11.700 would say on the far side of freaked out, you have someone like Eliezer Yudkowsky, who's been on the
00:06:18.520 podcast before. Also Nick, someone like Nick Bostrom, whose book Superintelligence was very influential.
00:06:24.120 And I've spoken to him as well. And then on the far side of Insouciant and, to my eye, utterly in
00:06:32.400 denial that there's any possible downside here, you have people like Rodney Brooks, the roboticist,
00:06:39.280 and Mark Andreessen, the venture capitalist. Rodney hasn't actually been on the podcast,
00:06:44.380 but I debated him at an event and Mark has been here. And then in the middle, you have someone like,
00:06:51.040 not quite in the middle, but at a place that has always struck me as very rational
00:06:58.200 and still worried is someone like Stuart Russell, the computer scientist at Berkeley. So I'm wondering,
00:07:05.100 can you locate yourself on this continuum or are you in some other spot in the space of all possible
00:07:11.320 attitudes? So in a way, I'm looking at this scene and no one can honestly say what scenario is going to
00:07:21.960 unfold. The scientists among themselves disagree, but there are enough people who believe that the
00:07:28.900 risk is potentially catastrophic and could be just a few years, but it could also equally be decades. We don't
00:07:35.980 know. So there's enough uncertainty and enough potential level of harm that the rational thing
00:07:41.860 to do is to consider all of these scenarios and then act accordingly, according to the
00:07:49.760 precautionary principle. In other words, well, we need to be ready in case it happens in, so, I don't know,
00:07:56.520 2030 that we get human level or worse, I mean, even superhuman. And we need to be prepared in case
00:08:02.540 we don't handle it well. And companies haven't found a way to make sure AI cannot be misused by
00:08:11.700 bad actors in catastrophic ways, or companies haven't figured out how to control an AI so that
00:08:17.380 it doesn't turn against humans. So all of these catastrophic possibilities, right now we don't
00:08:21.520 have the answers. And so the sort of rational thing to do here is to work to mitigate those risks.
00:08:29.700 So that means understanding those risks better rather than denying them, which is not going to
00:08:34.800 help to find solutions, and putting in place the right protection for the public in case these
00:08:41.260 potentially catastrophic things are, you know, more dangerous or shorter term than many people
00:08:48.200 might expect. So I'm really in the agnostic camp, but rational, meaning we have to actually do
00:08:55.000 things in order to avoid bad scenarios. Right. And would you acknowledge that there are two,
00:09:01.560 broadly speaking, two levels of risk here? There's the near-term risk of, really, that we,
00:09:06.960 I think we see already, even with so-called narrow AI, where it's just the human misuse of increasingly
00:09:13.160 powerful tools, whether it's just to derange our politics with misinformation, or, you know,
00:09:18.740 cyber warfare, or, you know, any other way of any other malicious use of increasingly powerful AI.
00:09:24.480 And then we tip over at some point, provided we just continue to make progress, into what you
00:09:30.360 seem to be referencing, which is more often thought of as the problem of misaligned AI, you know,
00:09:36.940 the so-called alignment problem, where we could build something that is truly general in intelligence,
00:09:41.140 and more powerful than ourselves, you know, cognitively, and yet we could build it in such
00:09:47.100 a way that it would be unaligned with our, you know, ongoing, happy cohabitation with it.
00:09:53.040 Yes. There are currently no defensible scientific arguments that either of these is impossible. So
00:10:01.340 we, and I want to make a little correction to the sort of risks that you described, because even if we
00:10:07.940 find a way to create AI that is controllable, aligned, and so on, it could still become dangerous
00:10:14.020 in the wrong hands. First of all, regarding these safety protections: if you control the system,
00:10:19.580 you can just remove those safety protections so that the AI will do bad things for you, right? Because
00:10:23.980 you have to understand how AI works. AI is about how to achieve goals, or how to respond to queries
00:10:31.060 using knowledge and reasoning. But really, who decides on the goals? That normally is the
00:10:37.920 user. And if we have safety protections, maybe we can, like, filter goals that are not acceptable.
00:10:44.280 But of course, the humans could still do bad things. So even if we go to superhuman AI,
00:10:49.440 if it's in the wrong hands, we could end up with a, you know, world dictatorship, right? And that's
00:10:57.760 very bad. Maybe not as bad as, you know, human extinction, but it's very, very bad. Clearly,
00:11:03.920 we want to make sure we don't let anything close to that happen.
00:11:09.000 Yoshua, one more question for you, and then I'll pivot to the bill itself. But what do you make of
00:11:14.580 people who have a similar degree of knowledge to your own, right? People who are in the field,
00:11:21.960 out of whom you get more or less nothing but happy talk and dismissals about these concerns? I mean,
00:11:27.760 there are people like, perhaps I'm not being entirely fair to everyone here, but, you know,
00:11:32.580 someone like Yann LeCun or even Geoffrey Hinton, right? I mean, he's had an epiphany which has
00:11:39.340 caused him to be quite worried and valuable on this topic in public. But for the longest time,
00:11:45.340 you know, here we have the, you know, one of the true patriarchs of the field kind of moving along and
00:11:49.980 making progress and not seeming to anticipate that he might wake up one day and realize, well, wait a minute,
00:11:56.400 we're on course to build something smarter than we are. This entails the possibility of risk.
00:12:03.260 How is it that there's a diversity of opinion among well-informed people about whether there are any
00:12:09.900 significant risks at all to worry about?
00:12:12.640 So it's interesting that you're talking about Geoff Hinton and Yann LeCun, because the three of us
00:12:16.440 are good friends. And of course, Geoff and I kind of agree about the risks and Yann doesn't.
00:12:24.540 So Geoff and I independently shifted our views, like really pivoted, around
00:12:32.340 January or February 2023, a few months after ChatGPT became available. And our views before that were
00:12:40.560 that, oh, human level intelligence is something so far into the future that, you know, we could reap
00:12:45.860 benefits of AI well before we got there. But what happened with ChatGPT is we realized that,
00:12:51.260 well, things are moving a lot faster than we thought. We now have machines that essentially
00:12:56.700 pass what is called the Turing test. In other words, they master language as well as humans.
00:13:00.780 They can pass for humans in a dialogue. That's what the Turing test is. And so our timeline
00:13:07.060 suddenly shifted down to anything from a few years to a few decades, whereas previously,
00:13:12.920 we thought it would be more like decades or centuries. So that's really the main reason we shifted.
00:13:18.120 So the crucial variable was the change in expectation around the time horizon.
00:13:23.560 Yes. Yes. And in fact, if you dig and try to understand why most scientists disagree on the
00:13:31.760 risk, it's because they disagree on the timeline. But my position is we don't know what the timeline
00:13:37.280 is. Okay. But you also asked me about, like, why is it Yann LeCun, who's basically at the same
00:13:42.080 level of proficiency in the field as Geoff and I? Why is it that he continues to think that we shouldn't
00:13:48.940 worry about risks? It's an interesting question. I think I wrote about it in my blog. By the way,
00:13:55.200 my latest blog entry goes through pretty much all of the criticisms I've read about taking risks seriously
00:14:03.580 and rationally tries to explain, you know, why. Because of our uncertainty, the scientific
00:14:10.620 lack of knowledge, we really need to pay attention. But I think for many people, there's all kinds of
00:14:18.960 psychological biases going on. Imagine you've been working on something all your life, and suddenly
00:14:24.780 somebody tells you that it actually could be bad for democracy or humanity. Well, it isn't something
00:14:31.000 you really want to hear. Or if you're rich because of, you know, working in a field that would
00:14:37.320 eventually bring really dangerous things to society. Well, maybe this is not something you want to hear
00:14:42.480 either. So you prefer to go to something more comfortable, like the belief that it's all going
00:14:48.040 to be all right. For example, Yann actually agrees with almost everything I'm saying.
00:14:52.820 He just thinks that we'll find a solution beforehand. And so don't worry. Well, I would like that to
00:14:59.880 happen. But I think we need to proactively make sure, you know, we do the right thing, we provide
00:15:05.020 the right incentives to companies to do the right research, and so on. So yeah, it's complicated.
00:15:11.280 Wouldn't he admit that a completely unregulated arms race among all parties is not the right
00:15:17.780 system of incentives by which to find a solution?
00:15:20.680 No, he wouldn't. He thinks that it's better to let everybody have access to very powerful AI and
00:15:27.500 there will be more good AIs than bad AIs. But that isn't rational either. You know, in a conflict
00:15:33.600 situation, you have attackers and defenders. And depending on the attack threat, it could be that
00:15:38.940 there is an advantage to the attacker, or there could be an advantage to the defender. In the case of
00:15:43.640 AI, it depends on which technology the AI, you know, is used for to attack, say, democracies.
00:15:49.380 An example is cyber attacks. Cyber attacks, it's hard for the defender, because you have to plug all
00:15:55.060 the holes, whereas the attacker just needs to find one hole. Or bioweapons, like, you know, if an
00:16:00.840 attacker uses AI to design a bioweapon, you know, the attacker can work for months to design the
00:16:05.500 bioweapon, and then they release it in many places in the world. And then people start dying, and it's
00:16:10.740 going to take months at least to find a cure, during which people are dying, right? So it's not
00:16:16.540 symmetrical. It's not because you have more, like, good people controlling AIs than bad ones that
00:16:22.140 the world is protected. It just doesn't work like this.
00:16:25.320 Yeah, yeah. Okay, so let's dive into the possibilities of controlling the chaos. Scott,
00:16:31.140 what is this bill that has various technologists worried?
00:16:34.820 Yeah, so Senate Bill 1047 has a fairly basic premise that if you are training and releasing
00:16:44.780 an incredibly powerful model, which we define as exceeding 10 to the 26th FLOPs, and we've also added in
00:16:51.940 that you've spent at least $100 million in training the model, and that'll go up with inflation, that if
00:16:59.080 you are training and releasing a model of that scale, of that magnitude, of that power, you must perform
00:17:04.580 reasonable safety evaluations ahead of time. And if your safety evaluations show a significant risk
00:17:12.980 of catastrophic harm, take reasonable steps to mitigate the risk. This is not about eliminating
00:17:19.980 risk. Life is about risk. It's about trying to get ahead of the risk instead of saying, well,
00:17:26.400 let's wait and see. And after something catastrophic happens, then we'll figure it out. That's sort of
00:17:32.000 the human way of things at times. Let's get ahead of it. And what's interesting here is that all of
00:17:39.940 the large labs have already committed to doing this testing. All of their CEOs have gone to the White
00:17:47.960 House, to Congress, most recently to Seoul, South Korea, and have sworn up and down that they either
00:17:54.280 are doing or they will be doing this safety testing. And the bill doesn't micromanage what
00:18:00.660 the safety testing will be. It provides flexibility. It's light touch. But now we have people coming
00:18:05.940 forward and saying, oh, wait, we know we committed to it or they committed to it, but don't actually
00:18:12.240 require it. And that doesn't make sense to me. And so that's the heart of the bill. There are some
00:18:18.620 other aspects, like if a model is still in your possession, you have to be able to shut it down,
00:18:23.980 and a few things like that. And maybe I can make a connection here between that answer and the
00:18:31.580 previous question. So there are people, as you said, Sam, who don't believe that there is any risk. And of
00:18:39.360 course, if we only went by, you know, their choices, they wouldn't do the right thing in terms of
00:18:46.160 safety. So it's not enough to have these commitments. We need to make sure that everyone
00:18:51.440 actually does it. And that is why you need laws. Voluntary commitments are great. And they're
00:18:58.960 already mostly committed, but we need to make sure it actually
00:19:03.460 happens. Yeah. And I agree with that. And we've seen in a lot of industries that voluntary commitments
00:19:08.980 only get you so far. And even if all of the current leadership of the labs are fully and
00:19:16.100 deeply committed to doing this, and we, I take them at their, at their word and that they're
00:19:21.760 acting in good faith, we have no idea who's going to be running the, these labs or, or other labs that
00:19:28.820 don't exist yet, two, three, five years from now and what the pressures are going to be. The other
00:19:36.080 thing I just want to add is that we've had some critics of the bill who have
00:19:41.840 really engaged with us in good faith. And we're so appreciative of that. There are other critics
00:19:46.980 where there's a lot of whataboutism. And one of them is, well, what about other risks and
00:19:54.540 other technologies that cause risk? Okay, yes, there are other technologies that could cause risk, but we're
00:20:00.260 focused on this very real and tangible potential risk. And the other argument that they sometimes make
00:20:07.720 and I really find this a little bit offensive, is dismissing anyone who raises any
00:20:17.520 question or concern about safety, saying you're a doomer, you're part of a
00:20:23.380 cult, you know, this is cult-like behavior, you're an
00:20:30.340 extremist. And by the way, none of these risks are real. It's all science fiction. It's all made up.
00:20:37.720 And my response to them is, well, first of all, not everyone's a doomer just because you care about
00:20:43.320 safety or want to maybe take some action around safety. But if you really believe that these risks
00:20:49.440 are fabricated, made up, just pure science fiction, then why are you concerned about the bill? Because
00:20:56.060 if the risks are really fake, if you really believe that, then you should also believe
00:21:02.480 that the bill won't cover anything, because none of these harms will ever happen. And it's all
00:21:07.280 science fiction. And so the fact that they are fighting so hard against the bill, led by a16z,
00:21:14.020 the fact that they're fighting so hard against it sort of really contradicts their claim that they
00:21:19.080 believe that these risks are science fiction. So I can imagine that someone over at Andreessen Horowitz
00:21:25.880 would say that, first of all, it's going to pose an economic and legal burden to any company
00:21:33.200 in this business. And there's going to be capital flight out of California. I mean,
00:21:37.340 people will just do this work elsewhere because California has become an even more hostile place
00:21:42.880 for business now. So, I mean, if we were going to give a charitable
00:21:49.700 gloss of their fears, what's the worst-case scenario from their point of view
00:21:55.740 that is actually honestly engaging with what you intend, right? So what kind of
00:22:01.820 lawsuits could bedevil a company that produces an AI that does something bad? What
00:22:09.240 kind of liability are you trying to expose these companies to? And, you know, how expensive
00:22:15.380 is it in time or resources to do the kind of safety testing you envision, Yoshua?
00:22:22.580 Well, they're already doing it. I mean, at least many of them are since they committed to the Biden
00:22:29.540 executive order, and they have been doing these tests since then, or even before in some cases. And it's,
00:22:35.940 it's not hugely expensive. So in terms of the liability, I think, if they do sort of reasonable
00:22:44.160 tests that, you know, one would expect from somebody who knows about the state of the art,
00:22:49.480 then they shouldn't worry too much about liability. Well, so actually, before we get to liability,
00:22:55.380 liability seems to me to be very hard to characterize in advance. I can understand worrying
00:23:02.260 about that. But just on the safety testing, have any of these companies that have agreed to persist in
00:23:07.860 testing, disclosed, you know, what percent of their budget is getting absorbed by AI safety concerns?
00:23:15.700 I mean, are we talking about a, you know, a 5% spend or a 40% spend or what, what is it? Do we know?
00:23:21.180 We think it's very small. It's not even
00:23:27.840 5%; it's a few percentage points. And so we think it's about two to three percent as far as we can tell.
00:23:35.360 Okay. And so it's, you know, and again, this is the large labs. It's when you're spending at least
00:23:42.060 a hundred million dollars to train. This is not about startups. I understand there are startups
00:23:48.260 that have concerns because they want to make sure they have access to LAMA and other open source
00:23:53.380 models. But in terms of who's going to have to comply with this, it's not startups. It is large labs
00:23:59.100 that are spending massive amounts of money to train these models. And they are absolutely able to
00:24:05.220 do it. And they have all said that they either are doing it or are committing to do it. So,
00:24:11.920 you know, it's really interesting to me that you have the large labs saying
00:24:15.940 that they're committing to doing it or already doing it. And then you have some investors,
00:24:21.240 most notably a16z, saying, oh, it's all made up, the safety testing is not real, it's impossible.
00:24:25.880 And so it's like, okay, well, which is it? And they say that they're already doing it.
00:24:32.440 Okay. So let's say they do all of this good faith safety testing and yet safety testing is not
00:24:39.020 perfect. And one of these models, let's say it's ChatGPT 5, gets used to do something nefarious.
00:24:47.300 You know, somebody weaponizes it against our energy grid and it just turns out the lights in half of
00:24:53.860 America, say. And when all the costs of that power outage are tallied, it's plausible that that would
00:25:02.160 run to the tens of billions of dollars and there'd be many deaths, right? I mean, what are
00:25:06.640 the consequences of turning out the lights in every hospital in every major
00:25:11.200 city in half of America for 48 hours? Somebody is going to die, right? So what are you imagining on
00:25:17.660 the liability front? Does all of that trickle up to Sam Altman in his house in Napa,
00:25:23.840 drinking white wine on a summer afternoon? What are we picturing here?
00:25:31.100 Yeah. So, well, under this bill, if they've done what the bill requires, which is to perform the
00:25:37.620 safety evaluations and so forth, if they do that, then they're not liable under this bill. Again,
00:25:45.920 it's not about eliminating risk. So they, companies, labs can protect themselves from the,
00:25:52.380 from the very focused liability under this bill, this bill, which first of all, is not dependent
00:25:58.800 on training or releasing the model in California or being physically located in California, which is
00:26:05.600 why this whole claim that labs are going to, or startups are going to leave California. If you are
00:26:10.820 training and releasing your model from, from Miami or from Omaha, Nebraska, as long, if you are doing
00:26:17.320 business in California, which they all will be, it's the fifth largest economy in the world,
00:26:22.800 it's the epicenter of the technology sector, unless you're going to not do business in California,
00:26:27.740 which I'm highly doubtful of, you are covered by the bill. And only the attorney general can file
00:26:33.900 a lawsuit. And it's only if you have not complied with the bill and one of these huge harms happens.
00:26:40.080 One thing that the opponents of the bill continue to like, just refuse to acknowledge is that there
00:26:46.780 is liability today, much broader than what is created by SB 1047. If you release a model,
00:26:55.780 even a smaller one that you spent, you know, 500,000 or a million on, you release that model.
00:27:02.320 And then that model somehow contributes to a much smaller harm than what we're talking about here,
00:27:07.940 burning down someone's house, doing something, you know, something that harms someone. That person
00:27:13.140 can sue you today under just regular tort liability law in California. And I assume in all 50 states,
00:27:21.800 they can sue you today. That liability will be disputed and litigated. And I'm sure in the coming
00:27:27.520 years, the courts are going to spend a lot of time sculpting what the contours of liability are
00:27:33.460 for artificial intelligence. But that liability risk, that litigation risk exists today in a much
00:27:40.320 broader way than what SB 1047 provides. And that's why the reaction to the liability aspect of this
00:27:47.780 bill, I think is overstated. And then on top of that, they keep spreading misinformation that model
00:27:54.680 developers are going to go to prison if their model contributes to harm, which is completely false and
00:28:00.080 made up. Interesting. So why do this at the state level? You know, as you've indicated, there's
00:28:08.100 already movement at the federal level. I mean, that the Biden administration has made similar noises
00:28:14.160 about this. Why shouldn't this just be a federal effort? Well, in an ideal world, it would be a federal
00:28:21.500 effort. And I would love for Congress to pass a strong AI safety law. I would also love for Congress
00:28:30.060 to pass a federal data privacy law, which it has never done. I would also love Congress to pass a
00:28:37.080 strong net neutrality law, which it has never done. And so as a result, I authored California's net
00:28:44.160 neutrality law six years ago, and we also passed in California a data privacy law. I would love for
00:28:50.440 all of that to be federal. But Congress, with the exception of banning TikTok, has not passed a
00:28:56.700 significant piece of technology legislation since the 1990s. That may change soon with this child
00:29:03.000 protection social media law. We'll see. But Congress has what can only be described as a poor record of
00:29:11.240 trying to regulate the technology sector. So yes, it would be great for Congress to do it. I'm not
00:29:17.440 holding my breath. The Biden executive order, I like it. I applaud it. It's an executive order. It does
00:29:23.680 not have the force of law. And the Republican platform has stated, Donald Trump's platform
00:29:30.500 states that that executive order will be revoked on day one if Donald Trump is elected president,
00:29:37.600 God forbid. Does this have any effect on open source AI or are you just imagining targeting the
00:29:44.500 largest companies that are doing closed source work?
00:29:48.000 The bill does not distinguish between open source and closed source. They're both covered
00:29:54.820 equally by the bill. We have made some amendments to the bill in response to feedback from the open
00:30:03.100 source community. One change that we made was to make it crystal clear that if a model is no longer
00:30:09.880 in your possession, you're not responsible for being able to shut it down. Because that was some feedback
00:30:16.040 we had received that if it's been open source and you no longer have it, you are not able to shut it
00:30:21.060 down. So we made that change. We also made some changes around clarifying when a model, say that
00:30:30.500 is open source, is no longer a derivative model. In other words, there's enough changes or fine tuning
00:30:37.280 to the model that it effectively becomes a new model at a certain point, and that the original developer
00:30:43.540 is no longer responsible once someone else has changed the model sufficiently. That changer,
00:30:50.760 the person fine tuning, would then become effectively the person responsible under the law.
00:30:57.400 I'd like to add something about open source. So you have to remember there's this threshold which
00:31:02.400 can be adapted in the future, based on the cost or the size of these models. And most of the open-source models
00:31:10.260 coming out of academia and the startups that are generated by these companies or these universities
00:31:17.740 are much smaller, because they don't have a hundred million dollars to train their systems.
00:31:22.920 And so all of that open source activity can continue and not be affected by SB 1047.
00:31:29.860 Yeah. Did you say it was 10 to the 23rd, 10 to the 26th?
00:31:33.480 26.
00:31:33.800 That's floating point operations per second? Is that the measure?
00:31:37.780 Yes. Floating or integer.
00:31:39.920 So how big is that in relation to the current biggest model? So, ChatGPT-4o?
00:31:47.560 It's above all of the existing ones.
00:31:49.620 Okay. So everything that we currently have, the best LLMs haven't yet met the threshold that would
00:31:57.020 invoke this regulation.
00:31:57.020 Yeah. So this is only for the future models that at least we don't know about yet. And there's a good
00:32:03.640 reason for that because the models that harbor the most risks are the ones that haven't been,
00:32:08.780 you know, played with, haven't been made available. So there's a lot more unknowns.
00:32:14.560 So it does make sense to focus on the sort of frontier systems when you're thinking about risks.
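[Editor's note: to give a rough sense of scale for the 10 to the 26th operations threshold discussed above, here is a minimal back-of-the-envelope sketch in Python. The accelerator count, per-chip throughput, utilization, and training duration below are illustrative assumptions, not figures from the bill or from this conversation.]

# Rough sketch: estimate total training compute for a hypothetical run and
# compare it to SB 1047's threshold of 10^26 integer or floating-point operations.
# All hardware numbers here are assumptions for illustration only.

THRESHOLD_OPS = 1e26  # compute threshold discussed in the conversation

def total_training_ops(num_accelerators, ops_per_second_each, utilization, training_days):
    """Total operations performed over an entire training run."""
    seconds = training_days * 24 * 60 * 60
    return num_accelerators * ops_per_second_each * utilization * seconds

# Hypothetical frontier-scale run: 20,000 accelerators at ~1e15 ops/s each,
# 40% sustained utilization, trained for 90 days.
ops = total_training_ops(20_000, 1e15, 0.40, 90)
print(f"Estimated training compute: {ops:.2e} operations")
print("Above SB 1047 threshold" if ops >= THRESHOLD_OPS else "Below SB 1047 threshold")

[With these assumed numbers the run comes out to roughly 6e25 operations, i.e. below the threshold, which is consistent with the point above that no currently released model has crossed it. Note the bill as described also requires at least $100 million in training cost, which this sketch does not model.]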
00:32:21.820 When you're thinking about the frontier, doesn't that play both ways in the sense that critics of
00:32:27.560 this regulation, I can imagine, and certainly critics of the kinds of fears you and I and others
00:32:33.120 have expressed about AGI, artificial general intelligence, would say and have said that
00:32:38.960 we simply don't know enough to be rationally, you know, looking for any sort of brake to pull or any,
00:32:47.040 you know, safety guidelines to enshrine into law. I mean, I'm thinking of, I think it was Andrew Ng
00:32:53.960 who once said, you know, worrying about artificial general intelligence is like worrying about
00:32:58.260 overpopulation on Mars, right? Like it's just, it's so far, and again, this invokes the timeline,
00:33:03.240 which you and many other people now think is far shorter than assumed there. But it's not just a
00:33:08.220 matter of time, it's just that the architecture of the coming robot overlord may be quite different
00:33:14.120 from what we're currently playing with, with LLMs. Is there any charitable version of that that we
00:33:21.200 could prop up, that it's just too soon for us to be drawing guidelines because we simply don't
00:33:28.680 know enough? Okay. I have several things to say about this. First of all, if we're worried about
00:33:33.840 the short-term possibilities, like say five years or something, or 2030, then it's very likely that
00:33:41.680 there's going to be something very close to what we have now. If you'd like to continue listening
00:33:45.480 to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access
00:33:51.400 to all full-length episodes of the Making Sense podcast. The podcast is available to everyone
00:33:56.320 through our scholarship program. So if you can't afford a subscription, please request a free account
00:34:01.560 on the website. The Making Sense podcast is ad-free and relies entirely on listener support.
00:34:06.960 And you can subscribe now at samharris.org.
00:34:11.680 Thank you.