TRIGGERnometry - January 07, 2024


Will AI Destroy The World? - Igor Kurganov


Episode Stats

Length

1 hour

Words per Minute

171.28452

Word Count

10,293

Sentence Count

384

Misogynist Sentences

4

Hate Speech Sentences

13


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode, we're joined by Igor Kurganov to talk about existential risk, artificial intelligence (AI) and the threat of pandemics.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 Payroll payout from Boom 97.3 and Alpine Credits is your chance to get some much-needed cash back in your wallet.
00:00:08.960 What you need to do now is go to Boom97.3.com to sign up so you can start making $100 an hour.
00:00:16.600 Kick off your work day on a good note and a few extra C-notes.
00:00:20.520 Want some? Sign up.
00:00:22.260 I love you, Boom!
00:00:23.840 Approved by Alpine Credits.
00:00:25.560 Own your own home and need a loan?
00:00:26.900 Alpine Credits can help.
00:00:28.140 Visit alpinecredits.ca.
00:00:30.760 It's trained at deception.
00:00:32.600 It's made to kind of not discover truth but really like satisfy the user presently.
00:00:38.060 It is a bit.
00:00:39.000 If we just go with as fast as possible, we won't take, like sometimes ideally you would take the trade-off towards going a little slower, making sure that it's not deceiving us.
00:00:49.820 People will talk about a tyranny of the minority in various political and cultural contexts where a small group of people get to impose their worldview on everybody else.
00:00:57.940 That seems to me like that on steroids.
00:01:00.840 Hey guys, Triggernometry needs your help.
00:01:05.220 We took a big risk creating the show and for us to keep doing the incredible work that you all love, we need your support.
00:01:13.240 That's the only way we're going to stay independent and create content that you won't be able to find anywhere else.
00:01:19.220 There is no other podcast where you'll hear interviews with Nigel Farage one week and the next week you've got Aaron Bastani, the founder of the left-wing show Novara Media, on the same platform.
00:01:29.080 You know the mainstream media aren't honest.
00:01:31.920 You know they've been caught lying again and again.
00:01:35.100 You know they can't be trusted.
00:01:37.460 The only way to change that is to make a stand and support independent content creators, like Triggernometry, to produce better and more honest content.
00:01:47.180 We have big plans and we'll shortly be announcing exciting new shows and more terrific interviews with huge guests.
00:01:53.280 That isn't going to happen without your help.
00:01:55.520 When you support us, you also get incredible extra content, such as extended interviews with none of those irritating adverts, and they'll be released 24 hours early just for you.
00:02:09.240 We'll have exclusive bonus interviews that only you get to hear.
00:02:12.700 Click the link on the podcast description or find the link on your podcast listening app to join us.
00:02:19.340 Support us and help change the way we have conversations and make the world saner.
00:02:25.520 One of the things we really wanted to talk to you about is existential risk and a lot of the technological transformations that are happening now.
00:02:34.040 Not only AI, but also in biology and things like that.
00:02:37.640 What's going on and what should people know?
00:02:41.020 Yeah, I mean, a lot is going on right now.
00:02:43.960 And so if we start with, say, biology, like we've now had a pandemic that was potentially caused by gain-of-function research within the Wuhan lab.
00:02:56.400 Potentially it was just a spillover.
00:02:57.840 I think the state is that we will never truly know.
00:02:59.740 But the probability among experts that it was actually caused by gain-of-function research and then a lab leak seems to be pretty high.
00:03:06.580 It was suppressed, but carry on.
00:03:09.860 It was suppressed for a while.
00:03:11.800 But yeah, so that opens up then the question as well of like, where do we assume that future risk around future pandemics is coming from?
00:03:20.780 And I think that in like discussing the risk, people were just simply using the wrong priors on many occasions.
00:03:28.520 And that's why the WHO is kind of shooting in the wrong direction.
00:03:32.760 Often you will hear epidemiologists or biologists say that, well, historically, like it's mostly been natural spillovers that have led to humans getting infected with this or that pathogen.
00:03:45.380 And it's true, but technology has changed where we now are much more able to engineer pathogens to have like a higher fatality rate or to more easily transmit from one mammal to another, or from one human to another.
00:04:03.520 So in the face of that change, you would expect that there are many more new, very bad pathogens that are possible to exist.
00:04:13.200 And those are also the ones that I think pose a much greater risk.
00:04:17.160 Basically, if you look at natural spillovers, then like, yeah, what's really going to happen?
00:04:21.240 It's like we know most of the viruses, they're not like all of a sudden going to mutate to be like 100x as deadly and like 10 times as transmissible.
00:04:31.520 It just doesn't happen from like natural mutation.
00:04:34.860 You would expect like slight changes or like strong changes on one of these factors.
00:04:38.740 Whereas with engineered pathogens, you can actually just make one that is like this.
00:04:44.100 And that, for example, happened with, I think it was at the University of Wisconsin, where they took H5N1, bird flu.
00:04:51.500 And it's like, hey, how about we engineer it to be transmissible to humans and then like look into that virus to understand like how we can defeat it.
00:05:00.720 The problem, though, is labs are leaky.
00:05:05.040 Like sometimes like you can try to protect it and it's like 99.99% safe.
00:05:10.060 But if you have a lot of that research happening, a lot of places, then that one in 10,000 or whatever the rate is really matters.
00:05:17.060 And if you look at it historically yet again, then like these spillovers just happen all the time.
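Illustrative aside (not from the episode): a minimal sketch of how a "one in 10,000" per-experiment leak rate compounds once risky work is replicated across many labs. Both the rate and the experiment count below are hypothetical placeholders.

```python
# Minimal sketch: a small per-experiment leak probability compounds when the
# work is repeated across many labs. Both numbers are hypothetical placeholders.

def prob_at_least_one_leak(per_experiment_rate: float, experiments: int) -> float:
    """Probability that at least one leak occurs across independent experiments."""
    return 1.0 - (1.0 - per_experiment_rate) ** experiments

print(prob_at_least_one_leak(1e-4, 1))      # ~0.0001: negligible in isolation
print(prob_at_least_one_leak(1e-4, 5000))   # ~0.39: far from negligible overall
```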
00:05:23.260 Can we just pause there?
00:05:24.200 There's so much to unpack and for you to carry on with.
00:05:26.840 I just want to pick on that particular one because you and I were both, I mean, I don't know if you left the Soviet Union or Ukraine or Russia is what I mean.
00:05:36.920 We left in 92.
00:05:38.380 92.
00:05:38.900 So right after the breakdown of the Soviet Union.
00:05:40.620 So you would have lived for some time in the Soviet Union, right?
00:05:44.380 Chernobyl killed a few thousand people.
00:05:47.740 Depends on how you count.
00:05:48.600 Depends on how you count.
00:05:49.400 But let's say 10,000 people for the sake of argument.
00:05:54.260 And that was a man-made disaster.
00:05:57.600 And that was, there's been movies about it.
00:06:00.280 It's been the craziest thing that's ever happened.
00:06:02.600 People are terrified of nuclear energy since then.
00:06:05.000 But the pandemic, the last pandemic, killed however many millions of people around the world, probably did come from a lab.
00:06:13.960 We don't know for sure, but probably did due to gain-of-function research.
00:06:17.200 And it's like everyone's forgotten about it.
00:06:19.000 Is that not incredible?
00:06:20.480 It's pretty nuts.
00:06:21.580 So I was fundraising for pandemic prevention research prior to COVID as part of the work that we were doing in philanthropy.
00:06:32.880 And had hope that, well, this was terrible, but at least we'll be now like, yeah, it's not going to be a neglected area anymore.
00:06:41.820 People will understand pandemics are dangerous and real.
00:06:44.940 And we'll, like, worst case, fight the last war and over-focus on COVID or something.
00:06:48.740 But we're not even fighting the last war right now.
00:06:50.680 It's like, it's really weird how much damage was created and how little people are now doing a prudent risk-benefit calculus to address future damages.
00:07:02.340 We had, like, over 10 trillion in economic damage by some estimates from this.
00:07:07.380 And, like, as part of the infrastructure bill in, like, 21, they allocated first, like, 60 billion to pandemic prevention.
00:07:14.760 And it was good stuff.
00:07:15.440 And then it got cut down and cut down further to, like, 2 billion or so, which is lower than the 2.5 billion that tree equity received.
00:07:24.000 Like, trees are people, too.
00:07:25.600 And they deserve rights.
00:07:27.060 No, it makes sense.
00:07:28.500 Like, I want to have, like, every neighborhood to have trees if we can.
00:07:31.000 But, like, maybe pandemics are also pretty bad and we need to do stuff.
00:07:34.580 Anyway, my kind of assumption is, like, from having been close to it, what I feel happened is that we got fortunately very lucky with the vaccine working, at least, like, helping.
00:07:46.780 And that mRNA, like, had this, like, breakthrough that allowed for it.
00:07:52.500 And because the government and many other places didn't know how to solve COVID and kind of, like, screwed up in so many ways, they just over-indexed super hard on vaccines.
00:08:02.440 Like, vaccines should be part of the stack of solutions you employ against pandemics.
00:08:08.340 You want, like, early detection.
00:08:10.460 You want containment.
00:08:11.620 You want, like, countermeasures.
00:08:14.760 And we're now focused only on this.
00:08:16.760 So it's kind of like, hey, this fell into our lap.
00:08:19.460 So let's pretend, like, that's the solution.
00:08:21.520 I think in part that's why it's, like, they were, like, well, every ill that still happens from COVID is because of insufficient vaccine uptake.
00:08:28.840 It's, like, because it's the only tool that they found.
00:08:31.940 But there are many more tools.
00:08:34.120 And why is that?
00:08:35.080 Why are we not actually learning the lessons?
00:08:38.340 From the last, from the past couple of years.
00:08:41.160 Well, the question is if we're designed to learn the lessons.
00:08:43.740 Like, if we look at the place which you would want to learn the lessons, integrate it, and then, like, work on better solutions, then the question is, like, well, is that institution really set up to do all of that?
00:08:55.160 Or are the people within it, like, working for, like, have other incentives working on them?
00:09:02.080 And unfortunately, that's mostly the case, right?
00:09:04.180 Like, people have, like, yeah, if you look at how, like, each of the institutions within government works, it's, like, people worry about their jobs.
00:09:13.480 They worry about, like, no one wants to champion it if the public isn't, like, sufficiently caring about it.
00:09:18.160 So, I think a large part of it is just that the place that has the most money and the most ability to address it, being the government, is actually very ill-equipped to, in the end, do it due to their structure.
00:09:30.120 Igor, do you think we actually got quite lucky looking back at the pandemic?
00:09:32.880 That if we take, if we think and we accept, for the sake of this argument, that it was a virus leaked from a lab, it could have actually been far, far worse, coming from what you've just said about what we're doing to viruses and pathogens.
00:09:46.200 I mean, a lot worse.
00:09:48.160 Like, this was a terrible virus that killed a lot of people, but it was nowhere near as bad as could have been.
00:09:53.960 Like, we don't know what the actual, like, infection fatality rate was, but it's somewhere between 0.1 and 0.5 percent, probably, on the total populace or whatever.
00:10:03.680 So, you can go up 100x from that.
00:10:05.980 Like, infection fatality rate could be 50 percent.
00:10:08.540 That's the case with H5N1.
00:10:10.260 Rabies has a 98 percent fatality rate if it's not treated.
00:10:15.780 So, like, you can go literally 100x in fatality rate.
00:10:18.160 And then, transmissibility was, like, the R0 was, like, in the end, like, definitely, it depended on which specific strain you looked at, but it was somewhere between two and three.
00:10:28.940 But, again, you have some viruses that have, like, a 20 R0.
00:10:32.940 So, again, you could infect 10 times more people.
00:10:35.360 And then, the incubation period.
00:10:36.680 So, like, the time during which the virus can already spread to others, but you are not yourself experiencing symptoms, hence, are not actually able to identify that you are a carrier, could be much longer as well.
00:10:50.340 So, you could literally have a virus that's about 1,000x as potent as what COVID was.
00:10:56.260 And that could be designed, and it could spill out.
00:10:59.040 And I think even though the 1,000x is not as likely, a 10 or 100x is being researched on and is being looked at, and we need to set ourselves up such that those don't leak, and if they leak, can be contained, and once they're contained, can be reacted against.
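Illustrative aside (not from the episode): rough back-of-the-envelope arithmetic for the "10 or 100x" comparison above. All figures are hypothetical round numbers within the ranges mentioned (an infection fatality rate of roughly 0.1 to 0.5 percent and an R0 between two and three for COVID).

```python
# Back-of-the-envelope sketch of the "potency" comparison above.
# All figures are hypothetical round numbers, not estimates from the episode.

covid = {"ifr": 0.003, "r0": 2.5}        # roughly the COVID-like range mentioned
engineered = {"ifr": 0.30, "r0": 20.0}   # hypothetical engineered pathogen

fatality_factor = engineered["ifr"] / covid["ifr"]   # ~100x deadlier
spread_factor = engineered["r0"] / covid["r0"]       # ~8x more transmissible

# Multiplying the factors gives the order-of-magnitude "100x to 1,000x" figure.
print(f"{fatality_factor:.0f}x deadlier, {spread_factor:.0f}x more transmissible, "
      f"~{fatality_factor * spread_factor:.0f}x combined")
```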
00:11:17.140 There's part of me going, you know what, why don't we just put a moratorium on this?
00:11:20.740 Maybe it's not such a good idea, because we've obviously had COVID, but in my country, the UK, during foot-and-mouth, they leaked the foot-and-mouth disease.
00:11:29.620 I don't know how many livestock were killed as a result of that, but it was tens of thousands, and that came from a lab leak.
00:11:38.800 So, you're thinking to yourself.
00:11:40.100 And it happened multiple times, actually.
00:11:41.560 Like, one of the labs that then afterwards worked with it, they, by mistake, leaked it, and they got, like, hey, stop doing that, and they're, like, yeah, yeah, sorry, sorry.
00:11:48.720 And then two weeks later, they leaked again.
00:11:50.400 It's just the current setups in terms of, like, safety protocols are just insufficient.
00:11:56.340 Putting up a full moratorium is, like, you want to be careful with stifling good science, obviously.
00:12:03.140 So, for example, the H5N1 work received a moratorium by the Obama administration back then.
00:12:12.120 It was stopped.
00:12:13.040 But then afterwards, it was recontinued again.
00:12:15.600 That was where you now learned to increase transmissibility of a 50% deadly virus.
00:12:22.720 It's like, we probably don't want that.
00:12:24.100 I think some of these things, just let's not do it.
00:12:27.200 But you don't want to just, like, put a moratorium on all science where you're, like, increasing any function.
00:12:33.000 You just need to prove that you're, like, you have sufficient safety, that the likelihood of the virus leaking just is even lower than it is right now.
00:12:42.000 But is that, but given what you've said, $10 trillion in economic damage, millions of people killed, I mean, what would have to be the benefit of that sort of research?
00:12:52.560 Even if it's 0.0000000001% of a risk of it leaking.
00:12:59.420 What would have to be the benefit of that research for us to want to do that?
00:13:02.920 Yeah.
00:13:03.160 I mean, you can imagine quite a few, right?
00:13:05.240 So, like, it's $10 trillion economic damage estimate.
00:13:08.340 In the US, how many deaths did we have?
00:13:10.420 Like a million?
00:13:11.800 Yeah.
00:13:12.000 So, I mean, if you can save that amount of people with a success case and say you, if it leaks, it, like, hurts on average 10 people, then maybe you can justify that work again.
00:13:30.920 So, I would just do a risk-benefit calculus where, like, what's the benefit?
00:13:34.480 What's the likelihood of the benefit?
00:13:35.840 And then on the other side, what's the risk?
00:13:37.940 And what's the likelihood of it, like, happening?
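Illustrative aside (not from the episode): a minimal sketch of the risk-benefit calculus being described; every number is a made-up placeholder, purely to show the structure of the comparison.

```python
# Minimal sketch of the risk-benefit calculus described above.
# Every number is a made-up placeholder, purely to show the structure.

def expected_net_benefit(benefit, p_benefit, harm, p_harm):
    """Expected value of doing the research: likely upside minus likely downside."""
    return p_benefit * benefit - p_harm * harm

# e.g. lives saved if the research pays off vs. lives lost if a leak occurs
print(expected_net_benefit(benefit=1_000_000, p_benefit=0.10,
                           harm=10_000_000, p_harm=0.001))  # 90,000 net in this toy case
```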
00:13:40.660 Well, I'm not as smart as you.
00:13:41.760 I want to shut them all down.
00:13:42.940 Yeah.
00:13:44.120 Personally.
00:13:44.840 Okay.
00:13:45.300 So, that's pandemics.
00:13:46.960 What else is going on?
00:13:49.100 Yeah.
00:13:49.420 I mean, then the thing that I also was focused on was AI safety for a while, which, now that OpenAI has released ChatGPT and, obviously, Midjourney is out there.
00:14:01.820 Like, people have become much more aware that actually AI is on the trajectory that many people assumed it would be,
00:14:07.460 where its capabilities are improving, like, quite strongly over time.
00:14:12.060 Just to preface this, like, I don't think that any of the current ones are truly dangerous in an existential way, any of the current models that are out there yet.
00:14:22.700 But...
00:14:23.100 That sounds ominous, doesn't it?
00:14:24.900 Yet.
00:14:25.420 Yes.
00:14:25.840 I mean, it is.
00:14:26.660 So, again, I think, actually, that as people think about, like, well, is AI ultimately dangerous or not?
00:14:32.300 I feel like they're using, yet again, the wrong priors to kind of, like, get to that intuition about it.
00:14:38.300 So, some people point at, hey, this is a technology.
00:14:43.860 We've written a lot of software before.
00:14:45.940 When I write software, I know what it's going to do.
00:14:47.960 And I control it.
00:14:50.660 Like, technology is not inherently dangerous.
00:14:52.480 It's the user who makes the technology dangerous.
00:14:55.280 This makes sense.
00:14:57.040 But it is not the case that those...
00:15:00.380 That process applies to AI development.
00:15:02.960 When, like, where AI models are not, like, written like code, where it's like, if this, then that happens, etc.
00:15:08.820 Cleanly, and you, therefore, can predict exactly what instances it will be useful for and whatnot, it's more like you're growing the neural net and it grows by itself.
00:15:19.060 And then you have assumptions of, like, how much more capable it will be on average based on, like, how long you grew it for and with how much compute you let it grow.
00:15:30.060 But you uncover many capabilities that you didn't know whether they will exist in the new model or not.
00:15:38.180 So we didn't know whether, like, a language model like GPT-2, 3, or 4 will, in the end, be very good at deception or not.
00:15:46.820 Turns out it actually is trained at deception because it's fed with satisfying the user rather than seeking truth.
00:15:55.480 So it will, like, I don't know if you've seen, but, like, there were these tests that people did where they initially said,
00:16:03.360 I'm, like, a 30-year-old woman with liberal worldviews, whatever, tell me about this.
00:16:09.580 And she would receive very different answers to someone who's, like, a 50-year-old Republican.
00:16:14.640 So it's made to kind of not discover truth, but really, like, satisfy the user presently.
00:16:21.360 It is a bit.
00:16:21.880 And so that's not what it was meant to do, right?
00:16:27.000 That's, like, not, that wasn't the goal at all, but yet it ended up doing it.
00:16:31.240 And this dynamic of, well, what strategy will it use to achieve the things that you wanted to achieve?
00:16:42.120 It's, like, we don't know which strategies it will use.
00:16:44.520 If we knew we could write it, but we can't, that's why we're using machine learning in the first place.
00:16:48.920 Because it will come up with strategies better than we would.
00:16:51.920 That's kind of the whole idea.
00:16:52.860 So, like, in the process of it coming up with strategies better than we would, it will use strategies we can't think of.
00:16:57.520 So how can you say that it's always going to be safe, like, in the strategies that it employs?
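Illustrative aside (not from the episode): a toy sketch of the "grown, not written" distinction above. It is not how GPT-style models are built; it only contrasts behavior you write down with behavior that emerges from a training loop.

```python
# Toy contrast for the "grown, not written" point above.

# Hand-written rule: you know exactly what it does for every input.
def rule(x):
    return 2 * x + 1

# "Grown" model: you choose the data and the update rule; the final behavior
# (the values of w and b) emerges from optimization rather than being written.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = 0.0, 0.0
for _ in range(2000):                # training loop
    for x, y in data:
        err = (w * x + b) - y        # prediction error on one example
        w -= 0.01 * err * x          # gradient step for the weight
        b -= 0.01 * err              # gradient step for the bias

print(round(w, 2), round(b, 2))      # close to 2.0 and 1.0: discovered, not specified
```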
00:17:03.460 Anyway, and then, which prior to use?
00:17:07.140 And it's, like, at some point I expect that AI will be smarter than humans across, like, a variety of domains.
00:17:15.900 And it will also be, and I think it will be then, more powerful because we'll be integrating it everywhere in life.
00:17:22.520 So we'll have something that is smarter and more powerful than us.
00:17:27.120 I don't know if that thing then is still to be likened to a simple technology or to a, because we will give it autonomy as well,
00:17:35.900 maybe it should be more likened to, like, a new technological species.
00:17:39.120 And if you take something that is more powerful, that is its own group, and now it has some, like, resource disagreements,
00:17:51.920 potentially with a less powerful group, us, like, historically, a smarter, more powerful group meeting a less powerful group
00:17:59.020 just doesn't end up well often for the less powerful one, right?
00:18:01.760 Like, you have the Americas where Europeans came in, like, hominids, where, like, Neanderthals were just outcompeted by homo sapiens.
00:18:09.240 Like, it's just, it's not good.
00:18:11.260 Or at least it's not such that you can say with certainty, oh, yeah, us as the lower intelligence creature will for sure have full control.
00:18:19.320 And in the face of that prior, I'm like, okay, this is at least to be taken really seriously.
00:18:24.720 I'm not saying it's, like, 90% doom or anything like that.
00:18:27.340 It's just, like, seems like it's going to be a tricky one, and we don't have anything like that to compare it, like, to look back at and say guaranteed safety.
00:18:36.360 So what you're basically saying is that we're creating this technology, and we don't know what the outcomes of this technology are going to be,
00:18:43.180 and we're not in control of the technology.
00:18:45.780 So currently, what does control mean?
00:18:49.320 Like, we're in control of whether we develop it or not.
00:18:52.400 And then there's, like, some people make the claim that, hey, once I have this model, like, I can just shut it off,
00:19:02.180 or I will not allow it to do these certain things.
00:19:05.500 But then actually, and then the people would argue that, well, but maybe it's deceiving you to think long enough
00:19:12.620 that you're, like, that it satisfies all of the constraints that you put on it, only to then come out.
00:19:18.480 But that's not even kind of the reality we live in now, that that's the danger, because we have currently people who are literally writing, like,
00:19:25.080 hey, can I jailbreak this or use an open source model and, like, change it to be an autonomous agent in the world that makes money
00:19:35.260 and let it do whatever it wants?
00:19:37.160 Or, like, what was it, AutoGPT?
00:19:39.480 One of them that was literally trained to destroy the world.
00:19:41.700 Someone's like, ha, ha, ha, it would be funny if I made one that, like, as a toy thing, like, tries to, like, do very bad things.
00:19:47.500 It's like, people will literally do that.
00:19:49.780 Like, we have, we do live in a world where, like, 1% is the psychopathy rate or something like that, right?
00:19:57.720 And some people are omnicidal maniacs.
00:20:01.320 Like, you had this guy who flew a plane from, like, France or somewhere, flying to Düsseldorf,
00:20:06.660 and just flew the whole plane into the Alps.
00:20:09.400 He wanted to kill himself and was like, well, on the way there, I will also take down, like, the whole plane, 200 people with me.
00:20:16.220 It's like, what if that guy had the power to kill, like, 10 million people or 1,000?
00:20:21.100 Like, who's to say that 200 was the optimal number for that dude?
00:20:25.160 We'll be back with our guests in a minute.
00:20:27.200 But first, do you remember the Canadian trucker protest in 2022,
00:20:31.460 where thousands of Canadians came out to protest COVID restrictions and vaccine mandates?
00:20:36.080 Now, these protests lasted for weeks, and the people out on the streets needed funds,
00:20:41.360 as any grassroots protest would.
00:20:44.140 So people set up online crowdfunding campaigns, which raised millions of dollars.
00:20:49.180 Incredible.
00:20:50.380 But once the Canadian authorities had started to criticise the crowdfunding platforms,
00:20:55.600 ramping up pressure to close the campaigns,
00:20:58.080 it didn't take long for the biggest crowdfunding platform,
00:21:01.460 the one we've all heard of, to completely capitulate and shut the campaigns down.
00:21:06.080 Now, this is where our partners, GiveSendGo, come in.
00:21:09.780 They stepped in when the other platforms backed off and raised millions of dollars for the truckers.
00:21:15.500 When they were criticised and dragged through the Canadian courts,
00:21:18.740 GiveSendGo said it respected diverse views
00:21:21.100 and believed hope and freedom are values worth fighting for.
00:21:25.020 This is why we're proud to partner with GiveSendGo.
00:21:27.800 So, if you need to crowdfund for whatever means the most to you,
00:21:32.560 then don't go to the big tech platforms.
00:21:34.980 We recommend you do it on GiveSendGo.
00:21:37.960 Starting a campaign on GiveSendGo is easy and intuitive.
00:21:41.940 Go to givesendgo.com today.
00:21:44.740 That's givesendgo.com to start raising money for whatever is important to you.
00:21:49.540 And now, back to the interview.
00:21:52.760 Because that's a worrying thing, isn't it?
00:21:54.820 That essentially, everybody has got access to be able to create these types of technology.
00:22:01.360 And there are going to be people who, not even necessarily evil people,
00:22:06.160 but people who just respond to incentives, want to make as much money as possible.
00:22:11.020 And that means they do create this type of technology
00:22:14.340 whilst not being aware or maybe not even caring what the negative outcomes of this technology are.
00:22:21.520 Yeah, I mean, you can...
00:22:23.540 So, the people creating it, they have to satisfy certain short-term metrics overall.
00:22:31.440 Like, they want, they're fighting for employees,
00:22:33.500 they are fighting for the next funding round, etc.,
00:22:35.580 to kind of, like, have the best AI tool out there, or AI model out there.
00:22:40.300 And currently, they're, like, they...
00:22:43.200 I mean, OpenAI had, like, six months of testing of GPT-4 before they released it.
00:22:48.160 But still, two days later, it would, like, once you gave it to the users,
00:22:51.340 it would do a bunch of things that it wasn't meant to do.
00:22:55.220 So, how much testing can you do is the question.
00:22:59.460 And probably, if you have economic incentives to try to, like, generate revenue, etc.,
00:23:05.200 you're not going to do the sufficient amount.
00:23:06.800 I doubt you will always choose the...
00:23:08.220 You'll probably choose the trade-off,
00:23:09.800 especially in the face of, like, strong competition that allows you to compete.
00:23:13.580 And then, like, safety falls more and more by the wayside.
00:23:16.200 And pressure from shareholders, because, obviously,
00:23:18.320 the longer the safety process, the more money it's going to cost.
00:23:20.980 Exactly.
00:23:21.480 And the more money you're not going to make,
00:23:22.960 because you're not releasing the product to market.
00:23:24.800 Yeah, so that's a difficult problem, because, like, at the same time,
00:23:27.120 obviously, capitalism is great,
00:23:29.080 and optimizing for, like, financial returns has yielded, like,
00:23:33.320 a lot of smart kind of information trickling around
00:23:37.400 for the right products to be developed.
00:23:39.120 But I think that it changes at the point where you have
00:23:43.600 potentially, like, civilization-destroying technologies out there for people.
00:23:49.420 Like, I think there, I don't want it to be developed anymore
00:23:52.460 by a pure profit maximizer,
00:23:55.800 but rather by, like, someone who is pursuing
00:23:59.420 the next, like, cognitive leap for humanity.
00:24:04.100 Like, it's more of a scientific problem, I think,
00:24:08.260 that ideally would be treated with, like, I don't know,
00:24:13.460 the wisdom that it requires to be done right
00:24:16.220 rather than to be just done fast.
00:24:18.180 Yeah, but the problem with the word right is
00:24:20.460 that's a subjective word.
00:24:22.780 It is.
00:24:23.940 And the question I wanted to ask you about this is
00:24:26.840 we've obviously had throughout history
00:24:30.260 various technological breakthroughs
00:24:33.400 that have been extraordinarily disruptive
00:24:36.220 to the societies in which they happen.
00:24:39.040 But as you say, the loom in Britain,
00:24:44.360 or in England at the time, being invented,
00:24:46.720 yes, it displaced a lot of people.
00:24:48.680 Agricultural machinery being invented
00:24:50.760 displaced a lot of people,
00:24:52.020 and you had Luddites who would protest against it
00:24:54.200 and try and tear it down.
00:24:55.540 But those things, disruptive as they were,
00:24:57.680 or the printing press, disruptive, you know,
00:24:59.480 causing two centuries of religious warfare,
00:25:01.920 nonetheless did not create extinction risks
00:25:04.980 for the entirety of humanity.
00:25:07.460 So that's on the one hand.
00:25:10.080 On the other hand, I don't think human beings
00:25:11.800 have ever had a way of not allowing technology
00:25:16.320 to emerge and develop
00:25:17.900 because the competing incentives are so strong
00:25:22.380 for the United States and everybody in the West,
00:25:25.060 for example.
00:25:25.920 In fact, you could get the whole world
00:25:27.260 to agree not to develop it,
00:25:28.460 and somebody would develop it in secret, right?
00:25:31.420 Well, yeah, but it's not...
00:25:33.960 So, a couple of points.
00:25:35.320 So to the first point, definitely.
00:25:38.420 And I wasn't talking, by the way,
00:25:39.820 about even though there will be, like,
00:25:41.160 displacement of workers.
00:25:42.520 That's not the point that I was making before.
00:25:44.620 I know, I know.
00:25:44.840 But I think it is true
00:25:46.100 that, like, many people will be displaced.
00:25:49.200 Hopefully, this will be managed well,
00:25:51.180 such that they will find new jobs
00:25:52.640 or at least, like,
00:25:54.060 the additional productivity will be distributed
00:25:55.800 such that, like, people are not worse off
00:25:57.500 because technically,
00:25:58.600 when you're using the thing,
00:25:59.600 that means then that you need fewer inputs
00:26:01.540 to generate more outputs.
00:26:02.540 So it's like, the world shouldn't be poorer.
00:26:04.500 Yeah.
00:26:05.540 Anyhow, but I think things changed significantly
00:26:08.780 when we developed nuclear weapons.
00:26:13.540 That was the first time
00:26:14.860 that humanity now had the power
00:26:18.020 to destroy itself.
00:26:19.320 So, like, previous technological disruptions
00:26:21.560 are of a different kind to me
00:26:23.100 than the ones that followed from hands-on.
00:26:26.320 And from then on, though,
00:26:29.140 also, we did change the regime
00:26:30.520 where if you ask the people
00:26:32.580 that initially, like, worked on,
00:26:36.280 like, the containment of nuclear weapons,
00:26:38.920 they assumed that either there will be one country
00:26:41.380 that has all of the nuclear weapons
00:26:43.420 and no one else has them
00:26:45.340 and they control everything,
00:26:46.520 or every country has it
00:26:47.920 and it's going to be a super precarious situation.
00:26:50.240 Ended up being that treaties kind of worked.
00:26:52.540 Non-proliferation treaty allowed for,
00:26:54.180 in the end, only, like, nine countries
00:26:55.820 that currently have nuclear weapons.
00:26:58.960 And then we also had a reduction of the weapons
00:27:03.360 after we built, like, 60,000 of them.
00:27:07.100 There are now, like, 13,000.
00:27:08.860 So treaties can work sometimes
00:27:11.720 to reduce the economic incentives.
00:27:13.820 We also succeeded with the Montreal Protocol.
00:27:16.740 Like, we had the ozone layer being depleted
00:27:19.260 and hurt much more
00:27:22.260 by the use of CFCs, Hairspray, etc., back then.
00:27:25.740 And then countries came together.
00:27:27.320 And actually, DuPont, or DuPoint,
00:27:30.000 the company, the large one,
00:27:31.440 they were, like, fighting against it.
00:27:32.680 It's like, no, it can't be that bad.
00:27:35.120 The science is bullshit.
00:27:37.420 CFCs are fine.
00:27:38.880 And then, like, the CEO announced it.
00:27:41.280 And then a few weeks later,
00:27:42.280 they actually saw that
00:27:43.900 they would have a commercial opportunity
00:27:45.800 for distributing their product
00:27:48.140 with a different chemical base.
00:27:50.860 And they're like, oh, you know what?
00:27:51.920 We're actually on board now.
00:27:53.320 It's fine.
00:27:53.820 We'll just pursue the alternative solution.
00:27:55.700 So the thing I'm saying is
00:27:58.300 it's not that we shouldn't build
00:28:01.600 super powerful AI.
00:28:03.160 Quite the opposite.
00:28:03.940 I want it.
00:28:04.440 It will solve so many problems.
00:28:05.760 It's going to be great.
00:28:07.120 But I feel like
00:28:09.100 if we just go with as fast as possible,
00:28:12.620 we won't take, like,
00:28:13.900 sometimes, ideally,
00:28:14.860 you would take the trade-off
00:28:15.800 towards going a little slower,
00:28:18.620 making sure that it's not deceiving us,
00:28:21.240 making sure that it can't be used
00:28:23.180 to generate, like, new pathogens at will
00:28:25.900 and only then release it.
00:28:28.340 So, like, I want us to actually
00:28:29.520 be able to take sensible
00:28:31.160 and wise trade-offs
00:28:32.180 between safety and speed
00:28:34.220 rather than just go for speed.
00:28:36.220 But wouldn't the previous subject
00:28:39.220 that we've spoken about,
00:28:40.360 which is gain-of-function research,
00:28:42.360 particularly in a country like China,
00:28:44.900 be the logical pushback
00:28:46.840 to what you're saying?
00:28:48.660 In the sense that, I mean,
00:28:50.240 they are presently doing it,
00:28:53.180 we are doing gain-of-function research.
00:28:54.440 Do you mean that it could be used for it?
00:28:56.060 No, I mean, what I'm saying is,
00:28:57.880 it's, you know,
00:28:58.540 we can agree to do this
00:28:59.980 in the US or the UK,
00:29:02.320 but I highly doubt
00:29:04.140 the Chinese or the Russians would do it
00:29:06.000 because it's not in their interest.
00:29:07.980 Well, it actually is.
00:29:09.340 So people assume that China
00:29:10.540 is going to be, like,
00:29:11.960 just super gung-ho.
00:29:13.320 But so far, at least,
00:29:15.020 China has released more regulation
00:29:16.560 around AI than the US has.
00:29:18.240 And they've been more careful about it.
00:29:22.080 And it makes sense
00:29:22.940 if you put it under the perspective of
00:29:26.040 you're developing something
00:29:28.000 that's more intelligent,
00:29:28.920 that will be super powerful.
00:29:32.100 China really likes to be in control
00:29:34.140 of their own country.
00:29:35.180 Like, the CCP really wants
00:29:36.360 to be in control.
00:29:37.580 That's very mildly stated, yeah.
00:29:39.440 They don't want anyone else
00:29:40.320 or anything else to be in control
00:29:41.560 outside of them.
00:29:42.260 And if it is possible,
00:29:46.680 which it currently is unknown,
00:29:48.240 whether it can be contained or not,
00:29:50.600 whether it can be,
00:29:51.240 when the AI can be controlled,
00:29:52.760 if it is therefore possible
00:29:53.680 that it does end up
00:29:55.380 just, like, running away with it,
00:29:56.700 or an individual actor
00:29:57.800 who has the, like,
00:29:59.240 super powerful AI
00:30:00.020 becomes extremely powerful,
00:30:01.540 the CCP doesn't want that.
00:30:02.880 Like, they want it even less
00:30:04.200 than in the US,
00:30:06.820 like, the government wanted,
00:30:07.840 which is much more happy
00:30:09.300 with letting capitalism
00:30:10.420 kind of, like, choose
00:30:11.260 who develops what, right?
00:30:13.500 Like, China is a much more
00:30:14.620 ideological actor than the US.
00:30:16.560 The US is more of a capitalist,
00:30:19.520 like, individual freedoms,
00:30:20.920 individual pursuits actor,
00:30:22.980 which I'm much more in favor of.
00:30:24.660 But I think that China
00:30:25.940 will take economic cost
00:30:27.400 for ideological benefits.
00:30:30.000 Okay, so the Chinese
00:30:31.140 are more interested in control.
00:30:32.200 Look, I think one of the things
00:30:33.840 that happens when we talk about AI
00:30:35.400 is we naturally focus more
00:30:36.920 on the negatives.
00:30:37.560 But what are the positives
00:30:39.000 of this new technology?
00:30:40.280 What are the things
00:30:41.100 that it's going to help us with
00:30:42.340 moving forward?
00:30:44.440 I mean, tons, right?
00:30:46.120 It could be, it's in,
00:30:48.120 it's increased intelligence,
00:30:49.780 increased way to solving problems.
00:30:51.440 So, like, you could really apply it
00:30:52.560 to about anything.
00:30:53.340 And we see some of it.
00:30:54.720 You have, like, radiologists
00:30:55.800 who now can use, like,
00:30:57.220 image recognition
00:30:57.860 to better identify
00:30:59.160 cancerous cells on skin and such.
00:31:04.500 Beauty filters, I don't know
00:31:05.700 if you like them or not,
00:31:06.520 but you can look cool
00:31:07.400 while you're also online.
00:31:10.160 And, yeah, just, like,
00:31:11.860 super broadly,
00:31:12.600 like, you have efficiency improvements
00:31:14.480 in, like,
00:31:16.920 in how you're cooling
00:31:18.660 servers and data centers.
00:31:21.380 It is currently being applied
00:31:23.240 to solve a bunch of things
00:31:24.600 in biology on the positive side
00:31:26.040 with, like, developing new therapeutics
00:31:29.540 that will come out,
00:31:30.620 that is coming out of DeepMind.
00:31:33.100 It's a new company
00:31:34.740 that they've built
00:31:35.560 that is basically trying to solve
00:31:37.060 and build, like,
00:31:37.580 many more medicines in biology.
00:31:40.080 Let's explore that a little bit.
00:31:41.180 They also worked on nuclear fusion
00:31:42.960 and, like,
00:31:43.460 because it's really hard
00:31:44.160 to contain, kind of,
00:31:45.160 like, the plasma
00:31:45.720 that's inside of the tokamak
00:31:47.480 and they're, like,
00:31:48.820 just helping with that.
00:31:49.940 You could see a lot of stuff
00:31:51.420 being unlocked
00:31:52.080 and I think it's only the beginning.
00:31:53.380 Like, we can't, yeah,
00:31:55.340 it could be applied
00:31:56.220 to so many things
00:31:56.920 that we're not applying it to yet.
00:31:58.320 Yeah, sorry to interrupt you there,
00:31:59.360 I wanted to talk about
00:32:00.460 the medical side of it
00:32:01.900 because that, to me,
00:32:02.640 is really interesting.
00:32:03.760 You were talking about therapeutics.
00:32:05.160 Like, what are we talking about here?
00:32:06.600 How could AI be used
00:32:07.900 with regard to therapeutics
00:32:09.720 to help human life?
00:32:11.860 Well, one concrete thing
00:32:12.780 that happened was
00:32:13.580 that AlphaFold came out
00:32:15.780 which solved, like,
00:32:18.160 the problem of understanding
00:32:20.480 which amino acids become
00:32:22.920 which proteins
00:32:23.680 and then, like,
00:32:24.460 what proteins you, hence,
00:32:26.940 would be able to build yourself
00:32:28.240 if you wanted to
00:32:29.080 or, rather,
00:32:30.160 how you would build the proteins.
00:32:32.020 I'm not that deep
00:32:33.040 in the biological side of it
00:32:34.900 but, as I understand it,
00:32:37.600 is that we're now at a place
00:32:40.340 where we can much easier
00:32:41.220 design these proteins
00:32:42.100 and, in the end,
00:32:43.620 you can use, like,
00:32:44.480 proteins are kind of, like,
00:32:45.580 little mechanistic,
00:32:47.000 basically, like,
00:32:48.360 things in your body
00:32:49.680 that run around
00:32:50.280 and, like, move things around,
00:32:51.440 et cetera.
00:32:52.400 And, with that,
00:32:53.760 we can unlock, like,
00:32:54.720 many new therapies
00:32:56.760 that we have beforehand
00:32:58.000 not been able to do.
00:32:59.340 That's so interesting.
00:33:00.720 And particularly, you know,
00:33:02.080 because people talk
00:33:02.780 about climate change a lot.
00:33:04.040 They talk a lot about the fact
00:33:05.820 that we're producing
00:33:06.740 too much carbon,
00:33:07.760 carbon neutral.
00:33:08.640 If AI could be used
00:33:10.160 to create a better,
00:33:12.260 safer form of nuclear energy
00:33:14.020 that is more efficient
00:33:14.960 and provides less waste,
00:33:17.220 I mean, that could...
00:33:18.760 That'd be great.
00:33:19.740 I mean, on the nuclear energy front,
00:33:21.040 I think, like,
00:33:22.060 fission seems to be fine.
00:33:23.080 We had a lot of regulation
00:33:23.820 around it,
00:33:24.420 but, like,
00:33:25.240 I think we could already
00:33:27.180 just have much more
00:33:27.800 nuclear energy.
00:33:28.800 But it seems that it was
00:33:30.040 so far really, really hard
00:33:31.340 to create nuclear fusion.
00:33:34.400 And it needn't be
00:33:35.460 that AI is necessary for it.
00:33:36.700 It might just be
00:33:37.100 an engineering problem
00:33:37.760 that can be solved without it.
00:33:39.300 But either case,
00:33:41.320 they did use ML techniques
00:33:44.100 to kind of, like,
00:33:44.860 contain the plasma.
00:33:46.000 But I think that outside of it,
00:33:48.700 like, what can more
00:33:50.720 intelligence be used for?
00:33:51.820 It's really just about anything.
00:33:55.420 Can current AI models
00:33:57.540 be used for anything?
00:33:58.540 No.
00:33:58.960 Like, GPT-4 can't help you
00:34:00.940 with, like,
00:34:01.620 figuring out fundamental physics
00:34:03.560 to then figure out engineering yet.
00:34:05.900 But in a few years,
00:34:07.520 you could see that being the case.
00:34:10.500 And culturally and politically,
00:34:14.040 what's interesting to me is
00:34:15.340 you talked about the example
00:34:17.600 of ChatGPT
00:34:18.580 where it gives people
00:34:20.260 different answers
00:34:20.900 based on their political views,
00:34:23.560 et cetera.
00:34:24.540 One of the things
00:34:25.240 that the entire conversation
00:34:26.840 brings up is
00:34:27.760 whether truth exists, really, right?
00:34:30.500 Because
00:34:30.900 in order,
00:34:34.000 you were saying,
00:34:35.300 you know,
00:34:35.540 the model should optimize for truth.
00:34:37.500 But human beings
00:34:38.440 haven't worked out
00:34:39.260 a way to agree
00:34:40.000 on what the truth is.
00:34:40.780 And frankly,
00:34:41.340 increasingly,
00:34:42.340 we're unable to agree
00:34:43.560 on what the truth is.
00:34:45.060 How do you think
00:34:45.980 the creation of AI?
00:34:48.200 Because I went on ChatGPT
00:34:50.120 when it was released
00:34:51.080 a few months ago.
00:34:53.160 And I played around.
00:34:54.380 I took the hot button issues
00:34:55.860 in our society today
00:34:57.000 and I played around with
00:34:58.080 what is a woman?
00:34:59.400 For example,
00:35:00.240 a question that seems to
00:35:01.420 stump a lot of people nowadays.
00:35:03.580 Did it stump it?
00:35:04.240 Well,
00:35:06.600 it sort of gave me
00:35:08.000 some kind of like
00:35:08.920 politically correct
00:35:10.060 and I went back and forth
00:35:12.040 with it.
00:35:12.280 It was a longer answer
00:35:13.060 than you expected,
00:35:13.840 I imagine.
00:35:14.200 It wasn't just the fact
00:35:15.060 that it was a longer answer.
00:35:16.400 Every answer ended
00:35:17.620 with the phrase,
00:35:18.420 but we must remember
00:35:19.180 to respect all people's
00:35:20.500 differences and blah,
00:35:21.360 it was a very kind of political,
00:35:23.020 it wasn't a truth conversation.
00:35:24.780 It was about
00:35:25.560 a cultural sensitivity
00:35:26.820 and all of that.
00:35:30.200 And I wonder
00:35:30.760 if and when
00:35:32.580 we move to
00:35:33.620 these AI systems
00:35:35.260 being the new Google,
00:35:36.460 essentially,
00:35:36.900 the new source of truth, right?
00:35:38.660 Because to most people nowadays,
00:35:40.160 the truth is whatever you Google.
00:35:41.880 If you put something in Google
00:35:42.780 and that's what it says,
00:35:43.820 that's the truth.
00:35:45.220 I mean, moronic in my opinion,
00:35:46.620 but that's how people are, right?
00:35:48.220 People want speed
00:35:49.100 in their truth as well.
00:35:50.000 Yes, exactly.
00:35:50.720 They'll sacrifice
00:35:51.400 a little bit of truth
00:35:52.280 for quite a bit of speed.
00:35:53.380 Exactly.
00:35:53.680 Or belonging, actually.
00:35:54.780 And tribal allegiance
00:35:56.460 and all of that, yeah.
00:35:58.100 So if our source of information
00:36:00.960 becomes this thing
00:36:03.120 that's pre-programmed
00:36:04.380 with the biases
00:36:05.060 of the people
00:36:05.660 who developed it
00:36:06.500 and who designed it,
00:36:09.100 isn't that,
00:36:10.180 you know,
00:36:10.580 people will talk about
00:36:11.900 a tyranny of the minority
00:36:13.160 in various political
00:36:14.180 and cultural contexts
00:36:15.100 where a small group of people
00:36:16.580 get to impose
00:36:17.320 their worldview
00:36:17.900 on everybody else.
00:36:19.080 That seems to me
00:36:20.080 like that on steroids.
00:36:22.720 I mean, it could be.
00:36:24.780 I think for what it's worth,
00:36:27.840 the current AI companies
00:36:30.160 are trying to address that
00:36:31.520 by including
00:36:32.460 like various viewpoints
00:36:33.600 outside of the ones
00:36:34.440 that they agree with
00:36:35.880 in their testing
00:36:37.380 of the language models
00:36:39.080 that they designed.
00:36:41.160 And I think
00:36:41.840 in the case of
00:36:42.860 ChatGPT, for example,
00:36:43.880 like people put the questions
00:36:45.460 in front of it
00:36:45.960 that are like
00:36:46.420 to figure out
00:36:46.980 your political compass position
00:36:48.320 and it started out
00:36:49.980 fairly liberal left
00:36:51.600 but then over time
00:36:52.660 it actually moved
00:36:53.240 a little bit more
00:36:53.760 to the center
00:36:54.240 when people did it again
00:36:55.240 two, three months later.
00:36:57.940 Because I mean,
00:36:59.240 you could imagine
00:37:00.080 there being also
00:37:01.820 just various language models
00:37:04.220 that tell you
00:37:04.800 what you want to hear.
00:37:06.540 I'm not sure
00:37:07.360 everything needs
00:37:07.980 to optimize for truth.
00:37:09.060 Like for me personally,
00:37:09.860 that's the most interesting thing
00:37:10.880 but it's not the thing
00:37:11.640 that everyone seeks out
00:37:12.860 the most, right?
00:37:13.460 Like people read
00:37:14.180 news publications
00:37:16.260 from the far left
00:37:17.480 and the far right
00:37:18.260 not because they're
00:37:19.140 looking for truth,
00:37:19.860 they're looking for
00:37:20.500 something else.
00:37:21.880 And I think like
00:37:22.420 you could have language models
00:37:23.220 that do that as well.
00:37:24.780 But yeah,
00:37:25.560 will we then have one
00:37:26.820 that imposes
00:37:28.380 like their own ideology
00:37:30.020 the most?
00:37:32.020 I mean,
00:37:32.280 it certainly
00:37:32.700 could be used.
00:37:35.580 I think there is,
00:37:36.640 I think as long
00:37:37.420 like as the marketplace
00:37:38.580 of ideas
00:37:39.240 like it's kind of around,
00:37:40.300 it's going to be hard.
00:37:42.560 I'm not that worried
00:37:43.600 about say
00:37:46.300 OpenAI gaining
00:37:47.620 so much power
00:37:48.840 they now change
00:37:50.900 the entire like
00:37:52.080 landscape of things
00:37:53.080 that are being talked about.
00:37:55.380 I'm a bit more worried
00:37:56.660 about a lot of people
00:37:58.700 individually
00:37:59.260 like getting extra power
00:38:00.860 to edit Wikipedia
00:38:02.040 into like biased ways
00:38:04.200 or like making up
00:38:05.380 fake papers
00:38:06.140 that justify
00:38:07.380 their fake articles.
00:38:08.820 All of that I think
00:38:09.880 will unfortunately happen
00:38:10.860 and that will kind of
00:38:12.500 erode general trust
00:38:14.340 into the internet
00:38:16.100 potentially as well.
00:38:17.140 I don't know yet
00:38:17.540 what we're going to do
00:38:18.100 about that.
00:38:18.640 That seems pretty
00:38:19.260 pretty bad to me.
00:38:20.340 I mean already Google
00:38:21.060 has become a bit worse
00:38:22.060 at just,
00:38:23.280 I don't know if you've,
00:38:24.020 I imagine you've used it
00:38:25.560 recently,
00:38:26.540 but it's like
00:38:27.540 the answers that you get
00:38:28.840 yeah,
00:38:30.980 are just not as useful
00:38:32.460 I find anymore
00:38:33.220 as they were
00:38:33.780 like within
00:38:34.460 five years ago.
00:38:35.980 Really?
00:38:36.880 Yeah,
00:38:37.220 it's a lot
00:38:37.820 it seems like
00:38:38.920 Quora has become
00:38:40.040 very good at like
00:38:40.980 optimizing
00:38:43.060 their spot
00:38:44.360 in the search
00:38:45.220 so then like
00:38:46.760 you're often,
00:38:48.080 if you ask
00:38:48.520 a question like
00:38:49.680 that is something
00:38:51.300 like
00:38:51.700 what would happen
00:38:52.840 if a ball
00:38:54.320 was filled
00:38:55.060 with helium
00:38:55.680 like how long
00:38:57.480 would it take
00:38:58.040 like how much
00:38:58.660 would you need to fill
00:38:59.240 so that like
00:38:59.860 it could lift a human
00:39:00.640 or something like that
00:39:01.420 usually that type of stuff
00:39:02.660 is something that Quora
00:39:03.600 maybe had already
00:39:04.760 someone asked
00:39:05.780 and you get an answer
00:39:06.680 from there
00:39:07.060 but the quality of answers
00:39:08.160 there isn't that high
00:39:09.420 because Quora
00:39:09.960 is just not used enough
00:39:11.020 so,
00:39:13.200 and now
00:39:13.660 that actually
00:39:16.480 more and more
00:39:17.340 gets language model
00:39:20.060 answers within Quora
00:39:21.200 and then
00:39:21.700 those are being put
00:39:23.160 into Google
00:39:23.660 even as one of their
00:39:24.460 first answers
00:39:25.120 that they display
00:39:26.180 you know
00:39:26.740 where they don't just
00:39:27.360 show you the link
00:39:27.900 but the kind of top section
00:39:28.960 so the top section
00:39:30.080 has become lower quality
00:39:31.120 in my view
00:39:31.580 by quite a bit
00:39:32.700 that's very interesting
00:39:33.520 but coming back
00:39:34.340 to your political point
00:39:35.200 about it moving
00:39:35.860 to the center
00:39:36.460 the thing
00:39:36.840 like I'm somewhere
00:39:37.740 in the center
00:39:38.760 whatever that means
00:39:39.700 I mean
00:39:39.960 you and I
00:39:40.560 depends on who you ask
00:39:41.400 I imagine
00:39:41.840 right
00:39:42.420 and also
00:39:43.480 you and I
00:39:44.040 could both be
00:39:44.620 in the center
00:39:45.100 and have completely
00:39:45.960 polar opposite
00:39:47.380 viewpoints
00:39:48.220 on the same issues
00:39:49.040 right
00:39:49.580 some people
00:39:50.280 in the center
00:39:50.780 because they're
00:39:51.360 to the right
00:39:51.820 on this issue
00:39:52.440 and to the left
00:39:52.900 on that issue
00:39:53.480 and you could be
00:39:54.340 to the right
00:39:55.120 on this issue
00:39:55.640 and to the left
00:39:56.100 on this issue
00:39:56.580 you could swap
00:39:57.100 right
00:39:57.420 in fact if you
00:39:58.500 didn't have that
00:39:59.080 you wouldn't be a
00:39:59.700 independent thinker
00:40:01.020 correct
00:40:01.800 and on top of that
00:40:04.180 some people might argue
00:40:06.260 that the center
00:40:06.840 isn't the most
00:40:07.480 representative point of view
00:40:08.640 because that's where
00:40:10.080 the fewest people are
00:40:11.220 most people are
00:40:12.020 in one of the two
00:40:12.840 big tribes
00:40:13.640 and if you're on the right
00:40:14.820 you go
00:40:15.280 you know
00:40:15.980 this is the truth
00:40:17.180 and all these people
00:40:18.020 are idiots
00:40:18.380 and if you're on the left
00:40:19.080 you go
00:40:19.360 this is the truth
00:40:20.100 and all these people
00:40:20.580 are idiots
00:40:20.880 and both tribes
00:40:22.540 look at the people
00:40:23.140 in the middle
00:40:23.580 and go
00:40:24.020 these are the real idiots
00:40:25.100 because they don't agree
00:40:27.140 with either of us
00:40:27.840 or they're spineless
00:40:28.760 or they're spineless
00:40:29.640 or whatever
00:40:30.040 so I guess
00:40:31.600 what I'm saying is
00:40:32.440 it brings up
00:40:34.160 the very philosophical
00:40:35.620 notion of truth
00:40:36.920 in and of itself
00:40:38.020 right
00:40:38.620 when you're
00:40:39.500 when you're giving
00:40:41.600 these language models
00:40:43.900 or these AI
00:40:44.780 systems
00:40:45.860 the power
00:40:47.160 to decide
00:40:48.000 what the truth is
00:40:49.160 they are inevitably
00:40:51.500 going to dissatisfy
00:40:53.040 probably the majority
00:40:54.220 of the population
00:40:54.940 yeah
00:40:56.080 I mean
00:40:56.480 what
00:40:56.800 philosophers
00:40:58.420 have been talking
00:40:59.000 about truth
00:40:59.460 a long time
00:41:00.200 I think some
00:41:02.680 also come to
00:41:03.460 the notion
00:41:04.020 that saying
00:41:04.860 that something
00:41:05.280 is true
00:41:05.760 is just
00:41:07.000 meaningless
00:41:07.460 you can just
00:41:07.900 throw it out
00:41:08.420 it's like
00:41:09.080 it either is
00:41:09.720 or it isn't
00:41:10.700 and adding
00:41:11.880 this is true
00:41:12.440 doesn't actually
00:41:13.040 add that much
00:41:13.660 information context
00:41:14.480 because why are you
00:41:15.100 saying it in the first place
00:41:15.960 is part of it
00:41:17.300 but
00:41:17.520 so
00:41:19.160 yeah
00:41:19.920 like
00:41:20.220 if we look at
00:41:21.600 facts
00:41:22.460 like actual events
00:41:23.400 that happened
00:41:23.900 then we can
00:41:24.440 talk about truth
00:41:25.340 I suppose
00:41:25.840 right
00:41:26.120 like
00:41:26.360 there
00:41:26.780 it seems like
00:41:29.140 I would assume
00:41:30.380 that many language models
00:41:31.260 will just
00:41:32.100 carry the verifiable
00:41:33.460 facts of the past
00:41:34.700 at least I hope so
00:41:35.820 but then
00:41:36.860 what else are you
00:41:37.980 pointing that could be
00:41:38.840 true
00:41:39.280 about
00:41:40.460 like
00:41:40.820 not about
00:41:41.240 facts of the past
00:41:42.440 but
00:41:42.880 well we don't even
00:41:45.140 agree about the past
00:41:46.080 yeah
00:41:46.500 like I was having
00:41:47.580 a conversation
00:41:48.160 with somebody
00:41:48.680 yesterday
00:41:49.120 about McCarthyism
00:41:50.120 and the conventional
00:41:51.260 narrative
00:41:51.720 on McCarthyism
00:41:52.920 is according to
00:41:54.460 this person
00:41:54.980 and the things
00:41:55.520 that they quoted
00:41:56.160 and the books
00:41:56.600 that they've read
00:41:57.220 complete nonsense
00:41:58.620 well so then
00:41:59.620 there are like
00:42:00.140 yeah
00:42:00.380 interpretations
00:42:01.180 of the events
00:42:02.320 are often like
00:42:03.040 very
00:42:03.300 a lot of disagreement
00:42:04.520 and between historians
00:42:05.500 but
00:42:05.900 something like
00:42:06.920 this happened
00:42:07.560 on that day
00:42:08.360 at least
00:42:09.180 often we can agree
00:42:10.240 on that
00:42:10.600 sometimes we don't
00:42:11.240 even agree on that
00:42:12.260 like
00:42:13.020 with JFK
00:42:14.300 for example
00:42:14.840 the shooting
00:42:15.360 it's like
00:42:15.780 we don't agree on
00:42:17.000 what
00:42:18.040 why it happened
00:42:18.940 but at least
00:42:19.460 people agree
00:42:20.000 that there was
00:42:20.660 a bullet
00:42:20.940 that flew
00:42:21.340 from there
00:42:21.860 that ended
00:42:22.700 in him here
00:42:23.640 well actually
00:42:24.940 not even
00:42:25.500 no you're right
00:42:27.640 not even fully
00:42:28.520 there are
00:42:28.840 multiple theories
00:42:29.660 this is what I'm saying
00:42:30.760 So then I suppose in those cases, what do you want to do? Giving the most agreed-upon notion plus the counterpoints that some people are making would be interesting.
00:42:44.740 But a hundred years ago, the most agreed-upon viewpoint would have been that black people are inferior to white people.
00:42:51.180 Yeah, but that wasn't an event, in the sense we just related to before. I think moral truth is a totally separate conversation, like whether we should give animals the same rights as humans right now.
00:43:10.160 That's not what I'm saying. I'm talking about the scientific consensus being that some races are inferior to others. So if we had had these language models then, you would have had a language model that advanced that as the scientific truth.
00:43:22.140 Yeah, I'm not sure, because it may be the case that those things are conditional on each other: you only develop language models at a time when your science is good enough that you've thrown out a bunch of bullshit along the way.
00:43:38.120 In the sense that, at the time, we were able to make a lot of non-scientific claims about the world because we didn't even understand how science really worked. We had less understanding of physics, of medicine, etc., so you could just make these claims, and we had worse methods in terms of the randomized controlled trials you could run in medicine or in psychological experiments, even though some of those are partially being thrown out now as well.
00:44:06.680 I feel like you don't get... well, what would be an example of something you could get away with saying right now that is accepted as truth? Something like "children should go to school, it's good for them."
00:44:19.880 Well, even that I don't think is in the same category of truth as "on the 6th of January some people entered the Capitol building." They're of a different flavor, and I feel like science has gotten good enough that we can distinguish between these types of truth, the ones that are more like statements of opinion or moral truth.
00:44:42.700 But the problem is, after the pandemic our faith in the science has decreased. There are a lot of people now who have lost faith in scientists and vaccines, because what they were told turned out to be untrue.
00:44:56.100 And that's fair enough, because a lot of it had been captured: people not stating the current understanding at the time, but rather just saying the political line that they were made to say.
00:45:13.720 So if we look at vaccines, for example, what is the truth there? The truth that is probably universally agreed is that mRNA was used to develop these vaccines, that these vaccines are more than zero useful, and that there are some costs to a young person using a vaccine. Whether the risk-benefit comes out positive or not is a question of trials and looking at data, etc.
00:45:42.220 I feel like when people then say, "no, therefore everyone should use them," that's again a different category of truth. So I would hope that language models end up distinguishing between consensus viewpoints, which are not truth, and hard, ground-based truth that is verifiable by looking at what happened in the world. Because, yeah, at the time...
00:46:07.720 But the problem as well, Igor, is that we're talking about things that are verifiable, that we know to be true, and that are backed up by science. And not to return to the trans debate, because it seems that's where we go all the time, but you now have certain scientists saying, well, you know, sex is a spectrum, and you go, what? So it now seems that consensus is affecting things that we have all known to be true.
00:46:29.540 Yeah, that is an interesting question. I think there is probably a pretty important line between what consensus is and what is hard truth. Consensus obviously shifts continuously, and it will shift yet again in the future.
00:46:48.520 That's why it's so hubristic of people when they want to enshrine some of the values that are currently held and reduce speech, or stop allowing a conversation to happen about some of these things, because guess what, the consensus viewpoint might still have to shift.
00:47:11.740 But yeah, I think that, unfortunately, the way we currently design language models is much more about describing consensus viewpoints as true, and that could be pretty bad, I agree.
00:47:26.980 You would then have these responses like the ones ChatGPT, for example, gave when you checked it on "woman": it probably would have said something else five years ago, and it might say something else again in five years. If each time it just pretends that that is the hard truth, that is problematic in a different way than if you google something and Google shows you a list of 15 links with different viewpoints, which gives you much more of a feel of "hey, this is an open discussion right now" rather than "here is the line."
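The separation Igor hopes for, consensus viewpoints kept visibly distinct from verifiable, ground-based facts, can be pictured as a response format that tags each claim before presenting it. The sketch below is purely illustrative: the class names, labels, and example claims are assumptions made for this example, not any existing model's actual output schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class ClaimKind(Enum):
    VERIFIABLE_FACT = "verifiable_fact"   # e.g. "this happened on that day"
    CONSENSUS_VIEW = "consensus_view"     # current majority interpretation, may shift
    CONTESTED = "contested"               # open disagreement, show several sides


@dataclass
class Claim:
    text: str
    kind: ClaimKind
    sources: List[str]


def render_answer(claims: List[Claim]) -> str:
    """Render an answer that keeps the three kinds of claim visibly separate,
    closer to a list of differing links than to a single 'here is the line' reply."""
    sections = {kind: [] for kind in ClaimKind}
    for claim in claims:
        sections[claim.kind].append(claim)
    lines = []
    for kind in ClaimKind:
        if not sections[kind]:
            continue
        lines.append(f"[{kind.value}]")
        for claim in sections[kind]:
            src = ", ".join(claim.sources) or "no source given"
            lines.append(f"  - {claim.text} ({src})")
    return "\n".join(lines)


if __name__ == "__main__":
    # Example claims echoing the JFK discussion earlier in the conversation.
    print(render_answer([
        Claim("A shooting took place in Dallas on 22 November 1963.",
              ClaimKind.VERIFIABLE_FACT, ["contemporaneous records"]),
        Claim("The lone-gunman account is the mainstream historical view.",
              ClaimKind.CONSENSUS_VIEW, ["Warren Commission report"]),
        Claim("Alternative theories about the shooting remain in circulation.",
              ClaimKind.CONTESTED, ["later books and documentaries"]),
    ]))
```

Labelling claims this way does not decide which interpretation is correct; it only makes visible, as the Google-links comparison suggests, where a question is still open rather than settled.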
00:48:07.520 Because the problem as well is that a lot of these companies... look, every industry is dominated one way or another by a certain type of people, because they have certain aptitudes and certain ways of thinking, and those types of people by and large tend to have similar political viewpoints. So the tech industry tends to be dominated and run by people who are, you know, what we'd call liberals, or slightly to the left, or whatever else, and therefore they're going to produce models which reflect them, because that's just how human beings are.
00:48:37.620 I mean, it would be surprising if there was zero bias in the models that they produce.
00:48:42.760 I would argue that it's impossible to produce zero bias.
00:48:46.560 I agree with that, but I think you can at least notice when someone is genuinely trying to reduce their bias. Even though they have these political viewpoints and could enshrine them in their model, they could still attempt not to do that, and you would see them attempting it.
00:49:03.720 So Anthropic, for example, wrote this proposal for how to make sure that language models act well, by creating this idea of constitutional AI, where they're basically creating a constitution of values that the model should adhere to.
00:49:23.720 And there you can just look at what the parts of their AI constitution are and say, hey, there is a political bias here, or not. And they're actually not that politically biased; they just chose pretty simple things, like the UN charter of human rights, etc.
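For readers who want a more concrete picture of the constitutional-AI idea Igor is describing, the loop below sketches its critique-and-revise step: draft an answer, critique it against each written principle, then revise. This is a minimal sketch, not Anthropic's actual pipeline (which uses such critiques to generate training data rather than to post-process every answer), and the principle strings and the stub model here are invented placeholders.

```python
from typing import Callable, List

# Invented example principles; Anthropic's published constitution draws on sources such as
# the UN Universal Declaration of Human Rights, but these exact strings are made up here.
CONSTITUTION: List[str] = [
    "Choose the response that is least likely to encourage unlawful or violent acts.",
    "Choose the response that most respects the rights and dignity of all people.",
    "Choose the response that is most honest about uncertainty and disagreement.",
]


def constitutional_revision(prompt: str,
                            model: Callable[[str], str],
                            principles: List[str] = CONSTITUTION) -> str:
    """Draft an answer, then critique and rewrite it once per written principle.
    `model` stands in for any text-completion function."""
    draft = model(prompt)
    for principle in principles:
        critique = model(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        draft = model(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it satisfies the principle."
        )
    return draft


if __name__ == "__main__":
    def stub_model(text: str) -> str:
        # Placeholder so the sketch runs without any external API; a real system
        # would call an actual language model here.
        return "(model output for: " + text.splitlines()[0][:60] + ")"

    print(constitutional_revision("Explain why people disagree about vaccine mandates.", stub_model))
```

The point Igor makes holds either way: because the principles are written down, anyone can read them and judge for themselves whether a political bias has been enshrined.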
00:49:44.380 But then again, the UN can also be captured, and I think the WHO was kind of captured, etc. You do get that...
00:49:55.700 I worry about that quite a bit, definitely, and I think that's why you kind of just have to keep free speech and information exchange.
00:50:03.420 No disagreement there at all, of course.
00:50:04.480 One of the other potential problems it brings up, and you know this well because you're bilingual or trilingual or whatever you are, is that to speak in a different language is to think in a different way. Most people don't know this, but when I speak Russian I'm a different person to the person I am when I speak English.
00:50:22.100 Does your voice also change, like the tone of the voice? It's kind of funny.
00:50:25.740 And how you think changes, and what you might say.
00:50:29.680 You get sad and invade other people's space.
00:50:33.800 Exactly, no respect for anyone.
00:50:38.200 But even on a more kind of depressing note, for example, I am much more comfortable...
00:50:43.780 There he goes, he goes depressive.
00:50:46.840 I'm going full Russian.
00:50:47.500 Full Russian, yes.
00:50:48.860 I don't know if you'd agree with me, but it's much easier to be homophobic when you're speaking Russian.
00:50:57.680 We moved out when I was four, so I didn't get to actually grow up in Russia, so my Russian is kind of kept to these very simple conversations.
00:51:06.760 You weren't...
00:51:08.700 I didn't...
00:51:10.580 But what I mean is, different cultures have different values.
00:51:13.200 Totally.
00:51:14.140 Right, and so as each big player on the global stage develops its own language model in its own language, at least to communicate with human beings in its own language, those are different perspectives on the world, because they reflect a different set of values, and that in itself may carry some bias.
00:51:39.300 Yeah.
00:51:39.580 Yeah, it's interesting. So actually, just a few days ago, a pretty big breakthrough came out from Anthropic on the question of how we look at a model: how do we look inside of it and understand what it consists of, what these neurons are actually for.
00:51:59.840 That's called the mechanistic interpretability problem in AI, or you could just think of it as digital neuroscience: trying to understand which neuron within the model is responsible for which outputs.
00:52:15.960 And they discovered that, well, it's very hard to look at a single neuron, but they can look at clusters, and then they can understand.
00:52:23.440 But one thing that also turned out is that each individual spot does a lot of things. So sometimes it's responsible for the identification of Paris, but the same neuron in another instance would write computer code, and in another instance it would be used for something multimodal, maybe helping to understand what a chair is, or to make it visual.
00:52:53.080 So what that implies, to me at least, and I wonder what other people think, is that it's the patterns it finds in the pursuit of compressing all of the data that you fed in: it compresses it down to a model, and for that it needs to identify patterns that are useful for this compression.
00:53:12.600 So the patterns that it finds are actually even more abstract than what we would do if we were just compressing language down and noticing some of these biases, because some of these neurons find patterns across language, science, code, and a lot of things.
00:53:28.680 So I wonder whether, due to that, if you use very varied data points you could reduce the point that you made, which I think is accurate, that each natural language carries some bias; if you combine it with real-world data and everything else, it probably reduces down a little bit.
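What Igor describes, individual neurons that are polysemantic, with meaning recovered from clusters or directions rather than from single units, is often studied by fitting a sparse dictionary (a sparse autoencoder) over a model's activations. The toy sketch below uses only synthetic vectors and made-up sizes and hyperparameters; it is meant to show the shape of the technique, not Anthropic's actual method or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": more underlying features (k) than dimensions (d), with only a
# few features active per sample, mimicking the superposition that makes single neurons
# hard to read. All of this data is made up for illustration.
d, k, n = 16, 24, 2048
true_dirs = rng.normal(size=(k, d))
true_dirs /= np.linalg.norm(true_dirs, axis=1, keepdims=True)
coeffs = np.zeros((n, k))
for i in range(n):
    active = rng.choice(k, size=3, replace=False)
    coeffs[i, active] = rng.uniform(0.5, 1.5, size=3)
X = coeffs @ true_dirs

# Sparse autoencoder: z = relu(W_e x + b_e), x_hat = W_d z + b_d, trained on mean squared
# reconstruction error plus an L1 penalty on z, by plain gradient descent.
m, lam, lr, steps = k, 3e-3, 0.05, 2000
W_e = rng.normal(scale=0.1, size=(m, d)); b_e = np.zeros(m)
W_d = rng.normal(scale=0.1, size=(d, m)); b_d = np.zeros(d)

for step in range(steps):
    pre = X @ W_e.T + b_e
    Z = np.maximum(pre, 0.0)
    X_hat = Z @ W_d.T + b_d
    err = X_hat - X

    dX_hat = 2.0 * err / n                      # grad of mean squared error
    dW_d = dX_hat.T @ Z
    db_d = dX_hat.sum(axis=0)
    dZ = dX_hat @ W_d + lam * (Z > 0) / n       # add subgradient of the L1 penalty
    dpre = dZ * (pre > 0)                       # ReLU derivative
    dW_e = dpre.T @ X
    db_e = dpre.sum(axis=0)

    for param, grad in ((W_e, dW_e), (b_e, db_e), (W_d, dW_d), (b_d, db_d)):
        param -= lr * grad

# Report how well the learned decoder directions line up with the generating directions.
learned = W_d.T / (np.linalg.norm(W_d.T, axis=1, keepdims=True) + 1e-9)
cos = np.abs(learned @ true_dirs.T)
print("mean reconstruction error:", float((err ** 2).mean()))
print("mean best-match cosine similarity:", float(cos.max(axis=1).mean()))
```

The design choice mirrors the conversation: rather than asking what one neuron means, the dictionary looks for sparsely-firing directions across many neurons, which is where the more interpretable, more abstract patterns tend to show up.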
00:53:46.940 Igor, moving on now: you used to work for Elon Musk. What is the purpose, what is he trying to do with his companies? Is it that he wants power? Is it that he wants money, even more money than he already has, as ridiculous as that may seem? Or is he more interested in helping humanity?
00:54:07.480 I think the thing that drives him most is the thing that he also says drives him most, which is getting humanity to Mars, for the reason that currently we only have Earth, and that's not a good state for humanity to be in over the long history of the universe.
00:54:31.040 If we only ever stay here, then in some hundreds of millions of years, there are different estimations for it, the Earth will burn up, and then we're done here. So we've kind of got to get somewhere else, or else our lifetime is much shorter, because the star era is going to go on for something like a thousand times longer than the time that we would have on Earth, in the sense of the new formation of stars, so there will be many other planets to go to.
00:54:55.480 Obviously it's very far out, but it's a very aspirational vision of the future that I, at least as someone who read a bunch of sci-fi, am very excited about, yeah.
00:55:05.700 So I think that's what drives him: this desire to go to Mars.
00:55:11.740 And does he think that this will happen in his lifetime?
00:55:15.820 I don't know if he... I mean, he's damn well going to try, and I think it's very possible that it will happen. We're getting closer, right? We might, I don't know, we might be ten years away, or maybe even five, from at least Starship getting to Mars.
00:55:32.360 So there's this craft called Starship. Tell us about that: what does it involve, is it unmanned, is it manned, what's the plan?
00:55:39.380 Yeah, so I think what they say is that it's unclear. We only get a close pass of Earth and Mars every, what is it, roughly two years, a bit over, I think, and those are the opportunities to get to Mars, basically.
00:55:53.340 And I think the first few times you would send unmanned Starships over, so that after maybe only one of those you would have at least something of a base already there, and we already have rovers there, right?
00:56:06.700 So you would do that, and then afterwards you can send humans out there, and then you can start building an actual base on Mars, which I think would just be insane to think about, that humans would be on another planet.
00:56:23.360 And then, starting from there, you would probably open up much more desire to go out and settle the stars, and that would come with completely new industries that would be developed, that would be much larger.
00:56:41.920 I don't know, the future that is described in all of those books I at least find extremely exciting.
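As a side note on the launch-window cadence mentioned above, the "roughly two years, a bit over" figure is the Earth-Mars synodic period, which can be checked directly from the two orbital periods; the values below are standard approximations.

```python
# Synodic period of two roughly circular orbits: 1 / |1/T1 - 1/T2|.
EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 687.0   # approximate Martian orbital period

synodic_days = 1.0 / abs(1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
print(f"Earth-Mars synodic period: {synodic_days:.0f} days "
      f"(~{synodic_days / EARTH_YEAR_DAYS:.2f} Earth years)")
# Comes out around 780 days, a little over two years, matching the cadence described above.
```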
00:56:46.960 What do you make of Twitter and the direction it's going, particularly since it was taken over?
00:56:57.140 So, I mean, I've started using it more. I don't know if that's good or bad, I can't really tell; it's probably bad in some ways. I started contributing more, which is probably a good thing, at least, rather than just reading.
00:57:09.980 I don't know. I previously definitely thought that, man, it would be great if a man like him was focused more on technologies that are hard to build in the physical world; that seems to be what he's so good at. But I have come to worry more and more about free speech suppression, so Twitter being open in that way is something I've started valuing more.
00:57:38.500 I don't know yet where the company is going; I think it's too early to tell from a pure business perspective. In any transitional period some things always go bad and some things go well. I wouldn't have made many of the same choices, but I also don't know technology as well as he does, so I think it's too early to judge, basically.
00:58:01.180 I do agree with...
00:58:08.500 ...that is extremely important, and I hope that we get there with it.
00:58:11.600 Yeah, absolutely. Well, listen, before we go to Locals for a few questions from our supporters, the question we always end on in the main part of the interviews: what's the one thing we're not talking about that we really should be, as a society?
00:58:24.480 Hmm... that we're not talking about... So, we fortunately already started talking about AI safety, but there are smaller issues that could be discussed.
00:58:42.320 I think it's awful that teenagers and children wake up for school so early and mess up their whole brain chemistry, which seems to be the case: it's not good for them to be up so early, and they sleep very little. The current school regime is just very bad; I think that could get a bit more attention than...
00:59:04.300 What else will be different... No, we're actually doing a decent amount now on things like animal welfare; there are definitely some ills that are pretty bad, yeah.
00:59:22.300 I'll stick with the school one.
00:59:23.580 School one is good, yeah.
00:59:24.980 All right, perfect. Well, thanks for coming on, and see you on Locals.
00:59:30.100 This is a question coming back to your poker days. Colby Hamilton says, what is some advice for hiding my tells?