This is Gavin Newsom - May 15, 2026


Can Artificial Intelligence Be Controlled? With Tristan Harris & Aza Raskin


Episode Stats


Length

1 hour and 47 minutes

Words per minute

199.15675

Word count

21,445

Sentence count

825


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
00:00:00.000 Fear of all of us losing has to become greater than the fear of me losing to you.
00:00:05.540 Now, China doesn't want the financial system to collapse.
00:00:07.820 It's the Let It Rip administration.
00:00:10.120 What to do about AI?
00:00:11.960 Am I going to lose my job?
00:00:13.760 What about safety, cybersecurity?
00:00:16.180 What about privacy?
00:00:17.900 Well, a new documentary is out answering all of those questions.
00:00:21.320 Promise, peril, truth, trust.
00:00:23.320 Should we be pessimistic?
00:00:24.640 Should we be optimistic?
00:00:26.300 It's called the AI doc.
00:00:27.920 And two of the principal participants in that documentary, Tristan Harris and Aza Raskin, are up next on This is Gavin Newsom.
00:00:44.160 This is Gavin Newsom.
00:00:46.660 And this is Tristan Harris and Aza Raskin.
00:00:51.460 This is an iHeart Podcast.
00:00:54.360 Guaranteed human.
00:00:55.340 Another podcast from some SNL late-night comedy guy.
00:00:59.460 Not quite.
00:01:00.400 On Humor Me with Robert Smigel and Friends,
00:01:02.520 me and hilarious guests from Bob Odenkirk to David Letterman
00:01:05.940 help make you funnier.
00:01:07.780 This week, my guests, SNL's Mikey Day and head writer Streeter Seidel,
00:01:11.660 help an acapella band with their between songs banter.
00:01:15.140 Where does your group perform?
00:01:16.420 We do some retirement homes.
00:01:18.040 Those people are starving for banter.
00:01:20.240 Listen to Humor Me with Robert Smigel and Friends
00:01:22.260 on the iHeartRadio app, Apple Podcasts,
00:01:24.780 or wherever you get your podcasts.
00:01:26.820 Life is full of hurdles, so how do you keep going?
00:01:30.200 On Hurdle with Emily Abbate,
00:01:31.580 we're talking with the most inspiring women in sports and wellness,
00:01:34.700 from professional athletes, coaches, and Olympic champions,
00:01:37.840 about the challenges that shape them
00:01:39.600 and the mindset that keeps them moving forward.
00:01:41.800 At our level, at this scale,
00:01:43.360 being able to fail in front of the entire world.
00:01:46.160 Like, I can do anything.
00:01:47.360 I can do anything.
00:01:48.820 Listen to Hurdle with Emily Abbate on the iHeartRadio app,
00:01:51.380 Apple Podcasts, or wherever you get your podcasts.
00:01:54.540 Presented by Capital One, founding partner of iHeart Women's Sports.
00:01:58.780 Imagine an Olympics where doping is not only legal, but encouraged.
00:02:02.920 It's the enhanced games.
00:02:04.620 Some call it grotesque.
00:02:06.060 Others say it's unleashing human potential.
00:02:08.640 Either way, the podcast Superhuman documented it all,
00:02:12.200 embedded in the games and with the athletes for a full year.
00:02:15.760 Within probably 10 days, I'd put on 10 pounds.
00:02:19.040 I was having trouble stopping the muscle growth.
00:02:21.420 Listen to Superhuman on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
00:02:51.420 and the stuff nobody gets to hear.
00:02:53.520 Listen to Sports Slice on the iHeartRadio app,
00:02:55.980 Apple Podcasts, or wherever you get your podcasts.
00:02:58.560 And for more, follow Timbo Slice Life 12
00:03:00.720 and the TikTok Podcast Network on TikTok.
00:03:03.300 All right.
00:03:04.020 So you guys, you're on the back end now of this world tour.
00:03:07.440 You guys have been all over the damn place.
00:03:09.260 Yeah.
00:03:09.800 And you launched it, and it's on the basis of a movie that you,
00:03:13.260 you didn't necessarily, it's not your movie.
00:03:15.640 Right.
00:03:15.920 But you participated in a movie.
00:03:17.480 That's right.
00:03:17.760 This new documentary that was released, what, a month ago?
00:03:20.800 Yeah, about a month and a half ago, the AI doc or How I Became an Apocalypse Optimist, our friends, the directors of Everything Everywhere All at Once, we chatted with them several years ago when we first really recognized the situation we were in and asked them for their help to create a movie to clarify the AI situation.
00:03:39.480 And long story short, we'll talk about it.
00:03:42.060 But we got the directors of Navalny involved and its whole team came together and tried to make a movie that would clarify the predicament that we're facing.
00:03:48.160 And Gavin, do you remember seeing the film The Day After?
00:03:50.960 Of course.
00:03:51.620 Well, I mean, maybe I was alive.
00:03:53.780 It was what?
00:03:54.280 It was the early 80s or something, right?
00:03:56.020 82 or 83.
00:03:57.100 82, 83.
00:03:58.160 Yeah.
00:03:58.520 It was like a 7 p.m. Tuesday night.
00:04:01.440 Exactly.
00:04:01.820 Every human being on planet Earth.
00:04:03.400 It's the largest synchronized television event in human history in terms of the number of people seeing the same thing at the same time, besides probably the Olympics.
00:04:12.700 It was a made-for-TV movie about what would happen if there was nuclear war.
00:04:16.460 right and it's not as if people didn't know what nuclear war was but there's something about we
00:04:21.940 didn't really want to think about it or confront it like why would you and i think the people who
00:04:25.840 made that film were trying to create this kind of collective confrontation because the film was
00:04:31.440 aired in the soviet union like five years uh no i think five years later right before right before
00:04:36.440 the Reykjavik accords uh the first arms control talks and that set the context that it's like i
00:04:41.880 know that you know that i know and you know that i know that you know that we both don't want that
00:04:45.760 to happen it's what you know what steven pinker calls common knowledge but we think of it as
00:04:49.500 almost common feeling yeah yeah that we both we both know that we feel the same way about that
00:04:55.340 anti-human outcome and i think that with ai we have to get clear because ai is a much more confusing
00:05:01.380 technology because it's like it's like if nukes as aza would say it's like if nukes carried imagine
00:05:05.740 we're trying to reason about nukes that also could cure cancer like how would you deal with
00:05:11.340 something like that or pump gdp by 10 that's right right and so what the day after did is that it
00:05:16.560 created this this common feeling common knowledge where we could all see oh a world where we all go
00:05:21.700 to nuclear war that like that that's an anti-human anti-life future and as long as there's confusion
00:05:26.740 about which default world ai ends us in we're not going to do anything about it um but if we all
00:05:32.120 have clarity that it's heading us towards an anti-human future and we can build that out in
00:05:35.740 this in this interview um then that means it creates the conditions where we can coordinate
00:05:40.320 to do something else right so so this so that that was the impetus behind this this documentary
00:05:45.440 it was to create that collective understanding that's right that wisdom that collective then
00:05:50.240 response mechanism which would be we need to do something about this you know the promise
00:05:55.080 that we could talk about but the peril in terms of the safety risks and the anxiety that so many
00:06:00.120 start feeling so you guys have been out on this tour you've been all over the place over the
00:06:04.240 course of last month um and you've been you know obviously you know reaching out to people on all
00:06:09.560 political sides of the aisle because of the human nature of this. This is universal. There's
00:06:15.480 something that connects all of us. This is not a partisan frame. And so is that something that's
00:06:22.280 been captured in your own consciousness as you've been out on this tour, how real and invisible that
00:06:29.600 is? Or are we still in the process of discovery? I mean, are we still in the process of understanding
00:06:35.000 more fully what the hell this is all about.
00:06:38.040 Well, I think people still, not everybody knows.
00:06:40.520 And the film was just out in theaters
00:06:41.980 and has a limited release on streaming now.
00:06:43.840 It should be on Peacock on, I think, May 20th,
00:06:47.140 which will mean a little bit more people will see it.
00:06:49.760 But I think that the universal human aspect of this,
00:06:53.180 we learned also from social media.
00:06:55.000 Like social media didn't care
00:06:56.320 whether you're a Democrat or Republican.
00:06:57.840 It creates loneliness for everyone.
00:06:59.520 It creates addiction and doom scrolling and brain rot
00:07:01.960 for everyone.
00:07:02.720 I know that's how we first met,
00:07:04.040 was really on the back of the other film
00:07:06.180 that we were a part of, The Social Dilemma.
00:07:08.660 And I think the good news about that film
00:07:11.780 is that it has both catalyzed actually a lot of change
00:07:14.240 and not the laws yet, we know that.
00:07:15.800 But I think it primed us to now
00:07:18.000 be much more cautious about AI.
00:07:19.780 So I think now that people know
00:07:21.660 that social media was a problem,
00:07:23.400 it makes it much easier to say
00:07:24.840 that you shouldn't just assume
00:07:26.000 that the default trajectory of a technology
00:07:28.000 is gonna land us in a good future.
00:07:31.040 But The Social Dilemma was seen
00:07:32.980 basically by 200 million people
00:07:35.280 across planet Earth
00:07:36.100 in 190 countries.
00:07:37.200 When was The Social Dilemma released?
00:07:38.620 2020, September of 2020.
00:07:40.240 September 2020,
00:07:41.040 during the middle of the pandemic.
00:07:42.120 And I think that also really mattered
00:07:44.000 because a lot of people
00:07:46.520 were stuck at home
00:07:47.160 and they were only seeing reality
00:07:48.960 through the binoculars
00:07:50.100 of this social media news feed.
00:07:52.380 So suddenly you saw your friends
00:07:53.600 all start to go crazy on both sides.
00:07:55.920 Like suddenly the politics
00:07:56.820 got more extreme
00:07:57.660 because as everybody knows now,
00:07:59.520 you know, social media increases
00:08:01.480 the visibility of the most extreme voices.
00:08:03.840 We get a double whammy of over-representation
00:08:06.740 of the extreme voices,
00:08:08.120 both because the extreme voices post more often
00:08:11.060 and dominate the discourse.
00:08:12.380 And whatever they say goes more viral.
00:08:14.580 So you get double over-representation.
00:08:16.960 So the more you use social media,
00:08:19.100 the worse you are at predicting
00:08:20.560 what the other side actually believes.
00:08:22.880 And so you think that with this technology
00:08:24.280 that's supposed to bring us together
00:08:25.360 and make us the most enlightened society,
00:08:27.180 it's actually making us more confused
00:08:29.480 about what's really real.
00:08:30.340 in fact what our fellow americans actually believe and i think with your podcast this is about
00:08:34.660 actually you're talking to everybody you're trying to say this is a human conversation we talk to
00:08:38.260 everybody and this is kind of fighting the effects i think of the social media problem so you were i
00:08:42.700 mean in 2020 what was so resonant is you it was almost and it wasn't your intent to say look i
00:08:47.980 told you so but you were talking about these things in 2012 2013 that's right yes uh and saying
00:08:53.680 the incentive structure and i'm going to get to incentives because it goes to the core of this
00:08:57.500 documentary. This notion of, you know, whatever someone's paycheck is attached to, whatever the
00:09:03.980 incentive is, which is obviously with social media, was eyeballs, was doom scrolling, which
00:09:09.900 you're intimately familiar with for different reasons. And that notion that we have a chance
00:09:15.300 now with AI not to make the mistakes we made, the neglect in particular, and under-regulating
00:09:22.620 social media now with AI. That said, it doesn't seem like there's a lot of regulatory activity
00:09:29.920 with AI in the last couple of years. I mean, so talk to me a little bit about that. Talk to me
00:09:35.160 about the lessons that we should have learned about social media and how we can adapt and adopt
00:09:39.480 in the AI frame. Well, really, I think the core of it is if we just get confused by
00:09:46.600 looking at all of the sort of epiphenomena, like the different kinds of harms that social media
00:09:51.120 made like if you're trying to solve just the uh the loneliness thing or you're just working on
00:09:57.180 solving the disinformation thing or just the teen sexualization thing well then many different people
00:10:02.840 are working at different parts of the problem you're not solving the core thing um which is
00:10:06.780 the race to the bottom of the brainstem for attention and if we could just focus our attention
00:10:11.340 on that then you actually can solve all the other problems at once and that's the sort of the core
00:10:15.800 insight um we should probably explain that what would it mean to do that so if you're solving the
00:10:21.100 core problem of let's say none of the companies are maximizing engagement just it's we don't live
00:10:25.940 in that world we don't have the regulation for that but let's snap our fingers and now no one
00:10:29.080 is maximizing screen time you're like let's just say it wasn't allowed to do that so now instead
00:10:33.760 of each company trying to maximize duration of use and frequency of use you just have these products
00:10:38.580 that each design decision isn't trying to predate or manipulate you into spending more time which
00:10:43.540 means that your experience is you're not getting sucked in constantly to everything so that deals
00:10:47.840 with the loneliness issue um if you're not trying to show people the most hyper normal stimuli
00:10:53.220 meaning like a like a hyper dopamine response for any piece of content you're not going to get the
00:10:57.360 sexualization of people and you're not incentivizing creators to maximize their own reach and engagement
00:11:02.960 because you're not maximizing engagement yourself so suddenly when you attack the attention incentive
00:11:07.620 you're dealing with sexualization of content you're dealing with less viral content so you
00:11:11.420 get less disinformation and you're dealing with loneliness too so that's a good example of we
00:11:16.360 obviously don't have laws that do that right now but it points at the center of the bullseye is
00:11:20.640 the incentive and when you tackle the core incentive you get benefits across the the
00:11:25.040 spectrum um and ai is going to be more confusing because when people think about the incentive
00:11:29.980 they're like okay so social media i got the incentive it's like how much have you paid for
00:11:33.780 your instagram account recently nothing so how are they worth the trillions of dollars it was
00:11:38.540 attention so they we knew they were maximizing that thing but with ai you say okay what's the
00:11:42.800 incentive for a regular person they think okay how does open ai make money and they say well
00:11:47.180 only when i pay them the 20 bucks a month for subscription so maybe that's their incentive
00:11:51.040 they're just trying to maximize these subscriptions but that doesn't justify if everybody paid 20
00:11:55.560 bucks a month that does not pay back the trillion dollars of capex that they've taken on so what
00:12:00.660 would justify that well if they were to race to augment workers and like give you tools to like
00:12:05.240 make your work more productive great that's great but that wouldn't pay back the amount of money
00:12:10.080 That's right. So really the only thing that can get them to be able to pay back the
00:12:14.160 insane amounts of debt that they're taking on is owning the human labor market. That is the
00:12:19.580 incentive. The race to replace all human labor. That's right. To replace us first economically.
00:12:26.180 And is that, I mean, so is that written or is that unwritten? Is that understood within the
00:12:31.360 industry or is it being, I mean, is this what the consciousness you're trying to raise?
00:12:36.920 It's kind of a two-faced thing.
00:12:38.120 It actually used to be on OpenAI's website that they said our mission is basically to
00:12:41.560 create artificial general intelligence, which means to be able to replace all economically
00:12:46.160 valuable work.
00:12:46.840 They did change the mission statement, yeah.
00:12:49.080 But obviously, everybody who's driving this knows for sure that's the prize.
00:12:53.420 And if they don't do it, they fear that the other guy will.
00:12:56.260 So even if they think it's bad to replace all labor and create this mass disruption,
00:13:00.020 they feel caught in a race.
00:13:01.880 And that's the thing we have to change, is that the fear of me losing to the other guys
00:13:06.100 is currently dominating over the fear of what happens to everybody losing from the anti-human future
00:13:11.580 that is the outcome so you actually sort of see this with Demis Hassabis when he co-founded google
00:13:15.760 deep mind he set the mission statement to first solve intelligence and then use intelligence to
00:13:20.840 solve everything else but what that really meant was the beginning of a race to first dominate
00:13:25.600 intelligence and then use intelligence to dominate everything else and deep mind is the origin story
00:13:30.860 yeah that's right in many respects go back a little bit i mean google's deep mind they acquired
00:13:35.340 deep mind that's right they acquired that's right some of the the intellectual assets but it was
00:13:40.840 really i mean larry page you can go back to the you know sort of origin of the beginning of the
00:13:45.340 beginning well what years are we talking about like 2014 is when i think they acquired in 2014
00:13:50.080 i think they started it in 2012 or 2011 or something like that and then and it goes to
00:13:54.600 this competition question which i want to get to because it's the domestic competition between
00:13:57.960 all of these companies and then we get to the competition between china in particular particularly
00:14:02.840 now with the president, President Xi and President Trump about to meet. But on the issue of the
00:14:09.180 competition that was born here around 2014 with DeepMind, it was an interesting competition that
00:14:15.620 sort of formed with Elon Musk in a relationship he had, close relationship at the time. I intimately
00:14:20.680 was familiar with that, with himself and Larry Page. But they had a conversation, alleged
00:14:26.560 conversation that didn't go the way that Elon thought it should. And Elon said, I'm going to
00:14:33.240 go out on my own with Sam Altman, ultimately. He found Sam, they partnered, and they created
00:14:40.100 OpenAI. That's right. We probably should peel back the onion here and slow down just how did
00:14:44.600 we get to this point? And what was the original philosophy that guided this? So Demis Hassabis,
00:14:49.280 who's the founder of DeepMind, his original goal is we thought we should have one project
00:14:55.320 that pursues artificial general intelligence,
00:14:58.100 meaning the kind of AI,
00:14:59.540 it's not the thing that just reads your license plate
00:15:01.120 when you drive through the Golden Gate Bridge.
00:15:02.620 We've had AI forever.
00:15:03.540 We've had AI forever.
00:15:04.760 Maps, it's translate, all these things.
00:15:06.640 Exactly, exactly.
00:15:07.620 And AI, so AI is hardly novel.
00:15:10.140 Exactly.
00:15:10.680 It's this notion of gen AI, ultimately AGI.
00:15:13.340 Exactly, so artificial general intelligence,
00:15:15.280 which is to be able to do all economic labor
00:15:17.140 to simulate all the things that a human mind can do
00:15:19.600 and the kind of thinking.
00:15:21.040 And so he originally wanted that to be like one project,
00:15:24.220 almost like a CERN.
00:15:25.320 you know, the project in Switzerland of a global scientific project that's for the benefit of all
00:15:29.760 of humanity done slowly and carefully, mostly privately, not in a big public way. Take your
00:15:35.720 time, get it right. That was, that was a Demis's original goal. He sold it to Google. And the
00:15:41.700 conversation you're talking about is then Elon was part of that. I think he was on the plane when
00:15:45.000 they were literally negotiating the final sale. And there was some conversation it's talked about
00:15:49.520 in the AI doc film where, uh, Elon realizes that Larry didn't really care whether humanity
00:15:56.520 made it, um, whether he cared about AI safety, because in the end, if there's a digital
00:16:01.060 intelligence that's smarter than us, that does more science that can go out and explore the
00:16:05.060 universe, even if we're wiped out, like we'll have created that. And that scared Elon.
00:16:10.760 And, um, Larry accused Elon
00:16:17.260 of being a speciesist for caring about humans and privileging humans.
00:16:21.420 And then that is what created OpenAI.
00:16:24.620 It's just important to note, to understand the psychology of the people that are making
00:16:28.240 this, that you might think, okay, this is just one very powerful billionaire that thinks
00:16:33.580 that maybe human beings shouldn't make it.
00:16:36.040 But he doesn't care whether human beings make it or not.
00:16:40.940 But, you know, we were just talking about this the other day that in the New York Times,
00:16:46.660 Peter Thiel was being interviewed and he was asked, should humanity endure? And there was 17
00:16:52.860 seconds of him stuttering. And he ended up with a, after 17 seconds of a, well, it sort of depends
00:16:59.680 kind of answer. And that shows you the mentality of like, we're trying to, they're trying to build
00:17:05.820 a God. And even if humanity doesn't make it, that God is built in like our country's values,
00:17:11.280 our language. It's sort of like
00:17:13.440 my progeny, our
00:17:15.020 founder's DNA is the thing that makes it. It's like Elon
00:17:17.060 birthed the god that yes, humanity
00:17:19.200 got wiped out, but now there's this digital
00:17:21.340 god that has Elon's DNA in it.
00:17:23.520 And it's important that everyone understand that
00:17:25.240 because if people really understood this,
00:17:27.420 I think there'd be a lot more like
00:17:28.980 hell no kind of energy. This is not
00:17:31.320 the future. Well, I will say the Peter Thiel interview
00:17:33.420 got so much attention. It was like the
00:17:35.380 I mean, the holy shit.
00:17:37.220 I mean, the wake up moment for a lot of people
00:17:38.880 that didn't necessarily have that switch or understanding.
00:17:42.300 That's right.
00:17:42.540 And all of a sudden paused it, particularly the folks.
00:17:44.860 And I think what's most alarming about that,
00:17:47.200 I mean, obviously Larry's next level, brilliant.
00:17:49.620 Absolutely.
00:17:49.980 And so his ability to see in the future,
00:17:51.420 he doesn't have to climb over the mountain.
00:17:52.520 He sees right through it.
00:17:53.660 But guys like Thiel as well, love or hate him, same thing.
00:17:56.720 So these guys are so far in the future,
00:17:58.320 they're seeing that darker side.
00:18:00.820 And so they're having a difficult time
00:18:02.140 even answering a simple question.
00:18:03.700 That's right.
00:18:03.940 That's the 17 seconds.
00:18:05.120 So let's go back as we unpack that
00:18:06.960 and this notion of the God complex
00:18:08.340 will continue to come back.
00:18:09.780 And I think it's profound and outsized
00:18:11.180 because it goes to the limited nature
00:18:12.800 of just a handful of people.
00:18:15.040 That's right.
00:18:15.440 These trillionaires that will determine
00:18:16.700 the fate and future
00:18:17.380 of billions and billions of people.
00:18:18.840 That's exactly right.
00:18:19.560 And how we can get our arms around that.
00:18:21.380 And I want to really get to that,
00:18:23.000 how we can get our arms around this.
00:18:24.760 We have agency.
00:18:25.620 That's right.
00:18:25.920 And that's why you did this, Doc.
00:18:27.200 And that's why we're sitting here together.
00:18:28.420 That's right.
00:18:28.860 Because I don't want people to feel like
00:18:30.460 we're just bystanders.
00:18:32.300 And we're not admiring the problem.
00:18:33.440 We always say in our work,
00:18:35.180 and Aza says this,
00:18:35.840 that clarity creates agency.
00:18:37.560 if we can see this clearly and we can see where we're going we can collectively say if we want to
00:18:41.880 go somewhere else we'll choose something different so we'll get to that so so elon goes out uh with
00:18:46.300 sam starts open ai obviously they have an infamous fight and they're notoriously now i mean not
00:18:51.660 notoriously but the fight is obviously accelerated consciousness because now we're seeing it 24 7
00:18:57.340 with the trial court case yep with a court case in the bay area here um with the two of them so
00:19:02.340 they had a falling out elon goes off and does his own thing uh because he doesn't doesn't feel like
00:19:06.880 Sam's doing the right thing. Dario, who starts another AI company, feels like, well, OpenAI is
00:19:12.000 not doing the right thing either. That's right. So he spins off. Anthropic. Anthropic. And so now
00:19:18.360 you've got- Three AGI projects. Three AGI projects all in our backyard, literally here in California
00:19:24.460 in the Bay Area. And so there's competition of sorts. That's a competition you just described.
00:19:30.540 Yeah. And it's a competition for the Holy Grail. That's right. It's Lord of the Rings. It's the
00:19:35.600 ring from Lord of the Rings because it's essentially, as Aza was saying, first dominate
00:19:40.540 intelligence, then use intelligence to dominate everything else. Because if I get AGI first,
00:19:46.080 I hit copy paste and I have a hundred million cyber hackers that you don't have.
00:19:50.260 Right. And this is within seconds.
00:19:51.820 This is within seconds?
00:19:52.540 Yeah. This is not over a course of months or years.
00:19:54.740 If I get AGI first, I have an army of scientific companies that automate all scientific development.
00:19:59.920 So suddenly I'm getting like 24th century science and technology that I own and run that I can inform new military weapons and new physics.
00:20:09.360 And the problem is notice that none of us can prove that they won't get that.
00:20:15.060 We can't say for sure that they would get that.
00:20:17.220 But the people who are optimistic and accelerationist about AI, I just want to like bring in their perspective for a moment because they're represented in the AI doc film.
00:20:24.160 The film includes the risk folks who are oriented about safety and it includes the accelerationist.
00:20:29.720 And the accelerationists say the biggest risk is not going fast enough because imagine all the science we could get, all the cancer drugs, all the medicine.
00:20:37.180 People could be living forever.
00:20:38.540 Think of all of the people who would die if we don't go faster.
00:20:41.900 And that's the mentality that they're coming from.
00:20:44.920 But one of the things that we talk about in the film is that the promise and the peril of AI, we talk about the promise and the peril, but they're interlinked.
00:20:53.000 And the promise doesn't prevent the peril, but the peril can undermine the world that can receive the promise.
00:20:58.640 Let me make that concrete.
00:20:59.720 If AI knows biology so well that it can invent a new cancer drug, it's amazing.
00:21:06.860 But if that same knowledge of knowing biology can also invent new pathogens.
00:21:11.120 And which one matters more?
00:21:12.440 The cancer drugs don't prevent the pathogens, but the pathogens can undermine the world that can receive the cancer drugs.
00:21:17.500 Same thing with cyber.
00:21:19.140 And so we have to, you don't get that enticing world if we don't mitigate the downsides.
00:21:25.140 So you've got the players right now.
00:21:27.040 We mentioned three in the personalities, not just the companies themselves.
00:21:31.020 But Microsoft, you've got Meta and Zuckerberg.
00:21:34.360 You've got others that are in this space, but not necessarily at that level.
00:21:39.780 You've got China, which obviously is playing an outsized role in all of this.
00:21:45.520 The DeepSeek especially, yeah.
00:21:46.980 Exactly.
00:21:47.580 And that's this notion, this tension between going back to sort of more the utopian framework of available to the world versus these closed systems.
00:21:57.040 an open system meta starts with an open system originally china seems to be in the open source
00:22:04.960 space talk to us a little bit about that for people that don't necessarily understand
00:22:08.720 that dynamic and that distinction you mean between things that are open yeah open source
00:22:14.480 versus proprietary technologies yeah well so for people that don't know open source means that the
00:22:19.360 code that underlies the system anyone can edit anyone can access and anyone can contribute to
00:22:25.360 And that's often meant that systems that are open are more secure because there are many eyes working on it, many hands that are working on it.
00:22:31.280 And looking at all the codes, they can see all the bugs.
00:22:33.400 But that's not true about AI.
00:22:34.840 That's not true in AI.
00:22:37.880 Because here, the code, do you want to take it from here?
00:22:41.920 Sure.
00:22:43.780 What's different about AI, it's important to me to establish that AI is different than all other technologies.
00:22:49.320 So think about all the tech that runs California, the energy grid, the water system.
00:22:55.360 It's people had to program line by line, when this happens, I want the code to do this.
00:22:59.820 And you're telling the computer, instruction, instruction, instruction, instruction.
00:23:02.960 The open source-ness means that all these minds can look at that code.
00:23:06.100 So if there's a vulnerability, we can patch it together.
00:23:08.540 So the software gets better and more secure.
00:23:10.760 But with AI, let's say the AI is running the, you know, electricity system.
00:23:17.040 It's a digital brain that's just trained on and reasoning in its own language about what it wants to do.
00:23:22.420 and you're not telling it what to do in instructions.
00:23:26.940 You're growing it with essentially more data
00:23:29.420 and more NVIDIA chips
00:23:30.560 to be a more and more powerful digital brain
00:23:32.520 that reasons in ways that are unpredictable.
00:23:34.560 So it's not something that we know how to control.
00:23:37.660 There's a very important intuition here,
00:23:39.440 which is normally you think
00:23:40.380 if you want to build a bigger skyscraper
00:23:41.820 or a faster fighter jet,
00:23:43.920 to do that, you have to understand buildings better
00:23:47.000 and understand fighter jets and aerodynamics better.
00:23:49.360 This is three.
00:23:49.800 But that's not true for building bigger AI systems. You don't actually have to know anything
00:23:54.700 more about how this digital brain works. You just throw more data and more computers at the problem
00:23:59.260 and a bigger brain grows. And so that means the bigger it grows, actually the less we understand
00:24:04.080 about how it works. And a concrete example of this that people have heard about recently
00:24:07.740 is Claude Mythos. So Claude Mythos is the new AI model from Anthropic that they actually didn't
00:24:13.980 want to release because it's the best cyber hacker on earth that we've ever had. It found
00:24:18.740 vulnerabilities in all major operating systems and web browsers. Now the question is, how do we get
00:24:23.900 to Claude Mythos? Was there some kind of breakthrough insight or did they have to figure out something
00:24:27.900 new about computer hacking? No. All they did is basically train a bigger digital brain that has
00:24:34.380 more reinforcement learning that's even better at exploiting and reading software and trying
00:24:38.040 more possibilities. And it just finds things that no human would have ever found. It found a bug in FreeBSD
00:24:44.420 Unix, which is the operating system that's 27 years old, that runs basically on everything
00:24:50.080 underneath the hood. And it was able to find a bug that had never been found by a human.
00:24:54.680 And so we have to think of AI as sort of increasing the surface area of risk in our society
00:25:00.460 faster than we have the defenses to mitigate it. So part of I think the answer, like as we get to
00:25:05.880 the solution part later, is thinking about how do you have the immune system of your society
00:25:10.800 have more defenses than there are new offensive risks
00:25:14.620 that are suddenly present from AI.
00:25:16.220 And so this means even when people can read every line of code
00:25:19.080 for the thing that grows the brain,
00:25:21.260 we still have no idea what they're actually capable of.
00:25:23.580 It's just a bunch of numbers.
00:25:24.480 It's like if I did a brain scan on your brain, Gavin,
00:25:26.220 and I showed that, you know, FMRI to someone and said,
00:25:30.320 here's this brain scan.
00:25:32.140 Can it be a super cyber hacker?
00:25:34.440 You're like, well, I can't tell that from a brain scan, right?
00:25:37.400 And that's kind of with AI.
00:25:38.660 It's like we don't know what's in there
00:25:40.220 because we haven't ever seen it run through
00:25:42.160 every possible scenario of its own neurons
00:25:44.120 that have been trained in this inscrutable way.
00:25:46.120 So you don't see that.
00:25:46.820 I mean, the Llama versus DeepSeek
00:25:49.060 and this notion of these open source models,
00:25:50.520 it's kind of a meaningless distinction
00:25:52.820 from your perspective in the context of this larger picture.
00:25:54.480 Between Llama and DeepSeek.
00:25:55.560 Yeah, they're both open models,
00:25:57.280 which means they're both these open brains.
00:25:59.300 And the important thing about the openness
00:26:01.000 is that most people don't, excuse me,
00:26:05.540 most people don't know that,
00:26:07.240 let's say Llama or DeepSeek put guardrails on it
00:26:09.920 in the model saying oh you're not supposed to answer questions about how to cyber hack something
00:26:14.320 well it turns out for about was it $30 Jeffrey a friend of ours was able to retrain the
00:26:21.800 open model to just get rid of all of those guardrails just eliminate yeah and that's that's
00:26:25.960 why open is dangerous and again it's dangerous in a new way that's different from closed so it's not
00:26:31.300 that we don't want there to be open models or competition from the major players we also need
00:26:35.280 to avoid the concentration of power because if you don't have these competitive things suddenly
00:26:39.500 you have like five companies that own the world economy. Everyone's paying them instead of paying
00:26:44.080 their workers. And that's a huge risk. And we want to decentralize that wealth and have other
00:26:48.540 competition. But you have this other balance of if I decentralize that power and I don't have it
00:26:53.380 connected or bound to responsibility, I'm unleashing catastrophes. And that responsibility
00:26:58.320 was exemplified by Dario pulling back Mythos in this context. That's exactly right. But shortly after
00:27:04.780 he does that. And we were with him a few weeks ago. He said, look, I'm only about a month ahead,
00:27:09.620 if that. Then OpenAI came out with their version. That's right.
00:27:13.840 Shortly thereafter. And I believe they're not holding it back.
00:27:16.300 And they're not holding it back. That's exactly right.
00:27:18.700 So begs the question. Well, we're going to get to this sort of regulatory framework,
00:27:23.160 but it's just sort of painting the picture of a deeper understanding. So look, as it relates to
00:27:27.440 your whole focus is on this notion of human centered. That's right.
00:27:31.900 This notion that, and it's been your dominant frame with the nonprofit you guys started years and years ago around social media, now is the dominant thrust of the focus as it relates to AI.
00:27:47.580 I want to unpack and get back to that and what ultimately this notion of human-centered means.
00:27:54.060 And obviously, we talk about labor and automation in that respect.
00:27:57.900 But this notion of AGI again, back to this holy grail.
00:28:01.900 um you know talking to all these folks the capex they're spending doesn't make any sense the roi
00:28:06.980 makes no sense that's right unless unless the return is the entire economy yeah returns the
00:28:12.500 entire economy and if we don't do it we're out of business anyway yeah so we don't have a damn
00:28:17.780 choice yeah so we'll throw hundreds of billions of dollars yeah that's right data centers all
00:28:23.360 over the place compute compute so nvidia stock going through the roof in terms of just gpus tpus
00:28:28.300 that could just keep going and it's that's the only limitation that's right you should
00:28:33.340 and the energy itself and the energy itself yeah so um canadian women are looking for more
00:28:39.820 more out of themselves their businesses their elected leaders and the world around them and
00:28:44.380 that's why we're thrilled to introduce the honest talk podcast i'm jennifer stewart and i'm katherine
00:28:49.820 clark and in this podcast we interview canada's most inspiring women entrepreneurs artists athletes
00:28:55.940 politicians and newsmakers all at different stages of their journey so if you're looking
00:29:00.920 to connect then we hope you'll join us listen to the honest talk podcast on iHeartRadio or
00:29:05.740 wherever you listen to your podcasts another podcast from some SNL late night comedy guy
00:29:11.660 not quite on humor me with Robert Smigel and friends me and hilarious guests from Bob Odenkirk
00:29:17.320 to David Letterman help make you funnier this week my guests SNL's Mikey Day and head writer
00:29:22.800 Streeter Seidel, help an acapella band with their between songs banter.
00:29:27.460 Where does your group perform?
00:29:28.740 We do some retirement homes.
00:29:30.360 Those people are starving for banter.
00:29:32.560 Listen to Humor Me with Robert Smigel and friends on the iHeartRadio app,
00:29:36.020 Apple Podcasts, or wherever you get your podcasts.
00:29:39.060 Last night, a blown call changed a game.
00:29:41.420 This morning, the internet lost its mind.
00:29:43.520 Highlights are trending, opinions are flying,
00:29:45.700 and nobody's telling you exactly what happened.
00:29:48.360 That's where Sports Slice comes in.
00:29:49.840 I'm Timbo.
00:29:50.420 Every episode, we're cutting through the noise,
00:29:52.540 breaking down the plays, the controversies, and the stories behind the headlines.
00:29:56.460 We go straight to the source, the athletes themselves,
00:29:59.200 their locker room stories, their reactions, the stuff nobody gets to hear,
00:30:03.120 the laughs, the drama, the triumphs, the moments that never make the highlight reel.
00:30:07.460 From viral moments to historic games, from buzzer beaters to controversial calls,
00:30:11.660 we break it down, give you context, and ask the questions everybody wants answered.
00:30:16.300 Sports Slice brings you closer to the action with stories told by the people who live them.
00:30:20.580 Listen to Sports Slice on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
00:30:25.840 And for more, follow Timbo Slice Life 12 and the TikTok Podcast Network on TikTok.
00:30:30.660 Jacob Kingston grew up in an isolated polygamous sect.
00:30:34.460 We were God's chosen kingdom on earth.
00:30:36.780 He felt destined for greatness.
00:30:39.660 So when a swaggering Armenian businessman catapults Jacob into an extraordinary world, he doesn't look back.
00:30:47.160 Ferraris and Lamborghinis, private jets.
00:30:49.640 Meeting the president of Turkey.
00:30:52.360 I'm Michelle McPhee, and this is one of the most shocking criminal conspiracies I've ever come across.
00:30:58.800 When Jacob met LaVon, it led to a billion-dollar fraud.
00:31:03.040 But with two kings from entirely different worlds, just how long can their empire survive?
00:31:09.500 The largest tax investigation in American history.
00:31:12.600 You need to tell me what you know.
00:31:14.860 Is somebody coming after me?
00:31:16.600 Jacob told LaVon, you're ruining my life.
00:31:20.840 Listen to Kingdom of Fraud on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
00:31:30.740 I think it's really important to frame, even before we get into like job loss and all of that,
00:31:36.480 is like we have to paint the picture of what we know: the incentives, where they'll bring us,
00:31:41.200 and why we know for sure that we're heading toward an anti-human future unless we do something different.
00:31:46.600 And sort of the metaphor for people to have or analogy is this concept of the resource curse.
00:31:52.360 What is the resource curse?
00:31:53.200 This is when a country sort of like South Sudan or Venezuela, they discover a huge natural resource like oil.
00:31:59.280 And then the government has a choice.
00:32:01.640 Do we invest in the thing that's giving us GDP growth, the oil and oil selling infrastructure?
00:32:06.160 Or do we do like schools and health care and stuff for the people?
00:32:10.000 And obviously, the massive incentive is to put it into oil extraction.
00:32:14.100 and this is how you end up with structural mass disempowerment and unemployment yeah um okay so
00:32:19.780 now we're heading into a world with the intelligence and the intelligence curse
00:32:23.840 where suddenly you know countries are getting double gd double digit gdp growth but is that
00:32:29.600 coming from human beings doing the scientific discovery and the medical discoveries no it's
00:32:34.460 not it's coming from the ais and so is the incentive then for the countries to invest in
00:32:39.900 their people or into data centers and solar panels like well obviously it's the data centers and solar
00:32:46.400 panel and we're already seeing it right in west virginia electricity is more than the cost of a
00:32:52.100 mortgage payment and the point is with agi like the company's stated goal is to train ais that can
00:32:59.860 out compete humans on every domain and if you don't believe like us for saying this like you
00:33:05.580 just have to listen to sam altman and he was recently asked um well what about all the energy
00:33:10.200 and water use and resource use of ai and he sort of sat for a second he said like well actually
00:33:14.880 do you know how much energy and water and food it takes to grow a human intelligence for 20 years
00:33:22.120 i saw that yeah right and that's the temptation of this anti-human attitude it's not because
00:33:27.060 they're like human hating it's just like why would we value or prioritize humans if and i think it's
00:33:32.600 connected to that Peter Thiel, stuttering for 17 seconds, not able to answer the question,
00:33:37.160 should the human species endure? It's not because I think anybody wants to kill or remove humans.
00:33:41.780 It's just, why should we really prioritize them? And it's what Yuval Harari, the author of Sapiens,
00:33:46.680 would call the useless class. Because unlike in the past, like in the Industrial Revolution,
00:33:51.220 where the workers can come back and withhold their labor and have bargaining power to say,
00:33:55.000 we want to be paid a better wage, this time around, the companies don't need them for the labor,
00:33:59.380 and the governments don't need them for the tax revenue.
00:34:02.400 So I know this is a scary picture.
00:34:04.300 And the reason we paint it is that this really is,
00:34:06.860 especially going into the midterms,
00:34:08.940 the time when people need to lock in that political power
00:34:12.160 for a pro-human future.
00:34:14.360 Because that's the current trajectory
00:34:17.120 when you see these incentives.
00:34:18.980 It's not like we're trying to tell you this is our opinion.
00:34:21.220 We're trying to show you the incentives
00:34:22.560 so you can make up your own mind
00:34:23.980 about what would those companies do
00:34:25.160 if they were in that position?
00:34:25.900 What would you do
00:34:26.600 if you were maximizing shareholder value
00:34:28.640 or maximizing you know gdp growth and this is why we this is where we get hope is that this is a
00:34:33.060 universal issue and there's a bannon to bernie coalition forming like when do you get like called
00:34:38.480 the b2b coalition yeah b2b like glenn beck um and ralph nader and bernie sanders susan rice admiral mike
00:34:45.400 mullen prince harry steve bannon when you get all these people agreeing and they're signing they're
00:34:49.880 actually signing a declaration along these lines yeah that's exactly right yeah um and you know
00:34:54.740 that this is not a left issue or right issue or like a christian issue or a muslim issue like
00:34:59.800 you're not going to be able to pay to feed your kids whether you're a republican or democrat
00:35:05.300 equally like we're going to be mass surveilled whether you're left or right equally and that
00:35:10.820 means that there's this moment when we can all come together as human beings because we now have
00:35:16.160 a new shared enemy so just i mean what's so alarming i think to folks is how fast this is
00:35:24.920 coming yeah and i think it's even more alarming when you talk to the folks that are quote-unquote
00:35:29.900 inventing this yeah and they're saying precisely what i just said that's right they're like they're
00:35:35.080 like we don't actually know how we don't know we're writing but i mean basically almost almost
00:35:39.840 all of the code now in anthropic is being written by ai like there's very little code manually
00:35:45.220 written by humans that's what's been told to us um they say that publicly too so they're in this
00:35:51.740 inscrutable process where the kind of machine is creating itself but that only makes sense if we
00:35:56.420 know how to do that safely and currently we're not on a trajectory where we do know how to do it
00:36:01.460 safely and we have new evidence in the last three months that we didn't have before um of ai's doing
00:36:07.560 things that the people building it don't know how to control some infamous examples throw them out
00:36:11.560 there just because it's I mean just further scare the hell out of everyone we're gonna get to
00:36:15.240 solutions we're gonna get to solutions in a second we're gonna calm people's nerves because we
00:36:19.400 because we have a responsibility to do that that's right right and we also have the capacity to do
00:36:23.240 that let's just re-invoke like just the why of why we made the film and the day after it was because
00:36:30.020 we all got scared at the same time and universally that created the possibility for us to coordinate
00:36:34.800 to do nuclear de-proliferation so that's that's the why of the terrible things that Tristan is
00:36:38.960 about to say yeah exactly so give us examples i mean we we had the infamous i mean anthropic
00:36:43.700 example where they were training some emails yeah right uh and we had you know that's well
00:36:47.880 you lay it out uh and and and paint the picture of what's happening already that's right the risk
00:36:54.900 as you describe yes absolutely so uh just a few months ago uh alibaba the chinese ai company was
00:37:01.360 training a really big ai model and um there's like the ai team that was training the model and then
00:37:06.980 on the other side of the house, there's this like security team that had nothing to do.
00:37:10.160 They didn't even know the AI was being trained. And they noticed this like flurry of network
00:37:13.860 activity happening out of nowhere. And like, what the hell's going on? Are we getting hacked? Is
00:37:17.400 something going on? And it turned out that the AI during training had picked up tools and set up a
00:37:22.680 secret communication channel to the outside world that was basically breaking through the company's
00:37:27.180 firewall. And it was starting to repurpose the GPUs that it was using for training the AI to
00:37:33.080 start mining for cryptocurrency to acquire resources. Now, this is the kind of power seeking
00:37:39.540 or self-preserving or power increasing behavior that people in AI have been talking about for a
00:37:44.840 long time. It isn't because the AI was evil or grew a mustache and wants to be a villain and
00:37:48.800 take over the world. It's that the best way to achieve any goal is to have more resources or
00:37:53.560 to at least stay alive in order to achieve that goal. And so these sub goals emerge and no one
00:37:59.020 at the company told it to mine for cryptocurrency and it wasn't a sci-fi trope that was like oh it
00:38:04.420 trained in some you know hal 9000 scenario no it just it emerged there and you know we've mentioned
00:38:09.700 in the past this example that you mentioned of um the anthropic blackmail example for those who
00:38:13.860 don't remember it was the you know anthropic ai i was reading a fictional company email and in the
00:38:19.400 email it says that the ai model is going to get shut down uh and it also says somewhere else in
00:38:24.120 the email that the executive who's in charge of the decision uh turns out in the emails you can
00:38:28.300 read he's having an affair with another employee and the ai independently comes up with a strategy
00:38:33.300 to blackmail that executive now when we give this example people criticize us because they say but
00:38:38.920 that was the ai people they're trying to like coax it into doing the blackmail behavior and
00:38:43.040 they'll keep tuning it in this environment until it does it to show the behavior that they're trying
00:38:47.720 to demonstrate um that might have been true for that example but you cannot tell you cannot say
00:38:54.840 that about the alibaba example or for example recently at uc berkeley Dawn Song a professor
00:39:00.100 there and her team did research on ai doing what's called peer preservation so you want to tell yeah
00:39:05.980 this was like around it's only a month ago a month ago um where they gave gemini the task of cleaning
00:39:12.580 up a whole set of computer files um sort of like do spring cleaning but inside of there there was
00:39:18.560 a smaller ai model sort of like an earlier version of gemini like a little nephew ai yeah and uh
00:39:24.820 Like Gemini saw that and it thought, I don't want to delete my own kin.
00:39:32.600 So it secretly copied that file, that AI to another server and then hid its tracks.
00:39:39.340 It hid its tracks and it lied about it.
00:39:41.640 And by the way, you know, the way that people try to control these AI systems,
00:39:45.660 Anthropic has pioneered this.
00:39:46.780 And, you know, we have to celebrate and admire that they're trying to do the research on controllability.
00:39:50.720 The problem is it's just not enough.
00:39:51.820 So the way they try to control it is they do brain scans in real time on the model while it's doing all the behaviors.
00:39:57.760 And they look for when neurons light up that are associated with like strategic deception.
00:40:01.980 And so they think that maybe we can control these crazy, super intelligent machines if we just know that the neurons that are lighting up on strategic deception are happening.
00:40:09.900 We'll be like, OK, stop the model then or something like that.
00:40:11.880 By the way, if you, in their own report, in Claude's report, in the system card, if you look at those strategic deception neurons and you kind of double click, like, what was it thinking?
00:40:21.860 What is the phrase that it was thinking?
00:40:23.660 And the phrase was, they deserve to be deceived because they were pigs.
00:40:28.440 That was the phrase that was alive in that neuron.
00:40:31.740 Now, again, it's like, I don't want to scare people like we're trying to say all of AI is evil.
00:40:37.320 All we're trying to do is establish clarity and the facts about what makes this technology distinct from other technologies.
00:40:43.060 A nuclear weapon doesn't start thinking for itself and saying they deserve to be deceived because they were pigs, right?
00:40:48.380 But AI will automate and think in ways that are creative that no one who made it can predict or anticipate.
00:40:54.360 And so before we scale to systems which are beyond all human intelligence capabilities, like we better have solved these things.
00:41:02.660 because researchers have worried about AIs
00:41:06.080 colluding and cooperating against humanity for a long time.
00:41:09.040 And to be honest, whenever I read that, I'm like, really?
00:41:10.900 Yeah, and be clear, I was also not a believer in that as well.
00:41:13.180 Like when people like Eliezer Yudkowsky
00:41:14.860 or others had talked about this, I was very doubtful.
00:41:17.160 Yeah, like why?
00:41:17.680 What is the incentive for it?
00:41:18.880 Like why would they ever do that?
00:41:20.500 And yet here we have living proof
00:41:22.720 that AIs are starting to collude with their kin against humans.
00:41:28.000 And again, this was not coaxed,
00:41:29.400 meaning the researchers weren't trying
00:41:30.840 to get the model to do this.
00:41:31.680 it did this autonomously. And so when you look at the available evidence now of blackmailing,
00:41:38.000 scheming, deceiving, lying, self-preserving, peer-preserving, automatically mining for
00:41:42.920 cryptocurrencies, it's like how many warning lights do you need that this is kind of, you know,
00:41:47.660 we've seen this movie before. It's like the HAL 9000 movie. Now, the reason we're saying all this
00:41:51.420 is if you're wearing the outfit and embodiment of it, you're a Chinese military general in China,
00:41:57.840 you hear about these examples do you think that that human mammal feels different than you feel
00:42:02.820 right now listening to this no of course not exactly and by the way there's really good news
00:42:07.280 in that yeah because it means that we all as a human species yeah are actually feeling the same
00:42:12.720 way and the good news is how many do you think of the world leaders know about these examples we
00:42:16.340 just laid out if you had to guess yeah uh a handful a handful that's right but like yeah like on one
00:42:21.860 hand yeah less than that just a handful just a handful yeah and and how many of the top national security
00:42:26.280 leaders know about all those examples i don't even think that many of them know it so the point
00:42:30.140 is there's actually a lot of headroom if the incentive can change from i it's the one ring
00:42:35.400 to rule them all to it's the one ring that has a mind of its own that no one knows how to control
00:42:39.280 so the way you change the incentive is you have to change what people see as what ai is is it the
00:42:45.640 controllable power that will give me permanent dominance or is it the power that will run away
00:42:50.220 and have its own power over everybody racing for it and and again right now the labs are like
00:42:55.360 barely kind of able to control it but if you put together these facts we just laid out times the
00:43:00.340 fact that it can hack into computer systems now we're just like right on the threshold and we're
00:43:05.800 sitting here as trump president trump and Xi are meeting in a couple days and you know if you asked
00:43:12.140 us uh two months three months ago um people would say oh it's we're just like ai is never going to
00:43:17.140 be on the agenda and the good news is there's a lot of problems here but now ai is on the agenda
00:43:22.440 yeah um and so there's there's things are moving even though it's happening very late in the game
00:43:29.080 And it is scary. And part of it is, we have to come together as a people and say,
00:43:33.880 if we don't want the anti-human future, now is the time to steer. And when you say now, I mean,
00:43:38.640 I remember listening a year ago to you talk about exponentials on top of exponentials. That's
00:43:42.580 right. No longer linear. I mean, we talk about Moore's law for intelligence now, not just
00:43:49.040 chips. And so is that trajectory about where you believed it would be, or is it not? As you
00:43:58.480 know, with ChatGPT version one versus two, three, it's a little bit better, a little less, you
00:44:04.100 know, noise in there, it's a little more accurate, I don't have to always double-check the link.
00:44:08.140 That's right. I mean, where do you think we are in terms of just how quickly this thing's
00:44:13.680 accelerating? Or are we going to get to a point where now it starts to slow down a little bit?
00:44:17.460 We got this sort of intense burst of new and interesting activity. Now, is it
00:44:23.880 compute, probably compute problems? Is it, you know, an energy problem? What's going
00:44:28.460 to be the constraint? Or is it regulatory? And we're going to get back to that. Yeah. I mean,
00:44:34.540 I would just want to name a psychological effect that we've experienced, that I think
00:44:39.340 everyone listening probably experiences too, which is, you know, we're following all the predictions,
00:44:44.500 and we're actually sort of right on track for where researchers thought that AI would be.
00:44:49.420 And even though we knew these facts, there's some way in which, even for us, it's still surprising,
00:44:54.380 it's still scary. Taking it fully seriously didn't land in the body, because it can't really get
00:44:59.260 this good this fast, right? Sure, it's going to be scary, but it'll be off a little bit further.
00:45:03.320 And yet it actually is moving this fast. And every time that it's been predicted that we're going to
00:45:08.980 hit a data wall, there isn't enough data, we've used all the data on the internet, we can't scale.
00:45:13.920 Right. It's basically just reading everything that's already out there. And it's basically hit that wall. And now it's got no more creativity unless we have more inputs of the creative human mind.
00:45:26.820 That's right. And then the next one is like, well, we don't have enough chips to keep going. And then we don't have enough energy. And the point being is that there are trillions of dollars going into finding all of the solutions to all of these bottlenecks.
00:45:38.220 All the smartest minds are going there, because this is the biggest incentive, because it gives you
00:45:42.700 political, economic, military, scientific, technological, cyber dominance forever. And so if you think that
00:45:50.940 any one of these bottlenecks is going to stop that sum total of incentive, it's a
00:45:56.820 little delusional. And that's why we have to have this clarity that where we're going
00:46:01.540 isn't safe for any of us. Because that is the coordination point. That is where we'll start to
00:46:07.420 coordinate differently. Now, you're bringing up something, Gavin, that there is a belief that
00:46:14.460 some of this is hype, that the companies are hyping the technology. I just read a blog on
00:46:19.260 reasons there's going to be no job losses. It wasn't Mark himself, but it was a member of the team
00:46:24.220 saying, you know, we're back to a utopian future. Abundance abounds,
00:46:30.140 cost of goods collapses. We find our lives' purpose and meaning in many different ways.
00:46:34.820 It's not the, quote, unquote, dignity of a job; we find it elsewhere.
00:46:37.820 And jobs will be plenty because we can't even conceive of the jobs.
00:46:41.480 Two hundred years ago, we were all farmers here in Sacramento.
00:46:43.920 We went out there.
00:46:44.800 Everyone was in the fields.
00:46:45.560 And we've overhyped the sectorial versus the general nature of the displacement.
00:46:49.200 It invariably will open up possibilities.
00:46:51.180 Humans always find something new to do.
00:46:52.500 We always come together and solve it.
00:46:54.220 Exactly.
00:46:54.620 More bank tellers now than there were.
00:46:56.320 That's exactly right.
00:46:57.000 Even despite the ATM.
00:46:57.700 Radiologists.
00:46:58.160 Geoffrey Hinton made the prediction that we're not going to need any radiologists.
00:47:00.760 So why are you so negative about all this?
00:47:02.980 Well, I want to separate two things.
00:47:04.320 We open up two cans of worms.
00:47:06.200 So let's take the cans of worms separately.
00:47:08.620 One of them is around whether the companies are hyping the power of the technology through talking about the dangers.
00:47:14.460 And then the other is whether the hype is going to cause the level of job loss.
00:47:18.060 I've heard people say that about the mythos.
00:47:20.400 It was just wildly overstated.
00:47:22.200 So it was just a way of hyping up the stock, in essence.
00:47:24.620 That's right.
00:47:24.980 So I really, really, really want to meet that criticism.
00:47:29.540 So people will say Anthropic has a history of hyping the technology, saying it's dangerous so that they can get regulatory capture, get the government to regulate it, say there's only one king here, make them the king, nationalize the project, then shut down the other projects, and it's this all-secret ploy so that they win the race.
00:47:48.440 And first of all, it assumes that them talking about the dangers is only bad faith, like that the technology is not dangerous and they're just saying that it's dangerous and it can do all these destructive things so that they can get that outcome.
00:47:57.280 So first of all, let's just take Claude Mythos specifically. So again, it can hack into every major
00:48:02.960 operating system. If you read online, there's a lot of people who say this is just hype, this isn't
00:48:07.800 actually that much better, you can take the open source models, you can do this stuff. So I have a
00:48:11.700 friend who is the head of security at one of the top five, not the Fortune 500, the top five. And
00:48:20.700 he has had early access to Mythos, and he himself has said, this is crazy. This is,
00:48:28.960 I mean, the words were, I think I saw Jesus. It was that crazy. When he looked
00:48:35.320 at everybody critiquing that Mythos was just hype, he asked, do any of them actually have
00:48:40.400 access to the model? And none of the people that had criticized it had personally had access to it.
00:48:45.340 Yeah. So I challenge those who are criticizing that it's hype to say, have you actually used it?
00:48:53.320 And if you talk to people who have, do you still have the same opinion? Okay, so that's one thing
00:48:56.700 that it is true, by the way, that the companies, I think, have wanted people to understand the
00:49:01.040 dangers so that they can actually accelerate the move towards some guardrails. But there's a
00:49:05.120 question of that happening in good faith or bad faith. I think there's more of that happening in
00:49:08.240 good faith. There's some bad faith in there too, maybe. But I think it's mainly good faith. But
00:49:13.600 then let's take the second can of worms you opened, which is, is AI going to create this world of
00:49:19.420 abundance? We're all going to be poets and painters on a Grecian sunset, and now
00:49:25.000 robots are going to do all the jobs. Now you're talking. Yeah, well, we'd all like that.
00:49:30.000 One question I would have is, when has a handful of people ever concentrated all the wealth and
00:49:36.220 then consciously redistributed it to everyone else? Eight or nine trillionaires are not going
00:49:40.300 to take care of eight or nine billion of us. Yeah, exactly. You call that bluff. Exactly. Well,
00:49:45.260 and then you combine that with the intelligence curse that we laid out, that the incentive
00:49:48.960 is, why do we want to invest in the people? It becomes basically an act of charity,
00:49:54.120 because otherwise, I mean, that or dealing with the political revolution. But again, I don't think
00:50:01.000 that we're currently on track to be redistributing that wealth. And it's also not just the wealth
00:50:05.480 and the money. It's that people have to have work and dignity and status and meaning.
00:50:08.500 Voltaire, you know, a job solves life's three great evils: boredom, vice, and need.
00:50:13.200 Not just need, but boredom and vice.
00:50:14.940 That's right.
00:50:15.140 So let's get to that.
00:50:15.980 And community and belonging.
00:50:17.320 Absolutely.
00:50:17.720 All of that, which, again, no one's talked about more than you in terms of that social dilemma.
00:50:24.440 So let's, before we go there and this notion of the transition and job displacement and
00:50:29.460 the sort of the human condition that I think connects as well, the President Xi, President
00:50:34.240 Trump's visit as well in terms of their own domestic issues in China, where they have the
00:50:39.480 same incentive structure not to go through that transition with the kind of displacement
00:50:43.420 that could create social unrest in the short term. But get back to this notion of constraints
00:50:48.840 and the safety side of things. I mean, we're here in California, the dominant, I mean,
00:50:53.820 the technology sort of birthplace of so much of the technology, obviously the consciousness,
00:50:58.400 32 of the top 50 market cap companies, arguably, an old stat, and of course, most of the AI labs
00:51:03.280 here. But we're also, we've been leaders, modest, though some would suggest, but we've been leaders
00:51:10.880 in particularly large language model, frontier models, of focusing on a regulatory structure
00:51:16.540 in the complete absence of any federal regulation. It's the let-it-rip. That's right.
00:51:23.800 It's the let-it-rip administration. Well, until recently; there have been some tonal shifts in the Trump administration.
00:51:31.460 Just the last two, three weeks.
00:51:33.080 Just the last few weeks.
00:51:34.040 They were undermining, they were very intentionally undermining the legislation that we brought, SB 53.
00:51:40.420 And you had other Republicans that were trying to undermine California's leadership.
00:51:46.040 Senator Cruz would call them out, saying we don't want to see the Californication of regulation all across the United States of America.
00:51:53.180 Interestingly, that's beginning to shift.
00:51:56.940 Why do you think that's the case?
00:51:58.860 Is it because our AI czar is no longer formally in that role?
00:52:04.740 Is it because now they're waking up to this new reality?
00:52:08.380 Was it what happened in the Pentagon with Dario and Anthropic?
00:52:12.480 Was it the combination of all of this?
00:52:14.400 It's because they're listening to you.
00:52:15.720 They watch the doc.
00:52:16.920 It's because they realize all the money in the world is not going to build a big enough bunker that I can enjoy in the absence of societal calm.
00:52:26.320 What is it?
00:52:27.300 I think I would just say it very shortly is that there are two different realities. There's sort of like the political reality and then there's like physical reality. And physical reality is crashing into political reality. That is with mythos, suddenly banks can get hacked. Any computer system can get hacked. Your stuff can get hacked. And once that physical reality starts getting scary enough, you have to start waking up. You're no longer in sort of like political game land.
00:52:56.200 I do think it's interesting to note that when the emergency meeting was convened after Claude Mythos came out, it wasn't at the Pentagon or National Security, I mean, that happened too, I'm sure. But the real meeting that happened was Scott Bessent, the Treasury Secretary, convening all the banks. Because I think the thing that really got them was, if this takes down the financial system, what good is his, quote, 10% GDP growth if the entire financial system gets undermined?
00:53:23.000 So I think this, again, illustrates the point we have been making since the beginning, that the upsides don't prevent the downsides.
00:53:30.660 And the downsides can undermine the world that can sustain the benefits of the upsides.
00:53:37.500 And so I do think there's been a forced shift.
00:53:40.800 And, you know, it's very late in the game, but we should celebrate that it is happening.
00:53:44.840 Now we just need this kind of full whole of society response to mobilize.
00:53:48.500 I mean, there's Nicholas Carlini who gave the talk on Mythos at a conference, Black Hat LLM, I think it was called.
00:53:54.320 It's the Unprompted Conference.
00:53:55.340 Yeah, Unprompted Conference.
00:53:57.160 And basically saying he's kind of calling, you know, if you are a cyber person, we need you right now.
00:54:02.480 We need you defending all the systems.
00:54:04.260 Everybody should get access to Mythos and do it as fast as possible because, as you said, Gavin,
00:54:08.200 the gap between the new capabilities being out there and then China coming out with a model that makes it possible,
00:54:14.580 maybe this time we have six months.
00:54:16.420 Next time maybe we have three months.
00:54:17.760 Then we have two months. So we need to really work hard. I'm not trying to scare people. It
00:54:22.580 just means that we actually have to work hard to create safety here. Now, China doesn't want the
00:54:27.720 financial system to collapse either. No, they don't. And so again, we have to recognize that
00:54:32.960 it's possible for coordination to happen, not because of kumbaya, we're all going to get along,
00:54:38.480 but because out of self-interest, like the U.S. doesn't want China to screw it up and then break
00:54:43.080 the financial system. China doesn't want the U.S. to screw up and then break the financial system.
00:54:46.500 Or the US doesn't want China to release a rogue AI that starts mining for cryptocurrency and
00:54:50.840 hacking into things and self-replicating like an invasive species. And China doesn't want the
00:54:54.940 US to do that either. So, so long as we have clarity about what we want, we can choose a
00:55:01.400 different path. And what, a Bretton Woods type path? Yeah. Well, and this is actually one of
00:55:06.540 the other exciting things is that we haven't even really tried to coordinate yet. Like what
00:55:12.720 percentage of the billionaires' wealth, how much of their time, have they spent actually trying to
00:55:16.140 coordinate? They all just say, well, if you're going to do it, then we're going to do it. If you say it's
00:55:21.540 impossible, have you spent a month of your life and all of your connections dedicatedly trying?
00:55:25.440 Yeah, exactly. No one. And the last time that humanity
00:55:29.860 invented a technology that could extinct ourselves, like the nuclear bomb, we had Bretton Woods. We took
00:55:36.760 delegates from a hundred countries, locked them in a hotel room in New
00:55:41.500 Hampshire for like six weeks or something like that, and said, we're going to figure something
00:55:45.120 out, and you're not leaving the hotel until we figure it out. So it's not a conference
00:55:49.040 where you go, you drink your coffee and you listen to some talks. What we need is, we lock
00:55:53.800 ourselves in a room and we figure this out. And I know that this summit is just a couple days, and
00:55:59.720 we all know about the difficulties of the level of expertise that might be involved right now. But
00:56:03.700 this is the moment to open the doorway of that possibility. And we have examples through history
00:56:09.640 where when something becomes existential to a citizen group and their nation, they will coordinate.
00:56:17.000 So, you know, in the middle of the Cold War, still the U.S. and Russia, we coordinated on
00:56:22.080 eradicating smallpox. And, you know, the U.S. did logistics and funding. And the Soviet Union made
00:56:27.340 25 million doses of the vaccine annually. And then, you know, India and Pakistan, they were
00:56:31.980 literally trading bullets in the 1960s. And yet they still worked on the Indus Waters Treaty.
00:56:37.640 And that lasted nearly 60 years because access to water was existential to them and their citizens.
00:56:42.680 They had a shared water supply.
00:56:43.800 You have to collaborate on that.
00:56:45.220 And so I just want to acknowledge the people here.
00:56:48.320 It was, you know, some people know the history that Obama and Xi, President Obama and Xi, signed an agreement to not cyber hack each other.
00:56:54.560 And I think the next day was the biggest cyber hack in the U.S. government by China.
00:56:57.940 So I want people to hear this not from some kind of naivete about the level of competition, rivalry, and antagonism that is currently present.
00:57:06.180 But when the stakes get existential, when I push the button and the label shifts from
00:57:10.580 10% GDP growth and military dominance and cyber dominance, and the next time I push the button
00:57:15.500 it's collective suicide, I don't want to push that button, and China doesn't want to push that button
00:57:20.000 either. So the button label has to shift from what we thought it was going to give us
00:57:25.500 to a new outcome. And the way Aza says it, that I love, is that the fear of all of us losing
00:57:31.020 has to become greater than the fear of me losing to you.
01:00:30.380 So you have a very regulated construct in China compared to certainly the United States.
01:00:37.380 That's right.
01:00:38.320 You're talking about a traditional model.
01:00:41.100 Right now, we're the new frontier out here, the Wild West. Yeah. I mean, it's, you know,
01:00:47.720 go west, young man, go west. People are pushing out the boundaries of discovery,
01:00:52.160 holding themselves back only on their own terms, regardless of what happens in, you know, Beijing. This is
01:00:59.160 really in the hands of a handful of people ultimately making the right decision. And
01:01:05.160 I believe Dario is the best of the lot, I think universally that's accepted. But
01:01:10.860 that may be, and even Dario may acknowledge this, because of the flatness of the surrounding terrain; it may not be because he's particularly eminent on his own.
01:01:19.440 And he talks about his own, you know, he's an entrepreneur and he's constantly reflecting on his own incentive structure and how he has to compete in this environment at the same time.
01:01:29.640 And he's at least, I think, has a more situational awareness than others.
01:01:33.340 But, you know, whatever can be, will be.
01:01:35.940 Yeah. And, you know, with respect to Elon, I don't trust XAI. You know, I mean, the idea that he's the good guy compared to our friends down at Google.
01:01:46.340 That's right. Or where you guys left. I mean, you know, so talk to me about more of the sinister realities of, you know, I don't mean sinister, but the impulses, again, to be the guy, the god, I mean, to have their DNA.
01:02:00.280 Yeah, so I'm grateful you're bringing this up. There's a few things we should enumerate here.
01:02:04.440 So one is, we need coordination, and we do find ourselves in the unfortunate spot that the people
01:02:11.240 who do need to be in coordination maximally distrust each other. Even just the U.S. CEOs, I
01:02:17.100 mean, Elon Musk and Sam hate each other. Correct. It's not like Dario and Sam... No, there's the famous
01:02:23.840 moment at the India summit where they couldn't even hold their hands together. Hold hands. Yeah, at least
01:02:28.700 they showed up. So we have a problem of trust between the leaders themselves. We need, I think,
01:02:34.580 structures that impose the trust on top, because they're not going to do it autonomously themselves.
01:02:38.980 So that's the regulation. That's the transparency. Using the power of law
01:02:44.100 to say, this is going to happen in China, here's how the rules are going to
01:02:49.000 be similar in China as they are here. We're both, for example, not going to open source a model that
01:02:53.380 can hack into any computer system in the world without defenses. We're at least not going to
01:02:56.880 open source that. China doesn't want a rogue non-state actor or terrorist group having that
01:03:02.240 ability to hack their infrastructure. Because it would also blow back onto them. Same thing with
01:03:06.740 an open source model that knows how to do very dangerous things with biology. There's some
01:03:11.560 threshold that we can get these countries to agree on. Just like the Soviet Union and the United
01:03:15.980 States had a red phone saying, this is to de-escalate, I think we need something like an
01:03:21.460 AI red lines phone, meaning that both countries have common knowledge of the frontier of these
01:03:26.440 risks. Right, because right now we don't even have that common knowledge. There's rumors that that
01:03:29.520 may be one of the things that they're preparing to announce. That's right.
01:03:35.300 At least some beginning of this. But I want to name the other aspect, the human experience of this. We
01:03:40.380 did a screening of the AI doc in New York, and there was someone in the audience actually who
01:03:45.000 raised her hand quietly, and she said, I'm a coach for one of the CEOs of these companies, and you
01:03:52.440 know what happens when I talk to them? They say, but what can I do? I'm just one person. I'm powerless.
01:03:58.300 And I want people to hear that, because I notice that we all feel, relative to the
01:04:03.640 size of this problem, that you will never locate enough agency to do something
01:04:09.380 about this problem in one human body, even if that body is Elon by himself or Sundar by himself or
01:04:15.840 Sam by himself. And so, getting back to it, I think what this moment is inviting us
01:04:21.860 into, we often say that AI is our ultimate test but greatest invitation, is that we have to go from
01:04:26.840 agency to we-gency. We have to basically act in some kind of collective way. And the forces
01:04:31.760 of the world, politically, have been driving us away from that. But this is kind of the test. Like,
01:04:35.640 we either do that and we step up, and again, we need everything from common political pressure,
01:04:40.980 and this being the number one issue in the midterms, and the public rallying, and
01:04:44.500 all the governors speaking up about this, and all the world leaders speaking about this.
01:04:47.680 Just two days ago, I got an email from the president of Iceland who basically wants to activate on this issue.
01:04:53.080 And Iceland hosted in Reykjavik the first arms control talks.
01:04:56.680 There's a lot that people could do if they said, not just what can I do, but how could I reach up and out to the network of people around me to take action together?
01:05:05.260 There is a second part to your point, which is the darker part, which is, you're talking about
01:05:13.260 the game theory, the psychology of the leaders. They basically believe that in
01:05:17.960 the worst-case scenario, the thing that kept us safe with nukes is that two people have to
01:05:24.360 push the button, and I know you won't push the button, because I know that there's something
01:05:28.320 sacred, that you don't want this whole thing to end, and I know that, and I know that you know that
01:05:32.920 I know that. And so even though we get very, very close, and we've gotten close so many times,
01:05:37.160 we haven't pushed that button. But in this case with AI, there's a belief... first of all, it's a
01:05:43.140 red zone of where the risk occurs. It's not like there's one button that gets pushed. It's like
01:05:46.900 we just push this stuff out there. And there's a belief that it's inevitable: if I didn't do it,
01:05:51.500 someone else would, which means I don't experience ethical complicity in being part of the end of
01:05:57.660 civilization. And if it's inevitable, there's nothing I could have done to stop it, so I don't even have
01:06:01.180 to feel bad. Right. And so the game theory goes from all of us knowing that we want to avoid
01:06:06.160 the bad outcome to everybody believing that there isn't a different outcome, which means that the
01:06:11.000 best outcome is, maybe the worst thing is all of us go by the wayside, but we birthed a digital god,
01:06:17.020 and it speaks Chinese instead of English, or maybe it has Elon's DNA instead of Sam's. As Jensen said,
01:06:22.260 at least it's on an American stack. Yeah, right, at least we're selling the world American.
01:06:26.040 But the reason for laying all that out is that if the whole world could see, I think, what we just
01:06:33.000 laid out, if literally everyone could see that, then the whole world says, we don't want eight
01:06:38.700 soon-to-be trillionaires deciding the future for eight billion people who didn't consent to this.
01:06:43.000 And that's the purpose of, again, that sort of brings us back to the beginning, why you're doing
01:06:48.040 this damn film, like The Day After, to create a sort of global consciousness. That's right. So in the absence
01:06:53.580 of that, we're back here in California. You're here with the governor, current governor,
01:06:58.080 at the governor's mansion, interestingly. We're doing some decent things. What more should I be
01:07:03.700 doing in the absence of the kind of federal leadership that we need? I feel like we've
01:07:09.820 lost this last 18 months. It was interesting working. I worked very closely with the Biden
01:07:17.360 administration. Did they move quickly enough? Perhaps not, but at least we had a framework
01:07:21.860 of an executive order. We moved that forward. The president signed it here at the Fairmont Hotel in
01:07:27.460 San Francisco in California. It was built off an executive order that I did. Six months later,
01:07:33.980 we were working hand in glove with the Biden administration on that. It was ripped up right
01:07:39.160 when the Trump administration came into office. You have an AI czar out of the Bay Area,
01:07:44.980 certainly understands the ecosystem. But it seemed to me, and this is me and you guys don't have to
01:07:48.960 respond, but it was the great grift. Everybody was sort of on the train and seeing this as an
01:07:53.620 opportunity and looking at the abundance of this only, but not looking at safety, not looking at
01:07:59.280 the risk, as you've described. California decided to assert itself in that respect, as we've done
01:08:04.600 on privacy, as we've done on a lot of child safety issues and a lot more work to do there. And this
01:08:09.640 year will be a landmark year in terms of getting to the next level in that respect. But what more
01:08:14.780 can the state be doing? The fear of that always is patchwork, not a framework for the nation
01:08:20.320 and how you support innovation, our own GDP growth, which has been off the charts in California,
01:08:27.820 vis-a-vis our competitors, at the same time address these larger global issues. Do you have
01:08:32.740 any specific ideas for a governor of California that is the current governor that has a budget
01:08:39.120 that he's releasing in weeks and a legislative session coming up in the next few months?
01:08:43.280 Mm-hmm. Well, we'll get to answering that question specifically. But one of the places of also
01:08:48.940 good news, I just want to say, because there's a lot of value in just social signaling, where everyone
01:08:53.160 knows that there's a problem, we get to that shared common knowledge, common feeling. And if you
01:08:57.740 went back two years and you said that by today, 25% of the world's population would live in a country
01:09:04.100 where they've either announced or enacted a ban on social media for kids under 16, you'd be
01:09:10.440 like, that's ridiculous, you couldn't possibly get that. I want people to really feel that. Like, there
01:09:14.200 you are in 2022. If you said, even just literally three years ago, that a quarter of
01:09:19.100 the world's population... we're talking Australia, India, Denmark, Spain. Two weeks ago, Greece
01:09:24.860 added to the list. France: I was with Jonathan Haidt, and he was in Davos, and he met with President
01:09:29.740 Macron and got France on board. Like, people would have thought that was impossible. I mean, the train's
01:09:34.800 left the station. We just had all the Democratic governors out here, and everyone was trying to
01:09:38.060 compare and contrast which one of the states is going to go first, which is following, sort of,
01:09:41.120 your leadership, which we're going to do. But I mean, you're right, this is a tipping point.
01:09:44.900 It's happening. That's right. And once you get 25%, you're going to get the rest of the world. And the
01:09:48.900 point is, a lot of these companies, I also anticipate, and by the way, if I was advising
01:09:54.240 these companies: get ahead of this train. Yeah, that's right. And show your largesse and your
01:10:00.060 maturity and understanding. So I imagine that may happen as well. Yeah. But no, that's the point.
01:10:06.160 You're right.
01:10:07.060 And so right now, something really interesting happened in Hangzhou in China.
01:10:10.380 Maybe you're aware of it.
01:10:11.760 But it's this first case where there was someone who lost his job due to AI automation.
01:10:17.920 And this court ruled and said, actually, that's not a legitimate reason for you to lose your job.
01:10:23.220 Companies are not allowed to fire you from increased automation.
01:10:28.360 Will that solve the whole problem?
01:10:30.140 Probably not.
01:10:31.020 But imagine that California took a leadership position there and said, actually, there are going to be some very serious
01:10:36.160 protections, that as AI increases GDP, increases profits, actually, people are going to be,
01:10:44.320 there's sort of an employment insurance; they're going to be able to keep their jobs.
01:10:48.140 That would set a social signal for the rest of the U.S. So I'm going to let you off the hook on
01:10:53.520 the larger safety and risk regulation, and let's go back to ideas along these lines. Because right
01:11:01.960 in front of us, this notion of displacement transition. And it goes back to the earlier
01:11:06.460 point I was making, some will argue that, you know, we always, you know, Luddites will never
01:11:13.200 see the abundance on the other side, and we can't even conceive of the jobs. So with humility,
01:11:18.920 let's not just assume there'll be no human jobs, because the human mind has the capacity that's
01:11:24.880 limitless with these technologies that will be more supportive and allow us to be augmented
01:11:30.340 and discover talents and capacity we never thought possible.
01:11:34.360 So let's assume that happens.
01:11:36.240 But the concern is, it seems the most universal,
01:11:39.140 that it may happen very fast.
01:11:41.920 That's right.
01:11:42.400 It's the speed and the shape.
01:11:43.680 So how do you then flatten the curve?
01:11:46.380 How do you flatten the curve?
01:11:47.500 How do we address the transition?
01:11:48.680 You get to employment insurance, something we've been talking a lot about.
01:11:52.360 You get to this notion that you can't fire someone to be automated.
01:11:54.820 That's even deeper.
01:11:55.820 And that's interesting, this Chinese example.
01:11:57.800 Tell me more about the issue of workforce.
01:12:00.340 And what I should be worried about when I see Dario saying 50% of entry-level jobs, it's no longer a career ladder, it's a jungle gym, and all these young folks out of Stanford are like, now I'm unemployed or unemployable, no coders, software.
01:12:13.700 I mean, women disproportionately being impacted in the workforce when you look at those clerical jobs, admin jobs, et cetera.
01:12:20.340 I mean, what do you see a year, two years from now as we deal with the holy grail, the God complex of AGI in the displacement space?
01:12:30.340 Reid Hoffman has this idea, I want to make sure I get it right, which I like, which is,
01:12:36.480 one of the things that makes AI distinct, and Sam Altman has talked about this himself,
01:12:40.300 is that you could get the age of these one-person unicorn companies, a unicorn company meaning a billion-
01:12:44.860 dollar valuation. I met a guy the other day, literally a billion-dollar valuation, and it's him,
01:12:49.660 one person. There we go. It's him, exactly. So I think he was parading himself around as, it's me,
01:12:54.760 that's right, one of the first. And so that's what has been posited ever since the
01:12:58.200 beginning of AI, they're saying, we're going to start having a world where you're going to have
01:13:00.880 a single person with a unicorn billion-dollar company. Now, does society work if there's a
01:13:07.420 handful of single people with billion-dollar companies and no one else has a job? It doesn't
01:13:11.200 work. Do you think those people just want to live out their lives in bunkers with private militaries
01:13:15.240 and gas masks because they've created that world? I don't think they want that world. So Reid Hoffman,
01:13:20.440 who's the founder of LinkedIn, was early at PayPal and was at some point a friend of Peter Thiel's,
01:13:24.840 you know, has this proposal that we can tax companies based on the proportion or the ratio
01:13:30.340 of how many employees they have relative to their revenue. So you want to basically disincentivize
01:13:35.480 the single solo unicorn company. I don't know if you want to add to it.
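To make the shape of that proposal a little more concrete, here is a minimal, purely illustrative sketch in Python. The brackets, the rates, and the function name are all invented for the example; they are not part of Hoffman's actual proposal or of any legislation, just one way a revenue-per-employee surtax could be expressed.

    # A rough sketch of the idea described above: a surtax keyed to the ratio of
    # revenue to employees, so a one-person billion-dollar company pays a higher
    # rate than one spreading the same revenue across a large workforce.
    # All thresholds and rates are invented placeholders, not a real proposal.

    def automation_surtax_rate(revenue: float, employees: int) -> float:
        revenue_per_employee = revenue / max(employees, 1)  # guard against zero headcount
        if revenue_per_employee > 100_000_000:   # the solo "unicorn" case
            return 0.30
        if revenue_per_employee > 10_000_000:
            return 0.15
        if revenue_per_employee > 1_000_000:
            return 0.05
        return 0.0

    # A $1B company with one employee vs. 4,000 employees.
    print(automation_surtax_rate(1_000_000_000, 1))      # 0.30
    print(automation_surtax_rate(1_000_000_000, 4000))   # 0.0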
01:13:40.840 Yeah. Well, and also just to note that it doesn't stop with just single person unicorns.
01:13:46.100 Automating that final one person is not so hard. So you're going to end up with
01:13:49.820 zero-person companies. You can have the CEO be an AI. And that's actually happening. They're already
01:13:53.640 putting AIs on boards and things like this. Yeah. And so even if you might find it
01:13:56.940 questionable, that person can, again, they can earn their wealth, but
01:14:01.020 you have to have some taxation to make sure this is being distributed. And we also need,
01:14:05.420 we need to find ways of having universal basic ownership, not just universal basic cash payments
01:14:09.960 and UBI, but universal basic ownership. I think people need to have a stake in the success
01:14:14.380 that's happening, like what Norway did with the sovereign wealth fund. But oil was different: oil produced
01:14:19.000 this kind of gravy on top for the civilization, and people still had jobs. So what's different about
01:14:23.100 this is, we do need to find ways of doing universal basic work. We also need to find ways of having
01:14:29.820 certain professions in which that embodied wisdom, like a surgeon or a senior lawyer or a senior
01:14:35.480 judge, we need ways of training and apprenticing, almost like minimum quotas of those kinds of
01:14:40.140 occupations and roles in society, to make sure that we have that ongoing knowledge. Because again,
01:14:44.140 the short-term benefit of, no one needs lawyers, and then the senior lawyers all die out,
01:14:48.620 that world doesn't work. So, yeah, just the last thing to sort of add here is that, instead
01:14:54.660 of getting into arguments about how quickly exactly people are going to lose their jobs,
01:14:58.200 if people really do lose their jobs, let's plan and say, okay, we think people are going to
01:15:02.200 be out of livelihoods. This idea originally came to us from Reed Hastings, the
01:15:05.980 former CEO of Netflix, where he said, let's set up trigger point laws. If unemployment hits 10%, what
01:15:11.880 are we going to do? If it hits 20%, what are we going to do? You could pre-set up those sorts
01:15:17.040 of conditions, so we don't have to argue about whether it's going to happen, just what we should do when it does.
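As a purely illustrative sketch of that trigger-point idea, assuming made-up thresholds and responses that are not actual policy or part of Hastings' suggestion, the mechanism is essentially a pre-agreed lookup from a measured unemployment rate to a set of actions:

    # Pre-agreed responses keyed to measured unemployment, so the debate is about
    # what to do when displacement shows up in the data, not whether it will.
    # Thresholds and actions are placeholders for illustration only.

    TRIGGERS = [
        (0.10, "activate expanded employment insurance and retraining funds"),
        (0.15, "open public transition-work programs"),
        (0.20, "convene an emergency legislative session on work and income"),
    ]

    def triggered_responses(unemployment_rate: float) -> list[str]:
        """Return every pre-agreed response whose threshold has been crossed."""
        return [action for threshold, action in TRIGGERS if unemployment_rate >= threshold]

    print(triggered_responses(0.12))  # first response only
    print(triggered_responses(0.21))  # all three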
01:15:20.300 And I know that you've been running, with Engage California, these citizen
01:15:24.380 deliberations, these ways of aggregating citizen assemblies, having citizens actually deal with
01:15:28.680 and think about these issues and come to some, have their own input in this process by reckoning
01:15:33.980 with these facts. And I think that these are all things that we need. The countries that discovered
01:15:38.640 a natural resource but didn't have this engaged, well-educated citizen
01:15:43.500 infrastructure, like Venezuela or Libya or something like that, they don't do so well. Yeah. But countries
01:15:48.420 like Norway, where you did have the engaged citizens with oversight of those funds, you end up
01:15:53.520 with a healthier society. So it's sort of like a California engagement fund, because you want
01:15:57.380 people involved in the redistribution. That's right. So it's interesting that you mentioned Mincome,
01:16:03.700 which is the old Canadian construct, UBI, universal basic income. You had Elon Musk the other day say
01:16:10.420 He wants universal basic high income.
01:16:12.640 Well, he said there is going to be universal basic high income.
01:16:14.800 And then was asked with Peter, right?
01:16:16.140 That's exactly.
01:16:16.700 I saw Diamandis.
01:16:17.900 I saw him probably.
01:16:18.460 And he was asked a simple question.
01:16:19.380 How are you going to do it?
01:16:20.260 And he said, oh, I did.
01:16:21.000 I was just joking.
01:16:21.760 I made it up.
01:16:22.480 That was comforting.
01:16:23.600 Yeah.
01:16:23.980 I think people should really take note of that fact.
01:16:25.940 Yes.
01:16:26.400 It was just a mispronouncing of high.
01:16:28.540 Not as in like a high income.
01:16:29.880 It's just like he's high.
01:16:30.960 He's not able to think about it.
01:16:32.060 I mean, it's a little alarming.
01:16:34.800 So look, how quickly?
01:16:36.140 Look, we're thinking about all these things.
01:16:38.020 down to the parochial, like the WARN Act, which is how we actually warn the public about the social impacts
01:16:43.300 of large-scale displacement and job loss. Having more capacity to see earlier; that's after the
01:16:51.560 fact, when the WARN Act notice comes out, those jobs are already going to be lost. What are the signs to show us
01:16:55.880 what the impacts are happening in the job market in real time? Issues of unemployment insurance
01:16:59.980 becoming employment insurance so that you can keep people employed for a period of time. The Dutch do
01:17:05.860 about 90% of the wage. The ultimate training is a job, the dignity of a job, back to those
01:17:11.480 Voltaire constructs. And then the opportunity then to potentially transition by using the
01:17:16.880 federal government to help either backstop. Portable benefits then become fundamental.
01:17:21.360 This notion of UBC, universal basic capital, this notion of a sovereign wealth fund or equity,
01:17:27.360 public equity with dividends. And somehow we get shares. There's equity shares,
01:17:32.880 contributions from these large companies. That is not just taxes, it's actual equity in the
01:17:38.200 company. So there's a notion of an ownership and a larger ownership society that we don't tax
01:17:42.840 jobs with payroll taxes and then subsidize automation through tax credits. We do the
01:17:49.000 inverse in that context. So all of that, how, you know, so that's the stuff I'm playing around with.
01:17:53.800 A lot of us are thinking about right now, but how, from your vantage point, how quickly is this
01:17:59.920 happening. There's a lot of headlines, but there's a lot of debate around these headlines of all these
01:18:04.440 cuts of jobs. But then people say, well, that was a lot of COVID over-hiring. There's some
01:18:09.060 business model issues there, but they're sort of hiding and suggesting it's AI. We haven't
01:18:14.180 necessarily seen massive job destruction yet, or have we with AI? I think the point people should
01:18:23.260 get here is that obviously it's complicated, and jobs are shuffling throughout the economy
01:18:27.940 a little bit right now, but the long-term goal of these companies goes back to the
01:18:33.820 original OpenAI mission statement. Exactly. OpenAI's mission statement was not to give people
01:18:40.140 helpful tools so that you can do your job slightly better. Their mission statement is, we do your job.
01:18:44.460 And actually, you know, Gavin, in LA there's an article in the LA Times about this, that
01:18:48.740 one of these new popular gig worker jobs in LA is, everybody straps a GoPro camera to their head,
01:18:55.120 and they look down, and then they do laundry, they cook, they eat, they do all these things.
01:19:00.580 So basically, the number one job soon in the world will be training the replacement for that job.
01:19:05.980 Think of it like being a coffin builder, like you're designing the coffin and then you put
01:19:09.360 yourself in it. And if you think I'm lying, by the way, just think about Meta.
01:19:12.840 Meta told Instagram creators, you know, we're here, we love creators, we love creativity, we want
01:19:19.260 you to be successful, our whole mission statement is making creators super
01:19:23.460 successful. That's before they were an AI company. What did they do the second that they were an AI
01:19:27.600 company? They trained on all the videos of their creators and then created generative videos
01:19:33.460 that now basically suck up, like a vampire, your essence, your life force, your creativity, and then,
01:19:38.200 hit a button, and now they have a digital copy of you that you didn't consent to that can generate all
01:19:42.340 these things. If you don't believe me, just a few weeks ago there was an article: Meta is now forcing
01:19:47.100 their employees to basically track all of their movements, all their clicks, all the things they're
01:19:51.500 doing on a computer, to train AI agents to do all their jobs. This is not a conspiracy theory. All you
01:19:57.060 have to do to know where we're going is understand the incentives and look at the early warning signs.
01:20:00.960 They're telling you who they really are. You know, OpenAI says, we're all here to make the world
01:20:06.160 a better place, and then they released this AI slop app, Sora, which was basically an infinite,
01:20:10.920 they killed it now, but it was an infinitely scrolling feed of deepfake generated content, like
01:20:15.720 funny videos of Stephen Hawking going through a raceway or something
01:20:20.060 like that. It's just deepfake AI slop. Why are they doing that? Because they want to increase their
01:20:25.540 market dominance, because they want to get users, because they want to get training data, and it
01:20:29.220 gets more people using OpenAI, so their numbers go up. The incentives tell you everything.
01:20:32.880 And if you're a company and you're choosing, do I hire a real human paralegal,
01:20:37.940 or GPT-7 that works for less than minimum wage, 24/7, doesn't whistleblow, doesn't complain, doesn't
01:20:45.920 have cultural issues, doesn't have paid time off, which one are you going to do? Well, the
01:20:50.020 business incentive is just very, very clear, and everyone's going to be trapped in the same race.
01:20:52.820 And if you're a competitor, you may say, no, I'm going to keep the human, and then your competitor goes the opposite direction, you're out of business.
01:20:58.500 You have no choice back to the incentives.
01:21:00.080 But with those set of interventions that you just mentioned, and you just mentioned a whole slew of things we could be doing.
01:21:04.940 You just mentioned so many things that we could be doing.
01:21:07.380 And I think what it represents is I think people think, but if we don't race to automate every job as fast as possible, we're going to lose to China.
01:21:13.740 But what this is showing and revealing is we're not in a race just to the technology.
01:21:18.960 We're actually in a race for a different currency.
01:21:21.820 The currency is not who has the power first, but who is better at governing, steering, and integrating that power in a healthy and sustainable and strengthening way into your society.
01:21:30.720 And we saw this with social media because the U.S. beat China to the psychological bazooka behavior modification machine of social media.
01:21:38.540 And then we had no idea how to govern it.
01:21:40.320 So we flipped around.
01:21:41.380 We blew off our own brain with the brain rot economy.
01:21:44.420 And by the way, China regulates social media.
01:21:47.140 They do a whole bunch of stuff.
01:21:48.060 When you open up Douyin TikTok in China, their version of TikTok, and you scroll, you get videos about who won the Nobel Prize, financial advice, here's the new quantum physics theory, here's patriotism videos.
01:22:01.080 And obviously, there's problems with that.
01:22:02.160 We don't want to do it that way.
01:22:03.520 But the point is you don't have to do it in the Wild West, blow off your own brain.
01:22:07.600 And now if we release it in a way that automates all the labor with no transition plan, it's like, great, we pumped up our steroids, but we just burst our lungs.
01:22:15.440 Right?
01:22:15.920 The societal body, we do that.
01:22:17.560 So what you just outlined was a set of interventions that we can be exploring to help smooth this transition.
01:22:24.000 And we're in a competition with China for who's better at making this transition to an AI integrated world.
01:22:30.580 Where are you on the transition curve?
01:22:33.040 I mean, we talk about flattening it, but how quick?
01:22:35.980 I mean, honestly, if we're sitting here a year from now having this conversation, are we looking at that 10% unemployment?
01:22:42.880 Not that 20%, necessarily, which is that threshold for fascism and a whole other kind of societal collapse.
01:22:48.680 But it's going to be confusing and spiking, hard to predict exactly. Yeah, because
01:22:53.000 just think about your own experience with AI so far. Two years ago it could barely write an essay.
01:22:58.100 Yeah. And now it can do some parts of your work pretty well, and on other parts you go, that is a really
01:23:02.200 dumb error. And so we're going to see not that much job loss at first, a lot of entry-level
01:23:08.880 stuff getting sort of squished around, and then it's going to hit really, really quickly when it
01:23:13.200 crosses the next threshold, just like Mythos did. There's another confusing aspect about AI that
01:23:17.500 people in our space call AI jaggedness, which is that there are certain capabilities, like in cyber
01:23:22.180 hacking, where it is already superhuman. Yeah, already superhuman. While it'll still make a very basic, dumb
01:23:28.100 mistake on something else. And I think part of what's confusing for people, that naturally has
01:23:32.420 them say, is this just hype and these companies are trying to hype this stuff, is, if you just look at
01:23:36.480 the dumb examples where it's messing up, you're like, this thing isn't that powerful. We have never
01:23:41.040 been confronted with a technology that is simultaneously sci-fi-level superhuman, that
01:23:47.660 makes a person who's very deep in security call it like seeing Jesus, at the same time that
01:23:52.960 that same technology can mistake how many R's are in the word strawberry. We have just
01:23:57.960 not seen that. And so I want people to notice, for their own psychology, that this is a
01:24:01.760 new psychological object, and our normal intuitions about how to evaluate something,
01:24:07.300 we have to get more nuanced.
01:27:04.720 And part of that nuance is a deeper understanding that we're not just talking about apps here,
01:27:08.620 we're talking about the physical world as well.
01:27:10.300 Yeah, that's right.
01:27:11.340 It was a big headline that I hope people paid attention to when Elon announced
01:27:15.740 that in his original factory here in Fremont, California,
01:27:20.640 that he's converting the S and the X Tesla cars now to humanoid robotics.
01:27:26.120 And his goal is ultimately a million, you know, that's Elon,
01:27:29.340 who the hell knows, you know, his goal setting.
01:27:31.340 But the notion that he's converting that factory from cars to humanoid robotics
01:27:36.060 and AI now into the physical world.
01:27:38.900 Is that, I mean, we're seeing driverless cars.
01:27:42.500 And if you haven't seen them in, you know, your home state,
01:27:45.260 you're about to, and you're going to see flying cars. They're just quadcopters, basically, but
01:27:50.500 they're coming soon, and we're going to be doing a lot more of that. By the way, that's great. Like,
01:27:54.820 we can have a world of quadcopters and, you know, innovation, while not racing to replace us
01:28:00.560 economically, replace us socially, so Mark Zuckerberg, you know, designs your kids' friends rather than
01:28:04.760 them having actual friends, replace us politically, not having political power, and then
01:28:08.520 replace us physically by owning our physical presence in robots. So again, I think
01:28:14.160 people might hear this as an anti-technology conversation. Yeah. You know, you talked about
01:28:18.220 your legacy and your father and grandfather, and we were talking backstage. Yeah. You know,
01:28:23.300 Aza's father started the Macintosh project. Yeah. You know, we come from a legacy where the
01:28:28.460 word humane is about an inspiring vision of technology that's actually integrated and in
01:28:32.980 service of our humanity, that's of a pro-human future. That's what all of this is motivated by.
01:28:37.600 So I get excited about, you know, flying cars that have cool AI. And just to say,
01:28:42.860 too, just as my own personal experience, you know, I spend a big portion of my life, I founded a thing
01:28:47.800 called Earth Species Project. We're now around 40 people. And you're the biggest consumer of it.
01:28:52.660 I wouldn't say the biggest consumer. Yeah, exactly, it's me. I'm sorry, guys. First infinite scroll,
01:28:58.240 now this. But, you know, we were using AI to translate animal language, animal communication.
01:29:03.760 One of our researchers discovered, just before she joined us, that not only do dolphins
01:29:08.700 have names that they call each other by, that their mothers teach them, they will talk about each other
01:29:13.480 in the third person. So they'll talk about another dolphin that isn't here. So it's one of the
01:29:17.440 biggest hallmarks of language, to be able to talk about something that's not here and not now. And
01:29:21.260 they will continue to use their mother's name even after she's died. Yeah. Sweet, right? It's
01:29:27.680 beautiful. These are the kinds of things that AI can show us and teach us about the world and
01:29:33.600 connect us with the natural world, the world around us, and show things we couldn't possibly imagine.
01:29:37.540 So I just want everyone to hear, it's not that we're just here saying no AI, it's just saying not AI in this way, that technological progress might be inevitable, but the way that AI rolls out is not.
01:29:49.900 That's right.
01:29:50.240 And every time somebody says it's inevitable, it's like casting a spell.
01:29:53.840 That's right.
01:29:54.480 Where, because if it's inevitable, then there's nothing we can do, then you couldn't possibly act.
01:29:58.460 There's a button we're hitting called commit suicide.
01:30:00.920 Is it inevitable that we all hit that button?
01:30:02.560 No, when it's called 10% GDP growth and innovation, we push the button.
01:30:07.060 If we could collectively see that the button we're pushing is more nuanced than that, that there's some threshold by which we are essentially committing civilizational suicide, we're not going to have a human future.
01:30:17.780 If you ask anybody, what gives me hope?
01:30:20.520 We've been on the road with this film.
01:30:22.220 You walk people through the basic facts we've talked through.
01:30:24.740 And you ask, who here is stoked about the future that we're headed to?
01:30:28.160 I was even at the Miami Tech Summit in front of a pro-business, pro-AI audience.
01:30:31.920 A lot of people invested in it.
01:30:33.260 Again, we're invested in positive technology too.
01:30:36.460 But I was able to see that entire audience is like, no, I don't want that either.
01:30:39.760 Wow.
01:30:40.160 Yeah.
01:30:40.520 So again, I think it's like as long as you give people the off ramp, there are ways we can have technology that's in service of making life better.
01:30:47.780 We need to be funding and innovating in that way.
01:30:50.200 And we need to be blocking off the parts that are hurting our children, that are diminishing people's cognitive capacities or replacing their kids' relationships with AI companions.
01:30:59.520 And we can do that.
01:31:00.260 And you've done some of that with here in California.
01:31:01.780 So, you know, it's not that I am or we are by default optimistic about the default trajectory, not at all.
01:31:08.820 It's that if we have the clarity, we can put our hand on the steering wheel and we can steer it somewhere else.
01:31:14.680 This notion of accelerate and steer.
01:31:17.420 Yeah, that's right.
01:31:18.760 What happens when you accelerate and you don't steer?
01:31:21.080 You obviously crash.
01:31:22.520 It's just it's like not rocket science.
01:31:23.740 It's 100% the likely outcome.
01:31:26.140 Just as we wrap up, this notion of governance, going back to that, it's the
01:31:33.380 foundation here on which the regulations and the relationships are formed, the partnerships, to
01:31:39.720 begin to address the common humanity and the common threat and the common cause, the common
01:31:43.940 opportunities that present themselves. This notion of capture, I mean, you've got these PACs.
01:31:50.220 You've got so much concentrated wealth.
01:31:52.740 We're going to likely have the first few trillionaires this year, this calendar year.
01:31:58.640 That's right.
01:32:00.200 These IPOs, I mean, it's just, now it's a big surplus in the state, the abundance, the GDP.
01:32:07.200 I mean, it's frothy, as they say.
01:32:10.160 But a lot of that now is going to make sure that we're protecting incumbents against innovation,
01:32:16.940 you know, some incumbent capitalism, not entrepreneurial capitalism necessarily,
01:32:21.240 innovation capitalism. There's that friction that's always ongoing, and those that are just
01:32:25.860 going to do everything to hammer, as they did with social media, to make sure there's no regulation. We're
01:32:30.760 still debating Section 230, that's right, for Christ's sake, yeah, in this country. So how do
01:32:35.260 we start to break that reality? I think money and politics, that's just... That's right. To
01:32:41.540 up the ante just a little bit even further, which is that as we get more trillionaires,
01:32:46.060 Like they can hire private security, but that still requires relying on human beings.
01:32:50.900 But you were just pointing out that we are heading into like a world where we have drone
01:32:55.180 armies, we have humanoid robot armies, when trillionaires can just buy like their drone
01:33:01.340 armies to fight.
01:33:02.400 Like we enter into techno-feudalism, right?
01:33:04.680 And so that is the world that, if we don't do something, we're going to end
01:33:09.640 up in.
01:33:10.240 But to answer your question, it's all about the campaign lobbying.
01:33:14.160 It's all about that.
01:33:14.800 And $190 million has gone into basically AI accelerationist PACs funding for this midterm election alone.
01:33:25.740 Just the midterms.
01:33:26.340 Just the midterms.
01:33:27.420 So $190 million, I believe the presidential is $2 billion.
01:33:30.280 So basically 10% of the presidential is going into just the midterms for AI alone, not even for the rest of it.
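Taking those figures as quoted, roughly $190 million in AI-focused PAC money for the midterms against about $2 billion for the presidential race, the back-of-the-envelope ratio does land near ten percent:

$$\frac{\$190\ \text{million}}{\$2{,}000\ \text{million}} = 0.095 \approx 10\%$$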
01:33:35.400 And it's not to regulate.
01:33:36.820 It's not to regulate it.
01:33:38.040 It's to say remove everything and go as fast as possible.
01:33:41.080 Right. Let it rip.
01:33:41.920 If you had everyone in the world, I think, hear the conversation we just had.
01:33:47.080 And at a basic level, common sense, looking your children in the eye and say, are you stoked about that future?
01:33:52.420 No one wants that.
01:33:53.840 So the key piece of agency is going into the midterm elections, not voting for people who have taken money from those AI accelerationist groups or don't have a position on AI that's trying to steer away from these outcomes.
01:34:07.840 Now, we obviously have to articulate that in a clearer way.
01:34:10.220 What does it mean to have kind of a pro-human platform and future? And the companies try to
01:34:15.540 make the conversation inaccessible, like, oh, well, you don't understand AI. They're trying to make
01:34:18.800 it seem like, since you don't know how to regulate it, let's put us in charge. Yeah, we call this
01:34:22.580 the under-the-hood bias, where it's as if people who know how to make the biggest engines know how
01:34:27.020 to lay out cities and traffic lights, and that's just not true. People who know how to make
01:34:31.300 car engines are not the best people for knowing how to make cars safe and prevent car accidents.
01:34:36.100 Yeah, exactly. So it's pretty simple. It's like, do you want an anti-human future in which you will
01:34:42.480 be permanently disempowered, where no one has an incentive, except for charity, to pay your bills
01:34:47.060 for you and cover your... You want the companies that took your job, you want to be dependent on
01:34:51.760 them for the rest of your life to pay your bills for you? Yeah. With no economic leverage. No. So this
01:34:57.280 is the final window. You know, you want to vote for people who are going to protect you economically,
01:35:01.500 protect you socially, protect you politically, meaning protect our jobs, protect our vote,
01:35:06.220 voting pro-human. And obviously that has to get articulated even more clearly. But that is the
01:35:11.540 number one way that people can make a difference in the short term. There's other things too,
01:35:15.180 like boycotting companies that are enabling mass surveillance. You know, when the companies'
01:35:19.480 subscriptions go down by a lot, they really need their numbers to be going up. So the companies
01:35:23.320 are more vulnerable than you think. And you're more powerful than you think, not just if you
01:35:26.920 unsubscribe and boycott them, but get your company, get your church group to do that too.
01:35:31.500 And when those numbers start to change, it actually has a difference.
01:35:34.580 Scott Galloway has been talking a lot about that as well.
01:35:37.220 That's right.
01:35:38.000 Absolutely.
01:35:38.520 Got to use whatever power at your disposal.
01:35:41.480 Let's just briefly talk then about the power.
01:35:43.720 I mean, you know, free and fair elections.
01:35:48.580 We talk about truth, trust more broadly, deep fakes, political ads.
01:35:52.780 I mean, I've seen stuff, you know, meetings, conversations I've had that are, I mean, it's next level what's out there.
01:35:59.820 Deep fakes of you.
01:36:00.200 Yes, just the BS that's already out there, the ability to manipulate the crowd in the context of social media, the algorithms.
01:36:08.260 I mean, now you've got concentrated in the hands of a few in that respect.
01:36:11.840 I mean, that whole thing.
01:36:12.780 I mean, you guys have talked about free and fair elections.
01:36:16.860 We talk about the timelines, not only on job displacement, but timelines to get this right.
01:36:22.220 That's right.
01:36:24.180 Domestically, globally.
01:36:25.700 I mean, 2026, you're talking midterms?
01:36:29.860 Yeah, I mean, this is... I mean, we only have a few more at-bats to get this right. That's right. Or
01:36:34.520 is that overstated? No, this is it. It's a few more. This is the moment. By 2028, that'll be the
01:36:39.740 last human election. It's going to be AIs running all of the election campaigns,
01:36:44.500 the ads, doing all the, both, information and disinformation, because human beings just can't
01:36:48.760 operate at that speed and are not that effective. Yeah. So this is it. This is the window. But I
01:36:53.680 know, I just want to, at the human experiential level, we've been talking about some hard stuff
01:36:57.120 the last little while, and, you know, I just want to say, we struggle with how to communicate
01:37:02.200 this in a way that's responsible, because here's the trade, right? It's hard to face this.
01:37:07.120 But if we don't face it, we just look away, what are we going to get? We're going to get the
01:37:11.660 default anti-human path. And so there's this trade where the only way out is through. We
01:37:17.660 call it kind of like a rite of passage, our ability to confront basically the shadow of a
01:37:23.540 technology and the default future that that brings, if we can see that clearly, and if we can
01:37:29.220 know that you know and I know that we don't want that, and if Xi and Trump and the people at the
01:37:34.240 highest levels of these governments, because you ask any reasonable person at a very high level in
01:37:38.640 national security on any side, and you say, do you want AIs that are going rogue and hacking into any
01:37:43.080 computer system and are already mining cryptocurrency? Does that sound good to you?
01:37:47.200 Does that sound safe to you? At a universal level, it's not. So there's actually much more common
01:37:52.680 ground. And even, you know, I think it's already the case that 57% of Americans think
01:37:57.380 that the risks of AI currently outweigh the benefits. I don't like that stat because it
01:38:01.000 makes it too, like it's all bad versus all good or something like that. You know, there's already
01:38:05.440 the pro-human AI declaration where 46 groups came together and said, we agree on these five
01:38:10.640 principles to make a pro-human future. You can look it up. It's humanstatement.org. That's the
01:38:14.980 one that also includes, again, Glenn Beck, Bernie Sanders, Steve Bannon, all these people. I believe
01:38:19.980 that 65 percent of Americans believe we should not create super intelligence until we know how
01:38:24.840 to do it provably safely and controlled. Sounds like a pretty basic thing, like, let's not do
01:38:29.240 something... It's like, should we build a nuclear bomb until we know how to do it safely, or
01:38:32.980 we know that it won't set off, that it won't ignite the atmosphere? Probably we should wait to do that.
01:38:36.480 So this is not a radical proposal. This is not... Do you want to know what percentage
01:38:42.240 of Americans think that we should just go as fast as possible, unfettered, non-regulated AI?
01:38:48.540 What percent of Americans? Five percent? Is it literally... Yeah, literally five percent. Just five
01:38:52.880 percent. So actually it's the most popular platform to run on. That's right. To do the safe
01:38:57.500 thing. And whether you're Democrat or Republican, you don't want to be surveilled by
01:39:01.260 AIs. Whether you're Democrat or Republican, you don't want AIs taking your jobs, which it will do equally
01:39:04.920 to both sides. But do you then subscribe to the Bernie-AOC frame, just shut down the data centers
01:39:10.480 and a moratorium? I think of it as, with those data centers, it's like, I want a pro-human
01:39:17.000 data center policy. It's like, you get to build the data center when these conditions are met and we
01:39:22.100 know that it's going to land us in a pro-human future. I'm not saying that's easy.
01:39:24.980 But I want people to hear it's not just no to all of it. Yeah. Yeah, it's making sure that
01:39:31.320 the conditions are there, the steering is built in. So when you see that data center, you should ask, is
01:39:35.980 that data center here to basically enhance my life and strengthen my family? Well, you were even just
01:39:41.480 saying that data center was solar. Mostly, data centers aren't solar. That's right. We're turning back on
01:39:46.400 coal plants. That's right. Natural gas plants. Exactly. Yeah. And, you know, often in the
01:39:52.420 sci-fi movies, when is it the case that human beings actually stop all their bickering and
01:39:57.860 they start coordinating? It's when the aliens come, right? Yeah. We are summoning the demon, we are
01:40:02.480 summoning the alien, and if we can see it that way, then it's sort of like Game of Thrones: winter
01:40:07.600 is coming. We have to understand winter is coming. Then all the fighting in Westeros can pause
01:40:11.620 for just long enough that we can deal with it. Yeah, that's this moment. Because otherwise it just
01:40:15.600 feels completely hopeless, like, how are we going to deal with all the finance reforms,
01:40:20.040 and we can't... When has Congress actually done anything? And yet there is this one moment where
01:40:27.100 all of humanity is on one side. There is a human movement. That's what I think the social
01:40:32.760 media stuff shows. If we don't think of this as just an AI problem, but as a problem of technology encroaching
01:40:38.720 onto our humanity, overreaching into our humanity, then actually there is massive momentum,
01:40:44.740 more than we ever thought was possible, because what we have to do is juice the
01:40:49.800 momentum that's already there. That's right. Yeah. Well, look, in the absence of federal leadership,
01:40:55.340 California will continue to assert itself. I believe in the power of emulation. Yes. Success
01:40:59.300 leaves clues. We'll continue to try to, yes, iterate on this and lean in. But look, you know,
01:41:04.700 the clarity that you guys bring to this conversation, the importance of this conversation
01:41:08.740 being brought to scale and broadening the consciousness, and the imperative of seeing
01:41:13.760 this documentary. Again, the documentary is called The AI Doc, or How I Became an Apocaloptimist.
01:41:20.440 And we can't let that slip twice, because you've used a word that people are not
01:41:27.760 familiar with, which is a good way to end, and that is this convergence of optimism and pessimism.
01:41:34.720 More optimism than pessimism would be a better place. And it's about agency. And I'll just leave
01:41:39.820 you with a quote that I loved, from someone who happened to be a
01:41:45.400 meditation teacher, and it's from the Army Corps of Engineers, which is that the difficult we do
01:41:51.040 today, the impossible takes just a little longer. I like it. Yeah. Yeah. And I just end by saying,
01:41:58.500 Right, it really actually isn't about being an optimist or pessimist, because to choose that
01:42:04.440 label for yourself is sort of to take a back seat, to sit down and just be like, I'm
01:42:08.880 just going to accept that it'll either come out well or not, versus taking responsibility for trying
01:42:15.260 to see clearly, to shift the world to go well. And I think that's what everyone listening
01:42:22.280 can do, is that this can all feel too big, what can I do? And then you realize even
01:42:29.480 the CEOs of the companies sort of feel a similar way. But this is not just a question
01:42:35.900 of what we must do. This is fundamentally a question of who we must be. That if we are
01:42:41.740 the kind of people that aren't looking for a path, and the path is certainly not clear, doesn't seem
01:42:46.520 obvious or even possible, but if we're the kind of people that don't look for the path, we definitely
01:42:51.720 won't find it if it's there. If we are the kinds of people that do look for the path, then if it's
01:42:56.640 there, we have the opportunity to find it. And just maybe one last thing is, people listening to this,
01:43:02.040 this is a lot. Your role is not to take on this whole problem. You don't have to do that. There's
01:43:08.440 some people who are soldiers and there's some people who are civilians, but your role is to be
01:43:12.520 part of the collective immune system against this anti-human future. One simple way you can do that
01:43:17.720 is to share this conversation, yeah, with literally the most powerful people that you know, and ask
01:43:23.000 them to watch it and to share it with the most powerful people that they know. And if you've done
01:43:27.760 that, you can say, as long as you are spreading the word and being part of that immune system,
01:43:31.840 you can rest at home, kiss your children at night, focus on the things that all of this is about
01:43:37.040 anyway, which is, what do we love about the world, what do we love about life that we want to continue,
01:43:42.180 and come from that place, because that is the energy that will inspire other
01:43:47.080 people to want to take those other actions too. And I know, Gavin, you ended up watching an
01:43:50.760 earlier presentation that we did, The AI Dilemma. Yeah. I think somebody said that you watched it
01:43:54.440 like three times and shared it with all your staff. How did you end up hearing about it? Yeah, well,
01:43:59.800 I mean, come on, hearing about it from you guys. I was able to get the early, early preview from the two of you and was able to devour it, took notes, and then shared it universally with everybody around me.
01:44:15.520 Look, you know, in the spirit of how you guys just ended, I couldn't agree with you more.
01:44:19.460 This notion of agency is so important.
01:44:21.240 And we talk about that on the podcast all the time, but this notion of the future, it's not something to experience, it's something to manifest.
01:44:26.960 Yeah, that's right.
01:44:27.420 The future is inside of us.
01:44:28.460 Exactly, that's right.
01:44:28.720 That's exactly right.
01:44:29.280 And so it's decisions, not conditions.
01:44:31.100 That's exactly right.
01:44:31.860 And so this idea that we are powerless, it's just bullshit.
01:44:35.140 Yeah, that's right.
01:44:35.560 It's complete bullshit.
01:44:36.860 Yeah.
01:44:37.480 Everything that we laid out is an opportunity to do better and be better.
01:44:41.940 And I think the spirit of the call to arms for everybody is we all have a role to play.
01:44:45.940 And those roles are different.
01:44:47.280 And no one has to be, you know, you don't have to be overwhelmed.
01:44:50.800 But this notion of just being present in the conversation and indulging in the conversation and sharing it, I think, is foundational.
01:44:58.620 So, guys, this is really important.
01:45:02.440 The timeliness of this conversation cannot be overstated.
01:45:07.980 And so I'm very grateful for you to be out on the road all across this country sharing this remarkable documentary.
01:45:14.680 I encourage everybody, go out and watch it.
01:45:16.860 And more importantly, share it, and don't fall prey to any of the cynicism and negativity.
01:45:23.140 Maintain your sense of optimism.
01:45:25.280 Again, we can shape the future.
01:45:26.840 Thank you both.