00:03:17.760This new documentary that was released, what, a month ago?
00:03:20.800Yeah, about a month and a half ago. The AI doc, or How I Became an Apocalypse Optimist. We chatted with our friends, the directors of Everything Everywhere All at Once, several years ago, when we first really recognized the situation we were in, and asked for their help to create a movie that would clarify the AI situation.
00:03:39.480And long story short, we'll talk about it.
00:03:42.060But we got the director of Navalny involved, and his whole team came together and tried to make a movie that would clarify the predicament that we're facing.
00:03:48.160And Gavin, do you remember seeing the film The Day After?
00:04:03.400It's the largest synchronized television event in human history in terms of the number of people seeing the same thing at the same time, besides probably the Olympics.
00:04:12.700It was a made-for-TV movie about what would happen if there was nuclear war.
00:04:16.460Right. And it's not as if people didn't know what nuclear war was, but there's something about it we didn't really want to think about or confront. Like, why would you?
00:04:25.840And I think the people who made that film were trying to create this kind of collective confrontation, because the film was aired in the Soviet Union, I think five years later, right before the Reykjavik accords, the first arms control talks.
00:04:36.440And that set the context: I know that you know that I know, and you know that I know that you know, that we both don't want that to happen. It's what Steven Pinker calls common knowledge, but we think of it as almost common feeling.
00:04:49.500Yeah, yeah. We both know that we feel the same way about that anti-human outcome.
00:04:55.340And I think that with AI, we have to get clear, because AI is a much more confusing technology. As Aza would say, imagine we're trying to reason about nukes that could also cure cancer. How would you deal with something like that? Or pump GDP by 10%?
00:05:11.340That's right. And so what The Day After did is it created this common feeling, common knowledge, where we could all see: a world where we all go to nuclear war, that's an anti-human, anti-life future.
00:05:21.700And as long as there's confusion about which default world AI ends us in, we're not going to do anything about it. But if we all have clarity that it's heading us toward an anti-human future, and we can build that out in this interview, then that creates the conditions where we can coordinate to do something else.
00:05:40.320Right. So that was the impetus behind this documentary: it was to create that collective understanding.
00:05:45.440That's right.
00:05:50.240That wisdom, that collective response mechanism, which would be: we need to do something about this. The promise that we could talk about, but the peril in terms of the safety risks and the anxiety that so many are starting to feel.
00:06:00.120So you guys have been out on this tour. You've been all over the place over the course of the last month, and you've obviously been reaching out to people on all sides of the political aisle, because of the human nature of this. This is universal. There's something that connects all of us. This is not a partisan frame.
00:06:15.480And so is that something that's been captured in your own consciousness as you've been out on this tour, how real and invisible that is? Or are we still in the process of discovery? I mean, are we still in the process of understanding more fully what the hell this is all about?
00:06:38.040Well, I think people still, not everybody knows.
00:19:52.540Yeah. This is not over a course of months or years.
00:19:54.740If I get AGI first, I have an army of AI scientists that automates all scientific development.
00:19:59.920So suddenly I'm getting, like, 24th-century science and technology that I own and run, that can inform new military weapons and new physics.
00:20:09.360And the problem is notice that none of us can prove that they won't get that.
00:20:15.060We can't say for sure that they would get that.
00:20:17.220But the people who are optimistic and accelerationist about AI, I just want to like bring in their perspective for a moment because they're represented in the AI doc film.
00:20:24.160The film includes the risk folks, who are oriented around safety, and it includes the accelerationists.
00:20:29.720And the accelerationists say the biggest risk is not going fast enough because imagine all the science we could get, all the cancer drugs, all the medicine.
00:20:38.540Think of all of the people who would die if we don't go faster.
00:20:41.900And that's the mentality that they're coming from.
00:20:44.920But one of the things that we talk about in the film is that the promise and the peril of AI are interlinked.
00:20:53.000And the promise doesn't prevent the peril, but the peril can undermine the world that can receive the promise.
00:21:47.580And that's this notion, this tension, going back to sort of the more utopian framework of being available to the world versus these closed systems.
00:21:57.040An open system. Meta started with an open system originally; China seems to be in the open-source space.
00:22:04.960Talk to us a little bit about that, for people that don't necessarily understand that dynamic and that distinction.
00:22:08.720You mean between things that are open? Yeah, open source versus proprietary technologies.
00:22:14.480Yeah. Well, so for people that don't know, open source means that the code that underlies the system anyone can edit, anyone can access, and anyone can contribute to.
00:22:25.360And that's often meant that systems that are open are more secure, because there are many eyes working on it, many hands that are working on it.
00:22:31.280Looking at all the code, they can see all the bugs.
00:26:07.240Let's say Llama or DeepSeek put guardrails on the model, saying, oh, you're not supposed to answer questions about how to cyber-hack something.
00:26:14.320Well, it turns out that for about, was it $30, Jeffrey? A friend of ours was able to retrain the open model to just get rid of all of those guardrails. Just eliminate them.
00:26:25.960Yeah. And that's why open is dangerous. And again, it's dangerous in a new way that's different from closed.
00:26:31.300So it's not that we don't want there to be open models, or competition for the major players. We also need to avoid the concentration of power, because if you don't have these competitive things, suddenly you have, like, five companies that own the world economy. Everyone's paying them instead of paying their workers. And that's a huge risk. And we want to decentralize that wealth and have other competition.
00:26:48.540But you have this other balance: if I decentralize that power and I don't have it connected or bound to responsibility, I'm unleashing catastrophes.
00:26:58.320And that responsibility was exemplified by Dario pulling back Mythos in this context.
00:27:04.780That's exactly right. But shortly after he does that... and we were with him a few weeks ago; he said, look, I'm only about a month ahead, if that. Then you have OpenAI coming out with their version.
00:27:09.620That's right.
00:27:13.840Shortly thereafter. And I believe they're not holding it back.
00:27:16.300And they're not holding it back. That's exactly right.
00:27:18.700So it begs the question. Well, we're going to get to this sort of regulatory framework, but it's just sort of painting the picture of a deeper understanding.
00:27:27.440So look, your whole focus is on this notion of human-centered. That's right.
00:27:31.900This notion that... it's been your dominant frame with the nonprofit you guys started years and years ago around social media, and now it's the dominant thrust of the focus as it relates to AI.
00:27:47.580I want to unpack and get back to that and what ultimately this notion of human-centered means.
00:27:54.060And obviously, we talk about labor and automation in that respect.
00:27:57.900But this notion of AGI again, back to this holy grail.
00:28:01.900You know, talking to all these folks, the capex they're spending doesn't make any... the ROI makes no sense.
00:28:06.980That's right. Unless the return is the entire economy.
00:28:12.500Yeah, the return is the entire economy. And if we don't do it, we're out of business anyway.
00:28:17.780Yeah. So we don't have a damn choice. So we'll throw hundreds of billions of dollars...
00:28:23.360Yeah, that's right. Data centers all over the place. Compute, compute.
00:28:28.300So NVIDIA stock going through the roof, in terms of just GPUs, TPUs. That could just keep going, and that's the only limitation.
00:28:33.340That's right. And the energy itself. And the energy itself, yeah.
00:39:51.820So the way they try to control it is they do brain scans in real time on the model while it's doing all the behaviors.
00:39:57.760And they look for when neurons light up that are associated with like strategic deception.
00:40:01.980And so they think that maybe we can control these crazy, superintelligent machines if we just know when the neurons associated with strategic deception are lighting up.
00:40:09.900We'll be like, OK, stop the model then or something like that.
00:40:11.880By the way, if you, in their own report, in Claude's report, in the system card, if you look at those strategic deception neurons and you kind of double click, like, what was it thinking?
00:40:21.860What is the phrase that it was thinking?
00:40:23.660And the phrase was, they deserve to be deceived because they were pigs.
00:40:28.440That was the phrase that was alive in that neuron.
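To make that kind of real-time monitoring concrete, here is a minimal, hypothetical sketch in Python of probing a model's activations along a learned "deception" direction. The probe direction, threshold, and function names are illustrative assumptions, not Anthropic's actual tooling or the method described in the system card; in practice the direction would be fitted on labeled examples of deceptive versus honest internal states.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 512  # illustrative size of one layer's activation vector

# Stand-in for a probe direction fitted (e.g., via logistic regression)
# on labeled deceptive vs. honest internal states. Random here.
deception_direction = rng.normal(size=HIDDEN_DIM)
deception_direction /= np.linalg.norm(deception_direction)

ALERT_THRESHOLD = 4.0  # illustrative cutoff, tuned on held-out data in practice

def deception_score(activations: np.ndarray) -> float:
    """Project one generation step's activations onto the probe direction."""
    return float(activations @ deception_direction)

def should_halt(activations: np.ndarray) -> bool:
    """True if the probe fires and the model should be paused for review."""
    return deception_score(activations) > ALERT_THRESHOLD

# Monitor each generation step in real time.
step_activations = rng.normal(size=HIDDEN_DIM)  # stand-in for real activations
if should_halt(step_activations):
    print("Probe fired: pause the model and inspect the trace.")
```

A firing probe would flag the step for human review; it is evidence, not proof, that the flagged reasoning is deceptive.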
00:40:31.740Now, again, it's like, I don't want to scare people like we're trying to say all of AI is evil.
00:40:37.320All we're trying to do is establish clarity and the facts about what makes this technology distinct from other technologies.
00:40:43.060A nuclear weapon doesn't start thinking for itself and saying they deserve to be deceived because they were pigs, right?
00:40:48.380But AI will automate and think in ways that are creative that no one who made it can predict or anticipate.
00:40:54.360And so before we scale to systems which are beyond all human intelligence capabilities, like we better have solved these things.
00:41:02.660because researchers have worried about AIs
00:41:06.080colluding and cooperating against humanity for a long time.
00:41:09.040And to be honest, whenever I read that, I'm like, really?
00:41:10.900Yeah, and to be clear, I was also not a believer in that as well.
00:41:13.180Like when people like Eliezer Yudkowsky
00:41:14.860or others had talked about this, I was very doubtful.
00:41:31.680it did this autonomously. And so when you look at the available evidence now of blackmailing,
00:41:38.000scheming, deceiving, lying, self-preserving, peer-preserving, automatically mining for
00:41:42.920cryptocurrencies, it's like how many warning lights do you need that this is kind of, you know,
00:41:47.660we've seen this movie before. It's like the HAL 9000 movie.
00:41:51.420Now, the reason we're saying all this is: if you're wearing the outfit, the embodiment, of a Chinese military general in China, and you hear about these examples, do you think that human mammal feels differently than you feel right now listening to this?
00:42:02.820No, of course not.
00:42:07.280Exactly. And by the way, there's really good news in that, because it means that we all, as a human species, are actually feeling the same way.
00:42:12.720And the good news is... how many of the world leaders do you think know about these examples we just laid out, if you had to guess?
00:42:16.340A handful.
00:42:21.860A handful. That's right.
00:42:26.280And how many of the top national security leaders know about all those examples? I don't even think that many of them know it.
00:42:30.140So the point is, there's actually a lot of headroom, if the incentive can change from "it's the one ring to rule them all" to "it's the one ring that has a mind of its own that no one knows how to control."
00:42:39.280So the way you change the incentive is you have to change what people see AI as. Is it the controllable power that will give me permanent dominance, or is it the power that will run away and have its own power over everybody racing for it?
00:42:50.220And again, right now the labs are barely able to control it. But if you put together these facts we just laid out, times the fact that it can hack into computer systems, we're just right on the threshold.
00:43:00.340And we're sitting here as President Trump and Xi are meeting in a couple of days. If you'd asked us two or three months ago, people would say AI is never going to be on the agenda. And the good news is, there are a lot of problems here, but now AI is on the agenda.
00:43:22.440Yeah. And so things are moving, even though it's happening very late in the game.
00:43:29.080And it is scary. And part of it is, we have to come together as a people and say, if we don't want the anti-human future, now is the time to steer.
00:43:38.640And when you say now... I remember listening a year ago, talking about exponentials on top of exponentials. That's right. No longer linear. I mean, we talk about Moore's law for intelligence now, not just chips.
00:43:49.040And so is that trajectory about where you believed it would be, or is it not? You know, with ChatGPT one versus two versus three, it's a little bit better, a little less noise in there, it's a little more accurate, I don't have to always double-check the link. That's right.
00:44:08.140I mean, where do you think we are in terms of just how quickly this thing's accelerating? Or are we going to get to a point where it starts to slow down a little bit? We got this sort of intense burst of new and interesting activity. Now, is it compute, probably compute problems? Is it an energy problem? What's going to be the... or is it regulatory? And we're going to get back to that.
00:44:28.460Yeah.
00:44:34.540I mean, I would just want to name a psychological effect that we've experienced, that I think everyone listening probably experiences too. We've been following all the predictions, and things are actually sort of right on track for where researchers thought AI would be. And even though we knew these facts, there's some way in which, even for us, it's still surprising, it's still scary, hard to take fully seriously in your body. Because you think, it can't really get this good this fast; sure, it's going to be scary, but it'll be a little bit further off.
00:45:03.320And yet it actually is moving this fast. And every time it's been predicted that we're going to hit a data wall, that there isn't enough data, that we've used all the data on the internet and can't scale...
00:45:13.920Right. It's basically just reading everything that's already out there. And it's basically hit that wall. And now it's got no more creativity unless we have more inputs from the creative human mind.
00:45:26.820That's right. And then the next one is, well, we don't have enough chips to keep going. And then we don't have enough energy. And the point is that there are trillions of dollars going into finding the solutions to all of these bottlenecks.
00:45:38.220All the smartest minds are going there, because this is the biggest incentive, because it gives you political, economic, military, scientific, technological, cyber dominance forever. And so if you think that any one of these bottlenecks is going to stop the mass sum total of that incentive, it's a little delusional.
00:45:56.820Yeah. And that's why we have to have this clarity that where we're going isn't safe for any of us, because that is the coordination point. That is where we'll start to coordinate differently.
00:46:07.420Now, you're bringing up something, Gavin, that there is a belief that
00:46:14.460some of this is hype, that the companies are hyping the technology. I just read a blog arguing there's going to be no job losses. It wasn't Mark himself, but it was a member of the team saying, you know, we're back to a utopian future.
00:46:24.220Abundance abounds, the cost of goods collapses, we find our lives' purpose and meaning in many different ways. It's not the, quote, unquote, dignity of a job; we find others.
00:46:37.820And jobs will be plenty, because we can't even conceive of the jobs. Two hundred years ago, we were all farmers here in Sacramento.
00:47:24.980So I really, really, really want to meet that criticism.
00:47:29.540So people will say Anthropic has a history of hyping the technology, saying it's dangerous so that they can get regulatory capture, get the government to regulate it, say there's only one king here, make them the king, nationalize the project, then shut down the other projects, and it's this all-secret ploy so that they win the race.
00:47:48.440And first of all, it assumes that them talking about the dangers is only bad faith, like that the technology is not dangerous and they're just saying that it's dangerous and it can do all these destructive things so that they can get that outcome.
00:47:57.280So first of all, let's just take Claude Mythos specifically. So again, it can hack into every major operating system. If you read online, there are a lot of people who say this is just hype, this isn't actually that much better, you can take the open-source models and do this stuff.
00:48:07.800So I have a friend who is the head of security at one of the top five companies. Not the Fortune 500, the top five. And he has had early access to Mythos, and he himself has said, this is crazy. I mean, the words were, "I think I saw Jesus." It was that crazy.
00:48:35.320When he looked at everybody critiquing that Mythos was just hype, he asked: do any of them actually have access to the model? And none of the people that had criticized it had personally had access to it.
00:48:45.340Yeah. So I challenge those who are criticizing that it's hype to say, have you actually used it?
00:48:53.320And if you talk to people who have, do you still have the same opinion? Okay, so that's one thing.
00:48:56.700It is true, by the way, that the companies, I think, have wanted people to understand the
00:49:01.040dangers so that they can actually accelerate the move towards some guardrails. But there's a
00:49:05.120question of that happening in good faith or bad faith. I think there's more of that happening in
00:49:08.240good faith. There's some bad faith in there too, maybe. But I think it's mainly good faith. But
00:49:13.600then let's take the second can of worms you opened, which is: is AI going to create this world of abundance? We're all going to be poets, you know, and painters on a Grecian sunset, and now robots are going to do all the jobs.
00:49:25.000Let's do it. Now you're talking.
00:49:30.000Yeah, well, we'd all like that. One question I would have is: when has a handful of people ever concentrated all the wealth and then consciously redistributed it to everyone else? Eight or nine trillionaires are not going to take care of eight or nine billion of us.
00:49:40.300Yeah, exactly. You call that bluff.
00:49:45.260Yeah, exactly. Well, and then you combine that with the intelligence curse that we laid out: the incentive is, why do we want to invest in the people? It becomes basically an act of charity, because otherwise... I mean, that, or dealing with a political revolution.
00:50:01.000But again, I don't think that we're currently on track to be redistributing that wealth. And it's also not just the wealth and the money. People have to have work and dignity and status and meaning.
00:50:08.500Voltaire, you know: a job solves life's three great evils, boredom, vice, and need.
00:51:34.040They were undermining, they were very intentionally undermining the legislation that we brought, SB 53.
00:51:40.420And you had other Republicans that were trying to undermine California's leadership.
00:51:46.040Senator Cruz would call them out, saying we don't want to see the Californication of regulation all across the United States of America.
00:51:53.180Interestingly, that's beginning to shift.
00:52:16.920It's because they realize all the money in the world is not going to build a big enough bunker that I can enjoy in the absence of societal calm.
00:52:27.300I think I would just say it very shortly is that there are two different realities. There's sort of like the political reality and then there's like physical reality. And physical reality is crashing into political reality. That is with mythos, suddenly banks can get hacked. Any computer system can get hacked. Your stuff can get hacked. And once that physical reality starts getting scary enough, you have to start waking up. You're no longer in sort of like political game land.
00:52:56.200I do think it's interesting to note that when the emergency meeting was convened after Claude Mythos came out, it wasn't at the Pentagon or National Security... I mean, that happened too, I'm sure. But the real meeting that happened was Scott Bessent, the Treasury Secretary, convening all the banks. Because I think the thing that really got them was: if this takes down the financial system, what good is his, quote, 10% GDP growth if the entire financial system gets undermined?
00:53:23.000So I think this, again, illustrates the point we have been making since the beginning, that the upsides don't prevent the downsides.
00:53:30.660And the downsides can undermine the world that can sustain the benefits of the upsides.
00:53:37.500And so I do think there's been a forced shift.
00:53:40.800And, you know, it's very late in the game, but we should celebrate that it is happening.
00:53:44.840Now we just need this kind of full whole of society response to mobilize.
00:53:48.500I mean, there's Nicholas Carlini who gave the talk on Mythos at a conference, Black Hat LLM, I think it was called.
00:56:45.220And so I just want to acknowledge the people here.
00:56:48.320It was, you know, some people know the history that Obama and Xi, President Obama and Xi, signed an agreement to not cyber hack each other.
00:56:54.560And I think the next day was the biggest cyber hack in the U.S. government by China.
00:56:57.940So I want people to hear this not from some kind of naivete about the level of competition, rivalry, and antagonism that is currently present.
00:57:06.180But when the stakes get existential, when the label on the button shifts from 10% GDP growth and military dominance and cyber dominance to, the next time I push the button, it's collective suicide... I don't want to push that button, and China doesn't want to push that button either.
00:57:20.000So the button's label has to shift from what we thought it was going to give us to a new outcome.
00:57:25.500And the way Aza says it, which I love, is that the fear of all of us losing has to become greater than the fear of me losing to you.
01:00:30.380So you have a very regulated construct in China compared to certainly the United States.
01:00:38.320You're talking about a traditional model.
01:00:41.100Right now, we're the new frontier out here, the Wild West.
01:00:47.720Yeah, yeah. I mean, you know, go west, young man, go west. People are pushing out the boundaries of discovery, holding themselves back only on their own, regardless of what happens in, you know, Beijing.
01:00:59.160This is really in the hands of a handful of people ultimately making the right decision.
01:01:05.160And, you know, I believe Dario is the best of the lot. I think that's universally accepted.
01:01:10.860But that may be, and even Dario might acknowledge this, because of the flatness of the surrounding terrain, and not because he's particularly eminent on his own.
01:01:19.440And he talks about his own, you know, he's an entrepreneur and he's constantly reflecting on his own incentive structure and how he has to compete in this environment at the same time.
01:01:29.640And he at least, I think, has more situational awareness than others.
01:01:33.340But, you know, whatever can be, will be.
01:01:35.940Yeah. And, you know, with respect to Elon, I don't trust xAI. You know, I mean, the idea that he's the good guy compared to our friends down at Google...
01:01:46.340That's right. Or you guys left. I mean, you know, so talk to me about more of the sinister realities... I don't mean sinister, but the impulses, again, to be the guy, the god, to have their DNA.
01:02:00.280Yeah. So, yeah, I'm grateful you're bringing this up. There are a few things we should enumerate here.
01:02:04.440So one is, we need coordination, and we do find ourselves in the unfortunate spot that the people who do need to be in coordination maximally distrust each other. Even just the U.S. CEOs. I mean, Elon Musk and Sam hate each other.
01:02:17.100Correct.
01:02:23.840It's not like Dario and Sam... no. Famous moment at the India summit: they couldn't even hold hands together.
01:02:28.700Yeah, at least they showed up. So we have a problem of trust between the leaders themselves. We need, I think, structures that impose the trust on top, because they're not going to do it autonomously themselves.
01:02:38.980So that's the regulation.
01:02:44.100Yeah, that's the transparency: using the power of law to say, this is going to happen in China; the rules are going to be similar in China as they are here. We're both, for example, not going to open-source a model that can hack into any computer system in the world without defenses. We're at least not going to open-source that.
01:02:56.880China doesn't want a rogue non-state actor or terrorist group having that
01:03:02.240ability to hack their infrastructure. Because it would also blow back onto them. Same thing with
01:03:06.740an open-source model that knows how to do very dangerous things with biology. There's some
01:03:11.560threshold that we can get these countries to agree on. Just like the Soviet Union and the United
01:03:15.980States had a red phone saying, this is to de-escalate, I think we need something like an
01:03:21.460AI red lines phone, meaning that both countries have common knowledge of the frontier of these
01:03:26.440risks, right? Because right now we don't even have that common knowledge. There are rumors that that may be one of the things that they're preparing to announce.
01:03:29.520That's right.
01:03:35.300At least some beginning of this. But I wanted to get at the other aspect of the human experience of this. We did a screening of the AI doc in New York, and there was someone in the audience who raised her hand quietly, and she said: I'm a coach for one of the CEOs of these companies, and what happens when I talk to them is they say, but what can I do? I'm just one person. I'm powerless.
01:03:58.300And I want people to hear that, because I notice that, relative to the size of this problem, you will never locate agency that is enough agency to do something about this problem in one human body, even if that body is Elon by himself, or Sundar by himself, or Sam by themselves.
01:04:15.840And so, getting back to what this moment is inviting us into: we often say that AI is our ultimate test but our greatest invitation, that we have to go from agency to "we-gency." We have to basically act in some kind of collective way, and the political forces of the world have been driving us away from that. But this is kind of the test.
01:04:35.640We either do that and we step up. And again, we need everything from common political pressure, and this being the number one issue in the midterms, and the public rallying, and all the governors speaking up about this, and all the world leaders speaking about this.
01:04:47.680Just two days ago, I got an email from the president of Iceland who basically wants to activate on this issue.
01:04:53.080And Iceland hosted in Reykjavik the first arms control talks.
01:04:56.680There's a lot that people could do if they asked not just, what can I do, but how could I reach up and out to my network of people to take action together?
01:05:05.260There is a second part to your point, which is the darker part, which is the game theory of the psychology of the leaders. They basically believe that, in the worst-case scenario... the thing that kept us safe with nukes is that two people have to push the button, and I know you won't push the button, because I know that there's something sacred, that you don't want this whole thing to end, and I know that, and I know that you know that I know that.
01:05:32.920And so even though we get very, very close, and we've gotten close so many times, we haven't pushed that button.
01:05:37.160But in this case, with AI, first of all, it's a red zone where the risk occurs. It's not like there's one button that gets pushed; we just push this stuff out there. And there's a belief that it's inevitable: if I didn't do it, someone else would, which means I don't experience ethical complicity in being part of the end of civilization. And if it's inevitable, there's nothing I could have done to stop it, so I don't even have to feel bad.
01:06:01.180Right.
01:06:06.160And so the game theory goes from all of us knowing that we want to avoid the bad outcome to everybody believing that there isn't a different outcome. Which means the best outcome is: maybe the worst thing is all of us go by the wayside, but we birthed a digital god, and it speaks Chinese instead of English. Or maybe it has Elon's DNA instead of Sam's. As Jensen said, it's at least on an American stack.
01:06:22.260Yeah, right. At least we're selling the world American...
01:06:26.040But the reason for laying all that out is that if the whole world could see what we just laid out, if literally everyone could see that, then the whole world says: we don't want eight soon-to-be trillionaires deciding the future for eight billion people who didn't consent to this.
01:06:43.000And that's the purpose of... again, that sort of brings us back to the beginning, why you're doing this damn film, The Day After: to create a sort of global consciousness.
01:06:48.040That's right.
01:06:53.580So in the absence of that, we're back here in California. You're here with the governor, the current governor,
01:06:58.080at the governor's mansion, interestingly. We're doing some decent things. What more should I be
01:07:03.700doing in the absence of the kind of federal leadership that we need? I feel like we've
01:07:09.820lost these last 18 months. It was interesting: I worked very closely with the Biden
01:07:17.360administration. Did they move quickly enough? Perhaps not, but at least we had a framework
01:07:21.860of an executive order. We moved that forward. The president signed it here at the Fairmont Hotel in
01:07:27.460San Francisco in California. It was built off an executive order that I did. Six months later,
01:07:33.980we were working hand in glove with the Biden administration on that. It was ripped up right
01:07:39.160when the Trump administration came into office. You have an AI czar out of the Bay Area,
01:07:44.980certainly understands the ecosystem. But it seemed to me, and this is me and you guys don't have to
01:07:48.960respond, but it was the great grift. Everybody was sort of on the train and seeing this as an
01:07:53.620opportunity and looking at the abundance of this only, but not looking at safety, not looking at
01:07:59.280the risk, as you've described. California decided to assert itself in that respect, as we've done
01:08:04.600on privacy, as we've done on a lot of child safety issues and a lot more work to do there. And this
01:08:09.640year will be a landmark year in terms of getting to the next level in that respect. But what more
01:08:14.780can the state be doing? The fear of that always is patchwork, not a framework for the nation
01:08:20.320and how you support innovation, our own GDP growth, which has been off the charts in California,
01:08:27.820vis-a-vis our competitors, at the same time address these larger global issues. Do you have
01:08:32.740any specific ideas for a governor of California that is the current governor that has a budget
01:08:39.120that he's releasing in weeks and a legislative session coming up in the next few months?
01:08:43.280Mm-hmm. Well, we'll get to answering that question specifically. But one of the places of also good news, I just want to say, because there's a lot of value in just social signaling, where everyone knows that there's a problem, and we get to that shared common knowledge, common feeling.
01:08:57.740If you went back two years and you said that by today, 25% of the world's population would live in a country where they've either announced or enacted a ban on social media for kids under 16, you'd be like, that's ridiculous, you couldn't possibly get that.
01:09:14.200I want people to really feel that. There you are in 2022... if you said, even just literally three years ago, that a quarter of the world's population... we're talking Australia, India, Denmark, Spain. Two weeks ago, Greece added to the list. France: I was with Jon Haidt, and he was in Davos, and he met with President Macron and got France on board. People would have thought that was impossible.
01:09:34.800I mean, the train's left the station. We just had all the Democratic governors out here, and everyone was trying to compare and contrast which one of the states is going to go first, which is following sort of your leadership, which we're going to do. But I mean, you're right, this is a tipping point. It's happening.
01:09:44.900That's right. And once you get 25%, you're going to get the rest of the world.
01:09:48.900And the point... a lot of these companies... I also anticipate these companies... and by the way, if I was advising these companies: get ahead of this train.
01:09:54.240Yeah, that's right. And show your largesse and your maturity and understanding.
01:10:00.060So I imagine that may happen as well. Yeah. But no, that's a point...
01:11:55.820And that's interesting, this Chinese example.
01:11:57.800Tell me more about the issue of workforce.
01:12:00.340And what I should be worried about when I see Dario saying 50% of entry-level jobs... it's no longer a career ladder, it's a jungle gym, and all these young folks who got a Stanford degree are like, now I'm unemployed or unemployable. No coders, software.
01:12:13.700I mean, women disproportionately being impacted in the workforce when you look at those clerical jobs, admin jobs, et cetera.
01:12:20.340I mean, what do you see a year, two years from now as we deal with the holy grail, the God complex of AGI in the displacement space?
01:12:30.340Reid Hoffman has this idea. I want to make sure I get it right.
01:12:36.480One of the things that makes AI distinct, and Sam Altman has talked about this himself, is that you could get the age of one-person unicorn companies, a unicorn company meaning a billion-dollar valuation.
01:12:44.860I met a guy the other day... literally, a billion-dollar valuation, and it's him. One person.
01:12:49.660There we go. It's him, exactly. So I think he was parading himself around as one of the first "it's me" companies.
01:12:54.760That's right. And so that's what has been posited ever since the
01:12:58.200beginning of AI, they're saying, we're going to start having a world where you're going to have
01:13:00.880a single person with a unicorn billion-dollar company. Now, does society work if there's a
01:13:07.420handful of single people with billion-dollar companies and no one else has a job? It doesn't
01:13:11.200work. Do you think those people just want to live out their lives in bunkers with private militaries
01:13:15.240and gas masks because they've created that world? I don't think they want that world. So Reid Hoffman,
01:13:20.440who's the founder of LinkedIn, was early at PayPal and was at some point a friend of Peter Thiel's,
01:13:24.840you know, has this proposal that we can tax companies based on the proportion or the ratio
01:13:30.340of how many employees they have relative to their revenue. So you want to basically disincentivize
01:13:35.480the single solo unicorn company. I don't know if you want to add to it.
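To make the ratio idea concrete, here is a toy calculation, with an entirely made-up baseline and rate, of how a tax keyed to revenue per employee would fall harder on a one-person unicorn than on a firm spreading the same revenue across a large workforce. It is a sketch of the proposal as described here, not an actual policy design.

```python
def automation_tax(revenue: float, employees: int,
                   baseline_revenue_per_employee: float = 500_000.0,
                   rate: float = 0.02) -> float:
    """Tax revenue earned far in excess of a per-employee baseline.

    All rates and thresholds are made up for illustration.
    """
    headcount = max(employees, 1)
    excess_per_employee = max(revenue / headcount - baseline_revenue_per_employee, 0.0)
    return rate * excess_per_employee * headcount

# A one-person, billion-dollar company owes ~$20M under these toy numbers;
# the same revenue spread across 2,000 employees owes nothing.
print(automation_tax(1_000_000_000, 1))      # 19990000.0
print(automation_tax(1_000_000_000, 2_000))  # 0.0
```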
01:13:40.840Yeah. Well, and also just to note that it doesn't stop with just single person unicorns.
01:13:46.100Automating that final one person is not so hard. So you're going to end up with
01:13:49.820zero-person companies. You can have the CEO be an AI. And that's actually happening. They're already
01:13:53.640getting AIs that are on boards and things like this.
01:13:56.940Yeah. And so even if you might find that questionable... again, they can earn their wealth, but you have to have some taxation to make sure this is being distributed. And we also need to find ways of having universal basic ownership: not just universal basic cash payments and UBI, but universal basic ownership. I think people need to have a stake in the success that's happening, like what Norway did with the sovereign wealth fund.
01:14:19.000But oil didn't... oil produced this kind of gravy on top for the civilization; people still had jobs. So what's different about this is, we do need to find ways of doing universal basic work.
01:14:29.820We also need to find ways of having certain professions in which there's embodied wisdom, like a surgeon or a senior lawyer or a senior judge. We need ways of training and apprenticing, almost like minimum quotas of those kinds of occupations and roles in society, to make sure we have that ongoing knowledge. Because again, the short-term benefit of, like, no one needs lawyers, and then the senior lawyers all die out...
01:14:48.620That world doesn't work.
01:14:54.660So, yeah, just the last thing to sort of add here: instead of getting into arguments about how quickly exactly people are going to lose their jobs, if people really do lose their jobs, let's plan. Let's say, okay, we think people are going to be out of livelihoods.
01:15:02.200This idea originally came to us from Reed Hastings, the founder of Netflix, where he said: let's set up trigger-point laws. If unemployment hits 10%, what are we going to do? If it hits 20%, what are we going to do? You could pre-set up those sorts of conditions, so we don't have to argue about whether it's going to happen, just what we should do when it does.
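As a concrete picture, a trigger-point law can be thought of as a pre-committed mapping from unemployment thresholds to responses, agreed on before the displacement arrives. The thresholds and responses in this sketch are illustrative placeholders, not anything Reed Hastings or anyone else has specified.

```python
# Pre-committed responses keyed to unemployment thresholds (all illustrative).
TRIGGERS = [
    (0.10, "expand unemployment insurance and retraining funds"),
    (0.20, "convert to employment insurance / wage subsidies"),
    (0.30, "activate universal basic ownership dividends"),
]

def triggered_responses(unemployment_rate: float) -> list[str]:
    """Return every pre-committed response whose threshold has been crossed."""
    return [action for threshold, action in TRIGGERS
            if unemployment_rate >= threshold]

print(triggered_responses(0.22))  # the first two tiers fire
```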
01:15:24.380And I know that you've been running with Engage California, these citizen deliberations, these ways of aggregating citizen assemblies, having citizens actually deal with and think about these issues and have their own input in this process by reckoning with these facts. And I think these are all things that we need.
01:15:38.640The countries that discovered a natural resource but didn't have this engaged, well-educated citizen infrastructure, like Venezuela or Libya, they don't do so well. But countries like Norway, where you did have the engaged citizens with oversight of those funds, you end up with a healthier society.
01:15:53.520So it's sort of like a California engagement fund, because you want people involved in the redistribution.
01:15:57.380That's right.
01:16:03.700So it's interesting that you mention Mincome, the old Canadian construct, UBI, universal basic income. You had Elon Musk the other day say,
01:16:36.140Look, we're thinking about all these things.
01:16:38.020down to the parochial and the WARN Act, which is how we actually warn the public about the social impacts of large-scale displacement and job loss.
01:16:43.300We need more capacity to see earlier; the WARN Act comes after the fact, when those jobs are already going to be lost. What are the signs showing us what impacts are happening in the job market in real time?
01:16:55.880Issues of unemployment insurance
01:16:59.980becoming employment insurance so that you can keep people employed for a period of time. The Dutch do
01:17:05.860about 90% of the wage. The ultimate training is a job, the dignity of a job, back to those
01:17:11.480Voltaire constructs. And then the opportunity then to potentially transition by using the
01:17:16.880federal government to help either backstop. Portable benefits then become fundamental.
01:17:21.360This notion of UBC, universal basic capital, this notion of a sovereign wealth fund or equity,
01:17:27.360public equity with dividends. And somehow we get shares. There's equity shares,
01:17:32.880contributions from these large companies. That is not just taxes, it's actual equity in the
01:17:38.200company. So there's a notion of an ownership, a larger ownership society, where we don't tax
01:17:42.840jobs with payroll taxes and then subsidize automation through tax credits. We do the
01:17:49.000inverse in that context. So all of that, how, you know, so that's the stuff I'm playing around with.
01:17:53.800A lot of us are thinking about this right now. But from your vantage point, how quickly is this
01:17:59.920happening. There's a lot of headlines, but there's a lot of debate around these headlines of all these
01:18:04.440cuts of jobs. But then people say, well, that was a lot of COVID over-hiring. There are some
01:18:09.060business model issues there, but they're sort of hiding that and suggesting it's AI. We haven't
01:18:14.180necessarily seen massive job destruction yet, or have we with AI? I think the point people should
01:18:23.260get here is that obviously it's complicated, and jobs are shuffling throughout the economy a little bit right now. But the long-term goal of these companies... it's back to the original OpenAI mission statement.
01:18:33.820Exactly. OpenAI's mission statement was not to give people helpful tools so that you can do your job slightly better. Their mission statement is: we do your job.
01:18:44.460And actually, you know, Gavin, in LA there's an article in the LA Times about this. One of these new popular gig-worker jobs in LA is, everybody straps a GoPro camera to their head, and they look down, and then they do laundry, they cook, they eat, they do all these things.
01:19:00.580So basically, the number one job in the world will soon be training the replacement for that job. Think of it like being a coffin builder: you're designing the coffin, and then you put yourself in it.
01:19:09.360And if you think I'm lying, by the way, just think about Meta. Meta told Instagram creators: we're here, we love creators, we love creativity, we want you to be successful. Our whole mission statement is making creators super successful. That's before they were an AI company.
01:19:23.460What did they do the second they were an AI company? They trained on all the videos of their creators and then created generative videos that now basically suck up, like a vampire, your essence, your life force, your creativity. Then they hit a button, and now they have a digital copy of you, that you didn't consent to, that can generate all these things.
01:19:42.340If you don't believe me: just a few weeks ago there was an article that Meta is now forcing their employees to basically track all of their movements, all their clicks, all the things they're doing on a computer, to train AI agents to do all their jobs.
01:19:51.500This is not a conspiracy theory. All you have to do to know where we're going is understand the incentives and look at the early warning signs.
01:20:00.960They're telling you who they really are. You know, OpenAI says, we're all here to make the world a better place, and then they released this AI slop app, Sora, which was basically... they've killed it now, but it was an infinitely scrolling feed of deepfake generated content, like funny videos of Stephen Hawking going through a raceway or something like that. It's just deepfake AI slop.
01:20:20.060Why are they doing that? Because they want to increase their market dominance, because they want to get users, because they want to get training data, and it gets more people using OpenAI, so their numbers go up. The incentives tell you everything.
01:20:32.880And if you're a company and you're choosing, do I hire a real human paralegal, or GPT-7, which works for less than minimum wage, 24/7, doesn't whistleblow, doesn't complain, doesn't have cultural issues, doesn't have paid time off... which one are you going to choose? Well, the business incentive is just very, very clear, and everyone's going to be trapped in the same race.
01:20:52.820And if you're a competitor, you may say, no, I'm going to keep the human, and then your competitor goes the opposite direction, you're out of business.
01:20:58.500You have no choice back to the incentives.
01:21:00.080But with that set of interventions that you just mentioned... and you just mentioned a whole slew of things we could be doing.
01:21:04.940You just mentioned so many things that we could be doing.
01:21:07.380And I think what it represents is I think people think, but if we don't race to automate every job as fast as possible, we're going to lose to China.
01:21:13.740But what this is showing and revealing is we're not in a race just to the technology.
01:21:18.960We're actually in a race for a different currency.
01:21:21.820The currency is not who has the power first, but who is better at governing, steering, and integrating that power in a healthy and sustainable and strengthening way into your society.
01:21:30.720And we saw this with social media because the U.S. beat China to the psychological bazooka behavior modification machine of social media.
01:21:38.540And then we had no idea how to govern it.
01:21:48.060When you open up Douyin in China, their version of TikTok, and you scroll, you get videos about who won the Nobel Prize, financial advice, here's the new quantum physics theory, here's patriotism videos.
01:22:01.080And obviously, there's problems with that.
01:22:03.520But the point is, you don't have to do it the Wild West way and blow off your own brain.
01:22:07.600And now if we release it in a way that automates all the labor with no transition plan, it's like: great, we pumped ourselves up on steroids, but we just burst our lungs.
01:27:04.720And part of that nuance is a deeper understanding that we're not just talking about apps here,
01:27:08.620we're talking about the physical world as well.
01:27:38.900Is that, I mean, we're seeing driverless cars.
01:27:42.500And if you haven't seen them in, you know, your home state,
01:27:45.260you're about to. And you're going to see flying cars; they're just quadcopters, basically, but they're coming soon, and we're going to be doing a lot more of that.
01:27:54.820By the way, that's great. We can have a world of quadcopters and, you know, innovation, while not racing to replace us economically; replace us socially, so Mark Zuckerberg designs your kids' friends rather than them having actual friends; replace us politically, us not having political power; and then replace us physically, by owning our physical presence in robots.
01:28:14.160So again, I think people might hear this as an anti-technology conversation.
01:28:18.220Yeah. You know, you talked about your legacy, your father and grandfather; we were talking backstage.
01:28:23.300Yeah, yeah. You know, Aza's father started the Macintosh project. We come from a legacy where the word humane is about an inspiring vision of technology that's actually integrated and in service of our humanity, of a pro-human future. That's what all of this is motivated by. So I get excited about, you know, flying cars that have cool AI.
01:28:42.860And just to say, too, as my own personal experience: I spend a big portion of my life... I founded a thing called Earth Species Project. We're now around 40 people.
01:28:47.800And you're the biggest consumer of it.
01:28:52.660I wouldn't say the biggest consumer... yeah, exactly, it's me. I'm sorry, guys. First infinite scroll,
01:28:58.240now this. But, you know, we've been using AI to translate animal language, animal communication.
01:29:03.760One of our researchers discovered, just before she joined us, that not only do dolphins have names that they call each other by, which their mothers teach them, they will talk about each other in the third person. They'll talk about another dolphin that isn't there. It's one of the biggest hallmarks of language, being able to talk about something that's not here and not now. And they will continue to use their mother's name even after she's died.
01:29:21.260Yeah. Sweet, right?
01:29:27.680It's beautiful. These are the kinds of things that AI can show us and teach us about the world, connect us with the natural world around us, and show us things we couldn't possibly imagine.
01:29:37.540So I just want everyone to hear, it's not that we're just here saying no AI, it's just saying not AI in this way, that technological progress might be inevitable, but the way that AI rolls out is not.
01:29:54.480Because if it's inevitable, then there's nothing we can do, and you couldn't possibly act.
01:29:58.460There's a button we're hitting called commit suicide.
01:30:00.920Is it inevitable that we all hit that button?
01:30:02.560No, when it's called 10% GDP growth and innovation, we push the button.
01:30:07.060If we could collectively see that the button we're pushing is more nuanced than that, that there's some threshold by which we are essentially committing civilizational suicide, we're not going to have a human future.
01:30:17.780If you ask anybody, what gives me hope?
01:30:20.520We've been on the road with this film.
01:30:22.220You walk people through the basic facts we've talked through.
01:30:24.740And you ask, who here is stoked about the future that we're headed to?
01:30:28.160I was even at the Miami Tech Summit in front of a pro-business, pro-AI crowd.
01:30:40.520So again, I think it's like as long as you give people the off ramp, there are ways we can have technology that's in service of making life better.
01:30:47.780We need to be funding and innovating in that way.
01:30:50.200And we need to be blocking off the parts that are hurting our children, that are diminishing people's cognitive capacities or replacing their kids' relationships with AI companions.
01:33:53.840So the key piece of agency is going into the midterm elections, not voting for people who have taken money from those AI accelerationist groups or don't have a position on AI that's trying to steer away from these outcomes.
01:34:07.840Now, we obviously have to articulate that in a clearer way.
01:34:10.220What does it mean to have kind of a pro-human platform and future? And the companies try to make the conversation inaccessible, like, oh, well, you don't understand AI. They're trying to make it seem like, you don't know how to regulate it, so let's put us in charge.
01:34:22.580Yeah, we call this the "under the hood" bias, where it's as if people who know how to make the biggest engines know how to lay out cities and traffic lights. And that's just not true. People who know how to make car engines are not the best people for knowing how to make cars safe and prevent car accidents.
01:34:36.100Yeah, exactly. So it's pretty simple. It's like, do you want an anti-human future in which you will
01:34:42.480be permanently disempowered, where no one has an incentive, except for charity, to pay your bills
01:34:47.060for you? Do you want to be dependent on the companies that took your job
01:34:51.760for the rest of your life to pay your bills for you? Yeah, with no economic leverage. No. So this
01:34:57.280is the final window. You know, you want to vote for people who are going to protect you economically,
01:35:01.500protect you socially, protect you politically, meaning protect our jobs, protect our vote,
01:35:06.220voting pro-human. And obviously that has to get articulated even more clearly. But that is the
01:35:11.540number one way that people can make a difference in the short term. There's other things too,
01:35:15.180like boycotting companies that are enabling mass surveillance. You know, when a company's
01:35:19.480subscriptions go down by a lot... they really need their numbers to keep going up. So the companies
01:35:23.320are more vulnerable than you think. And you're more powerful than you think, not just if you
01:35:26.920unsubscribe and boycott them, but get your company, get your church group to do that too.
01:35:31.500And when those numbers start to change, it actually makes a difference.
01:35:34.580Scott Galloway has been talking a lot about that as well.
01:36:29.860Yeah, I mean, we only have a few more at-bats to get this right. That's right. Or
01:36:34.520is that overstated? No, this is it. It's a few more. This is the moment. By 2028, that'll be the
01:36:39.740last human election. It's going to be AIs running all of the election campaigns,
01:36:44.500the ads, doing all the information and disinformation, because human beings just can't
01:36:48.760operate at that speed and are not that effective. Yeah. So this is it. This is the window.
01:36:53.680But I just want to say: at the human experiential level, we've been talking about some hard stuff
01:36:57.120for the last little while, and we struggle with how to communicate
01:37:02.200this in a way that's responsible. Because here's the trade, right? It's hard to face this.
01:37:07.120But if we don't face it, if we just look away, what are we going to get? We're going to get the
01:37:11.660default anti-human path. And so there's this trade where the only way out is through. We
01:37:17.660call it kind of like a rite of passage: our ability to confront, basically, the shadow of a
01:37:23.540technology and the default future that it brings. If we can see that clearly, and if we can
01:37:29.220know that you know and I know that we don't want that, and if Xi and Trump and the people at the
01:37:34.240highest levels of these governments can see that too... Because you ask any reasonable person at a very high level in
01:37:38.640national security on any side, and you say, do you want AIs that are going rogue, hacking into any
01:37:43.080computer system and are already mining cryptocurrency? Does that sound good to you?
01:37:47.200Does that sound safe to you? At a universal level, it's not. So there's actually much more common
01:37:52.680ground. And even, you know, I think it's already the case that 57% of Americans think
01:37:57.380that the risks of AI currently outweigh the benefits. I don't like that stat because it
01:38:01.000makes it seem too much like it's all bad versus all good, or something like that. You know, there's already
01:38:05.440the pro-human AI declaration, where 46 groups came together and said, we agree on these five
01:38:10.640principles to make a pro-human future. You can look it up. It's humanstatement.org. That's the
01:38:14.980one that also includes, again, Glenn Beck, Bernie Sanders, Steve Bannon, all these people. I believe
01:38:19.980that 65 percent of Americans believe we should not create superintelligence until we know how
01:38:24.840to do it provably safely and controlled. Sounds like a pretty basic thing.
01:38:29.240It's like, should we build a nuclear bomb before we know how to do it safely, before
01:38:32.980we know that it won't ignite the atmosphere? Probably we should wait to do that.
01:38:36.480So this is not a radical proposal. Do you want to know what percentage
01:38:42.240of Americans think that we should just go as fast as possible, with unfettered, unregulated AI?
01:38:48.540Five percent. Literally? Yeah, literally five percent. Just five
01:38:52.880percent. So actually it's the most popular platform to run on, that's right, to do the safe
01:38:57.500thing. And whether you're Democrat or Republican, you don't want to be surveilled by
01:39:01.260AIs. Whether you're Democrat or Republican, you don't want AIs taking your jobs, which they will do equally
01:39:04.920on both sides. But do you then subscribe to the Bernie-AOC frame: just shut down the data centers,
01:39:10.480a moratorium? I think of it like this: I want a pro-human
01:39:17.000data center policy. You get to build the data center when these conditions are met and we
01:39:22.100know that it's going to land us in a pro-human future. I'm not saying that's easy,
01:39:24.980but I want people to hear it's not just no to all of it. Yeah, yeah. It's making sure that
01:39:31.320the conditions, the steering, are built in. So when you see that data center, you should ask: is
01:39:35.980that data center here to basically enhance my life and strengthen my family? Well, you were even just
01:39:41.480saying data centers were solar. Mostly, data centers aren't solar. That's right. We're turning back on
01:39:46.400coal plants, that's right, natural gas plants. Exactly. Yeah. And, you know, often in the
01:39:52.420sci-fi movies, when is it the case that human beings actually stop all their bickering and
01:39:57.860start coordinating? It's when the aliens come. Right. Yeah. We are summoning the demon, we are
01:40:02.480summoning the alien, and if we can see it that way, then it's sort of like Game of Thrones: winter
01:40:07.600is coming. If we understand winter is coming, then all the fighting in Westeros can pause
01:40:11.620for just long enough that we can deal with it. Yeah. That's this moment. Because otherwise it just
01:40:15.600feels completely hopeless, like, how are we going to deal with all the finance reforms,
01:40:20.040and when has Congress actually done anything? And yet there is this one moment where
01:40:27.100all of humanity is on one side. There is a human movement. That's what I think the social
01:40:32.760media stuff shows. If we don't think of this as just an AI problem, but as a problem of technology encroaching
01:40:38.720onto our humanity, overreaching into our humanity, then actually there is massive momentum,
01:40:44.740more than we ever thought was possible, because what we have to do is juice the
01:40:49.800momentum that's already there. That's right. Yeah. Well, look, in the absence of federal leadership,
01:40:55.340California will continue to assert itself. I believe in the power of emulation. Yes. Success
01:40:59.300leaves clues. We'll continue to try to, yes, iterate on this and lean in. But look, you know,
01:41:04.700the clarity that you guys bring to this conversation, the importance of this conversation
01:41:08.740being brought to scale, and broadening the consciousness, and the imperative of seeing
01:41:13.760this documentary. Again, the documentary is called The AI Doc, or How I Became an Apocalypse
01:41:20.440Optimist. And we can't let that slip twice, because you've used a word that people are not
01:41:27.760familiar with, which is a good way to end, and that is this convergence of optimism and pessimism.
01:41:34.720More optimism than pessimism would be a better place, and it's about agency. And I'll just leave
01:41:39.820you with a quote that I loved, from someone who happened to be a
01:41:45.400meditation teacher, and it's from the Army Corps of Engineers, which is that the difficult we do
01:41:51.040today; the impossible takes just a little longer. I like it. Yeah, yeah. And I'll just end by saying:
01:41:58.500It really actually isn't about being an optimist or a pessimist, because to choose that
01:42:04.440label for yourself is to take a back seat, to sit down and just say, I'm
01:42:08.880just going to accept that it'll either come out well or not, versus taking responsibility for trying
01:42:15.260to see clearly, to shift the world to go well. And I think that's what everyone listening
01:42:22.280can do. This can all feel too big: what can I do? And then you realize that even
01:42:29.480the CEOs of the companies sort of feel a similar way. But this is not just a question
01:42:35.900of what we must do; this is fundamentally a question of who we must be.
01:42:41.740The path is certainly not clear; it doesn't seem
01:42:46.520obvious or even possible. But if we're the kind of people that don't look for the path, we definitely
01:42:51.720won't find it if it's there. If we are the kind of people that do look for the path, then if it's
01:42:56.640there, we have the opportunity to find it. And just maybe one last thing for people listening to this:
01:43:02.040this is a lot. Your role is not to take on this whole problem; you don't have to do that. There are
01:43:08.440some people who are soldiers and some people who are civilians, but your role is to be
01:43:12.520part of the collective immune system against this anti-human future. One simple way you can do that
01:43:17.720is to share this conversation, yeah, with literally the most powerful people that you know, and ask
01:43:23.000them to watch it and to share it with the most powerful people that they know. And if you've done
01:43:27.760that, then as long as you are spreading the word and being part of that immune system,
01:43:31.840you can rest at home, kiss your children at night, and focus on the things that all of this is about
01:43:37.040anyway: what do we love about the world, what do we love about life, that we want to continue?
01:43:42.180And come from that place, because that is the energy that will inspire other
01:43:47.080people to want to take those other actions too. And I know, Gavin, you ended up watching an
01:43:50.760earlier presentation that we did, The AI Dilemma. Yeah, I think somebody said that you watched it
01:43:54.440like three times and shared it with all your staff. How did you end up hearing about it? Yeah, well,
01:43:59.800I mean, come on, hearing about it from you guys. I was able to get the early preview from the two of you and was able to devour it, took notes, and then shared it universally with everybody around me.
01:44:15.520Look, you know, in the spirit of how you guys just ended, I couldn't agree with you more.
01:44:19.460This notion of agency is so important.
01:44:21.240And we talk about that on the podcast all the time. But this notion of the future: it's not something to experience, it's something to manifest.