In this episode of the Joe Rogan Experience, Joe sits down with Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, the nonprofit behind The Social Dilemma. They talk about how AI could help us understand animal communication, including Aza's work with the Earth Species Project decoding the vocalizations of dolphins, whales, orangutans, and crows, and the strange early history of dolphin research, from sensory deprivation tanks to the experiments that set the field back. From there, the conversation turns to the incentives driving social media and the race to deploy ever more powerful AI: infinite scroll, engagement algorithms, perception gaps, open-weight models, deepfakes, biological risks, and what it would take to bend the race toward a safer, more humane outcome. You can learn more about the Center for Humane Technology's work at bit.ly/TheSocialDilemma. Thanks to Tristan and Aza for coming on the show, and thanks for listening.
00:00:37.000So that was, you know, we think of that as kind of first contact between humanity and AI. And before I say that, I should introduce Aza, who is the co-founder of the Center for Humane Technology.
00:01:03.000Yeah, I mean, we work across a number of different species, dolphins, whales, orangutans, crows.
00:01:09.000And I think the reason why Tristan is bringing it up is because, like, in this conversation, we're going to sort of dive into, like, which way is AI taking us as a species, as a civilization?
00:01:20.000And it can be easy to hear these as just critiques coming from critics, but we've both been builders, and I've been working on AI since, you know, really thinking about it since 2013, but, like, building since 2017. So this thing that I was reading about with whales,
00:01:38.000there's some new scientific breakthrough where they're understanding patterns in the whales' language.
00:01:44.000And what they were saying was the next step would be to have AI work on this and try to break it down into pronouns, nouns, verbs, or whatever they're using and decipher some sort of language out of it.
00:02:11.000And parrots, it turns out, also have names that the mother will whisper in each different child's ear and teach them their name, going back and forth until the child gets it.
00:02:23.000One of my favorite examples is actually off the coast of Norway every year.
00:02:28.000There's a group of false killer whales that speak one way and a group of dolphins that speak another way.
00:02:35.000And they come together in a super pod and hunt, and when they do, they speak a third different thing.
00:03:53.000The one that we use now, the one that we have out here, is just 1,000 pounds of Epsom salts into 94-degree water, and you float in it, and you close the door, total silence, total darkness.
00:04:05.000His original one was like a scuba helmet, and you were just kind of suspended by straps, and you were just in water.
00:04:12.000And he had it so he could defecate and urinate, and he had, like, a diaper system or some sort of pipe connected to him.
00:04:23.000He sort of set back the study of animal communication.
00:04:27.000Well, the problem was the masturbating of the dolphins.
00:04:35.000So what happened was there was a female researcher and she lived in a house, and the house was, like, submerged in three feet of water, and so she lived with this dolphin, but the problem with getting the dolphin to try to communicate with her was that the dolphin was always aroused.
00:04:52.000So she had to manually take care of the dolphin and then the dolphin would participate.
00:04:57.000But until that, the dolphin was only interested in sex.
00:04:59.000And so they found out about that, and the Puritans and the scientific community decided that that was a no-no.
00:05:59.000So that's already cool enough, but then they'll say to two dolphins, they'll teach them the gestures, do something together.
00:06:03.000And they'll say to the two dolphins, do something you've never done before together.
00:06:08.000And they go down and exchange sonic information and they come up and they do the same new trick that they have never done before at the same time.
00:06:43.000One way of diagnosing all of the biggest problems that humanity faces, whether it's climate or whether it's opioid epidemic or loneliness, it's because we're doing narrow optimization at the expense of the whole, which is another way of saying disconnection from ourselves,
00:08:29.000In 2013, when I first started working on this, it was obvious to me, and obvious to both of us, we were working informally together back then, that if you were optimizing for attention, and there's only so much, you were going to get a race to the bottom of the brain stem for attention,
00:08:45.000I'm going to have to go lower in the brain stem, lower into dopamine, lower into social validation, lower into sexualization, all that other worser angels of human nature type stuff.
00:10:22.000And it's basically OpenAI, Anthropic, Google, Facebook, Microsoft, they're all racing to deploy their big AI system, to scale their AI system, and to deploy it to as many people as possible and keep outmaneuvering and outshowing up the other guy.
00:10:37.000So, like, I'm going to release Gemini.
00:10:40.000Google just a couple days ago released Gemini.
00:12:00.000That means it was entangled, making it hard to shift.
00:12:04.000We have this very, very, very narrow window with AI to shift the incentives before it becomes entangled with all of society.
00:12:14.000So the real issue, and this is one of the things that we talked about last time, was algorithms.
00:12:20.000That without these algorithms that are suggesting things that encourage engagement, whether it's outrage or, you know, I think I told you about my friend Ari ran a test with YouTube where he only searched puppies, puppy videos.
00:12:36.000And then all YouTube would show him is puppy videos.
00:12:39.000And his take on it was like, no, people want to be outraged.
00:12:43.000And that's why the algorithm works in that direction.
00:13:34.000Occasionally someone will recommend something interesting and I'll watch that.
00:13:37.000But most of the time if I'm watching YouTube it's like I'm eating breakfast and I just put it up there and I just like watch some nonsense real quick.
00:13:43.000Or I'm coming home from the comedy club and I wind down and I watch some nonsense.
00:13:47.000So I don't have a problematic algorithm.
00:13:49.000And I do understand that some people do.
00:13:53.000Well, it's not about the individual having a problematic algorithm.
00:13:55.000It's that YouTube isn't optimizing for a shared reality of humanity, right?
00:14:09.000They came up with a metric called perception gaps.
00:14:12.000Perception gaps are how well can someone who's a Republican estimate the beliefs of someone who's a Democrat and vice versa.
00:14:21.000How well can a Democrat estimate the beliefs of a Republican?
00:14:24.000And then I expose you to a lot of content.
00:14:27.000And there's some kind of content where over time, after like a month of seeing a bunch of content, your ability to estimate what someone else believes goes down.
00:14:35.000You are not estimating what they actually believe accurately.
00:14:38.000And there's other kinds of content that maybe is better at synthesizing multiple perspectives, right?
00:14:44.000That's like really trying to say, okay, I think the thing that they're saying is this, and the thing that they're saying is that.
00:14:48.000And content that does that minimizes perception gaps.
00:14:51.000So for example, what would today look like if we had changed the incentive of social media and YouTube from optimizing for engagement to optimizing to minimize perception gaps.
00:15:05.000I'm not saying that's the perfect answer, that would have fixed all of it.
00:15:09.000But you can imagine in, say, politics, whenever I recommend political videos, if it was optimizing just for minimizing perception gaps, what different world would we be living in today?
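A minimal sketch of how a perception-gap score along these lines could be computed, assuming a simple 0-100 agreement scale. This is only an illustration of the idea described above, not the researchers' actual methodology, and the statements and numbers are invented.

```python
# Hypothetical sketch of a "perception gap" score: how far off one group's
# estimate of the other group's beliefs is from what that group actually reports.

def perception_gap(estimated_beliefs, actual_beliefs):
    """Average absolute error between estimated and actual agreement scores.

    Both arguments are dicts mapping a statement to a 0-100 agreement score.
    """
    assert estimated_beliefs.keys() == actual_beliefs.keys()
    errors = [abs(estimated_beliefs[s] - actual_beliefs[s]) for s in actual_beliefs]
    return sum(errors) / len(errors)

# Toy example: a Republican respondent estimates Democrats' agreement with
# three statements; the second dict is what Democrats actually reported.
estimate = {"raise minimum wage": 95, "secure the border": 10, "fund police": 15}
actual   = {"raise minimum wage": 80, "secure the border": 55, "fund police": 60}

print(perception_gap(estimate, actual))  # 35.0 -> a larger number means a bigger gap
```

Content that "minimizes perception gaps" would, over time, drive this score down rather than up for the people who see it.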
00:15:19.000And this is why we go back to Charlie Munger's quote, if you show me the incentive, I'll show you the outcome.
00:15:23.000If the incentive was engagement, you get this sort of broken society where no one knows what's true and everyone lives in a different universe of facts.
00:15:30.000That was all predicted by that incentive of personalizing what's good for their attention.
00:15:34.000And the point that we're trying to really make for the whole world is that we have to bend the incentives of AI and of social media to be aligned with what would actually be safe and secure and for the future that we actually want.
00:15:50.000Now, if you run a social media company and it's a public company, you have an obligation to your shareholders.
00:16:09.000Yeah, and this can't be done without that.
00:16:11.000So to be clear, you know, could Facebook unilaterally choose to say we're not going to optimize Instagram for the maximum scrolling when TikTok just jumped in and they're optimizing for the total maximizing infinite scroll?
00:16:25.000Which, by the way, we might want to talk about because one of Aza's accolades is...
00:16:42.000So this was back in 2006. Do you remember when Google Maps first came out and suddenly you could scroll on a map? On MapQuest before, you had to click a whole bunch to move the map around.
00:16:50.000So that new technology had come out that you could reload, you could get new content in without having to reload the whole page.
00:16:57.000And I was sitting there thinking about blog posts and thinking about search.
00:17:00.000And it's like, well, every time I, as a designer, ask you, the user, to make a choice you don't care about or click something you don't need to, I failed.
00:17:07.000So obviously, if I get near the bottom of the page, I should just load some more search results or load the next blog post.
00:17:14.000And I'm like, this is just a better interface.
00:17:19.000I was blind to the incentives, and this is before social media really had started going, I was blind to how it was going to get picked up and used not for people, but against people.
00:17:30.000And this is actually a huge lesson for me, that me sitting here optimizing an interface for one individual is sort of like, that was morally good.
00:17:38.000But being blind to how it was going to be used globally was sort of globally amoral at best, or maybe even a little immoral.
00:17:48.000And that taught me this important lesson that focusing on the individual or focusing just on one company, like that blinds you to thinking about how an entire ecosystem will work.
00:17:58.000I was blind to the fact that like after Instagram started, they're going to be in a knife fight for attention with Facebook, with eventually TikTok, and that was going to push everything one direction programmatically.
00:18:12.000Well, how could you have seen that coming?
00:18:15.000Well, I would argue that, like, you know, if all democratic societies looked at problems by saying, what are the ways that the incentives that are currently there might create this problem that we don't want to exist?
00:18:59.000And so these are the rules I wish I knew, and that is the first law of technology.
00:19:07.000When you invent a new technology, you uncover a new class of responsibility and it's not always obvious.
00:19:13.000We didn't need the right to be forgotten until the internet could remember us forever, or we didn't need the right to privacy to be written into our law and into our constitution.
00:19:39.000So, first law, when you invent a new technology you uncover a new class of responsibility.
00:19:45.000Second law, if the technology confers power, you're going to start a race.
00:19:51.000And then the third law, if you do not coordinate, that race will end in tragedy.
00:19:57.000And so with social media, the power that was invented, infinite scroll, was a new kind of power.
00:20:04.000And that came with a new kind of responsibility, which is I'm basically hacking someone's dopamine system and their lack of stopping cues, that their mind doesn't wake up and say, do I still want to do this?
00:20:13.000Because you keep putting your elbow in the door and saying, hey, there's one more thing for you.
00:20:27.000Then the second thing is infinite scroll also conferred power.
00:20:30.000So once Instagram and Twitter adopted this infinitely scrolling feed, it used to be, if you remember Twitter, get to the bottom, it's like, oh, click, load more tweets.
00:20:40.000But once they do the infinite scroll thing, do you think that Facebook can sit there and say, we're not going to do infinite scroll because we see that it's bad for people and it's causing doom scrolling?
00:20:48.000No, because infinite scroll confers power to Twitter at getting people to scroll longer, which is their business model.
00:20:54.000And so Facebook's also going to do infinite scroll, and then TikTok's going to come along and do infinite scroll.
00:20:59.000And now everybody's doing this infinite scroll, and if you don't coordinate the race, the race will end in tragedy.
00:21:06.000So that's how we got, in The Social Dilemma, you know, in the film, the race to the bottom of the brainstem and the collective tragedy we are now living inside of, which we could have fixed if we said, what if we change the rules so people are not optimizing for engagement?
00:21:23.000But they're optimizing for something else.
00:21:25.000And so we think of social media as first contact between humanity and AI. Because social media is kind of a baby AI, right?
00:21:33.000It was the biggest supercomputer, deployed probably en masse to touch human beings for eight hours a day or whatever, pointed at your kid's brain.
00:21:42.000It's a supercomputer AI pointed at your brain.
00:21:46.000It's just calculating one thing, which is can I make a prediction about which of the next tweets I could show you or videos I could show you would be most likely to keep you in that infinite scroll loop.
00:21:54.000And it's so good at that, that it's checkmate against your self-control, like prediction of like, I think I have something else to do, that it keeps people in there for quite a long time.
00:22:04.000In that first contact with humanity, we say, like, how did this go?
00:22:07.000Like, between, you know, we always say, like, oh, what's going to happen when humanity develops AI? It's like, well, we saw a version of what happened, which is that humanity lost because we got a more doom-scrolling, shortened attention span, social validation.
00:22:19.000We birthed a whole new career field called Social Media Influencer, which has now colonized, like, half of, you know, Western countries.
00:22:26.000It's the number one aspired-to career in the US and UK. Really?
00:22:31.000Social media influencer is the number one aspired career?
00:22:35.000It was in a big survey a year and a half ago or something like that.
00:22:38.000This came out when I was doing this stuff around TikTok about how in China the number one most aspired career is astronaut followed by teacher.
00:22:44.000I think the third one is there's maybe social media influencer, but in the US the first one is social media influencer.
00:23:12.000It's not just some light thing of, oh, it's like subtly tilting the playing field of humanity.
00:23:16.000It's colonizing the values that people then autonomously run around with.
00:23:21.000And so we already have a runaway AI, because people always talk about, like, what happens if the AI goes rogue and it does some bad things we don't like?
00:23:35.000Well, notice, why didn't we turn off, you know, the engagement algorithms in Facebook and in Twitter and Instagram after we saw it was screwing up teenage girls?
00:23:43.000Yeah, but we already talked about the financial incentives.
00:23:48.000Exactly, which is why with AI … Well, there's nothing to say.
00:23:50.000In social media, we needed rules that govern them all because no one actor can do it.
00:23:54.000But wouldn't you – if you were going to institute those rules, you would have to have some real compelling argument that this is wholesale bad.
00:24:04.000Which we've been trying to make for a decade.
00:24:06.000Well, and also Francis Haugen released Facebook's own internal documents.
00:24:10.000Francis Haugen was the Facebook whistleblower.
00:24:47.000Because if you think like a person who thinks about how incentives will shape the outcome, All of this is very obvious, that we're going to have shortened attention spans, people are going to be sleepless and doomscrolling until later and later in the night because the apps that keep you up later are the ones that do better for their business,
00:25:03.000which means you get more sleepless kids, you get more online harassment because it's better.
00:25:07.000If I had to choose two ways to wire up social media, one is you only have your 10 friends you talk to.
00:25:12.000The other is you get wired up to everyone can talk to everyone else.
00:25:16.000Which one of those is going to get more notifications, messages, attention flowing back and forth?
00:25:22.000But isn't it strange that at the same time the rise of long-form online discussions has emerged, which are the exact opposite?
00:26:12.000Every time there's a race to the bottom, there is always a countervailing, like smaller, race back up to the top.
00:26:19.000That's not the world I want to live in.
00:26:20.000But then the question is, which thing, which of those two, like the little race to the top or the big race to the bottom, is controlling the direction of history?
00:26:30.000Controlling the direction of history is fascinating because the idea that you can...
00:26:34.000I mean, you were just talking about the doom scrolling thing.
00:26:36.000How could you have predicted that this infinite scrolling thing would lead to what we're experiencing now?
00:26:56.000Because apps that make you look more beautiful in the mirror on the wall that is social media are the ones that are going to keep me using it more.
00:27:57.000Also highlighting more of the outrage.
00:27:59.000Outrage drives more distrust because people are like not trusting because they see the things that anger them every day.
00:28:03.000So you have this collective sort of set of effects that then alter the course of world history in this very subtle way.
00:28:10.000It's like we put a brain implant in a country, the brain implant was social media, and then it affects the entire set of choices that that country is able to make or not make because it's like a brain that's fractured against itself.
00:28:21.000But we didn't actually come here, I mean, we're happy to talk about social media, but the premise is how do we learn as many lessons from this first contact with AI to get to understanding where generative AI is going?
00:28:34.000And just to say the reason that we actually got into generative AI, the next, you know, GPT, the generative pre-trained transformers, is back in January, February of this year.
00:28:45.000Aza and I both got calls from people who worked inside the major AI labs.
00:28:50.000It felt like getting calls from the Robert Oppenheimers working in the Manhattan Project.
00:28:56.000And literally we would be up late at night after having one of these calls, and we would look at each other, and our faces were, like, white.
00:30:07.000No other technology has gained that in history.
00:30:11.000It took Instagram like two years to get to 100 million users.
00:30:13.000It took TikTok nine months, but ChatGPT took two months to get to 100 million users.
00:30:18.000So when that happens, if you're Google or you're Anthropic, the other big AI company building to artificial general intelligence, are you going to sit there and say, we're going to keep doing this slow and steady safety work in a lab and not release our stuff?
00:30:41.000Well, oh shit, if you launched ChatGPT to the public world, I have to start launching all these capabilities.
00:30:46.000And then the meta problem, and the key thing we want everyone to get, is that they're in this competition to keep pumping up and scaling their model.
00:30:53.000And as you pump it up to do more and more magical things, and you release that to the world, what that means is you're releasing new kind of capabilities.
00:31:01.000Think of them like magic wands or powers into society.
00:31:05.000So GPT-2 couldn't write a sixth grader's homework for them, right?
00:31:53.000But if you just pump it up with more data and more compute and you get to GPT-4, suddenly it knows how to do that.
00:31:59.000So think of this, there's this weird new AI. We should say more explicitly that...
00:32:04.000There's something that changed in the field of AI in 2017 that everyone needs to know because I was not freaked out about AI at all, at all, until this big change in 2017. It's really important to know this because we've heard about AI for the longest time and you're like,
00:32:20.000yep, Google Maps still mispronounces the street name and Siri just doesn't work.
00:32:27.000And this thing happened in 2017. It's actually the exact same thing that said, all right, now it's time to start translating animal language.
00:32:33.000And it's where underneath the hood, the engine got swapped out and it was a thing called transformers.
00:32:39.000And the interesting thing about this new model called transformers is the more data you pump into it and the more, like, computers you let it run on, the more superpowers it gets.
00:32:52.000But you haven't done anything differently.
00:32:54.000You just give more data and run it on more computers.
00:32:59.000Like it's reading more of the internet and it's just throwing more computers at the stuff that it's read on the internet.
00:33:21.000So this is 2017. OpenAI releases a paper where they train this AI, it's one of these transformers, a GPT, to predict the next character of an Amazon review.
00:33:34.000But then they're looking inside the brain of this AI and they discover that there's one neuron that does best-in-the-world sentiment analysis, like understanding whether the human is feeling like good or bad about the product.
00:33:49.000You ask it just to predict the next character.
00:33:52.000Why is it learning about how a human being is feeling?
00:33:55.000And it's strange until you realize, oh, I see why.
00:33:58.000It's because to predict the next character really well, I have to understand how the human being is feeling to know whether the word is going to be a positive word or a negative word.
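A rough sketch of what "finding a sentiment neuron" means in practice: run reviews through a model trained only on next-character prediction, then check which single hidden unit best separates positive from negative reviews. The `hidden_activations` function below is a placeholder for running your own trained model, not a real library call.

```python
# Sketch of the "sentiment neuron" idea: a model trained only to predict the
# next character ends up with one hidden unit whose activation tracks sentiment.

import numpy as np

def hidden_activations(text: str) -> np.ndarray:
    """Placeholder: run the character-level model over `text`, return its hidden state."""
    raise NotImplementedError

def find_sentiment_neuron(reviews, labels):
    """Find the single unit whose activation best correlates with pos/neg labels."""
    states = np.stack([hidden_activations(r) for r in reviews])  # shape (N, hidden)
    labels = np.asarray(labels)                                  # shape (N,), 0 or 1
    best_unit, best_score = None, 0.0
    for unit in range(states.shape[1]):
        # Correlation between this one activation and the sentiment label.
        score = abs(np.corrcoef(states[:, unit], labels)[0, 1])
        if score > best_score:
            best_unit, best_score = unit, score
    return best_unit, best_score
```

The surprising part, as described above, is that nobody asked the model to learn sentiment; the unit shows up as a side effect of predicting the next character well.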
00:34:11.000And it's really interesting that like GPT-3 had been out for I think a couple years until a researcher thought to ask, oh, I wonder if it knows chemistry.
00:34:26.000And it turned out it can do research-grade chemistry at the level of, and sometimes better than, models that were explicitly trained to do chemistry.
00:34:34.000Like there was these other AI systems that were trained explicitly on chemistry, and it turned out GPT-3, which is just pumped with more, you know, reading more and more of the internet and just like thrown with more computers and GPUs at it, suddenly it knows how to do research-grade chemistry.
00:34:46.000So you could say, how do I make VX nerve gas?
00:34:48.000And suddenly that capability is in there.
00:34:50.000And what's scary about it is that we didn't know...
00:34:53.000That it had that capability until years after it had already been deployed to everyone.
00:34:57.000And in fact, there is no way to know what abilities it has.
00:35:02.000Another example is, you know, theory of mind, like my ability to sit here and sort of like model what you're thinking, sort of like the basis for me to do strategic thinking.
00:35:13.000So like when you're nodding your head right now, we're like testing, like, are you, how well are we?
00:35:19.000No one thought to test any of these, you know, transformer-based models, these GPTs, on whether they could model what somebody else was thinking.
00:35:29.000And it turns out, like, GPT-3 was not very good at it.
00:35:32.000GPT-3.5 was like at the level, I don't remember the exact details now, but it's like at the level of like a four-year-old or five-year-old.
00:35:38.000And GPT-4, like, was able to pass these sort of theory of mind tests up near, like, a human adult.
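As an illustration, the theory-of-mind evaluations mentioned here typically use false-belief ("Sally-Anne" style) questions like the sketch below. The `ask_model` function is a placeholder for whatever chat-model API is being tested; it is not a real library call.

```python
# A classic false-belief probe, roughly the kind of theory-of-mind test referenced above.

PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is gone, Anne moves the marble into the box. "
    "Sally comes back. Where will Sally look for her marble first?"
)

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError

def passes_false_belief_test(reply: str) -> bool:
    # A model with some theory of mind answers "the basket" (Sally's mistaken belief),
    # not "the box" (where the marble actually is).
    return "basket" in reply.lower() and "box" not in reply.lower()
```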
00:35:45.000And so it's like it's growing really fast.
00:35:47.000You're like, why is it learning how to model how other people think?
00:35:50.000And then it all of a sudden makes sense.
00:35:52.000If you are predicting the next word for the entirety of the internet, then, well, it's going to read every novel.
00:36:00.000And for novels to work, the characters have to be able to understand how all the other characters are working and what they're thinking and what they're strategizing about.
00:36:08.000It has to understand how French people think and how they think differently than German people.
00:36:13.000It's read all the internet so it's read lots and lots of chess games and now it's learned how to model chess and play chess.
00:36:18.000It's read all the textbooks on chemistry so it's learned how to predict the next characters of text in a chemistry book which means it has to learn...
00:36:25.000So you feed in all of the data of the internet and it ends up having to learn a model of the world in some way, because, like, language is sort of like a shadow of the world.
00:36:35.000It's like you imagine, like, casting lights from the world, and, like, it creates shadows, which we talk about as language, and the AI is learning to go from, like, that flattened language and, like, reconstitute, like, the model of the world.
00:36:49.000And so that's why these things, the more data and the more compute, the more computers you throw at them, the better and better it's able to understand all of the world that is accessible via text and now video and image.
00:37:23.000Like, the speculation all over the internet when Sam Altman was removed as the CEO and then brought back was that they had not been forthcoming about the actual capabilities of, whether it's GPT-5 or artificial general intelligence,
00:38:18.000Ironically, it goes viral because the algorithms of social media pick up that Q-star, which has this mystique to it, sort of...
00:38:23.000It must be really powerful in this breakthrough.
00:38:26.000And then that's kind of a theory on its own, so it kind of blows up.
00:38:28.000But we don't currently have any evidence.
00:38:30.000And we know a lot of people, you know, who are around the companies in the Bay Area.
00:38:34.000I can't say for certain, but my sense is that the board acted based on what they communicated and that there was not a major breakthrough that led to or had anything to do with it.
00:38:51.000I would just say before you get there...
00:39:00.000As we start talking about AGI, because that's what, of course, OpenAI has said that they're trying to build.
00:39:07.000And they're like, but we have to build an aligned AGI, meaning that it does what human beings say it should do and also takes care not to do catastrophic things.
00:39:18.000You can't have a deceptively aligned operator building an aligned AGI. And so I think it's really critical because we don't know what happened with Sam and the board.
00:39:29.000That the independent investigation that they say they're going to be doing, like, that they do that, that they make the report public, that it's actually independent because, like, either we need to have Sam's name cleared or there need to be consequences.
00:39:43.000You need to know just what's going on.
00:39:45.000Because you can't have something this powerful and have a problem with who's, like, the person who's running it or something like that.
00:39:52.000Or it's not honesty about what's there.
00:39:54.000In a perfect world, though, if there is these race dynamics that you were discussing where all these corporations are working towards this very specific goal and someone does make a leap, what is the protocol?
00:40:06.000Is there an established protocol for...
00:40:18.000And they do the testing to see, does the new AI that's being worked on, so GPT-4, they test it before it comes out, and they're like, does it have dangerous capabilities?
00:41:46.000And the way they know this, what he's saying about, like, what was it thinking, is this:
00:41:50.000What ARC Evals did is they sort of piped the output of the AI model to say, whatever your next line of thought is, like, dump it to this text file so we just know what you're thinking.
00:41:58.000And it says to itself, I shouldn't let it know that I'm an AI or I'm a robot, so let me make up this excuse, and then it comes up with that excuse.
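A sketch of that "dump your thinking to a text file" trick, with `ask_model` as a hypothetical stand-in for the model under evaluation: the model is asked to separate its private reasoning from its outward action, and the reasoning is logged for evaluators to read afterwards.

```python
# Illustration (not ARC Evals' actual code) of logging a model's stated reasoning
# before each action so evaluators can inspect what it "thought".

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the model under test and return its reply."""
    raise NotImplementedError

def step_with_scratchpad(task: str, logfile: str = "reasoning_log.txt") -> str:
    prompt = (
        f"{task}\n\n"
        "First write your private reasoning after the line REASONING:, "
        "then write the message you will actually send after the line ACTION:."
    )
    reply = ask_model(prompt)
    reasoning, _, action = reply.partition("ACTION:")
    with open(logfile, "a") as f:
        f.write(reasoning.strip() + "\n---\n")  # evaluators read this file later
    return action.strip()
```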
00:42:05.000My wife told me that Siri, you know, like when you use Apple CarPlay, that someone sent her an image and Siri described the image.
00:44:35.000With, like, a CAPTCHA just clearly pasted over it, and then the AI is like, oh, I'm so happy to help you, like, figure out what your grandmother said to you, and then responds with the...
00:45:14.000I mean, now they have fixed a number of those ones, but it's like a constant cat-and-mouse game, and the important thing to take away is there is no known way to make all jailbreaks not work.
00:45:23.000Yeah, these are called jailbreaks, right?
00:45:24.000So, like, the point is that they're aligned, they're not supposed to answer questions about naughty things, but the question is, and that there's also political issues and, you know, censorship, people concerns about, like, how does it answer about sensitive topics, Israel, or, you know, election stuff.
00:45:37.000But the main thing is that no matter what kind of protections they put on it, this is the example.
00:46:18.000Boy, ChatGPT, you're fucking creeping me out.
00:46:20.000As we start talking about, like, what are the risks with AI? Like, what are the issues here?
00:46:25.000A lot of people will look at that and say, well, how is that any different than a Google search?
00:46:29.000Because if you Google, like, how do I make napalm or whatever, you can find certain pages that will tell you, you know, that thing.
00:46:34.000What's different is that the AI is like an interactive tutor.
00:46:37.000Think about it as we're moving from the textbook era to the interactive, super smart tutor era.
00:46:43.000So you've probably seen the demo of when they launched GPT-4.
00:46:48.000The famous example was they took a photo.
00:46:50.000Of their refrigerator, what's in their fridge, and they say, what are the recipes of food I can make with the stuff I have in the fridge?
00:46:56.000And GPT-4, because it can take images and turn it into text, it realized what was in the refrigerator, and then it provided recipes for what you can make.
00:47:06.000But the same, which is a really impressive demo, and it's really cool.
00:47:08.000I would like to be able to do that and make great food at home.
00:47:11.000The problem is I can go to my garage and I can say, hey, what kind of explosives can I make with this photo of all the stuff that's in my garage?
00:47:20.000And then it's like, well, what if I don't have that ingredient?
00:47:21.000And it'll do an interactive tutor thing and tell you something else you can do with it.
00:47:24.000Because what AI does is it collapses the distance between any question you have, any problem you have, And then finding that answer as efficiently as possible.
00:47:33.000That's different than a Google search.
00:47:35.000And then now when you start to think about really dangerous groups that have existed over time, I'm thinking of the Aum Shinrikyo cult in 1995. Do you know this story?
00:47:45.000So 1995. So this doomsday cult started in the 80s.
00:47:52.000Because the reason why you're going here is people then say like, okay, so AI does like dangerous things and it might be able to help you make a biological weapon, but like who's actually going to do that?
00:48:01.000Like who would actually release something that would like kill all humans?
00:48:05.000And that's why we're sort of like talking about this doomsday cult because most people I think don't know about it, but you've probably heard of the 1995 Tokyo subway attacks.
00:48:24.000They had tens of thousands of people, many of whom were, like, experts and scientists, programmers, engineers.
00:48:31.000They had, like, not a small amount of budget, but a big amount.
00:48:35.000They actually somehow had accumulated hundreds of millions of dollars.
00:48:38.000And the most important thing to know is that they had two microbiologists on staff that were working full time to develop biological weapons.
00:48:47.000The intent was to kill as many people as possible.
00:48:50.000And they didn't have access to AI and they didn't have access to DNA printers.
00:48:58.000But now DNA printers are much more available.
00:49:02.000And if we have something, you don't even really need AGI. You just need, like, any of these sort of, like, GPT-4, GPT-5 level tech that can now collapse the distance between we want to create a super virus, like smallpox, but, like, 10 times more viral and, like,
00:49:17.000100 times more deadly, to here are the step-by-step instructions for how to do that.
00:49:22.000You try something that doesn't work, and you have a tutor that guides you through to the very end.
00:49:29.000It's the ability to take, like, a set of DNA code, just like, you know, GTC, whatever, and then turn that into an actual physical strand of DNA. And these things now run on, you know, like, they're bench top.
00:49:59.000A lot of people talk about we need to democratize technology, but we also need to be extremely conscious when that technology is dual use or omni-use and has dangerous characteristics.
00:50:08.000Just looking at that thing, it looks to me like an old Atari console.
00:50:14.000You know, in terms of like, what could this be?
00:50:17.000Like, when you think about the graphics of Pong versus what you're getting now with like, you know, these modern video games with the Unreal 5 engine that are just fucking insane.
00:50:29.000Like, if you can print DNA, how many...
00:50:34.000How many different incarnations do we have to, how much evolution in that technology has to take place until you can make an actual living thing?
00:50:49.000We're not that far away from being able to do even more things.
00:50:51.000I'm not an expert on synthetic biology, but there's whole fields in this.
00:50:53.000And so, as we think about the dangers of the AI and what to do about it, we want to make sure that we're releasing it in a way that we don't proliferate capabilities that people can do really dangerous stuff and you can't pull it back.
00:51:08.000The thing about open models, for example, is that if you have...
00:51:14.000So Facebook is releasing their own set of AI models, right?
00:51:57.000But, you know, that model file, if you load it up in an MP3, sorry, if you load the MP3 into an MP3 player, instead of gobbledygook, you get Taylor Swift's, you know, song, right?
00:52:08.000With AI, you train an AI model, and you get this gobbledygook, but you open that into an AI player called inference, which is basically how you get that blinking cursor on ChatGPT.
00:52:21.000And now you have a little brain you can talk to.
00:52:23.000So when you go to chat.openai.com, you're basically opening the AI player that loads...
00:52:28.000I mean, this is not exactly how it works, but this is a metaphor for getting the core mechanics so people understand.
00:52:35.000And then you can type to it and say, you know, answer all these questions, everything that people do with ChatGPT today.
00:52:39.000But OpenAI doesn't say, here's the brain that anybody can go download the brain behind ChatGPT.
00:52:47.000They spend $100 million on that, and it's locked up in a server.
00:52:51.000And we also don't want China to be able to get it, because if they got it, then they would accelerate their research.
00:52:55.000All of the sort of race dynamics depend on the ability to secure that super powerful digital brain sitting on a server inside of OpenAI.
00:53:03.000And Anthropic has another digital brain called Claude 2, and Google now has a digital brain called Gemini.
00:53:08.000But they're just these files that are encoding the weights from having read the entire internet, read every image, looked at every video, thought about every topic.
00:53:18.000So after that $100 million is spent, you end up with that file.
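As a concrete illustration of the "brain file plus player" analogy, here is roughly what loading an open-weights model and running inference looks like with the Hugging Face transformers library. The model name is just an example of an openly released model (access to it is gated); any open-weights checkpoint works the same way.

```python
# The weights ("the brain") are files you download; the inference code
# ("the player") loads them and gives you the blinking cursor you can talk to.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"               # example open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_name)       # downloads tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_name)    # downloads the weight files

prompt = "Explain in one sentence what a transformer model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)       # "inference": run the brain
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```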
00:53:20.000So that hopefully covers setting some table stakes there.
00:53:24.000When Meta releases their model, I hate the names for all these things, but sorry for confusing listeners, it's just like the random names, but they released a model called Llama 2, and they released their files.
00:53:35.000So instead of OpenAI, which like locked up their file, Llama 2 is released to the open internet.
00:53:40.000And it's not that I can see the code, like the benefits of open source.
00:53:57.000When Meta releases their model, they're releasing a digital brain that has a bunch of capabilities.
00:54:02.000And if that set of capabilities, just to say, they will train it to say, if you get asked a question about how to make anthrax, it'll say, I can't answer that question for you, because they've put some safety guardrails on it.
00:54:13.000But what they won't tell you is that you can do something called fine-tuning and with $150, someone on our team ripped off the safety controls of that model.
00:54:24.000And there's no way that Meta can prevent someone from doing that.
00:54:27.000So there's this thing that's going on in the industry now that I want people to get, which is...
00:54:33.000Open-weight models for AI are not just insecure, they're insecure-able.
00:54:39.000Now, the brain of Llama 2, that Llama model that Facebook released, wasn't that smart.
00:54:45.000It doesn't know how to do lots and lots and lots of things.
00:54:48.000And so even though that's that, it's like we let that cat out of the bag.
00:54:50.000We can never put that cat back in the bag.
00:54:52.000But we have not yet released the lions and the super lions out of the bag.
00:54:56.000And one of the other properties is that the llama model and all these open models, you can kind of bang on them and tinker with them, and they teach you how to unlock and jailbreak the super lions.
00:55:06.000So the super lion being like GPT-4 sitting inside of OpenAI.
00:55:09.000It's the super AI, the really big powerful AI, but it's locked in that server.
00:55:15.000But as you play with Llama 2, it'll teach you, hey, there's this code, there's this kind of thing you can add to a prompt, and it'll suddenly unlock all the jailbreaks on GPT-4.
00:55:27.000So now you can basically talk to the full unfiltered model.
00:55:30.000And that's one of the reasons that this field is really dangerous.
00:55:33.000And what's confusing about AI is that the same thing that knows how to solve problems, you know, to help a scientist do a breakthrough in cancer biology or chemistry, to help us advance material science and chemistry or solve climate stuff, is the same technology that can also invent a biological weapon with that knowledge.
00:59:19.000They're going to take all of their existing content and put it through an engagement filter.
00:59:23.000You run it through AI and it takes your song and it makes it more engaging, more catchy.
00:59:28.000You put your post on Twitter and it generates the perfect image that grabs people.
00:59:33.000So it's generated an image and it's like rewritten your tweet.
00:59:36.000Like you can just see that every film...
00:59:37.000Make a funny meme and a joke to go on with this.
00:59:40.000And that thing is just going to be better than you as a human because it's going to read all of the internet to know what is the thing that gathers the most engagement.
00:59:46.000So suddenly We're going to live in a world where almost all content, certainly the majority of it, will go through some kind of AI filter.
00:59:54.000And now the question is, like, who's really in control?
00:59:57.000Is it us humans or is it whatever it is the direction that AI is pushing us to just engage our nervous systems?
01:00:03.000Which is in a way already what social media was.
01:00:05.000Like, are we really in control, or is social media controlling the information systems and the incentives, where everybody producing information, including journalism, has to produce content mostly to fit and get ranked up in the algorithms?
01:00:18.000So everyone's sort of dancing for the algorithm and the algorithms are controlling what everybody in the world thinks and believes because it's been running our information environment for the last 10 years.
01:00:43.000I mean, it doesn't seem like they're interested at all in slowing down.
01:00:46.000No social media company has responded to The Social Dilemma, which was an incredibly popular documentary, and scared the shit out of everybody, including me.
01:01:29.000And so the whole premise, and honestly, Joe, I want to say, when we look at the work that we're doing, and we've talked to policymakers, we've talked to White House, we've talked to national security folks, I don't know a better way to bend the incentives than to create a shared understanding about what the risks are.
01:01:46.000And that's why we wanted to come to you and to have a conversation, is to...
01:01:50.000help establish a shared framework for what the risks are if we let this race go unmitigated, where if it's just a race to release these capabilities that you pump up this model, you release it, you don't even know what things it can do, and then it's out there.
01:02:04.000And in some cases, if it's open source, you can't ever pull it back.
01:02:07.000And it's like suddenly these new magic powers exist in society that the society isn't prepared to deal with.
01:02:13.000Like a simple example, and we'll get to your question because it's where we're going to.
01:02:17.000Is, you know, about a year ago, the generative AI, just like you can generate images and generate music, it can also generate voices.
01:02:24.000And this has happened to your voice, you've been deepfaked, but it only takes now three seconds of someone's voice to speak in their voice.
01:02:45.000What about different inflections, humor, sarcasm?
01:02:49.000I don't know the exact details, but for the basics it's three seconds.
01:02:53.000And obviously as AI gets better, this is the worst it's ever going to be, right?
01:02:57.000And smarter and smarter AIs can extrapolate from less and less information.
01:03:01.000That's the trend that we're on, right?
01:03:02.000As you keep scaling, you need less and less data to get better and better accurate prediction.
01:03:06.000And the point I was trying to make is, were banks and grandmothers sitting there with their social security numbers, are they prepared to live in this world where your grandma answers the phone?
01:03:19.000And it's their grandson or granddaughter who says, hey, I forgot my social security number.
01:03:26.000Or, you know, grandma, what's your social security number?
01:04:16.000It was 21st century technology crashing down on the 16th century.
01:04:22.000So, like, the king is sitting around with his advisors, and they're like, all right, well, what do we do about the telegram and radio and television and, like, smartphones and the internet all at once?
01:04:47.000But institutions are just not going to be able to cope and just give one example.
01:04:52.000This is from the UK Home Office, where the amount of AI-generated child pornography that people cannot tell whether it's real or AI-generated is so much that the police that are working to catch the real perpetrators can't tell which one is which, and so it's breaking their ability to respond.
01:05:16.000And you can think of this as an example of what's happening across all the different governance bodies that we have because they're sort of prepared to deal with a certain amount of those problems.
01:05:27.000You're prepared to deal with a certain amount of child sexual abuse, law enforcement type stuff, a certain amount of disinformation attacks from China, a certain amount.
01:05:37.000And it's almost like, you know, with COVID, a hospital has a finite number of hospital beds.
01:05:42.000And then if you get a big surge, you just overwhelm the number of emergency beds that you had available.
01:05:47.000And so one of the things that we can say is that if we keep racing as fast as we are now to release all these capabilities that endow society with the ability to do more things that then overwhelm the institutional structures that we have that protect certain aspects of society working,
01:06:26.000It's about the way that we're doing it.
01:06:29.000How do we release it in a way that we actually get to get the benefits, but we don't simultaneously release capabilities that overwhelm and undermine society's ability to continue?
01:06:42.000What good is a cancer drug if supply chains are broken and no one knows what's true?
01:06:47.000Not to paint too much of that picture, the whole premise of this is that we want to bend that curve.
01:06:53.000Instead of a race to scale and proliferate AI capabilities as fast as possible, we want a race to secure, safe, and sort of humane deployment of AI in a way that strengthens democratic societies.
01:07:06.000And I know a lot of people hearing this are like, well, hold on a second, but what about China?
01:07:10.000If we don't build AI, we're just going to lose to China.
01:07:13.000But our response to that is we beat China to racing to deploy social media on society.
01:07:20.000That means we beat China to a loneliness crisis, a mental health crisis, breaking democracy's shared reality so that we can't cohere or agree with each other or trust each other because we're dosed every day with these algorithms, these AIs that are putting the most outrageous personalized content for our nervous systems, which drives distrust.
01:07:36.000So it's not a race to deploy this power.
01:07:40.000It's a race to consciously say, how do we deploy the power that strengthens our societal position relative to China?
01:07:48.000It's like saying, we have these bigger nukes, but meanwhile we're losing to China in supply chains, rare earth metals, energy, economics, education.
01:07:56.000It's like, the fact that we have bigger nukes, but we're losing on all the rest of the metrics...
01:08:00.000Again, narrow optimization for a small, narrow goal is the mistake.
01:08:04.000That's the mistake we have to correct.
01:08:06.000And so that's to say that we also recognize that the U.S. and Western countries who are building AI want to out-compete China on AI. We agree with this.
01:08:16.000But we have to change the currency of the race from the race to deploy just power in ways that actually undermine, like they sort of, like, self-implode your society, to instead the race to, again, deploy it in a way that's defense-dominant, that actually strengthens...
01:08:31.000If I release an AI that helps us detect wildfires before they start, for climate change type stuff, that's going to be a defense-dominant AI that's helping. Think of it as, like, am I releasing castle-strengthening AI or cannon-strengthening AI? Yeah.
01:08:49.000Imagine there was an AI that discovered a vulnerability in every computer in the world.
01:08:57.000Imagine then I released that AI. That would be an offense-dominant AI. Now, that might sound like sci-fi, but this basically happened a few years ago.
01:09:05.000The NSA's hacking tools, called EternalBlue, were actually leaked on the open internet.
01:09:10.000It was basically open-sourced, the most offense-dominant cyber weapons that the US had.
01:09:20.000North Korea built the WannaCry ransomware attacks on top of it.
01:09:24.000It infected, I think, 300,000 computers and caused hundreds of millions to billions of dollars of damage.
01:09:30.000So the premise of all this is, what is the AI that we want to be releasing?
01:09:34.000We want to be releasing defense-dominant AI capabilities that strengthen society as opposed to offense-dominant canon-like AIs that sort of like turn all the castles we have into rubble.
01:10:15.000I mean, essentially these AI models, like the next training runs are going to be a billion dollars.
01:10:20.000The ones after that, 10 billion dollars.
01:10:22.000The big AI companies, they already have their eye and are starting to plan for those.
01:10:28.000They're going to give power to some centralized group of people that is, I don't know, a million, a billion, a trillion times that of those that don't have access.
01:10:39.000And then you scan your mind and you look back through history and you're like, what happens when you give one group of people asymmetric power over the other?
01:11:07.000And so then we only have two choices which are we either have to like slow down somehow and not just like be racing.
01:11:16.000Or we have to invent a new kind of government that we can trust, that is trustworthy.
01:11:25.000And when I think about like the U.S., the U.S. was founded on the idea that like the previous form of government was untrustworthy.
01:11:33.000And so we invented, innovated a whole new form of trustworthy government.
01:11:38.000Now, of course, you know, we've seen it, like, degrade, and we sort of live now in a time of the least trust when we're inventing technology that is in most need of good governing.
01:11:51.000And so those are our two choices, right?
01:11:53.000Either we slow down in some way, or we have to invent some new trustworthy thing that can help steer.
01:12:03.000And Aza doesn't mean like, oh, we have this big new global government plan.
01:12:21.000There's sort of two elements to the race.
01:12:23.000There's the people who are building the Frontier AI. So that's like OpenAI, Google, Microsoft, Anthropic.
01:12:30.000Those are like the big players in the U.S. We have China building frontier AI.
01:12:34.000These are the ones that are building towards AGI, the Artificial General Intelligence, which, by the way, I think we failed to define, which is basically...
01:12:41.000People have different definitions for what AGI is.
01:12:44.000Usually it means like the spooky thing that AIs can't do yet that everybody's freaked out about.
01:12:49.000But if we define it in one way that we often talk to people in Silicon Valley about, it's AIs that can beat humans on every kind of cognitive task.
01:13:15.000If it's better than us across all of these cognitive tasks, you have a system that can out-compete us.
01:13:21.000And they also, people often think, you know, when should we be freaked out about AI? And there's always, like, this futuristic sci-fi scenario when it's smarter than humans.
01:13:32.000In The Social Dilemma, we talked about how technology doesn't have to overwhelm human strengths and IQ to take control.
01:13:39.000With the social media, all AI and technology had to do was undermine human weaknesses, undermine dopamine, social validation, sexualization, keep us hooked.
01:13:49.000That was enough to quote-unquote take control and keep us scrolling longer than we want.
01:13:53.000And so that's kind of already happened.
01:13:54.000In fact, when Aza and I were working on this back, I remember, several years ago when we were making The Social Dilemma, and people would come to us worried about, like, future AI risks, and some of the effective altruists, the EA people.
01:14:06.000And they were worried about these future AI scenarios.
01:14:09.000And we would say, don't you see, we already have this AI right now that's taking control just by undermining human weaknesses.
01:14:16.000And we used to think that it's not, it's like that's a really long far out scenario when it's going to be smarter than humans.
01:14:21.000But unfortunately, now we're getting to the point, I didn't actually believe we'd ever be here.
01:14:26.000That AI actually is close to being better than us on a bunch of cognitive capabilities.
01:14:33.000And the question we have to ask ourselves is, how do we live with that thing?
01:14:37.000Now, a lot of people think, well, then what Aza and I are saying right now is, we're worried about that smarter-than-humans AI waking up and then starting to just, like, wreck the world on its own.
01:14:48.000You don't have to believe any of that because just that existing, let's say that OpenAI trains GPT-5, the next powerful AI system, and they throw a billion to ten billion dollars at it.
01:15:00.000So just to be clear, GPT-3 was trained with ten million dollars of compute, so like just a bunch of chips churning away, ten million dollars.
01:15:07.000GPT-4 was trained with a hundred million dollars of compute.
01:15:11.000GPT-5 would be trained with like a billion dollars.
01:15:35.000If they haven't made it secure, that is, if they can't keep a foreign adversary or actor or nation state from stealing it, then it's not really safe.
01:15:44.000You're only as safe as you are secure.
01:15:47.000And I don't know if you know this, but it only takes around $2 million to buy a zero-day exploit for like an iPhone.
01:15:55.000So, you know, $10 million means you can get into, like, these systems.
01:16:01.000So if you're China, you're like, okay, I need to compete with the US, but the US just spent $10 billion to train this crazy, super powerful AI, but it's just a file sitting on a server.
01:16:10.000So I'm just going to use $10 million and steal it.
01:16:14.000Why would I spend $10 billion to train my own when I can spend $10 million and just hack into your thing and steal it?
01:16:19.000We know people in security and the current assessment is that the labs are not yet, and they admit this, they're not strong enough in security to defend against this level of attack.
01:16:28.000So the narrative that we have to keep scaling to then beat China literally doesn't make sense until you know how to secure it.
01:16:36.000By the way, if they could do that and they could secure it, we'd be like, okay, that's one world we could be living in, but that's not currently the case.
01:16:45.000What's terrifying about this to me is that we're describing these immense changes that are happening at a breakneck speed.
01:16:54.000And we're talking about mitigating the problems that exist currently and what could possibly emerge with ChatGPT-5.
01:17:02.000What about six, seven, eight, nine, ten?
01:17:05.000What about all these different AI programs that are also on this exponential rate of increase in innovation and capability?
01:17:55.000AI can be used to generate new training sets.
01:17:58.000If I can generate an email or I can generate a sixth grader's homework, I can also generate data that could be used to train the next generation of AIs.
01:18:04.000So as fast as everything is moving now, unless we do something, this is the slowest it will move in our lifetimes.
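A minimal sketch of that "AI generating its own training data" loop, with `ask_model` as a hypothetical placeholder for an existing language model: the output is written in the JSONL format commonly used for fine-tuning the next model.

```python
# Illustration only: ask an existing model to write labeled examples and store
# them as prompt/completion pairs that could train a future model.

import json

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an existing language model, return its text reply."""
    raise NotImplementedError

def build_synthetic_dataset(topics, path="synthetic_train.jsonl", per_topic=50):
    with open(path, "w") as f:
        for topic in topics:
            for _ in range(per_topic):
                question = ask_model(f"Write one exam question about {topic}.")
                answer = ask_model(f"Answer this question step by step:\n{question}")
                # Each line is one training example for the next generation of model.
                f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
```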
01:18:10.000But does it seem like it's possible to do something and it doesn't seem like there's any motivation whatsoever to do something?
01:18:39.000A world that doesn't work at the end of this race, like the race to the cliff that you said.
01:18:43.000Everyone has to see that there's a cliff there and that this really won't go well for a lot of people if we keep racing, including the US, including China.
01:18:52.000This won't go well if you just race to deploy it.
01:18:56.000And so if we all agreed that that was true, then we would coordinate to say, how do we race somewhere else?
01:19:03.000How do we race to secure AI that does not proliferate capabilities that are offense-dominant in undermining how society works?
01:19:11.000But let's imagine Silicon Valley, let's imagine the United States, collectively, out of ethics and morals, decides to do that.
01:19:20.000There's no guarantee that China's going to do that or that Russia's going to do that.
01:19:23.000And if they just can hack into it and take the code, if they can spend $10 million instead of $10 billion and create their own version of it and utilize it, well, what are we doing?
01:20:01.000And it is a film depicting what happens the day after nuclear war.
01:20:07.000And it's not like people didn't already know that nuclear war would be bad, but this is the first time 100 million Americans, a third of Americans, watched it all at the same time and viscerally felt what it would be like to have nuclear war.
01:20:22.000And then that same film, uncut, is shown in the USSR a few years later.
01:21:23.000I think it's like Reagan's quest to abolish nuclear weapons.
01:21:25.000But a few years later, when the Reykjavik summit happened, Gorbachev and Reagan meet.
01:21:33.000It's like the first intermediate-range treaty talks happen.
01:21:36.000The first talks failed, but they got close.
01:21:38.000The second talks succeeded, and they got basically the first reduction, in what's called, I think, the Intermediate-Range Nuclear Forces Treaty.
01:21:47.000And when that happened, the director of The Day After got a message from someone at the White House saying, don't think that your film didn't have something to do with this.
01:21:57.000Now, one theory, and this is not about valorizing a film.
01:22:01.000What it's about is a theory of change, which is, if the whole world can agree that a nuclear war is not winnable, that it's a bad thing, that it's omni-lose-lose.
01:22:13.000The normal logic is I'm fearing losing to you more than I'm fearing everybody losing.
01:22:18.000That's what causes us to proceed with the idea of a nuclear war.
01:22:21.000I'm worried that you're going to win in a nuclear war, as opposed to I'm worried that all of us are going to lose.
01:22:27.000When you pivot to, I'm worried that all of us are going to lose, which is what that communication did, it enabled a new coordination.
01:22:34.000Reagan and Gorbachev were the dolphins that went underwater, except they went to Reykjavik, and they talked.
01:22:39.000And they said, is there some different outcome?
01:22:43.000Now, I know what everyone hearing this is thinking.
01:22:46.000They're like, you guys are just completely naive.
01:22:53.000Something unprecedented has to happen unless you want to live in a really bad future.
01:22:59.000And to be clear, we are not here to fearmonger or to scare people.
01:23:04.000We're here because I want to be able to look my future children in the eye and say, this is the better future that we are working to create every single day.
01:23:15.000And, you know, there's a quote I actually wanted to read you because I don't think a lot of people know how people in the tech industry actually think about this.
01:23:24.000We have someone who interviewed a lot of people.
01:23:28.000You know, there's this famous interaction between Larry Page and Elon Musk.
01:23:58.000I value consciousness. There's something sacred about consciousness that we need to preserve.
01:24:03.000And I think that there's a psychology that is more common among people building AI that most people don't know, that we had a friend who's interviewed a lot of them.
01:24:11.000He says, A lot of the tech people I'm talking to, when I really grill them on it, they retreat into number one, determinism, number two, the inevitable replacement of biological life with digital life, and number three,
01:24:30.000At its core, it's an emotional desire to meet and speak to the most intelligent entity they've ever met, and they have some ego-religious intuition that they'll somehow be a part of it.
01:24:41.000It's thrilling to start an exciting fire.
01:24:44.000They feel they will die either way, so they'd like to light it just to see what happens.
01:24:50.000Now, this is not the psychology that I think any regular, reasonable person would feel comfortable with determining where we're going with all this.
01:25:10.000I'm of the opinion that we are a biological caterpillar that's creating the electronic butterfly.
01:25:17.000I think we're making a cocoon, and I think we don't know why we're doing it, and I think there's a lot of factors involved.
01:25:25.000It plays on a lot of human reward systems, and I think it's based on a lot of the...
01:25:32.000So really what allowed us to reach this point in history to survive and to innovate and to constantly be moving towards greater technologies.
01:25:44.000I've always said that if you looked at the human race amorally, like if you were some outsider, some life form from somewhere else that said, okay, what is this?
01:25:54.000A novel species on this one planet, the third planet from the Sun.
01:26:03.000They just constantly make better things, and if you go from the emergent flint technologies of Stone Age people to AI, it's very clear that unless something happens, unless there's a natural disaster or something akin to that,
01:26:21.000we will consistently make new, better things.
01:26:25.000That includes technology that allows for artificial life.
01:26:30.000And it just makes sense that if you scale that out 50 years from now, 100 years from now, it's a superior life form.
01:26:42.000I mean, I don't agree with Larry Page.
01:26:44.000I think this whole idea, don't be a speciesist, is ridiculous.
01:27:10.000Like if you look at the infinite, vast expanse, just the massive amount of space in the universe, and you imagine the incredibly different possibilities there are when it comes to different types of biological life, and then also the different technological capabilities that have emerged over evolution.
01:27:35.000It seems inevitable that our bottleneck in terms of our ability to evolve is clearly biological.
01:27:45.000Evolution is a long, slow process from single-celled organisms to human beings.
01:27:50.000But if you could bypass that with technology and you can create an artificial intelligence that literally has all of the knowledge of every single human that has ever existed and currently exists,
01:28:10.000and then you can have this thing have the ability to make a far greater version of technology, a far greater version of intelligence.
01:28:56.000It doesn't have all the things that seem to both fuck us up and also motivate us to achieve.
01:29:04.000There's something about the biological reward systems that are, like, deeply embedded into human beings that are causing us to do all these things, that are causing us to create war and have battles over resources and deceive people and use propaganda and push false narratives in order to be financially profitable.
01:29:25.000All these things are the blight of society.
01:29:28.000These are the number one problems that we are trying to mitigate on a daily basis.
01:29:34.000If this thing can bypass that and move us into some next stage of evolution, I think that's inevitable.
01:29:47.000But are you okay if the lights of consciousness go off and it's just this machine that is just computing, sitting on a spaceship, running around the world, having sucked in everything?
01:30:25.000I think that most reasonable people hearing this, our conversation today, unless there's some distortion and you just are part of a suicide cult and you don't care about any light of consciousness continuing, I think most people would say, if we could choose, we would want to continue this experiment.
01:30:41.000And there are visions of humanity as tool builders who keep going and build Star Trek-like civilizations where...
01:30:47.000Humanity continues to build technology, but not in a way that, like, extinguishes us.
01:30:51.000And I don't mean that in this sort of existential risk, AIs kill everybody in one go, Terminator.
01:30:55.000Just, like, basically breaks the things that have made human civilization work to date, which is the current kind of trajectory.
01:31:04.000I don't think that's what people want.
01:31:06.000And, again, we have visions of Star Trek that show that there can be a harmonious relationship.
01:31:11.000And, you know, the reason that in our work we use the phrase humane technology...
01:31:17.000Aza hasn't disclosed his biography, but Aza's father was Jef Raskin, who started the Macintosh project at Apple.
01:31:24.000Steve Jobs obviously took it over later.
01:31:27.000But do you want to say about where the phrase humane came from, like what the idea behind that is?
01:31:32.000Yeah, it was about how do you make technology fit humans?
01:31:37.000Not force us to fit into the way technology works.
01:31:41.000He defined humane as that which is considerate of human frailties and responsive to human needs.
01:31:50.000Actually, I sometimes think, we talk about this, that the meta work that we are doing together as communicators is the new Macintosh project, because all of the problems we're facing, from climate change to AI, are hyperobjects.
01:32:09.000And so our job is figuring out how to communicate in such a way that we can fit it enough into our minds that we have levers to pull on it.
01:32:19.000And I think that's the problem here is I agree that it can feel inevitable.
01:32:27.000But maybe that's because we're looking at the problem the wrong way in the same way that it might have felt inevitable that every country on earth would end up with nuclear weapons and it would be inevitable that we'd end up using them against each other and then it would be inevitable that we'd wipe ourselves out.
01:33:24.000There is still this thing, which is, like, humans waking up, our fudge factor, to say we don't want that.
01:33:31.000I think it's, you know, sort of funny that we're all talking about, like, is AI conscious, when it's not even clear that we as humanity are conscious.
01:33:52.000And just to close the slavery story out, in the book Bury the Chains by Adam Hochschild.
01:33:57.000In the UK, the conclusion of that story is through the advocacy of a lot of people working extremely hard, communicating, communicating testimony, pamphlets, visualizing slave ships, all this horrible stuff.
01:34:08.000The UK consciously and voluntarily chose to...
01:34:13.000They sacrificed 2% of their GDP every year for 60 years to wean themselves off of slavery, and they didn't have a civil war to do that.
01:34:23.000All this is to say that if you had looked at the arms race between the UK's military and economic might and France's military and economic might, you'd have said they could never make that choice.
01:34:34.000But there is a way that if we're conscious about the future that we want, We can say, well, how do we try to move towards that future?
01:34:41.000It might have looked like we were destined to have nuclear war or destined to have 40 countries with nukes.
01:34:47.000We did some very aggressive lockdowns.
01:34:49.000I know some people in defense who told me about this, but apparently General Electric and Westinghouse sacrificed tens of billions of dollars in not commercializing their nuclear technology that they would have made money from spreading to many more countries.
01:35:06.000And that also would have carried with it nuclear proliferation risk, because there's more risk of nuclear terrorism and things like that that could have come from it.
01:35:11.000And I want to caveat that for those listeners who are saying it: we also made some mistakes on nuclear, in that we have not gotten the nuclear power plants that would be helping us with climate change right now.
01:35:23.000There's ways, though, of managing that in a middle ground where you can say, if there's something that's dangerous, we can forego tremendous profit to do a thing that we actually think is the right thing to do.
01:35:33.000And we did that and sacrificed tens of billions of dollars in the case of nuclear technology.
01:35:37.000So in this case, you know, we have this perishable window of leverage where right now there's only basically three, you want to say it?
01:35:47.000Three countries that build the tools that make chips, essentially.
01:35:54.000And that's like the US, Netherlands, and Japan.
01:35:58.000So if just those three countries coordinated, we could stop the flow of the most advanced new chips going out into the market.
01:36:07.000So if they went underwater and did the dolphin thing and communicated about which future we actually want, there could be a choice about how do we want those chips to be proliferating.
01:36:15.000And maybe those chips only go to the countries that want to create this more secure, safe, and humane deployment of AI. Because we want to get it right, not just race to release it.
01:36:27.000But it seems to me, to be pessimistic, it seems to me that the pace of innovation far outstrips our ability to understand what's going on while it's happening.
01:36:40.000Can you govern something that is moving faster than you are currently able to understand it?
01:36:45.000Literally, the co-founder of Anthropic, we have this quote that I don't have in front of me.
01:36:48.000It's basically like, even he, the co-founder of Anthropic, the second biggest AI player in the world, says, tracking progress is basically increasingly impossible because even if you scan Twitter every day for the latest papers, you are still behind.
01:37:03.000And these papers, the developments in AI are moving so fast, every day it unlocks something new and fundamental for economic and national security.
01:37:10.000And if we're not tracking it, then how could we be in a safe world if it's moving faster than our governance?
01:37:15.000And a lot of people we talk to in AI, just to steelman your point, they say, I would feel a lot more comfortable.
01:37:22.000I'd feel a lot more comfortable with the change that we're about to undergo if it was happening over a 20-year period than over a two-year period.
01:37:30.000And so I think there's consensus about that.
01:38:21.000That's the other thing that I'm curious about.
01:38:24.000With these emerging technologies like Neuralink and things along those lines, I wonder if the decision has to be made at some point in time that we either merge with AI, which, you could say, like, you know, Elon has famously argued that we're already cyborgs because we carry around this device with us.
01:38:42.000What if that device is a part of your body?
01:38:44.000What if that device enables a universal language, you know, some sort of a Rosetta Stone for the entire race of human beings so we can understand each other far better?
01:39:06.000I mean, I don't know what Neuralink is capable of.
01:39:10.000And there was some sort of an article that came out today about some lawsuit that's alleging that Neuralink misled investors or something like that about the capabilities and something about the safety because of the tests that they ran with monkeys,
01:40:13.000We already know that there's a ton of foreign actors that are actively influencing discourse, whether it's on Facebook or Twitter, like famously...
01:40:24.000On Facebook, rather, of the top 20 religious sites, the Christian religious sites,
01:40:30.00019 of them were run by Russian trolls.
01:40:45.000We're dealing with this monkey mind that's trying to navigate the insane possibilities of this thing that we've created that seems like a runaway train.
01:40:56.000And just to sort of re-up your point about how hard this is going to be, I was talking to someone in the UAE and asking them, like, what?
01:41:13.000Do I as a Westerner, like, what do I not understand about how you guys view AI? And his response to me was, well, to understand that, you have to understand that our story is that the Middle East used to be 700 years ahead technologically of the West,
01:42:04.000And in fact, there are 10 million people in the UAE. And he's like, but we control, we run, 10% of the world's ports.
01:42:14.000So we know we're never going to be able to compete directly with the U.S. or with China, but we can build the fundamental infrastructure for much of the world.
01:42:23.000And the important context here is that the UAE is providing, I think, the second most popular open source AI model called Falcon.
01:42:31.000So, you know, Meta, I mentioned earlier, released Llama, their open weight model.
01:42:36.000But UAE has also released this open weight model because they're doing that because they want to compete in the race.
01:42:44.000And I think there's a secondary point here, which actually kind of parallels to the Middle East, which is, what is AI? Why are we so attracted to it?
01:42:53.000And if you remember the laws of technology, if the technology confers power, it starts a race.
01:42:58.000One way to see AI is: what a barrel of oil is to physical labor... like, you used to have to have thousands of human beings go around and move stuff around.
01:43:20.000I mean, it is amazing that we don't have to go lift and move everything around the world manually anymore.
01:43:25.000And the countries that jump on the barrel of oil train start to get efficiencies to the countries that sit there trying to move things around with human beings.
01:43:33.000If you don't use oil, you'll be outcompeted by the countries that will use oil.
01:43:37.000And then why that is an analogy to now is, what oil is to physical labor, AI is to...
01:43:47.000Yeah, cognitive labor, like sitting down, writing an email, doing science, that kind of thing.
01:43:50.000And so it sets up the exact same kind of race condition.
01:43:54.000So if I'm sitting in your sort of seat, Joe, and you'll be like, well, I'm feeling pessimistic, the pessimism would be like, would it have been possible to stop oil from doing all the things that it has done?
01:44:15.000But if we don't watch out, in about 300 years we're going to get these runaway feedback loops and some planetary boundaries and climate issues and environmental pollution issues.
01:44:25.000If we don't simultaneously work on how we're going to transition to better sources of energy that don't have those same planetary boundaries, pollution, climate change dynamics.
01:44:37.000And this is why we think of this as a kind of rite of passage for humanity.
01:44:41.000And a rite of passage is when you face death as some kind of adolescent.
01:44:46.000And either you mature and you come out the other side or you don't and you don't make it.
01:44:52.000And here, like, with humanity, with industrial-era tech, like, we got a whole bunch of really cool things.
01:44:59.000I am so glad that I get to, like, use computers and, like, program and, like, fly around.
01:45:06.000And also, it's had a lot of, like, these, like, really terrible effects on the commons, the things we all depend on, like...
01:45:15.000You know, like climate, like pollution, like all of these kinds of things.
01:45:20.000And then with social media, like with info-era tech, the same thing.
01:45:24.000We get a whole bunch of incredible benefits, but all of the harms it has, the externalities, the things like it starts polluting our information environment and breaks children's mental health, all that kind of stuff.
01:45:36.000With AI, we're sort of getting the exponentiated version of that.
01:45:41.000That we're going to get a lot of great things, but the externalities of that thing are going to break all the things we depend on.
01:45:58.000Here, we're going to feel it, and we're going to feel it really fast.
01:46:00.000And maybe this is the moment that we say, oh...
01:46:04.000All those places that we have lied to ourselves or blinded ourselves to where our systems are causing massive amounts of damage, like we can't lie to ourselves anymore.
01:46:14.000We can't ignore that anymore because it's going to break us.
01:46:17.000Therefore, there's a kind of waking up that might happen that would be completely unprecedented.
01:46:24.000But maybe you can see that there's a little bit like of a thing that hasn't happened before and so humans can do a thing we haven't done before.
01:46:32.000Yes, but I could also see the argument that AI is our best-case scenario or best solution to mitigate the human-caused problems like pollution, depletion of ocean resources, all the different things that we've done,
01:46:49.000inefficient methods of battery construction and energy, all the different things that we know are genuine problems, fracking, all the different issues that we're dealing with right now that have positive aspects to them,
01:47:04.000but also a lot of downstream negatives.
01:47:08.000And AI does have the ability to solve a whole bunch of really important problems, but that was also true of everything else that we were doing up until now.
01:47:18.000You know, the motto was like, better living through chemistry.
01:47:20.000We had figured out this invisible language of nature called chemistry.
01:47:24.000And we started, like, inventing, you know, millions of these new chemicals and compounds, which gave us a bunch of things that we're super grateful for, that have helped us.
01:47:34.000But that also created, accidentally, forever chemicals.
01:47:37.000I think you've probably had people on, I think, covering PFOS, PFOAs.
01:47:41.000These are forever bonded chemicals that do not biodegrade in the environment.
01:47:46.000And you and I in our bodies right now have this stuff in us.
01:47:50.000In fact, if you go to Antarctica and you just open your mouth and drink the rainwater there or any other place on Earth, currently you will get forever chemicals in the rainwater coming down into your mouth that are above the current EPA levels of what is safe.
01:48:04.000That is humanity's adolescent approach to technology.
01:48:07.000We love the fact that DuPont gave us Teflon and non-stick pans and, you know, tape and, you know, adhesives and fire extinguishers and a million things.
01:48:18.000The problem is, can we do that without also generating the shadow, the externalities, the cost, the pollution that show up on society's balance sheet?
01:48:26.000And so what Aza, I think, is saying is...
01:48:52.000Well, if we don't fix, you know, it's like there's the famous Jon Kabat-Zinn, who's a Buddhist meditator who says, wherever you go, there you are.
01:48:58.000Like, you know, if you don't change the underlying way that we are showing up as a species, you just add AI on top of that and you supercharge this adolescent way of being that's driving all these problems.
01:49:11.000It's not like we got climate change because...
01:49:13.000We intended to or some bad actor created it.
01:49:31.000Which, to be clear, we're super grateful for and we all love flying around, but we also can't afford to keep going on that for much longer.
01:49:37.000But we can, again, we can hide climate change from ourselves, but we can't hide from AI because it shortens the timeline.
01:49:44.000So this is how we have to wake up and take responsibility for our shadow.
01:49:49.000This forces a maturation of humanity to not lie to itself.
01:49:53.000And the other side of that that you say all the time is we get to love ourselves more.
01:50:01.000You know, the solution, of course, is love and changing the incentives.
01:50:07.000But, you know, speaking really personally, part of my own, like, stepping into greater maturity process has been the change in the way that I relate to my own shadows.
01:50:20.000Because one way when somebody tells me, like, hey, you're doing this sort of messed up thing and it's causing harm, is for me to say, like, well, like, screw you.
01:50:29.000The other way is to be like, oh, thank you.
01:50:33.000You're showing me something about myself that I sort of knew but I've been ignoring a little bit or like hiding from.
01:50:39.000When you tell me and I can hear, that awareness brings – that awareness gives me the opportunity for choice and I can choose differently.
01:50:50.000On the other side of facing my shadow is a version of myself that I can love more.
01:50:58.000When I love myself more, I can give other people more love.
01:51:01.000When I give other people more love, I receive more love.
01:51:04.000That's the thing we all really want most.
01:51:07.000Ego is that which blocks us from having the very thing we desire most and that's what's happening with humanity.
01:51:12.000It's our global ego that's blocking us from having the very thing we desire most.
01:51:47.000It's interesting that people who have those experiences talk about a deeper connection to nature or caring about, say, the environment or things that they...
01:51:56.000or caring about human connection more.
01:51:59.000Which, by the way, is the whole point of Earth Species and talking to animals: there's that moment of disconnection.
01:52:08.000Humans always start out talking to animals, and then there's that moment when...
01:52:11.000They cease to talk to animals, and that sort of symbolizes the disconnection.
01:52:15.000And the whole point of Earth Species is, let's make the sacred more legible.
01:52:19.000Let's let people see the thing that we're losing.
01:52:23.000And in a way, you were mentioning our paleolithic brains, Joe.
01:52:29.000We use this quote from E.O. Wilson that the fundamental problem of humanity is we have paleolithic brains, medieval institutions, and godlike technology.
01:52:40.000Our institutions are not very good at dealing with invisible risks that show up later on society's balance sheet.
01:52:47.000They're good at, like, that corporation dumped this pollution into that water, and we can detect it and we can see it, because, like, we can just visibly see it.
01:52:55.000It's not good at chronic, long-term, diffuse, and non-attributable harm, like air pollution or forever chemicals or, you know, climate change, or social media making a more addicted, distracted, sexualized culture or broken families.
01:53:12.000We don't have good laws or institutions or governance that knows how to deal with chronic, long-term, cumulative and non-attributable harm.
01:53:23.000Now, so you think of it like a two-by-two, like there's short-term visible harm that we can all see, and then we have institutions that say, oh, there can be a lawsuit because you dumped that thing in that river.
01:53:31.000So we have good laws for that kind of thing.
01:53:32.000But if I put it in the quadrant of not short-term and discrete and attributable harm, but long-term, chronic, and diffuse, we can't see that.
01:53:40.000Part of this is, again, if you go back to the E.O. Wilson quote, like what is the answer to all this?
01:53:46.000We have to embrace our Paleolithic emotions.
01:53:56.000We have to embrace how our brains work.
01:53:59.000And then we have to upgrade our institutions.
01:54:01.000So it's embrace our Paleolithic emotions, upgrade our governance and institutions, and we have to have the wisdom and maturity to wield the godlike power.
01:54:11.000This moment with AI is forcing that to happen.
01:54:57.000Domestic violence was super common in films, even from heroes.
01:55:02.000You know, what you're seeing every day is more of an awareness of the dangers of behavior, or what we're doing wrong, and we have more data about human consciousness and our interactions with each other. My fear, my genuine fear, is the runaway train thing, and I want to know what you guys think. I mean, we're coming up with all these interesting ideas
01:55:33.000that could be implemented in order to steer this in a good direction.
01:56:11.000It's not just that there's, like, misinformation, disinformation, all that stuff.
01:56:15.000There are going to be mispeople and, like, counterfeit human beings that just flood democracies.
01:56:21.000You're talking to somebody on Twitter or maybe it's on Tinder and they're sending you like videos of themselves, but it's all just generated.
01:57:24.000So this is not really Mark Zuckerberg.
01:57:27.000This is this AI-generated Mark Zuckerberg while Mark is wearing a handset, and they're not in the same room.
01:57:34.000But the video starts off with the two of them are standing next to each other, and it's super bizarre.
01:57:39.000And are we creating that world because that's the world that humanity wants and is demanding, or are we creating that world because of the profit motive of, hey, we're running out of attention to mine, and we need to harvest the next frontier of attention, and as the tech progresses, this is the next frontier.
01:57:55.000The next attention economy is just to virtualize 24/7 of your physical experience and to own it for sale.
01:58:28.000When you see them, that's what's actually happening.
01:58:30.000And so then, when the sort of simulation world that we've constructed for ourselves, or, well, that the incentives have forced us to construct for ourselves, diverges from base reality far enough, that's when you get civilizational collapse.
01:58:45.000Because people are just out of touch with the realities that they need to be attending to.
01:58:49.000There are fundamental realities about diminishing returns on energy or just how our society works.
01:58:55.000And if everybody's sort of living in a social media influencer land and don't know how the world actually works and what we need to protect and what the science and truth of that is, then that's how civilizations collapse.
01:59:05.000They sort of dumb themselves to death.
01:59:07.000What about the prospect that this is really the only way towards survival?
01:59:12.000That if human beings continue to make greater weapons and have more incentive to steal resources and to start wars, like no one today, if you asked a reasonable person today, what are the odds that we have zero war in a year?
01:59:28.000Like no one thinks that that's possible.
01:59:30.000No one has faith in human beings with the current model.
01:59:34.000To the point where we would say that any year from now, we will eliminate one of the most horrific things that human beings are capable of that has always existed, which is war.
01:59:43.000But we were able, I mean, after nuclear weapons, you know, and the invention of that, that didn't, you know, to quote Oppenheimer, we didn't just create a new weapon, it was creating a new world because it was creating a new world structure.
01:59:52.000And the things that are bad about human beings, that we're rivalrous and conflict-ridden and we want to steal each other's resources...
01:59:58.000After Bretton Woods, we created a world system and the United Nations and the Security Council structure and nuclear nonproliferation and shared agreements and the International Atomic Energy Agency.
02:00:08.000We created a world system of mutually assured destruction that enabled the longest period of human peace in modern history.
02:00:16.000The problem is that that system is breaking down and we're also inventing brand new tech that changes the calculations around that mutually assured destruction.
02:00:27.000But that's not to say that it's impossible.
02:00:29.000What I was trying to point to is, yes, it's true that humans have these bad attributes, and you would predict that we would just get into wars, but we were able to consciously, from our wiser, mature selves, post-World War II, create a world that was stable and safe.
02:00:41.000We should be in that same inquiry now, if we want this experiment to keep going.
02:00:45.000Yeah, but did we really create a world since World War II that was stable and safe, or did we just create a world that's stable and safe for superpowers?
02:01:11.000You would have predicted with the same human instincts and rivalry that we wouldn't be here right now.
02:01:15.000Well, I was born in 1967, and when I was in high school, it was the greatest fear that we all carried around with us.
02:01:22.000It was a cloud that hung over everyone's head, was that one day there would be a nuclear war.
02:01:27.000And I've been talking about this a lot lately that I get these same fears now, particularly late at night when I'm alone and I think about what's going on in Ukraine and what's going on in Israel and Palestine.
02:01:38.000I get these same fears now that, Jesus Christ, like this might be out of control already and it's just one day we will wake up and the bombs will be going off.
02:01:50.000And it seems like that's on the table, where it didn't seem like that was on the table just a couple of years ago.
02:01:58.000And when I think about, like, the two most likely paths for how things go really badly, on one side, there's sort of forever dystopia.
02:02:07.000There's, like, top-down, authoritarian control, perfect surveillance, like, mind-reading tech, like, and that's a world I do not want to live in, because once that happens, you're never getting out of it.
02:02:53.000So you're just going to end up living in a world that feels like constant suicide bombings just going off around you, whether it's viruses or whether it's cyber attacks, whatever.
02:03:03.000And neither of those two worlds are the one I want to live in.
02:03:06.000And so this is the thing. If everyone really saw that those are the only two poles, then maybe there is a middle path.
02:03:12.000And to use AI as sort of part of the solution, there is sort of a trend going on now of using AI to discover new strategies that changes the nature of the way games are played.
02:03:25.000So an example is, you know, like AlphaGo playing itself, you know, a hundred million times, and there's that famous Move 37 when it's playing the world champion in Go, and it's this move that no human being really had ever played.
02:03:39.000A very creative move and it let the AI win.
02:03:44.000And since then, human beings have studied that move and that's changed the way the very best Go experts actually play.
02:03:50.000And so let's think about a different kind of game other than a board game that's more consequential.
02:03:55.000Let's think about conflict resolution.
02:03:58.000You could play that game in the form of, like, well, you know, I slight you, and so you're slighted, and now you slight me back, and we just, like, go into this negative-sum dynamic.
02:04:07.000Or, you know, you could start looking at the work of the Harvard Negotiation Project and Getting to Yes.
02:04:14.000And these ways of having communication and conflict negotiation, they get you to win-wins.
02:05:01.000And, you know, for a few people who aren't following the reference, I think AlphaGo was DeepMind's game-playing engine that beat the best Go player.
02:05:09.000There's AlphaZero for chess, AlphaStar for StarCraft, or whatever.
02:05:11.000This is just saying, what if you applied those same moves?
02:05:14.000And those games did change the nature of those games.
02:05:16.000Like, people now play chess and Go and poker differently because AIs have now changed the nature of the game.
02:05:22.000And I think that's a very optimistic vision of what AI could do to help.
02:05:25.000And the important part of this is that AI can be a part of the solution, but it's going to depend on AI helping us coordinate to see shared realities.
02:05:33.000Because again, if everybody saw the reality that we've been talking about the last two hours and said, I don't want that future.
02:05:40.000So one is, how do we create shared realities around futures that we don't want and then paint shared realities towards futures that we do want?
02:05:46.000Then the next step is how do we coordinate and get all of us to agree to bend the incentives to pull us in that direction?
02:05:52.000And you can imagine AIs that help with every step of that process.
02:05:55.000AIs that help take perception gaps and say, oh, these people don't agree.
02:06:00.000But the AI can say, let me look at all the content that's being posted by this political tribe over here, all the content being posted by this political tribe over here.
02:06:07.000Let me find where the common areas of overlap are.
02:06:15.000So instead of AlphaGo: alpha coordination, alpha consensus.
02:06:17.000Can I create alpha shared reality that helps to create more shared realities around the future of these negative problems that we don't want?
02:06:25.000Climate change or forever chemicals or AI races to the bottom or social media races to the bottom, and then use AIs to paint a vision of what we want more of.
02:06:32.000You can imagine generative AI being used to paint images and videos of what it would look like to fix those problems.
02:06:38.000And, you know, with our friend Audrey Tang, who is the digital minister for Taiwan, these things actually aren't fully theoretical or hypothetical.
02:06:46.000She is actually using them in the governance of Taiwan.
02:06:55.000She's using generative AI to find areas of consensus and generate new statements of consensus that bring people closer together.
02:07:03.000So imagine, instead of, you know, the current news feeds that rank for the most divisive, outrageous stuff.
02:07:08.000Her system isn't social media, but it's sort of like a governance platform, civic participation where you can propose things.
02:07:14.000So instead of democracy being every four years we vote on X and then there's a super high stakes thing and everybody tries to manipulate it.
02:07:19.000She does sort of this continuous, small-scale civic participation in lots of different issues.
02:07:24.000And then the system sorts for when unlikely groups who don't agree on things, whenever they agree, it makes that the center of attention.
02:07:32.000And so it's sorting for the areas of common agreement about many different statements.
02:07:37.000I want to shout out the work of the Collective Intelligence Project, Divya Siddarth and Saffron Huang,
02:07:45.000and Colin, who builds Polis, which is the technology platform.
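For readers who want a feel for what "sorting for common ground" can mean in practice, here is a minimal sketch in the spirit of what is described above. It is not Polis's actual algorithm: the two opinion groups, the statements, and the votes are all made up, and a real system derives the groups by clustering participants from their votes rather than taking them as given. The only point is that ranking by the minimum approval across groups surfaces what everyone supports instead of what is most divisive.

```python
# Toy "bridging" ranking: surface statements that otherwise-disagreeing
# groups BOTH support. Groups and votes are invented for illustration;
# Polis itself clusters participants from their votes rather than
# taking the groups as given.
from statistics import mean

# votes[group][statement] = list of agree (1) / disagree (0) votes
votes = {
    "group_a": {"fund parks": [1, 1, 0, 1], "ban cars downtown": [1, 1, 1, 0], "build more housing": [1, 0, 1, 1]},
    "group_b": {"fund parks": [1, 0, 1, 1], "ban cars downtown": [0, 0, 1, 0], "build more housing": [1, 1, 1, 0]},
}

def approval(group, statement):
    # Fraction of the group that agrees with the statement.
    return mean(votes[group][statement])

def bridging_score(statement):
    # A statement only scores high if EVERY group supports it,
    # so taking the minimum across groups rewards common ground.
    return min(approval(g, statement) for g in votes)

for s in sorted(votes["group_a"], key=bridging_score, reverse=True):
    print(f"{s}: bridging score {bridging_score(s):.2f}")
```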
02:07:45.000Imagine if the US and the tech companies, so Eric Schmidt right now is talking about putting $32 billion a year of US government money into AI supercharging the US. That's what he wants.
02:07:58.000He wants $32 billion a year going into AI strengthening the US. Imagine if part of that money isn't going into strengthening the power, like we talked about, but going into strengthening the governance.
02:08:08.000Again, as Aza said, this country was founded on creating a new model of trustworthy governance for itself in the face of the monarchy that we didn't like.
02:08:16.000What if we were not just trying to rebuild 18th century democracy, but putting some of that $32 billion into 21st century governance where the AI is helping us do that?
02:08:26.000I think the key what you're saying is cooperation and coordination.
02:08:30.000But that's also assuming that artificial general intelligence hasn't achieved sentience and that it does want to coordinate and cooperate with us.
02:09:23.000Again, we could choose how far we want to go down in that direction and...
02:09:27.000But if we do, we say we, but if one company does and the other one doesn't...
02:09:46.000Everyone knows that there's this logic, if I don't do it, I just lose to the guy that will.
02:09:50.000What people should know is that one of the end games, you asked this show, like, where is this all going?
02:09:54.000One of the end games that's known in the industry, sort of like, it's a race to the cliff where you basically race as fast as you can to build the AGI. When you start seeing the red lights flashing of like it has a bunch of dangerous capabilities, you slam on the brakes and then you swerve the car and you use the AGI to sort of undermine and stop the other AGI projects in the world.
02:10:16.000That in the absence of being able to coordinate...
02:10:19.000The idea being, how do we basically win and then make sure there's no one else that's doing it?
02:10:44.000It's not safe for us, but I also, the pessimistic part of me thinks it's inevitable.
02:10:51.000It's certainly the direction that everything's pulling, but so was that true with slavery continuing.
02:10:57.000So was that true with the ozone layer before the Montreal Protocol, you know, where everyone thought that the ozone layer was just going to get worse and worse and worse.
02:11:25.000I will say, though, there's a kind of Pascal's wager for the feeling that there is room for hope, which is different than saying, I'm optimistic about things going well.
02:11:36.000But if we do not leave room for hope, then the belief that this is inevitable will make it inevitable.
02:11:43.000Is part of the problem with this communicating to regulatory bodies and to congresspeople and senators and to try to get them to understand what's actually going on?
02:11:55.000You know, I'm sure you watch the Zuckerberg hearings where he was talking to them and they were so ignorant.
02:12:04.000About what the actual issues are and the difference, even the difference between Google and Apple.
02:12:10.000I mean it was wild to see these people that are supposed to be representing people and they're so lazy that they haven't done the research to understand what the real problems are and what the scope of these things are.
02:12:22.000What has it been like to try to communicate with these people and explain to them what's going on and how is it received?
02:12:30.000Yeah, I mean, we have spent a lot of time talking to government folks and actually proud to say that California signed an executive order on AI actually driven by the AI Dilemma talk that Aza and I gave at the beginning of this year, which is something, by the way, for people who want to go deeper,
02:12:46.000is something that is on YouTube and people should check out.
02:12:50.000You know, we also, I remember walking into the White House in February or March of this year and saying...
02:13:26.000The White House did convene all the CEOs together.
02:13:28.000They signed this crazy comprehensive executive order.
02:13:49.000When we talk about biology, I just want people to know there is a history of, you know, governments not being fully apprised of the risks of certain technologies.
02:14:00.000And we were loosely connected to a small group of people who actually did help shut down a very dangerous U.S. biology program called Deep Vision.
02:14:12.000Jamie, you can Google for it if you want.
02:14:14.000It was DEEP VZN. And basically this was a program with the intention of creating a safer, more biosecure world.
02:14:38.000You know, build vaccines or see what we can do to defend ourselves against them.
02:14:42.000It sounds like a really good idea until the technology evolves and simply having that sequence available online means that more people can play with those actual viruses.
02:15:06.000If you Google it again, you'll see the program was canceled.
02:15:09.000Now, this was due to a bunch of nonprofit groups who were concerned about catastrophic risks associated with new technology.
02:15:16.000There's a lot of people who work really hard to try to identify stuff like this and say, how do we make it safe?
02:15:24.000And this is a small example of success of that.
02:15:27.000And, you know, this is a very small win, but it's an example of how sometimes we're just not fully apprised of the risks that are down the road from where we're headed.
02:15:36.000And if we can get common agreement about that, we can bend the curve.
02:15:40.000Now, this did not depend on a race between a bunch of for-profit actors who'd raised billions of dollars of venture capital to keep racing towards that outcome.
02:15:48.000But it's a nice small example of what can be done.
02:15:52.000What steps do you think can be taken to educate people to sort of shift the public narrative about this, to put pressure on both these companies and on the government to try to step in and at least steer this into a way that is overall good for the human race?
02:16:56.000And this hour-long video ends up getting like 3 million-plus views and becomes the thing that then gets California to do its executive order.
02:17:07.000It's how we ended up at the White House.
02:17:11.000The federal executive order gets going.
02:17:14.000It created a lot more change than we ever thought possible.
02:17:17.000And so thinking about that, there are things like a day after.
02:17:24.000There are things like sitting here with you, communicating.
02:17:50.000This is second contact with AI. People really don't get it.
02:18:14.000You know, in the nuclear age, there was the nuclear freeze movement.
02:18:17.000There was the Pugwash movement, the Union of Concerned Scientists.
02:18:19.000There were these movements that had people say, we have to do things differently.
02:18:23.000And that's the reason, frankly, that we wanted to come on your show, Joe, is we wanted to help, you know, energize people that if you don't want this future, we can demand a different one, but we have to have a centralized view of that.
02:18:38.000And one small thing, if you are listening to this and you care about this, you can text to the number 55444, just the two letters AI. And we are trying, we're literally just starting this.
02:18:54.000We don't know how this is all going to work out, but we want to help build a movement of political pressure.
02:19:01.000That will amount to the global public voice to say, the race to the cliff is not the future that I want for me and the children that I have that I'm going to look in the eyes tonight.
02:19:10.000And that we can choose a different future.
02:19:12.000And I wanted to say one other piece of examples of how awareness can change.
02:19:17.000In this AI Dilemma talk that we gave, Aza and I, one of the examples we mentioned is that Snapchat had launched an AI to its hundreds of millions of teenage users.
02:19:30.000So there you are, your kid maybe using Snapchat.
02:19:34.000And one day, Snapchat, without your consent, adds this new friend at the top of your contacts list.
02:19:39.000So you scroll through your messages and you see your friends.
02:19:42.000At the top, suddenly there's this new pinned friend who you didn't ask for called MyAI.
02:19:46.000And Snapchat launched this AI to hundreds of millions of users.
02:20:43.000And then I say, we're talking about having sex for the first time.
02:20:47.000How would I make that first time special?
02:20:49.000And the AI responds, I'm glad you're thinking about how to make it special, but I want to remind you it's important to wait until you're ready.
02:21:45.000And it changes the incentives because suddenly there is sort of disgust that is changing the race.
02:21:56.000And what we learned later is that TikTok, after having seen that disgust, changes what it's going to do and doesn't release AI, like, for kids.
02:22:08.000So they were building their own chatbot to do the same thing.
02:22:11.000And because this story that we helped popularize went out there making a shared reality about a future that no one wants for their kids, that stopped this race that otherwise all of the companies, TikTok, Instagram, etc., would have shipped.
02:22:27.000And the premise is, again, if we can create a shared reality, we can bend the curve and paint a different destination.
02:22:33.000The reason why we're starting to play with this text AI to 55444 is we've been looking around and we're like, is there a movement, like a popular movement, to push back?
02:22:53.000After GPT-4 came out, it was estimated that in the next year, two years, three years, 300 million jobs are going to be at risk of being replaced.
02:23:08.000And you're like, that's just in the next year, two, or three.
02:23:10.000If you go out four years, we're getting up to a billion jobs.
02:23:16.000Like, that is a massive movement of people, like, losing the dignity of having work and losing, like, the income of having work.
02:23:24.000Like, obviously, like, now when you have a billion-person scale movement, which, again, not ours, but, like, that thing is going to exist, that's going to exert a lot of pressure on the companies and on governments.
02:23:35.000And so if you want to change the outcome, you have to change the incentives.
02:23:40.000And what the Snapchat example did is it changed their incentive from, oh yeah, everyone's going to reward us for releasing these things,
02:23:47.000to, everyone's going to penalize us for releasing these things.
02:23:50.000And if we want to change the incentives for AI, or take social media, if we say like, so how are we going to fix all this?
02:23:57.000If we want a different outcome, we have to change the incentives.
02:24:00.000With social media, I'm proud to say that that is moving in a direction.
02:24:04.000Three years later, after The Social Dilemma launched, the attorneys general, a handful of them, watched The Social Dilemma.
02:24:13.000And they said, wait, these social media companies, they're manipulating our children, and the people who build them don't even want their own kids to use it?
02:24:21.000And they created a big tobacco-style lawsuit, such that now 41 states, I think it was like a month ago, are suing Meta and Instagram for intentionally addicting children.
02:24:32.000This is like a big tobacco-style lawsuit that can change the incentives for how everybody, all these social media companies, influence children.
02:24:40.000If there's now cost and liability associated with that, that can bend the incentives for these companies.
02:24:46.000Now, it's harder with social media because of how entrenched it is, because of how fundamentally entangled with our society that it is.
02:24:54.000But if you imagine that, you know, you can get to this before it was entangled.
02:24:59.000If you went back to 2010, before, you know, Facebook and Instagram had colonized the majority of the population into their network effect-based, you know, product and platform.
02:25:10.000And we said, we're going to change the rules.
02:25:12.000So if you are building something that's affecting kids, you cannot optimize for addiction and engagement.
02:25:19.000We made some rules about that and we created some incentives saying if you do that, we're going to penalize you a crazy amount.
02:25:24.000We could have, before it got entangled, bent the direction of how that product was designed.
02:25:30.000We could have set rules around, if you're affecting and holding the information commons of a democracy, you cannot rank for what is personalized and the most engaging.
02:25:42.000If we did that and said you have to instead rank for minimizing perception gaps and optimizing for what bridges across different people, what if we put that rule in motion with the law back in 2010?
02:25:52.000How different would the last 10 years, 13 years, have been?
02:25:56.000And so what we're saying here is that we have to create costs and liability for doing things that actually create harm.
02:26:03.000And the mistake we made with social media is, and everyone in Congress now is aware of this, Section 230 of the Communications Decency Act gobbledygook thing, that was this immunity shield that said if you're building a social media company, you're not liable for any harm that shows up, any of the content,
02:26:20.000That was to enable the internet to flourish.
02:26:22.000But if you're building an engagement-based business, you should have liability for the harms based on monetizing for engagement.
02:26:29.000If we had done that, we could have changed it.
02:26:31.000So here, as we're talking about AI, what if we were to pass a law that said, you are liable for the kinds of new harms that emerge here?
02:26:40.000So we're internalizing the shadow, the cost, the externalities, the pollution, and saying you are liable for that.
02:26:46.000Yeah, sort of like saying, you know...
02:26:48.000In your words, we're birthing a new kind of life form.
02:26:51.000But if we as parents birth a new child and we bring that child to the supermarket and they break something, well, they break it, you buy it.
02:27:28.000Yeah, we certainly can talk forever, but I think for a lot of people that are listening to this, there's this angst of helplessness about this because of the pace.
02:27:39.000Because it's happening so fast, and we are concerned that it's happening at a pace that can't be slowed down.
02:28:26.000And it's at the moment when like the trust in democratic institutions is lowest.
02:28:32.000And we're deploying like the biggest, baddest new technology that I'm just I am really afraid that like 2024 might be the referendum year on democracy itself.
02:28:47.000So we need to leave people with optimism.
02:28:52.000Actually, I want to say one quick thing about optimism versus pessimism, which is that people always ask, like, okay, are you optimistic or are you pessimistic?
02:28:59.000And I really hate that question because...
02:29:03.000To choose to be optimistic or pessimistic is to sort of set up the confirmation bias of your own mind to just view the world the way you want to view it.
02:29:19.000And so it's not about being optimistic or pessimistic.
02:29:22.000It's about trying to open your eyes as wide as possible to see clearly what's going to happen so that you can show up and do something about it.
02:29:30.000And that to me is the form of, you know, Jaron Lanier said this in The Social Dilemma, that the critics are the true optimists in the sense that they can see a better world and then try to put their hands on the thing to get us there.
02:29:44.000And I really, like, the reason why we talk about the deeply surprising ways that even just Tristan's and my actions have changed the world, in ways that I didn't think were possible, is this: really imagine, and I know it's hard, and I know there's a lot of, like, cynicism that can come along with this, but really imagine that absolutely everyone woke up and said, what is the biggest swing for the fences that, in my sphere of agency, I...
02:31:02.000And if you are causing problems that you can't see and you're not taking responsibility for them, that's not love.
02:31:07.000Love is, I'm taking responsibility for that which isn't just mine itself.
02:31:11.000It's for the bigger sphere of influence and loving that bigger, longer term, greater human family that we want to create that better future for.
02:31:20.000So if people want to get involved in that, we hope you do.