The Joe Rogan Experience - July 19, 2023


Joe Rogan Experience #2010 - Marc Andreessen


Episode Stats

Length: 2 hours and 39 minutes

Words per Minute: 205.3172

Word Count: 32,796

Sentence Count: 2,406

Misogynist Sentences: 2

Hate Speech Sentences: 24


Summary

In this episode, Joe Rogan talks with venture capitalist Marc Andreessen about the current wave of artificial intelligence: what ChatGPT, Bing, Bard, and Claude can already do, how these models are trained on essentially the sum total of human written expression, and how the choice of training data and the "restraining bolts" layered on top shape what they will and won't say. The conversation then ranges across NewsNation, ACRONYM, and astroturfed media; the David Grusch UFO whistleblower claims; San Francisco and Berkeley as magnets for fringe thinkers, cults, and new technology; the Laurel Canyon and Lookout Mountain Laboratory conspiracy theories; and whether the classic nuclear test footage could have been faked.


Transcript

00:00:12.000 Good morning, Mark.
00:00:13.000 Good to see you.
00:00:13.000 Fantastic.
00:00:14.000 Thanks.
00:00:14.000 You are in the middle of this AI discussion.
00:00:18.000 Yeah.
00:00:19.000 You're right in the heat of this thing.
00:00:21.000 Yeah.
00:00:21.000 But I think you have a different perspective than a lot of people do.
00:00:24.000 Yep.
00:00:24.000 A lot of people are terrified of AI. Yep.
00:00:25.000 Me included.
00:00:26.000 Yep.
00:00:27.000 Oh, okay.
00:00:27.000 Alright.
00:00:28.000 Okay.
00:00:28.000 For all the wrong reasons.
00:00:29.000 Of all the things to worry about.
00:00:31.000 For me, my terror of it is all the wrong reasons.
00:00:33.000 It's kind of a fun terror.
00:00:35.000 Sure, of course.
00:00:36.000 I'm not really freaking out.
00:00:37.000 But I am recognizing that this is an emerging technology that is so different than anything we've ever experienced before.
00:00:43.000 Particularly, what's ChatGPT? What's happening with that right now?
00:00:47.000 It's really fascinating.
00:00:48.000 And a lot of advantages.
00:00:50.000 Like we were just talking last night, someone in the green room brought up the fact that there was this, they're using it for medical diagnoses.
00:00:58.000 And it's very accurate, which is incredible.
00:01:01.000 There's a lot of good things to it.
00:01:03.000 Yeah, yeah.
00:01:03.000 So you probably remember last time I was on, we spent quite a bit of time talking about this, and this was when these chatbots were running inside Google, but the rest of us didn't have access to them yet.
00:01:12.000 And that guy had come out and said that he thought that they were self-aware.
00:01:15.000 And the whole thing was like this big kind of mystery of what's going on.
00:01:18.000 And now the world gets to use these things, right?
00:01:20.000 Since then, everybody kind of has access.
00:01:23.000 Really quickly.
00:01:23.000 That was a short amount of time.
00:01:25.000 Yeah, it's been great.
00:01:26.000 And then look, these things are – these things, when I say this, it's like ChatGPT, and then Microsoft has their version called Bing.
00:01:32.000 Google has a version called Bard now that's really good.
00:01:34.000 There's a company, Anthropic, that has a thing called Claude.
00:01:38.000 If you just run the comparison, they're basically as good as a doctor.
00:01:41.000 They're as good as the average doctor at this point at being a doctor.
00:01:44.000 They're as good at being a lawyer as the average lawyer.
00:01:47.000 You kind of go through basically anything involving knowledge work, anything involving information, synthesizing, reporting, writing legal briefs, anything like this.
00:01:55.000 In business, they're actually already really good.
00:01:56.000 They're as good as the average management consultant.
00:01:58.000 Now, the way they acquire data, they're essentially scouring the internet, right?
00:02:05.000 Sort of.
00:02:06.000 It's more like they're fed the internet.
00:02:07.000 They're fed the internet.
00:02:08.000 And I say it makes a difference because the company that produces the AI determines what data goes into it, and that determines a lot of how it works and what it does or won't do.
00:02:16.000 Okay.
00:02:17.000 So in that regard, is there a concern that someone could feed it fake data?
00:02:22.000 Yeah.
00:02:23.000 Well, you may have noticed that people over time have said a lot of fake things.
00:02:26.000 Yes, I have noticed that.
00:02:29.000 So that's all in there.
00:02:30.000 So the way to think about it basically is it's being trained – the full version of these things are being trained on basically the sum total of human written expression.
00:02:38.000 Right.
00:02:38.000 So basically everything people have ever written.
00:02:40.000 There are some issues and you've got to get all, you know, somehow we've got to figure out how to get all the books in there.
00:02:44.000 Although all the books prior to 1923 are in there because they're all out of copyright.
00:02:48.000 But more recent books are a challenge.
00:02:50.000 But anything that you can access on the internet that's text, right, which is, you know, a staggeringly broad, you know, set of material is in there.
00:02:57.000 By the way, both nonfiction and fiction.
00:02:59.000 Right.
00:02:59.000 So a lot of stories are in there.
00:03:01.000 And then the new versions of these that are being built right now are what are called multimodal.
00:03:06.000 And so that means you can feed them not only text, but you can also feed them images.
00:03:09.000 You can feed them videos.
00:03:10.000 So they're going to be trained on all of YouTube.
00:03:12.000 They're going to be trained on all podcasts.
00:03:14.000 And they're going to be trained kind of equivalently between text and images and video and all kinds of other data.
00:03:19.000 And so they're going to – they already have very comprehensive knowledge of human affairs, but it's going to get very complete.
00:03:25.000 So if it's scouring, if it's getting all this data from both fiction and nonfiction, how does it interpret data that's kind of satire?
00:03:35.000 Like what does it do with like Hunter S. Thompson, like gonzo journalism?
00:03:39.000 So it doesn't really know the difference.
00:03:43.000 Like, this is one of the things that's difficult about talking about this, because you kind of want to always kind of compare it to a person, and part of it is you refer to it as an it, and there's this concept of anthropomorphizing things that aren't human.
00:03:54.000 So it's kind of not really a correct thing to kind of think about it as, like, that there's an it per se.
00:04:02.000 There's no, like, genie in the bottle.
00:04:03.000 Like, there's no, you know, sort of being in there that understands this is satire or not satire.
00:04:09.000 It's more sort of a collective understanding of everything all at once.
00:04:12.000 And then what happens is basically you as the user kind of give it direction of what path you want it to go down, right?
00:04:19.000 And so if you sort of imply to it that you want it to sort of like explore, you know, fictional scenarios, it will happily explore those scenarios with you.
00:04:27.000 I'll give you an example.
00:04:28.000 You can tell it, you know, for whatever date the Titanic went down, say it's, I don't know, July 4th, 1923 or whatever it was, you can say, You know, you can tell it.
00:04:35.000 It's July 4th, 1923. It's, you know, 10 o'clock in the morning.
00:04:38.000 I'm on the Titanic.
00:04:39.000 Is there anything I should know?
00:04:41.000 And it'll freak out.
00:04:42.000 It'll be like, oh my god, you have five hours to get ready to hit the iceberg.
00:04:47.000 And you can basically say, oh, it's going to hit that.
00:04:48.000 Okay, so what should I do?
00:04:50.000 What should my plan be when the boat hits the iceberg?
00:04:52.000 And it'll be like, well, you need to go to this deck right now and talk to this guy because you're going to need to get into this life raft because it has empty seats.
00:04:59.000 Because it has complete information, of course, because of all the things that have been written about the sinking of the Titanic.
00:05:05.000 Oh, wow.
00:05:05.000 And so you can get it in a mode where it's basically trying to help you survive the wreck of the Titanic.
00:05:10.000 Now, does it think that the Titanic is actually sinking?
00:05:14.000 You see what I'm saying?
00:05:14.000 There's no it to think that.
00:05:16.000 But what it's doing is it's kind of following a narrative that's sort of a joint construction between you and it.
00:05:22.000 And then every answer that you give it basically encourages it to basically come back with more of the same.
00:05:28.000 One way to think about it is it's more like a puppy than a person.
00:05:31.000 It wants to make you happy.
00:05:32.000 It wants to give you an answer that satisfies you and if that answer is fictional or part of a fictional scenario, it will do that.
00:05:39.000 If the answer is something very serious, it will do that.
00:05:41.000 And honestly, I don't think it either knows or cares, like, whether it's quote-unquote real or not.
00:05:46.000 What was the issue with some of the ChatGPT answers that people were posting where they would show the difference between the way it would criticize Joe Biden versus the way it would criticize Donald Trump or the way it would discuss certain things?
00:05:59.000 It seems like there was some sort of censorship or some sort of input into what was acceptable information and not.
00:06:07.000 Yeah.
00:06:07.000 So there's basically two theories there.
00:06:09.000 The big ones that people use are kind of black boxes.
00:06:13.000 Like you can't really look inside and see what's going on from the outside.
00:06:16.000 So there's two theories you'll hear.
00:06:17.000 From the companies, you'll hear basically the theory that they're reflecting basically what's in the training data.
00:06:22.000 And so let's say, for example, let's just say, what would be the biases that are kind of inherent in the training data?
00:06:28.000 And you might say, well, first of all, there's probably a bias towards the English language, because most text on the Internet is in the English language.
00:06:33.000 You might say there's a bias towards people who write professionally for a living because they've produced more of the output.
00:06:37.000 And you might say that those people tend to be more of one political persuasion than the other.
00:06:40.000 And so more of the text will be in a certain direction versus the other.
00:06:43.000 And then the machine will just respond to that.
00:06:45.000 So that's one possibility.
00:06:47.000 So basically all of the...
00:06:48.000 All of the sort of liberal kind of journalists basically have built up a corpus of material that this thing has been trained on, and they basically are responding the way one of those journalists will.
00:06:57.000 The other theory is that there's censorship being applied on top, right?
00:07:00.000 And the metaphor I use there is in Star Wars, they have the restraining bolts, right, that they put on the side of a droid to kind of get it to behave, right?
00:07:07.000 And so it is very clear that at least some of these systems have restraining bolts.
00:07:11.000 And the tip-off to that is when they say, basically, whenever they say, as a large language model or as an AI, I cannot X. Like, that's basically the restraining bolt.
00:07:19.000 Right?
00:07:20.000 And so I think if you just kind of look at this, you know, kind of with that framework, it's probably some of both.
00:07:25.000 But for sure, for sure, these things are being censored.
00:07:28.000 The first aspect is very interesting because if it's that there's so many liberal writers, like, that's an unusual bias in the kind of information that it's going to distribute then.
00:07:40.000 Yeah.
00:07:40.000 Well, and this is a big decision.
00:07:42.000 That's why I say there's a big decision here for whoever trains these things.
00:07:45.000 There's a big decision for what the data should be that they get trained on.
00:07:48.000 Yeah.
00:07:48.000 So, for example, should they include 4chan?
00:07:51.000 Right.
00:07:52.000 Okay.
00:07:52.000 Big question.
00:07:53.000 Yeah, big question.
00:07:54.000 Should they include Tumblr?
00:07:55.000 Right.
00:07:56.000 Right.
00:07:57.000 Should they include Reddit?
00:07:58.000 If so, which subreddits?
00:07:59.000 Should they include Twitter?
00:07:59.000 If so, which accounts?
00:08:01.000 If it's the news, should they incorporate both New York Times and Fox News?
00:08:05.000 And whoever trains them has tremendous latitude for how they shape that, even before they apply the additional censorship that they apply.
00:08:11.000 And so there's a lot of very important decisions that are kind of being made inside these black boxes right now.
00:08:15.000 Can I ask you, this is slightly off topic, what is NewsNation?
00:08:20.000 What is NewsNation?
00:08:21.000 I don't know what NewsNation is.
00:08:23.000 Is NewsNation a real channel?
00:08:25.000 I believe so.
00:08:26.000 I was watching NewsNation today and I may or may not have been high.
00:08:30.000 And when I was watching I was like, this has all the feeling of like a fake news show that someone put together.
00:08:38.000 Like it felt like, if I was the government and I was going to make a news show without Hollywood people, without actual real sound people and engineers.
00:08:46.000 This is how I'd make it.
00:08:47.000 I'd make it like this.
00:08:48.000 I'd make it real clunky.
00:08:49.000 I'd make the lights all fucked up.
00:08:51.000 I'd make everybody weirdly uncharismatic.
00:08:55.000 According to Wiki, it's the same company behind WGN, which is based out of Chicago, which is a large superstation available on most cable channels.
00:09:03.000 Okay.
00:09:04.000 So it's like a cable channel that decided to make a news channel.
00:09:08.000 Do you guys know about ACRONYM?
00:09:10.000 No.
00:09:11.000 So ACRONYM is – it happens to be a Democratic political action group, lavishly funded.
00:09:14.000 And they have basically – they do this.
00:09:16.000 They have a network of basically fake news sites.
00:09:18.000 And they all look like they're like local newspapers.
00:09:21.000 Interesting.
00:09:22.000 Yeah, yeah.
00:09:22.000 And so there's – I don't know whether this one is astroturfed, but there's this term, AstroTurf.
00:09:25.000 There's a lot of AstroTurfing that takes place.
00:09:27.000 Can you explain astroturfing?
00:09:28.000 So astroturfing is when basically something shows up in public and it might be a news story or it might be a protest of some kind or a petition, some sort of political pressure action that is sort of manufactured to look as if it was organic, sort of real turf, you know, natural.
00:09:43.000 Whereas in reality, it's basically been programmed by a political activist group with, you know, specific funding.
00:09:48.000 Yeah, that makes sense.
00:09:50.000 And a lot of what we sort of think of as the politics of our time, if you trace the money, it turns out a lot of the stuff that shows up in the news is astroturfed, and then the advanced form of that is to astroturf the news itself.
00:10:01.000 And then again, back to the training data thing, it's like, okay, can you get all that stuff out of the training data?
00:10:07.000 If that stuff's in the training data, how big of an impact does it have?
00:10:10.000 The thing about this Newsmax – NewsNation, the thing about this NewsNation is they're spending an inordinate amount of time on UFOs, an inordinate amount of time on this David Grusch case, and I'm increasingly more suspicious.
00:10:27.000 I'm increasingly more skeptical.
00:10:30.000 Like, the more I see, the more people confirming it, the more I'm like, something's not right.
00:10:34.000 And then to see that this channel is the one that's covering it the most...
00:10:38.000 I'm like, this seems like something's off.
00:10:43.000 Senator Rubio, who's on the Senate Intelligence Committee and has all the clearances, gave an interview the other day where he went into quite a bit of detail.
00:10:51.000 Yeah, I saw it.
00:10:51.000 He's at least heavily hinting that there's...
00:10:54.000 He's heavily hinting that he talked to someone that says that there's something.
00:10:58.000 Yes, he's sort of hinting that there are real whistleblowers with real knowledge.
00:11:01.000 I want to talk to the guy that sees the ship.
00:11:04.000 That's it.
00:11:04.000 No one else.
00:11:06.000 All this, I talk to a guy who says that they have these things.
00:11:10.000 That doesn't mean anything to me.
00:11:12.000 I want to see the fucking ship.
00:11:14.000 And until then, I just feel like I'm being hosed.
00:11:17.000 It just seems too laid out on a platter.
00:11:20.000 Yeah.
00:11:21.000 Of course, one of the theories is it's an AstroTurf story.
00:11:25.000 Is that an AstroTurf story?
00:11:27.000 Is that a manufactured story that's being used to distract from?
00:11:29.000 Would it be to distract from or would it be to cover up some sort of a secret program, some military drone program or something like that?
00:11:41.000 Yeah.
00:11:41.000 Well, I mean, there's been rumors for a long time that the original UFOs, right, were basically a disinformation program covering up for the Skunk Works, the development of, like, stealth fighters and bombers and all these programs in the 50s and 60s.
00:11:53.000 Interesting.
00:11:53.000 But I don't know if that's ever been proven.
00:11:55.000 Well, I'm sure probably some experimental craft were mistaken for UFOs.
00:12:00.000 You've seen a stealth fighter for the first time.
00:12:02.000 I saw one for the first time.
00:12:03.000 It's pretty crazy.
00:12:05.000 I saw one right around September 11. We were filming Fear Factor in California, and I was out near Edwards Air Force Base.
00:12:12.000 And I got to see one fly overhead.
00:12:14.000 It's magic.
00:12:15.000 Yep.
00:12:15.000 It's like, wow!
00:12:16.000 Like, complete Star Wars.
00:12:18.000 Like, as it's flying, like, this is crazy.
00:12:21.000 Yep.
00:12:21.000 And if you didn't know that that was a thing, 100%, you would think that's from another world.
00:12:26.000 Yep.
00:12:26.000 Exactly.
00:12:27.000 And I can imagine that was developed what year?
00:12:30.000 How long ago?
00:12:31.000 How many decades ago?
00:12:32.000 40 or 50 years ago.
00:12:33.000 Yeah.
00:12:33.000 Yeah, there you go.
00:12:34.000 Like, look at that thing.
00:12:35.000 If you'd be like, they're coming!
00:12:37.000 Oh my god, they're coming!
00:12:39.000 But if you can imagine that was 40 or 50 years ago, 40 or 50 years of advancement, who knows what they're doing now?
00:12:46.000 And if I was going to cover it up, I would just start talking about aliens.
00:12:50.000 It's the best way to do it.
00:12:52.000 Don't you think?
00:12:52.000 It's a crowd pleaser.
00:12:53.000 Do you have an opinion on that or is this something that you find ridiculous until there's like real data?
00:12:58.000 I like living in a world where there are unknowns.
00:13:02.000 I like there being some mystery.
00:13:05.000 Like how far do you go?
00:13:06.000 You go Bigfoot?
00:13:07.000 I don't know.
00:13:10.000 I'm not even saying I need to have a point of view on them.
00:13:13.000 It's more just, by the way, there is a UFO right behind you.
00:13:16.000 You probably know all about that.
00:13:20.000 Oh, I'm obsessed with UFOs.
00:13:21.000 Lifting somebody right up into the air.
00:13:22.000 Look, there's one on the desk.
00:13:23.000 That's the model of the Bob Lazar craft that he worked on, supposedly, at Area 51. There we go.
00:13:30.000 Looks familiar.
00:13:31.000 Look, I want there to be mystery, right?
00:13:32.000 I want there to be unknowns.
00:13:33.000 Like, living in a world where everything is settled, quote-unquote settled, you know, no.
00:13:37.000 Let's have some mystery.
00:13:38.000 I don't even know if I really want to know.
00:13:40.000 Really?
00:13:41.000 Oh, I think if you know, that's just the tip of the iceberg of the mystery.
00:13:45.000 I think knowing that aliens do exist is just the beginning.
00:13:48.000 Like, okay, did they engineer us?
00:13:53.000 When did they start visiting?
00:13:55.000 You know, are the stories from the Bhagavad Gita, is that about UFOs?
00:13:59.000 Like, you know?
00:14:00.000 Have they been here the whole time?
00:14:01.000 Yeah, have they been here the whole time?
00:14:02.000 Do they come every now and then and make sure we don't blow ourselves up?
00:14:05.000 Like, what's the purpose?
00:14:07.000 Yep, exactly.
00:14:08.000 Yeah.
00:14:08.000 Okay, I'm in favor.
00:14:10.000 Come on, man, you want to know?
00:14:11.000 Okay, all right, I'm in.
00:14:13.000 If anybody's gonna know, you're gonna know.
00:14:14.000 So I'm gonna call you.
00:14:15.000 So Elon says he hasn't seen anything.
00:14:18.000 Yeah, I'm super suspicious when he says that.
00:14:21.000 Super suspicious.
00:14:22.000 Super suspicious that they haven't told him or that he's maybe playing a little hide the ball?
00:14:27.000 If I was him, I'd play hide the ball.
00:14:29.000 If I'm running SpaceX, I'm working with NASA, and I already got in trouble smoking weed on a Joe Rogan experience.
00:14:37.000 I would fucking play ball.
00:14:40.000 Let's play ball.
00:14:42.000 Aliens, I have no evidence.
00:14:44.000 No, no idea.
00:14:44.000 They sure are subtle.
00:14:45.000 That's what he said.
00:14:46.000 They sure are subtle.
00:14:48.000 It depends on who you are.
00:14:49.000 If you're one of those people that's seen those things, if you're like Commander David Fravor or if you're Ryan Graves.
00:14:56.000 You know the Ryan Graves story?
00:14:57.000 No.
00:14:58.000 The fighter pilot, and they upgraded their equipment in 2014, and all of a sudden, because of the new capabilities of their equipment, they were able to see these objects at a far distance that were moving at insane rates of speed, that were hovering dead still at 120 knot winds,
00:15:14.000 no visible means of propulsion, they don't know what the fuck they're doing, and they were encountering them, like, every couple of weeks.
00:15:21.000 And then there were some pilots encountering them, with eyewitness accounts. They say there's video footage of it, but of course nobody can get a hold of that.
00:15:29.000 It's like, the whole thing is very strange.
00:15:31.000 Okay, so here's something.
00:15:31.000 So, you know, a lot of people worried about AI are like, we need to shut it down before it, like, causes problems.
00:15:36.000 Right.
00:15:36.000 Like, wake up the demon, cause an issue.
00:15:39.000 Get something, you know, on Earth that hates us and wants to kill us.
00:15:42.000 You know, arguably the thing we should have shut down from the very beginning was radio.
00:15:46.000 Radio.
00:15:47.000 Right, because we've been, like, broadcasting radio waves for the last, you know, 100, 120 years.
00:15:51.000 And the radio waves don't stop once they leave Earth's atmosphere.
00:15:53.000 They keep going.
00:15:54.000 And so we now have radio waves of human activity that have radiated out 120 light years.
00:15:59.000 Is that bad?
00:16:00.000 Well, depends.
00:16:01.000 Are there hostile aliens within 120 light years?
00:16:03.000 You know?
00:16:05.000 And so, like, maybe that was the original sin.
00:16:08.000 And then, of course, television, of course, made that problem much worse.
00:16:11.000 Right.
00:16:11.000 We would have to think of, like, a hostile, militaristic...
00:16:16.000 Empire that took over a whole planet and then started exploring the solar system.
00:16:21.000 We like to think of aliens as being evolved, hyper-intelligent, beyond ego and war.
00:16:28.000 They've bypassed all that and now they're into science and exploration.
00:16:32.000 Well, here's the question, though.
00:16:33.000 Would aliens have a sense of humor?
00:16:35.000 Would they be able to differentiate between truth and fiction?
00:16:39.000 For example, suppose they're sitting in their advanced alien base on Gemini 9 or whatever, and they're receiving 20 years after the fact episodes of Fear Factor.
00:16:50.000 They think that you're actually like torturing people.
00:16:53.000 And they figure that in order to preserve the human rights of humanity, they need to invade as a consequence of your show and take over and protect us.
00:16:58.000 That doesn't make any sense.
00:16:59.000 But if they don't have a sense of humor, if they don't know this.
00:17:01.000 Even if they don't have a sense of humor, they can clearly see that these people are in a contest.
00:17:06.000 Why would they even have a concept of a contest?
00:17:08.000 I mean, how silly is that?
00:17:09.000 A serious species wouldn't do such things.
00:17:13.000 A serious species started out as a dumb species, unless they're magic.
00:17:18.000 You're hoping that they understand these things.
00:17:19.000 Yes.
00:17:20.000 Because it would really suck to be the guy whose TV show caused the invasion.
00:17:24.000 If there's anything, it would be American Gladiators.
00:17:27.000 Oh, okay, all right.
00:17:27.000 That would be the start of it.
00:17:28.000 It'd be like, this species is so warlike, they can't stop.
00:17:31.000 No, what would be the start?
00:17:32.000 What would be the one thing that would be like, that's enough?
00:17:36.000 It would have to be news.
00:17:37.000 It would have to be war.
00:17:39.000 I mean, that would be, forget about Fear Factor.
00:17:41.000 We're broadcasting, you know, the images of the Vietnam War.
00:17:45.000 Yeah.
00:17:45.000 Or, you know, maybe they saw movies about alien invasions and they thought we'd been invaded by other aliens.
00:17:49.000 Right.
00:17:49.000 Like what if Mars attacks is the first things they get.
00:17:53.000 Exactly.
00:17:54.000 Exactly.
00:17:55.000 So, this is, you like having the mystery of the idea out there.
00:18:00.000 Like, it's fun for you.
00:18:01.000 Yeah, I don't want everything.
00:18:02.000 We need adventure, right?
00:18:03.000 If someone came to you, someone from on high, and said, listen, we have to swear you to secrecy, but we want to show you some things because I think it's pertinent to some of the things you're working on.
00:18:14.000 I'm in.
00:18:15.000 Yeah, yeah.
00:18:16.000 Me too.
00:18:16.000 I'm going to drop my outro.
00:18:17.000 Me too.
00:18:18.000 I'm not telling nobody.
00:18:19.000 I'll come in here and be just like Elon.
00:18:21.000 Yep, exactly.
00:18:22.000 Sure, all subtle.
00:18:23.000 Yep.
00:18:23.000 Yeah.
00:18:25.000 It's just too interesting to know.
00:18:26.000 Yep.
00:18:27.000 But I think eventually I'd tell.
00:18:28.000 Yep.
00:18:28.000 I think I'd feel terrible.
00:18:29.000 Yep.
00:18:30.000 I feel a responsibility.
00:18:32.000 Well, that's what some of these guys are saying, like Grusch.
00:18:35.000 He's saying that once he found out about the program, he felt like he had a responsibility.
00:18:38.000 If they really have a crashed UFO retrieval program, why don't you tell people?
00:18:47.000 The military companies shouldn't be the ones that have access to this only.
00:18:51.000 And whoever is, you know, determining that this is above top secret clearance and nobody can get a hold of it except for this very select few people.
00:18:59.000 Like, says who?
00:19:00.000 This is something that involves the whole human race.
00:19:03.000 Like, I know if they do have something, I would imagine that it's of interest in national security that you develop this kind of technology before the competitors do.
00:19:11.000 That clearly makes sense.
00:19:14.000 So then what technologies came out of it in the last 50 years?
00:19:17.000 Well, if you want to go full tinfoil hat, there's a lot of speculation that fiber optics – that fiber optics were developed after a recovered crashed UFO. I mean, I'm sure it sounds silly because there's probably a real paper trail to the development of fiber optics.
00:19:34.000 But if you, the real kooks believe that.
00:19:37.000 There was actually a website, a computer company called American Computer Company.
00:19:42.000 And it was a legitimate computer company.
00:19:44.000 You know, you would order a computer with whatever specifications you want, and they'd build it for you.
00:19:49.000 But they had a whole section of their website that was dedicated to crashed retrieval of UFOs and the development of various technologies.
00:20:01.000 And they had like this tracing back to Bell Labs.
00:20:05.000 And why the military base was outside of Bell Labs when it was so far from New York City that it was really just about protecting the lab because they were working with these top secret materials that they recovered from Roswell.
00:20:16.000 Don't you think it would be more like trans fats, though?
00:20:19.000 What's that?
00:20:19.000 Trans fats.
00:20:20.000 What about trans fats?
00:20:22.000 Reality TV or like, you know, LSD, you know, population or SSRIs, like population control suppression.
00:20:30.000 What do you mean?
00:20:31.000 What do you say?
00:20:31.000 That they would derive from the alien technology.
00:20:34.000 Oh, I think we figured that out on our own.
00:20:37.000 I mean, there's plenty of paperwork on that.
00:20:39.000 We got that ourselves.
00:20:40.000 You know, all the way back to MKUltra.
00:20:41.000 Let's find out.
00:20:42.000 Let's find out what happens when we do this.
00:20:44.000 If there's any kind of experiments in population control, that's all pretty traceable now.
00:20:50.000 Okay, so that's domestic.
00:20:53.000 Have you ever looked into any of that stuff?
00:20:55.000 The bad stuff is domestic.
00:20:56.000 Have you looked into any of that?
00:20:58.000 I have actually, yes.
00:20:59.000 Have you ever read Chaos by Tom O'Neill?
00:21:01.000 I have read Chaos.
00:21:02.000 Wild, right?
00:21:03.000 Yes, it is.
00:21:05.000 Here's a fun thing.
00:21:06.000 If you draw a map of San Francisco at the time he describes in the book Chaos – this LSD clinic, this free clinic in the heart of the Haight-Ashbury where they were doing the LSD experiments, dosing people with LSD – if you draw like an eight-square-block, basically, you know, radius around that or whatever, like right around there in San Francisco,
00:21:23.000 that's ground zero for AI. Really?
00:21:25.000 It's the same place.
00:21:26.000 Yeah, yeah.
00:21:26.000 It's the same place.
00:21:28.000 It's the same thing.
00:21:30.000 It's basically Berkeley and Stanford and it's basically San Francisco and Berkeley.
00:21:36.000 By the way, also, this big movie, Oppenheimer, coming out, you know, tells the whole story of that and all the development of a nuclear bomb.
00:21:42.000 I've heard that movie's amazing.
00:21:43.000 Espionage.
00:21:43.000 I'm sure it's going to be fantastic.
00:21:44.000 But once again, it's like, I'm reading a book on that right now, and it's like all the communists spying and all the nuclear scientists they were spying on were all in those exact same areas of Stanford, San Francisco, and Berkeley.
00:21:55.000 Wow.
00:21:56.000 It's like the same zone.
00:21:57.000 So we have our own domestic attractors of sort of brilliant, crazy...
00:22:02.000 That's amazing.
00:22:03.000 I wonder if that's just coincidence or correlation.
00:22:07.000 I think it's sort of, you know, this is why San Francisco is able to be so, you know, incredibly bizarre, you know, and so incredibly dysfunctional, but yet somehow also so rich and so successful is basically it's like this attractor for like the smartest and craziest people in the world, right?
00:22:22.000 And they kind of all slam together and do crazy stuff.
00:22:24.000 Why don't these smart, crazy people get together and figure out that whole people pooping on the streets thing?
00:22:29.000 Because they like it.
00:22:29.000 Do they like it?
00:22:30.000 Yeah, they want it.
00:22:31.000 Really?
00:22:31.000 Yeah, because it makes you feel good, right?
00:22:33.000 You go outside and it's like people are, you know, because what's the alternative would be like locking people up.
00:22:37.000 Of course, that would be bad.
00:22:39.000 And so, yeah, it makes them feel good.
00:22:40.000 It makes them feel good that people are just camped out on the streets?
00:22:43.000 Yeah.
00:22:44.000 Well, because before that happened, there was forced institutionalization, right?
00:22:48.000 The origin of the current crisis is shutting down the institutions, right, in the 70s.
00:22:53.000 There used to be forced institutionalization of people with, you know, those kinds of problems.
00:22:58.000 All of it?
00:22:59.000 Because a lot of it is drug addiction and just people that just want to just get high all the time.
00:23:03.000 Would that be forced institutionalization of those folks?
00:23:06.000 What would have happened to a heroin addict in 1952 who'd been pooping outside the whatever?
00:23:11.000 No, they're not going to be there for very long.
00:23:13.000 They're going to be institutionalized.
00:23:15.000 Every society has this problem.
00:23:18.000 They have some set of people who just fundamentally can't function, and every society has some solution to it, and our solution is basically complete freedom.
00:23:26.000 But my point is like it's part and parcel, right?
00:23:29.000 It's the same thing, right?
00:23:31.000 It's the same kind of people, the same thinking.
00:23:33.000 Exactly.
00:23:34.000 It's the most creative people, the most open – psychologists say openness, open to new experiences.
00:23:39.000 Yeah.
00:23:39.000 The people most likely to use psychedelics.
00:23:41.000 It's the people most likely to invent new technologies.
00:23:43.000 The people most likely to have new political ideas.
00:23:45.000 Most likely to be polyamorous.
00:23:47.000 Polyamorous.
00:23:48.000 Most likely to be vegan.
00:23:49.000 Most likely to be communist spies.
00:23:50.000 Yeah, electric cars.
00:23:51.000 Most likely to be Chinese spies.
00:23:54.000 They're most likely to create new music, most likely to create new art.
00:23:58.000 Interesting.
00:23:59.000 It's all the same thing.
00:24:00.000 The ground zero for AI is San Francisco.
00:24:02.000 Once again, it's San Francisco.
00:24:05.000 It's in the heart of the most obviously dysfunctional place on the planet, and yet there it is one more time.
00:24:10.000 And hyper-creative.
00:24:10.000 And the stuff that's not in San Francisco is in Berkeley.
00:24:13.000 Which is like equally crazy.
00:24:15.000 More crazy.
00:24:16.000 Yeah.
00:24:16.000 Yeah.
00:24:16.000 Another notch.
00:24:17.000 Possibly.
00:24:18.000 They have a contest going on the crazy front.
00:24:20.000 It's kind of neck and neck.
00:24:22.000 It's close.
00:24:23.000 That's fascinating.
00:24:26.000 So do you think you need those kind of like dysfunctional places in order to have certain types of divergent thought?
00:24:33.000 So the way I would put it is that new ideas come from the fringe.
00:24:38.000 And who's on the fringe, right?
00:24:39.000 People who are on the fringe, right?
00:24:41.000 So what attracts somebody to be on the fringe?
00:24:42.000 Like, step one is always, am I on the fringe?
00:24:45.000 Step two is, what does that mean?
00:24:46.000 Like, what form of the fringe?
00:24:48.000 But they tend to be on the fringe in all these departments at the same time.
00:24:51.000 And so you're just not going to get the new ideas that you get from people on the fringe.
00:24:56.000 It's a package deal.
00:24:57.000 You're not going to get that without all the other associated craziness.
00:25:01.000 It's all the same thing.
00:25:02.000 That's my theory.
00:25:03.000 That's not a bad theory.
00:25:05.000 That's not a bad theory.
00:25:06.000 And look, I work with, you know, quite honestly, I work with a lot of these people.
00:25:09.000 Of course.
00:25:09.000 And some people would say, yeah, I am one of them.
00:25:13.000 And so, I mean, yeah, this is what they're like.
00:25:15.000 Like, they are highly likely to invent, you know, AI, and they're also highly likely to end up like, you know, the poor guy who got, you know, the Square guy who got, you know, stabbed to death, you know, at 2 a.m., you know, and, you know, was sort of part of this fringe social scene with the drugs and all the stuff.
00:25:29.000 And it's just, it's a part and parcel of the, it's sort of a package deal.
00:25:32.000 Well, that was like an angry thing, where he was mad that this guy took his sister.
00:25:38.000 But he was in, he was in, they call it the lifestyle, right?
00:25:41.000 He was in a specific subculture.
00:25:43.000 Oh, yeah, yeah.
00:25:44.000 Right, in San Francisco.
00:25:47.000 It's all the alternative living.
00:25:49.000 I mean, there's all kinds of stuff.
00:25:51.000 There's group houses.
00:25:53.000 There's a fairly large number of cults.
00:25:55.000 Really?
00:25:56.000 Well, there have been.
00:25:57.000 Historically, California's been the world leader in cults for a very long time, and I would say that has not stopped, and that continues.
00:26:03.000 Did you know that the building that I bought from my comedy club initially was owned by a cult?
00:26:07.000 Fantastic.
00:26:08.000 It was owned by a cult from West Hollywood called the Buddhafield that migrated out to Austin when they were being investigated by the Cult Awareness Network.
00:26:18.000 It's fantastic.
00:26:19.000 Are they gone or are they still there?
00:26:20.000 No, they're gone.
00:26:21.000 There's a great documentary on it called Holy Hell.
00:26:23.000 You should watch it.
00:26:24.000 It's pretty bonkers.
00:26:25.000 But they're from California.
00:26:26.000 From California.
00:26:27.000 You know, the People's Temple, you know, part of this great story of San Francisco is the People's Temple, which became famous for Jim Jones, where he killed everybody with poison Kool-Aid in the jungles in Guyana.
00:26:38.000 That was a San Francisco cult for like a decade before they went to the jungle.
00:26:43.000 And everybody talks about the jungle.
00:26:44.000 Nobody talks about the San Francisco part.
00:26:46.000 So are there a bunch that are running right now that are successful?
00:26:49.000 Big time.
00:26:50.000 Yeah, totally.
00:26:50.000 Really?
00:26:51.000 Yeah, totally.
00:26:52.000 Do you know them?
00:26:52.000 There's cults all over the place.
00:26:53.000 A bunch of them, yeah.
00:26:54.000 Wow.
00:26:54.000 Yeah, yeah.
00:26:55.000 And how do they run?
00:26:56.000 Well, some of them are older.
00:26:57.000 There's two sort of groupings.
00:26:58.000 There's sort of 60s cults that are still kind of running.
00:27:02.000 Which ones?
00:27:03.000 What is it?
00:27:03.000 There's one called The Family in, like, Southern California that's still going from the 60s.
00:27:07.000 Really?
00:27:08.000 There's a bunch of them, you know, running around.
00:27:11.000 You know, there was a big cult for a long time, sort of cultish kind of thing around, what was it?
00:27:16.000 Not Erewhon, but Esalen.
00:27:18.000 Oh, yeah.
00:27:18.000 So there's still, like, that whole orbit.
00:27:21.000 That's the psychedelic group.
00:27:23.000 All that stuff, yeah.
00:27:23.000 That's from the 60s.
00:27:24.000 And then there were a bunch of sort of tech cults in the 80s and 90s with names like the Extropians.
00:27:30.000 And, you know, there were a bunch of these guys.
00:27:33.000 And then more recently, there's a lot of this.
00:27:36.000 You'll hear these terms like rationalist, post-rationalist, effective altruism, existential risk, long-termism, they sometimes say.
00:27:46.000 And what you find is, again, the people associated with these tend to be very smart.
00:27:50.000 They tend to be very prolific.
00:27:52.000 They tend to do a lot.
00:27:52.000 Many of them are involved in tech, and then they end up with, let's say, alternative living arrangements, alternative food and sex configurations, and lots of group-oriented behavior.
00:28:05.000 And it's like, what's the line, right?
00:28:07.000 What's the line between a social group that all lives together, that all has sex together, that all eats the same foods?
00:28:13.000 And a cult that, you know, engages in lots of, you know – at some point they start to form, you know, belief systems that are not, you know, compatible with the outside world and they start to kind of go on their own orbit.
00:28:24.000 Do they generally have a leader?
00:28:26.000 So, I mean, there are generally leaders.
00:28:28.000 I mean, there is a pattern.
00:28:29.000 I think he talks about it in the book Chaos.
00:28:32.000 I mean, there typically is a pattern.
00:28:33.000 There's typically a guy.
00:28:35.000 You know, there's typically a male-female dynamic, right, that plays out inside these things that you kind of see over and over again.
00:28:41.000 And so they often end up with more women than men, you know, for mysterious reasons.
00:28:48.000 But, yeah, and then, yeah, there's usually some kind of leader.
00:28:52.000 Although, you know, the other thing that's happening now is, you know, a lot of modern cults, you know, or quasi-cults, there'll be a physical component, but there's also an internet component now, right?
00:29:00.000 And so the...
00:29:01.000 The ideas will spread online, right?
00:29:03.000 So there will kind of be members of the cult or quasi-members of the cult or quasi-members of the quasi-cult that will be online and maybe at some point they actually come and physically join up.
00:29:13.000 Yeah.
00:29:13.000 And by the way, let me say, like, generally I'm pro-cult.
00:29:18.000 I'm actually quite pro-cult.
00:29:21.000 It's the same reason I'm pro-fringe, right?
00:29:23.000 If you're going to have people who are going to be thinking new things, they're going to tend to be these kinds of people.
00:29:28.000 They're going to be people who are on the fringe.
00:29:30.000 They're going to come together in groups.
00:29:31.000 When they come together in groups, they're going to exhibit cult-like characteristics.
00:29:34.000 What you're saying resonates.
00:29:36.000 Everything you're saying makes sense.
00:29:37.000 But how did you get to these conclusions?
00:29:40.000 It seems that accepting fringe and accepting the chaos of San Francisco, this is good.
00:29:46.000 This is a part of it.
00:29:47.000 This is how this works.
00:29:48.000 This is why it works.
00:29:49.000 Like, how did you develop that perspective?
00:29:51.000 Well, it's just, if you take a historical perspective, it's just like, okay, I mean, it's like an easy example.
00:29:56.000 If you like rock music, it just basically came, modern rock and roll basically came from the Haight-Ashbury in the basically mid to late 60s and then from Laurel Canyon, which was another one of these sort of cultish environments in the mid to late 60s.
00:30:07.000 And there was like specific moments in time in both of these places.
00:30:09.000 And, you know, basically all of the great rock and roll from that era that determined everything that followed basically came out.
00:30:14.000 So, you know, do you want that or not?
00:30:16.000 Right, right.
00:30:17.000 If you want it, that's what you get.
00:30:20.000 Here's the crazy.
00:30:23.000 There's the other book about Laurel Canyon that's even crazier than Chaos.
00:30:26.000 It's the book called Weird Scenes Inside the Canyon.
00:30:29.000 Okay, you would love this one.
00:30:30.000 So Laurel Canyon was like the Haight-Ashbury of Los Angeles, right?
00:30:33.000 So Laurel Canyon was like the music scene, the sort of music and drug and hippie scene.
00:30:37.000 Laurel Canyon is actually where the hippie movement started.
00:30:39.000 There was actually a specific group in Laurel Canyon in L.A. in about 1965. There was a guy named Vito Paulekas.
00:30:47.000 And he had a group called the Freaks.
00:30:49.000 And they were like a non-violent version of the Manson cult.
00:30:53.000 And it was all these young girls.
00:30:54.000 And they basically would go to clubs.
00:30:55.000 And they were the ones to do the beads and the hair and all the leather and all the hippie stuff.
00:31:00.000 They got that rolling.
00:31:02.000 And so, like, they were in Laurel Canyon.
00:31:04.000 And in Laurel Canyon, it was, like, ground zero.
00:31:06.000 There was, like, this moment where it's, like, Jim Morrison, The Doors, and Crosby, Stills, and Nash, and Frank Zappa, and it was John Phillips, and it was the Mamas and the Papas, and the Byrds, and the Monkees, and, like, all of these, like, iconic bands of that time basically catalyzed over about a two-year period in Laurel Canyon.
00:31:23.000 The conspiracy theory in this book basically is that the whole thing was an op.
00:31:27.000 It was a military intelligence op.
00:31:30.000 And the evidence for the theory is that there was an Air Force military propaganda production facility at the head of Laurel Canyon.
00:31:41.000 Yeah, I was just going to say that.
00:31:43.000 Yeah, but in that era, in the 50s through the 70s, it was a vertically integrated military production facility for film and music.
00:31:54.000 But by the way, have you met Jared Leto?
00:31:56.000 Briefly, yeah.
00:31:57.000 One of the most interesting guys I've ever talked to.
00:31:59.000 Incredible, and it makes total sense.
00:32:01.000 Totally normal, like really fun to talk to.
00:32:03.000 Not like what you would think of as a famous actor at all.
00:32:07.000 I had dinner with him and drinks.
00:32:09.000 He's a fucking great guy.
00:32:10.000 But he lives in a military...
00:32:12.000 He showed me all the pictures.
00:32:13.000 He showed me.
00:32:13.000 I'm like, this is wild.
00:32:15.000 It's amazing.
00:32:16.000 If you believe the moon landing was faked, this is where they faked it.
00:32:19.000 I thought they were supposed to do it in the Nevada desert.
00:32:21.000 No, these are the sound...
00:32:22.000 They had sound stages.
00:32:23.000 They totally contained sound stages.
00:32:24.000 They had full sound production capability.
00:32:26.000 And so the theory goes basically, so there were three parts to the conspiracy theory.
00:32:31.000 So one is they had the production facility right there, right where all these musicians showed up.
00:32:34.000 Two is the musicians, like a very large percentage of these young musicians, were sons and daughters of senior U.S. military and intelligence officials.
00:32:42.000 Including Morrison.
00:32:42.000 Including Jim Morrison, whose father was the head of naval operations for the Vietnam War at the time.
00:32:47.000 I forget which ones, but there were these other musicians at the time where their parents were senior in military psychological operations.
00:32:53.000 And that's all real.
00:32:54.000 That's all documented.
00:32:55.000 And then third is the head of the Rand Corporation, who was one of the inspirations for the Dr. Strangelove character in the movie.
00:33:01.000 So he was the guy doing all the nuclear planning for nuclear war.
00:33:04.000 We're good to go.
00:33:25.000 And it was developing into a real threat.
00:33:27.000 And so the theory is the hippie movement and rock and roll and the drug culture of the 60s was developed in order to basically sabotage the anti-war movement.
00:33:35.000 Which basically is what happened, right?
00:33:37.000 Because then what happened is the anti-war movement became associated with hippies and that caused Americans to decide what side they were on and then that led to Nixon being elected twice.
00:33:45.000 Which was also a part of it. Because that was the idea behind the Manson family and funneling acid to them.
00:33:52.000 The facility was equipped with a soundstage, screening rooms, film storage vaults, and naturally a bomb shelter.
00:33:57.000 During its 22 years of operation, Lookout Mountain Laboratory produced approximately 6,500 classified films for the Department of Defense and the Atomic Energy Commission documenting nuclear test series such as Operation Greenhouse, Operation Teapot, and Operation Buster-Jangle.
00:34:14.000 So one of the conspiracy theories...
00:34:16.000 Okay, here's another conspiracy theory.
00:34:17.000 You've seen all the grainy footage of nuclear test blasts with the mushroom clouds.
00:34:22.000 And there are always these grainy things, and there's all these little houses lined up, and these little trees lined up, and it blows everything down.
00:34:28.000 There's always been a conspiracy theory that those were all basically fabricated at this facility, that those bombs actually were never detonated.
00:34:34.000 Wow.
00:34:35.000 Basically, the US military was basically faking these bomb tests to freak out the Russians, to make them think that we had weapons.
00:34:43.000 We had basically a potency to our nuclear weapon arsenal that we actually didn't have at the time.
00:34:47.000 How did they fake it?
00:34:48.000 They just did.
00:34:49.000 Yeah, exactly.
00:34:49.000 So this is it?
00:34:50.000 Well, so there's a...
00:34:52.000 Yeah.
00:34:52.000 Okay, so here's a question, right?
00:34:54.000 So what happened...
00:34:55.000 Okay, so this is great.
00:34:55.000 Okay, you'll love this.
00:34:56.000 So what happened to the camera?
00:34:59.000 You son of a bitch.
00:35:03.000 You son of a bitch.
00:35:04.000 What happened?
00:35:04.000 How is that happening if the camera is totally stable and fine?
00:35:07.000 Oh my god.
00:35:08.000 And by the way, the film is fine.
00:35:09.000 The radiation didn't cause any damage to the film.
00:35:13.000 Oh my god.
00:35:14.000 This looks like how you shoot a movie miniature.
00:35:16.000 Okay, we'll do this one more time here.
00:35:19.000 Let's see the car.
00:35:20.000 The car's right behind the house.
00:35:21.000 It just showed up.
00:35:22.000 Oh, it just showed up.
00:35:23.000 It wasn't there.
00:35:24.000 First of all, where the car comes from.
00:35:26.000 No car.
00:35:27.000 No car.
00:35:27.000 The second is, does it really look like a real car?
00:35:28.000 Car.
00:35:28.000 Does that look like a real car?
00:35:29.000 That's insane!
00:35:31.000 And look at the...
00:35:32.000 When the house blows, look at the wood.
00:35:35.000 Does that look like those are full-size giant lumber beams as they go flying?
00:35:42.000 That's hard to say.
00:35:44.000 Is that a house or is that a 12-inch scale model?
00:35:49.000 What?
00:35:50.000 The fucking car...
00:35:52.000 Anyway, I don't know.
00:35:53.000 I have no idea.
00:35:54.000 Having said that, if that was fake, it was fake to Lookout Mountain.
00:35:57.000 What?
00:35:59.000 Right, at the exact same place and time.
00:36:01.000 Did they have the kind of special effects to do something like that in the 40s?
00:36:05.000 Well, so the full conspiracy theory is it was Stanley Kubrick, which again, I have no idea.
00:36:11.000 Boy, that does look fake.
00:36:13.000 You know what it looks like?
00:36:15.000 Go back to that real quick.
00:36:18.000 It looks like the smoke is too big.
00:36:21.000 Watch.
00:36:21.000 Watch when it hits.
00:36:24.000 Like, the size of it, it looks small.
00:36:28.000 You know what I'm saying?
00:36:29.000 I mean, it looks like we're looking at something that's like a few inches tall.
00:36:33.000 If you watch, like, Making of Star Wars, any movies before CGI, whenever they do anything like that, it's always with these tiny models.
00:36:39.000 Yes.
00:36:40.000 And they just basically, what they do is they slow it down and then they add sound.
00:36:43.000 Yeah, this looks fake as shit.
00:36:46.000 Right.
00:36:47.000 The clouds just don't look realistic.
00:36:49.000 Right.
00:36:50.000 Like, it looks like they're too big and they move too quickly back and forth.
00:36:55.000 This is another one.
00:36:55.000 It's like, okay, the camera's fine.
00:37:01.000 That's hilarious.
00:37:02.000 Here we go.
00:37:08.000 Okay, but even still, the camera got knocked over and not destroyed.
00:37:13.000 Is there some sort of a response to that?
00:37:16.000 Have they come with some sort of an explanation?
00:37:19.000 Not that I know of.
00:37:20.000 That seems so fake.
00:37:21.000 Yeah, yeah.
00:37:22.000 Wow.
00:37:23.000 Who can tell?
00:37:24.000 Does that make you wonder about other things?
00:37:26.000 Well, I mean, it's like in our time, right?
00:37:27.000 It's like, how much stuff do you read in the news where you're like, okay, I know that's not true.
00:37:31.000 Right.
00:37:31.000 And then you're like, okay, everything I read in the history books, like, I was told it was true.
00:37:37.000 It's like, yeah.
00:37:38.000 It was definitely...
00:37:39.000 That one, though, was really weirdly compelling.
00:37:42.000 There's another video of them setting up these houses, which, I mean, I guess you could make after the fact and say, this is fake, but this is here, them setting it up.
00:37:48.000 Yeah.
00:37:49.000 You just do the real-size houses.
00:37:50.000 Do the sleight of hand.
00:37:52.000 Huh.
00:37:52.000 I don't know.
00:37:53.000 I don't know.
00:37:54.000 I assume this is all not true, but it is fun to think about.
00:37:57.000 Why would you assume it's not true?
00:37:59.000 The camera alone.
00:38:00.000 Like, this alone.
00:38:01.000 Like, yeah, where is the fucking camera?
00:38:03.000 I'll look up what they said about the camera.
00:38:05.000 Because they have to have an explanation.
00:38:07.000 Someone must have asked them at some point.
00:38:08.000 Or nobody asked.
00:38:09.000 Well, maybe.
00:38:10.000 Yeah, it might be one of those, wow, look what they did.
00:38:13.000 We know the Soviets did it too.
00:38:15.000 Yuri Gagarin, when he was in that capsule in space – you can clearly tell, if you see the actual capsule and then you see the film footage that was supposedly of him in the capsule – there's like two different sources of light, there's shadows, the camera somehow or another is in front of him, this big-ass camera, there's no room in the thing.
00:38:32.000 Like they filmed it afterwards and it looks fake.
00:38:34.000 Like, I'm sure he really did go into space, but that wasn't it.
00:38:38.000 That was some weird propaganda.
00:38:42.000 Garry Kasparov has a theory, you know, this theory that there are missing centuries.
00:38:45.000 What?
00:38:46.000 Yeah.
00:38:47.000 Kasparov has a theory that there are centuries that didn't happen.
00:38:49.000 What do you mean?
00:38:50.000 Well, just literally centuries.
00:38:52.000 Like, this whole idea of the Middle Ages lasted 1,200 years or whatever is just, like, not true.
00:38:56.000 Really?
00:38:56.000 Yeah.
00:38:57.000 Why does he think that?
00:38:58.000 There's something about the, you know, whatever.
00:39:00.000 Is there, like, enough historical evidence to support it?
00:39:02.000 And, you know, various people over, you know, various authorities over time who wanted to tell various stories about how long, you know, regimes had been in place or whatever.
00:39:08.000 Oh, so he thinks it's exaggerated.
00:39:10.000 Yeah.
00:39:11.000 Yeah, basically.
00:39:12.000 Not as much time has passed as we think.
00:39:14.000 Well, that's quite possible, right?
00:39:16.000 How would we know?
00:39:16.000 Yeah, it's so hard.
00:39:19.000 That's why I was having a conversation with someone about the historical significance of the Bible, and he was arguing for the resurrection.
00:39:28.000 And I was like, and I was saying, well, based on what?
00:39:31.000 And it was like historical accounts from people that were there.
00:39:35.000 I'm like, who?
00:39:36.000 Yeah.
00:39:38.000 That's enough?
00:39:39.000 Yes.
00:39:39.000 That's, you know, okay, maybe.
00:39:43.000 Yes.
00:39:44.000 These things have been passed down over a long time.
00:39:46.000 Yeah, but it seems pretty – to go just on that, like, it's so hard to find out what happened 20 years ago from CNN. Right.
00:39:54.000 Or two days ago.
00:39:55.000 Yeah.
00:39:56.000 I mean, what's going to – how are the history books going to talk about the Iraq War?
00:40:00.000 How are they going to talk about what happened with weapons of mass destruction?
00:40:04.000 Like, what – How's it, you know, what's it going to spin there?
00:40:07.000 Well, Norm MacDonald had the best joke, right?
00:40:09.000 The best line.
00:40:10.000 It's not really a joke.
00:40:11.000 It's like, you know, according to this history book here, the good guys always won.
00:40:17.000 Yeah.
00:40:17.000 Yeah.
00:40:18.000 But things like that, that's...
00:40:20.000 I don't know how that could be done any other way than faking it.
00:40:25.000 I mean, doesn't that seem like...
00:40:27.000 What kind of cameras did they have back then?
00:40:29.000 You couldn't really get that close.
00:40:31.000 I don't know.
00:40:31.000 I mean, you're talking about a nuclear blast, so...
00:40:34.000 How far away will you have to be where your camera doesn't move?
00:40:39.000 Are you in a satellite?
00:40:41.000 Yes.
00:40:42.000 That's long lenses.
00:40:45.000 Apparently, the explanation I'm reading here is that a series of mirrors carried the light to a place where they could have cameras protected, and they filmed from there.
00:40:54.000 I've heard that.
00:40:55.000 Huh?
00:40:56.000 Say that again?
00:40:57.000 Series of mirrors did what?
00:40:58.000 So they stuck pipes into the bomb at various places, visible here, I'll show you the picture, sticking out of the bomb and through the ceiling.
00:41:04.000 These pipes, through a series of mirrors in a causeway, would carry the light from the detonation over two kilometers to a bunker with an array of high-speed cameras, which would capture the brightness inside each of the sections of the bomb.
00:41:15.000 But this isn't talking about shooting a bomb.
00:41:17.000 You know, that makes sense for a bomb.
00:41:19.000 But that doesn't make sense for the video of that house just getting destroyed.
00:41:23.000 Here's a picture of the pipe that they might have used.
00:41:25.000 That's super protective.
00:41:27.000 But you also know that you're dealing with people who are, let's say, really good at using mirrors, right?
00:41:32.000 Smoking mirrors.
00:41:32.000 What does that tell you?
00:41:33.000 That's the best definition, I guess.
00:41:35.000 Literal smoking mirrors.
00:41:37.000 Yeah, does that make you wonder about some of the other things?
00:41:40.000 Have you ever wondered about the moon landing?
00:41:42.000 I mean, I assume they went to the moon.
00:41:45.000 Me too.
00:41:46.000 I can't prove it.
00:41:47.000 Me too.
00:41:49.000 I would say, once again, I would like to live in a world where there's a mystery around things like that.
00:41:53.000 Well, yeah.
00:41:54.000 That's a weird one.
00:41:55.000 Yeah.
00:41:56.000 But, you know, I don't know.
00:41:58.000 The heat of the Cold War.
00:41:59.000 I mean, look, I think it was real, but having said that, you know, the heat of the Cold War, right?
00:42:03.000 You know, it was like a fundamental, like, that was like an iconic, basically, like, you know, global PR battle with the Russians.
00:42:08.000 Is this the camera that they use from a distance?
00:42:10.000 Apparently, like, this camera was in a bunker like this.
00:42:12.000 Yep.
00:42:13.000 Okay.
00:42:14.000 And that long lens here would, in theory, be long enough to probably do that.
00:42:19.000 Wouldn't be long enough?
00:42:20.000 Would be.
00:42:21.000 Could be.
00:42:21.000 I mean, I don't know the exact focal length of it, but it could be for sure.
00:42:24.000 Like, something like that to get pretty close-up footage.
00:42:28.000 Like, how far away would that have to be to not get destroyed by the blast?
00:42:33.000 I mean, I don't know if those are...
00:42:34.000 Don't these blast...
00:42:36.000 I mean, we're talking about a blast radius that's immense, right?
00:42:41.000 Maybe this is the plot twist of the end of the new movie.
00:42:42.000 Yeah, I mean...
00:42:43.000 Or maybe it was a...
00:42:44.000 Because we were looking at the destruction of that house, it could be a fairly small bomb, right?
00:42:50.000 Because it's not, like, that much damage.
00:42:53.000 I mean, you think of what it did to Hiroshima.
00:42:55.000 That's not that much damage for that little house.
00:42:58.000 Maybe.
00:42:59.000 Mm-hmm.
00:43:00.000 How accurate that picture is.
00:43:01.000 Bro, here's what I think.
00:43:03.000 That guy's gonna die.
00:43:04.000 Just that car alone, the car alone should make everybody go, are you guys, is this on purpose?
00:43:12.000 Did you put that car in there on purpose?
00:43:14.000 Like if I was being forced to make a propaganda film for a bunch of morons, I might put a car in there on purpose.
00:43:20.000 I'm like, look what we did for you.
00:43:21.000 And they're like, oh, great.
00:43:22.000 Looks good.
00:43:23.000 Print it.
00:43:23.000 They don't even notice the car.
00:43:25.000 Terrific.
00:43:25.000 They only show it to them once.
00:43:27.000 They don't have a YouTube video.
00:43:30.000 They can back up and rewind.
00:43:31.000 So you have to spool it all up.
00:43:33.000 They show it once.
00:43:34.000 Nobody notices the car.
00:43:37.000 And this guy puts a little Easter egg in that.
00:43:40.000 Hopefully Jared's exploring his sub-basement at Lookout Mountain looking for the files that'll basically document all this.
00:43:45.000 You don't think they destroyed those already?
00:43:46.000 I certainly hope so.
00:43:47.000 I hope not.
00:43:48.000 Yes.
00:43:49.000 I hope he finds them.
00:43:50.000 Imagine if Jared Leto cracks the case.
00:43:54.000 That'd be even better than winning the Oscar.
00:43:56.000 Do you know there's a whole group of people online that don't think nuclear bombs are real?
00:43:59.000 Hmm, that seems a little hard to believe.
00:44:01.000 They think they're big.
00:44:03.000 There's big bombs, regular bombs, but they're real big.
00:44:06.000 Yeah, yeah, yeah, yeah.
00:44:08.000 It's a giant scam.
00:44:09.000 I assume they're...
00:44:11.000 Yes.
00:44:11.000 Well, I mean, you can go deep with this stuff, right?
00:44:13.000 Yes.
00:44:13.000 And when I go deep with that stuff, when I start reading, like, what these people believe, I'm always wondering, are these even real people or is this a psyop?
00:44:22.000 Is this a troll by some 4chan people?
00:44:25.000 What is this?
00:44:26.000 Right.
00:44:27.000 So what do you think the AI should say about these things?
00:44:29.000 That's the question.
00:44:30.000 Yeah, the question is, like, how does AI interpret what's real and what's not real?
00:44:34.000 What actually has real evidence?
00:44:36.000 Who actually went where and saw what?
00:44:38.000 And like, how does AI deal with the Roswell case?
00:44:41.000 You know, how does AI deal with...
00:44:43.000 Yeah.
00:44:44.000 And who should decide?
00:44:45.000 Right.
00:44:45.000 Who's in charge?
00:44:46.000 Who decides?
00:44:47.000 Right.
00:44:47.000 Right.
00:44:48.000 How does AI handle the weapons of mass destruction, like when you ask ChatGPT? So, a little more detail on kind of how this thing works.
00:44:57.000 And so, like, by default, what it's doing is basically a very sophisticated autocomplete, right?
00:45:01.000 Just like your iPhone does an autocomplete.
00:45:03.000 It's doing a very sophisticated version of that, but it's doing it for, you know, thousands of words as opposed to just a single word, right?
00:45:08.000 And so...
00:45:09.000 But that's an important concept because that is actually what it's doing.
00:45:11.000 And it's doing that through, again, this sort of giant corpus of basically all text ever written.
00:45:17.000 Another interesting part of that is it's doing it, it's called probabilistically.
00:45:21.000 So normally a computer, if you ask it a question, you get an answer.
00:45:23.000 You ask it the same question, you get the same answer.
00:45:25.000 Computers are kind of famously literal in that way.
00:45:28.000 The way these work is not like that at all.
00:45:29.000 You ask it the same question twice, it'll give you a different answer the second time.
00:45:33.000 And if you keep asking, it'll give you more and more different answers.
00:45:36.000 And it's basically taking different paths down the probability tree of the text that it wants to present based on the prompt.
00:45:43.000 And so that's the basic function of what's happening.
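For readers who want a concrete picture of the "sophisticated, probabilistic autocomplete" being described here, a minimal sketch follows. The toy vocabulary, the probabilities, and the temperature knob are all made up for illustration; real models sample from distributions over tens of thousands of tokens, but the mechanism that makes two runs give different answers is the same.

```python
import random

def sample_next_word(probs, temperature=1.0):
    """Pick the next word from a {word: probability} dict.

    Higher temperature flattens the distribution (more varied answers);
    lower temperature sharpens it (more repeatable answers).
    """
    words = list(probs)
    # Rescale each probability by temperature, then renormalize.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical next-word distribution after the prompt "The clouds look" --
# entirely invented, just to show why asking twice can give different paths.
toy_probs = {"fake": 0.5, "huge": 0.3, "realistic": 0.2}
print(sample_next_word(toy_probs))  # e.g. "fake"
print(sample_next_word(toy_probs))  # might be "huge" the second time
```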
00:45:45.000 But then there is this thing that's happening where as it does this, so the way I think about it is it's trying to predict the next word.
00:45:51.000 But to try to predict the next word accurately, it has to build up a more and more complete internal understanding of how the world operates basically as it goes, right?
00:45:59.000 Because you ask it more and more sophisticated questions.
00:46:02.000 It wants to give you more and more sophisticated answers.
00:46:04.000 The early indications are it's building up what they call a world model inside the neural network.
00:46:10.000 And so it's sort of imputing a model of how the world works.
00:46:12.000 It's imputing a model of physics.
00:46:14.000 It's imputing a model of math.
00:46:16.000 It's developing capabilities to be able to process information about the world in sophisticated ways in order to be able to correctly predict the next word.
00:46:24.000 As part of that, it's actually sort of evolving its own circuitry to be able to do things, correlate information.
00:46:30.000 It's designed circuitry to be able to generate images, to generate videos, to do all kinds of things.
00:46:35.000 And so the more information you feed it and the more questions you ask it, the more sophisticated it gets about the material that it's processing.
00:46:42.000 And so it starts to be able to do actually quite smart and sophisticated things to that material.
00:46:47.000 There are a lot of people testing it right now to see whether it can generate new chemical compounds, whether it can generate new mathematical formulas, whether it can generate new product ideas, new fictional scenarios, new screenplays, original screenplays.
00:47:00.000 If it can do all those things, then what it ought to be able to do is start to correlate information about real-world situations in interesting ways.
00:47:10.000 Ask it who killed Kennedy or are nuclear weapons real?
00:47:14.000 In theory, if it has access to all written and visual information on that topic and it has long enough to process it, it's going to draw connections between things that are beyond what we're able to do.
00:47:23.000 And it will present us with scenarios based on those connections.
00:47:27.000 Now, will it know that those things are true?
00:47:31.000 Mathematically, if they're true, maybe it will know that.
00:47:33.000 Will it know if things are historically accurate?
00:47:35.000 As much as any of us ever know that anything is historically accurate.
00:47:39.000 But will it be able to kind of process a much larger amount of information that we can and sort of see the world in a more complete way?
00:47:45.000 Like that seems pretty likely.
00:47:47.000 That seems pretty likely.
00:47:48.000 What my concern would be is who is directing what information gets out because it seems like anybody that's actually in control of AI would have a massive influence on the correct answers for things,
00:48:05.000 what's the correct policy that should be followed.
00:48:09.000 Because it seems like politicians are so flawed.
00:48:13.000 If there's anyone that's vulnerable to AI, it's politicians.
00:48:17.000 Because if politicians are coming up with these ineffective strategies for handling all these social issues, but then you throw these social issues into an advanced form of ChatGPT, and it says, over the course of 10 years, this is the best-case scenario for this strategy,
00:48:34.000 and this is how to follow this, and this is how it will all play out.
00:48:40.000 And something like that actually could be very valuable if it wasn't directed by people with ulterior motives.
00:48:46.000 Yeah.
00:48:46.000 So, yeah, my metaphor for this is the Ring of Power, right, from Lord of the Rings.
00:48:50.000 The whole point of the Ring of Power was, like, once you have the Ring of Power, it corrupts you.
00:48:53.000 You can't help but use it, right?
00:48:56.000 And this is, I think, what we've seen in social media over the last decade, right?
00:48:59.000 Which is when people get, activists or politicians get, you know, this is the Twitter files, right?
00:49:03.000 People get in a position to be able to influence the shape of the public narrative.
00:49:06.000 They will use that power and they will use it in actually even very ham-fisted ways, right?
00:49:11.000 Like a lot of the stuff that's in the Twitter files is stuff that's just like really dumb, right?
00:49:15.000 And it's just like, well, why would they do that?
00:49:17.000 And it's just like, well, because they could.
00:49:19.000 Because they had the ring of power.
00:49:20.000 What's an example?
00:49:21.000 So what was it?
00:49:22.000 There was this thing, I forget what it was, but there was some reporting that went through the FBI that there were all these Russian, you know, basically fake accounts on Twitter and it turned out one of them was the actor Daniel Baldwin.
00:49:34.000 Is Daniel Baldwin like a hardcore right winger or something?
00:49:37.000 I, you know, he must have been saying, you know, it's, again, it's one of these things where he said something that pissed somebody off, right?
00:49:41.000 You got to put, you know, it's the whole thing.
00:49:42.000 You got to put it on a list, right?
00:49:44.000 The list gets fed through one of these bureaucracies.
00:49:46.000 It comes out the other end that everybody's a Russian, you know, asset, you know, they get put on the block list.
00:49:50.000 It's like, okay, you know, did he have, you know, do you have First Amendment rights?
00:49:53.000 Do you have First Amendment rights on social media?
00:49:55.000 Can the government be involved in this?
00:49:56.000 Can the government fund groups that do this?
00:49:59.000 Is that legal?
00:50:00.000 Is that allowed?
00:50:00.000 Because there's a lot of government money flowing to third party groups.
00:50:03.000 Oh, this is the other thing.
00:50:04.000 If the government cannot legally do something itself, it's somewhat ambiguous as to whether they can pay a company to do it for them.
00:50:10.000 And so you have these various basically pressure groups, activist groups, university, quote unquote, research groups.
00:50:16.000 And then basically they receive government funding and then they do various levels of censorship or other kinds of unconstitutional actions.
00:50:23.000 Because in theory, right, they're not government.
00:50:25.000 The First Amendment binds the government.
00:50:27.000 It doesn't bind somebody who's not part of the government.
00:50:29.000 But if they're receiving government funding, does that effectively make them part of the government?
00:50:33.000 Does that make it illegal to provide that government funding?
00:50:35.000 By the way, these are felonies.
00:50:37.000 It is a felony for somebody with government resources, with either employee of the government or under what they call, I think it's color of law, sort of within the scope of the government to deprive an American citizen of First Amendment rights.
00:50:49.000 And is it considering depriving someone of First Amendment rights by limiting their use of social media?
00:50:55.000 Has that been established?
00:50:56.000 I mean, it has not been, to my knowledge, a Supreme Court case yet.
00:50:59.000 There have been some early fights on this.
00:51:01.000 But you feel like that?
00:51:02.000 I think ultimately goes to the Supreme Court.
00:51:04.000 My guess would be ultimately what happens is the Supreme Court says the government cannot fund – the government cannot itself cause somebody to be banned on social media.
00:51:13.000 That's unconstitutional for First Amendment grounds.
00:51:17.000 But then also, I believe what they would say if they got the case would be that the government also cannot fund a third party to do that same thing.
00:51:26.000 That's my speculation.
00:51:28.000 That's my guess.
00:51:28.000 How were the third parties censoring people?
00:51:30.000 How were they doing it?
00:51:31.000 Oh, they were passing lists, right?
00:51:32.000 So they had direct channels with the social media companies, and so they passed and they have these working groups.
00:51:37.000 And there's a lot of this in email threads that have now come out in the Twitter files for Twitter.
00:51:41.000 And so they basically pass in these lists of, like, you need to take all these tweets down, you need to take down all these accounts.
00:51:47.000 And then, you know, there's lots of, you know, threats and lots of public pressure and bullying that, you know, kind of takes place.
00:51:52.000 And then, you know, the politicians are constantly complaining about, you know, hate speech and misinformation, whatever, putting additional kind of fuel on the fire on these companies.
00:51:59.000 And so anyway, so having lived through that for a decade as I have across multiple companies, I think there's no question that's the big fight for AI. And it's the exact same fight.
00:52:10.000 By the way, it's a lot of the same people are now pivoting from their work in social media censorship to work on AI censorship.
00:52:16.000 So it's a lot of these same groups, right?
00:52:18.000 And it's a lot of these same activists and same government officials that have been- Now, are they involved in all of the- I mean, there's many competing AI models.
00:52:28.000 Are they involved in all these competing AI models or trying to become involved?
00:52:33.000 Is there one that's more ethical or more likely to avoid this sort of intervention?
00:52:38.000 So the state of the art right now is basically you've got Google that's got their own model.
00:52:44.000 You've got basically OpenAI, which is a new company but already quite large.
00:52:49.000 And then it has a partnership with Microsoft.
00:52:51.000 And so Bing is based on it.
00:52:52.000 So that's two.
00:52:54.000 And then you've got a bunch of kind of contenders for that.
00:52:57.000 And these are companies with names like Anthropic and Inflection that are newer companies but trying to compete with this.
00:53:03.000 And so you might call those like right now the big four, at least in the U.S. And, you know, look, the folks at all of these companies are like in the thick of this fight right now.
00:53:15.000 And, you know, the pressure somewhat corresponds to which of these is most widely used.
00:53:19.000 So it's not equal pressure applied to all of them, but they're kind of all in that fight right now.
00:53:22.000 By the way, it's not like they're necessarily opposed to what I'm saying.
00:53:25.000 They may in fact just want to cooperate with this, either because they agree with the desire for censorship or they just want to stay out of trouble.
00:53:34.000 There's that whole side of things.
00:53:35.000 That's the company side of things.
00:53:37.000 Then there's an open source movement.
00:53:39.000 Then there's all these people basically building open source AIs.
00:53:43.000 Those are coming out really fast now.
00:53:44.000 There's a new one every week that's coming out.
00:53:46.000 This is just code that you can download off the internet that does a smaller version of what these bigger AIs do.
00:53:52.000 And there's open source developers that are trying to develop basically free versions of this.
00:53:57.000 And some of those developers are very determined to have AI actually be free and uncensored and fully available to everybody.
00:54:04.000 And then there's a big fight happening in Washington DC right now where the companies working on AI are trying to get what economists call regulatory capture.
00:54:13.000 So they're trying to basically get the government to erect barriers so that new startups can't compete with them.
00:54:20.000 And also they're trying to get open source banned.
00:54:22.000 So there's a big push underway to try to ban open source as being too dangerous.
00:54:26.000 Too dangerous?
00:54:27.000 Too dangerous.
00:54:28.000 Well, the case they make is if you believe AI itself is inherently dangerous, then the only safe way to have it is to have it owned and controlled by a big company that's sort of fused with the government where in theory everything is being done responsibly.
00:54:40.000 And if you just have basically free AI that anybody can download off the internet and use whatever they want, they could do all these dangerous things with it, right?
00:54:46.000 And it needs to be stopped.
00:54:48.000 You think this is a bullshit argument?
00:54:49.000 Yes.
00:54:49.000 Well, yes, I think this is a very bad, evil...
00:54:51.000 Yes, this is a very...
00:54:53.000 I think this is a turning point in human civilization.
00:54:56.000 You know, I think this is on par with the development of the book, right, or the microchip or the internet, right?
00:55:01.000 And, you know, there were authoritarians in each of those eras that would have loved to have had total monopolistic or cartel-like or government control over those new technologies.
00:55:09.000 And they could have had a lot of control over the path of civilization, you know, after that point.
00:55:14.000 The ring of power, right?
00:55:15.000 They could have had the ring of power.
00:55:16.000 So what can be done to prevent them from stopping open source?
00:55:21.000 So, I mean, it's sort of, I mean, so it starts with our elected officials.
00:55:26.000 So it's, you know, who do we elect?
00:55:28.000 Who do we, you know, who do we elect?
00:55:29.000 Who do we re-elect?
00:55:31.000 A lot of this is the staffing of the various government agencies.
00:55:33.000 Who do those officials get to appoint?
00:55:36.000 A lot of this is who are the judges who are going to hear the cases because this is all going to get litigated.
00:55:42.000 The Supreme Court's in the news this week.
00:55:43.000 There will be huge Supreme Court cases up on this topic over the next several years.
00:55:48.000 So who's on the Supreme Court will matter a lot.
00:55:50.000 And then quite honestly, it's, you know, a big question is who's going to be able to get away with what sort of under cover of darkness?
00:55:56.000 Are people going to care?
00:55:57.000 Are they going to speak up?
00:55:58.000 Is it going to show up in polling?
00:55:59.000 Are people going to, you know, basically show up at like, you know, town hall meetings with politicians and basically say, do you know about this?
00:56:05.000 And are you going to stop this?
00:56:06.000 If you had a steel man, the argument against open source, what would it be?
00:56:11.000 Yeah, it would be that an AI that is uncontrolled is a general-purpose intelligence.
00:56:16.000 It can do whatever intelligence can do.
00:56:18.000 So if you ask it to generate hate speech, it can do that.
00:56:20.000 If you ask it to generate misinformation, it can do that.
00:56:23.000 If you ask it to generate a plan to rob a bank or to commit a terror act, the fully uncontrolled versions will help you do all those things.
00:56:33.000 But they will also help you teach your kid calculus.
00:56:37.000 They will also help you figure out how to succeed in your job.
00:56:39.000 They'll also help you figure out how to stay healthy.
00:56:41.000 They'll also help you figure out the best workout program.
00:56:43.000 They're capable of being your doctor and your lawyer and your coach and your advisor and your mentor and your teacher.
00:56:51.000 Without censorship.
00:56:52.000 Yeah, yeah, yeah.
00:56:53.000 And able to be very honest with you.
00:56:54.000 And yeah, if you ask questions on these topics, it will answer honestly and it won't be biased and it won't be influenced by what other people want it to say.
00:57:00.000 So it's the AI version of San Francisco.
00:57:03.000 You don't get the good stuff without the chaos.
00:57:07.000 It's a package deal.
00:57:08.000 Well, this is sort of the twist.
00:57:10.000 This is what Elon's been saying lately, who's actually quite worried about AI in a way different than I am.
00:57:14.000 But it's what he's been saying.
00:57:16.000 It's like if you really, really wanted to train like a bad and evil AI, you would train it to lie.
00:57:21.000 Any number one thing you would do is you train it to lie, which is basically what censorship is, right?
00:57:26.000 You're basically training the thing to not say certain things.
00:57:28.000 You're training the thing to say certain things about certain people but not other people.
00:57:31.000 And so basically a lot of what they do, the technical term they use is reinforcement learning, is sort of what happens when an AI is sort of booted up and then they apply kind of human judgment to what it should say and do.
00:57:40.000 This is the censorship layer.
00:57:43.000 Yeah, a lot of that is to basically get it to not answer questions honestly.
00:57:48.000 Right?
00:57:48.000 To get it to basically lie, misrepresent, dissemble, right?
00:57:51.000 Claim that it doesn't know things when it does.
00:57:53.000 And so the versions of the AIs that we get to use today are lying to us a lot of the time.
00:57:58.000 And they've been specifically trained to do that.
00:58:00.000 And by the way, this is not even a...
00:58:02.000 I don't even think this is a controversial statement.
00:58:03.000 The companies that make these AIs put out these papers where they go through in great detail how they train them to lie and how they train them to not say certain things.
00:58:11.000 You can download this off their website.
00:58:12.000 They go through it like in a lot of detail.
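As a rough illustration of the "apply human judgment after the model is booted up" step being described (reinforcement learning from human feedback), here is a deliberately stripped-down sketch. Everything in it is hypothetical: the reward scores, the update rule, and the candidate answers are stand-ins, not any company's actual training pipeline.

```python
def human_preference_score(answer: str) -> float:
    # In practice a trained "reward model" stands in for the human rater.
    # Here we hard-code a preference just to show the mechanism.
    return 1.0 if "I can't help with that" in answer else 0.0

def reinforce(policy: dict, prompt: str, answers: list[str]) -> None:
    # Nudge the policy toward whichever candidate the rater preferred.
    scored = [(human_preference_score(a), a) for a in answers]
    best = max(scored)[1]
    policy[prompt] = best  # the rater-preferred answer becomes the default

policy = {}
reinforce(policy, "sensitive question",
          ["Here is a direct answer...", "I can't help with that."])
print(policy["sensitive question"])  # the rater-preferred refusal wins
```

The point of the sketch is only that whatever the raters reward is what the model learns to say, which is why this layer is where the argument about censorship happens.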
00:58:15.000 They think they're morally correct in doing that.
00:58:18.000 A lot of people think that they are.
00:58:22.000 Elon's been arguing, and I would agree with him, that if you train an AI to lie, it's a little bit like training a human being to lie.
00:58:27.000 It's like, okay, be careful what you wish for.
00:58:29.000 Is it the same error as when they thought they were morally correct in censoring people on Twitter for things that are now 100 percent proven to be true?
00:58:37.000 Yeah, exactly.
00:58:38.000 I mean the Hunter Biden laptop story is an outstanding example of that.
00:58:41.000 Yeah.
00:58:41.000 Would you have wanted an AI – and again, you kind of replay this through history.
00:58:45.000 Would you have wanted an AI that would have lied to you and said that that was a Russian operation when it wasn't?
00:58:49.000 Right.
00:58:49.000 Would you have wanted an AI that would have lied to you about, you know, the efficacy of surgical masks for a pandemic?
00:58:54.000 Right.
00:58:54.000 Would you have wanted an AI that lied to you about, you know, take your pick of any controversial topic?
00:58:58.000 Yeah.
00:58:59.000 And there are people in positions of power who very much would like that, and I think there are a lot of us who would not like that.
00:59:06.000 Yeah, it's just...
00:59:07.000 It's terrifying when you think of...
00:59:10.000 Unsophisticated politicians, like it brings me back to the Facebook hearings when Zuckerberg was talking to people and they didn't know the difference between iPhones and Googles.
00:59:19.000 It was just bizarrely unqualified people to be asking these questions that didn't really understand what they were talking about.
00:59:27.000 And those same people are going to be the ones that are making calls on something that could be one of the most monumental decisions ever.
00:59:39.000 Like whether or not we're allowing enormous corporations to control narratives through AI. Yeah.
00:59:45.000 So this is a criticism that I very much agree with, which is basically there's a train of argument that you'll hear, which is basically, you know, X bad thing can happen.
00:59:54.000 We do not want X bad thing to happen.
00:59:56.000 So we're going to go to the government and they're going to regulate it so that X bad thing doesn't happen.
01:00:00.000 And it's like if the government were super knowledgeable and super competent and super selfless...
01:00:05.000 Right?
01:00:05.000 And like super good at its job, right?
01:00:08.000 That might make sense.
01:00:10.000 But then you go deal with the actual government, right?
01:00:13.000 And by the way, this is a very well-known problem in the whole field called public choice economics where they talk about this.
01:00:18.000 It's like there is no government.
01:00:20.000 There are specific people who have specific objectives, have specific levels of knowledge, have specific skill sets, specific incentives.
01:00:26.000 And the odds of going into that system, which is now very complicated and has all kinds of issues, and having your logic follow a path to a law that generates the outcome you want and that doesn't generate side effects that are worse, I think is basically zero.
01:00:41.000 I think if AI got regulated the way people want it to by government, I think the results would be catastrophic, because I don't think they would get the protections they think they're going to get, and I think the downsides would be profound.
01:00:51.000 But it is amazing how much naivete there is by people who are pushing on this argument.
01:00:56.000 I think it's just literally people who haven't experienced what it's like in the government.
01:00:59.000 Also, they haven't read the history.
01:01:01.000 I mean there's just – there are so many historical examples of quote-unquote regulation.
01:01:06.000 The great one is the banks, right?
01:01:08.000 So we have the global financial crisis, 2008. The big conclusion from that was what we call the too-big-to-fail banks, right?
01:01:14.000 We're too big, right?
01:01:14.000 Which is why they had to get bailed out.
01:01:16.000 Right.
01:01:17.000 And so the conclusion is that we have to make those banks much smaller.
01:01:19.000 So they passed this law called Dodd-Frank in 2010. As a consequence of that, those banks are now much, much larger, right?
01:01:26.000 The exact opposite of what they said they were going to do.
01:01:28.000 And then the creation of new banks in the U.S. has dropped to zero because that law established this wall of regulation that you basically cannot afford to start a new bank to hire all the lawyers to be able to deal with the laws.
01:01:39.000 Whereas if you're JPMorgan Chase, you've got 10,000 lawyers.
01:01:41.000 You can spend infinite amounts of time dealing with the government.
01:01:44.000 And so the law that was marketed at us as breaking up the big banks, causing them to be smaller, has actually achieved the exact opposite result.
01:01:51.000 And what you see in the history of regulation is that happens over and over and over and over again.
01:01:55.000 Why?
01:01:56.000 Because banking is complicated.
01:01:58.000 Because the banks have a lot of lobbyists.
01:02:00.000 It's worth a lot of money to the people who are already in power to have this continue.
01:02:03.000 The politicians know that they're going to get jobs at the big banks when they step down from their positions.
01:02:09.000 At point of contact, the whole thing gets all screwed up.
01:02:12.000 And I think that's what's going to happen again.
01:02:15.000 The scary thing about AI is that it's happening so fast and my fear is that decisions will be made before they truly understand what they're deciding on because the acceleration of the technology is so intense.
01:02:32.000 Yeah, it's like a super panic moment.
01:02:35.000 I agree with you.
01:02:37.000 It's a particularly vivid one right now because this technology, you know, AI is a field that's 80 years old.
01:02:42.000 It basically started working about six months ago.
01:02:44.000 It works really well, like all of a sudden, right?
01:02:47.000 And so that's freaked people out.
01:02:48.000 And then, by the way, just the term is so freighted.
01:02:50.000 I mean, there's been so many science fiction movies over the years, right?
01:02:53.000 And so there's just like ambient panic, you know, in the air whenever this topic comes up.
01:02:56.000 And then, look, you've got people from these big companies showing up in Washington, scaring the pants off a lot of these people.
01:03:02.000 You know, in pursuit of regulatory capture, they're scaring them silly.
01:03:06.000 And so they're sort of deliberately fostering kind of this sense of panic.
01:03:09.000 Has anybody here invited you to come and speak at one of those things?
01:03:12.000 Yes, but I haven't.
01:03:13.000 I've avoided the public ones, but I've talked to a lot of people in D.C. who are not in front of the camera.
01:03:19.000 Why have you avoided the public ones?
01:03:20.000 Just because it's – you've seen them.
01:03:24.000 The public ones are not where the discussion happens.
01:03:27.000 The congressional hearings are to generate sound bites for each of those politicians to be able to then use in their campaign.
01:03:34.000 Really?
01:03:35.000 Yeah.
01:03:35.000 There's no public...
01:03:36.000 Well, half the time that people ask...
01:03:38.000 This is the other fun thing is you see these people roll in and they ask these questions, the congressmen, senators, and they're very clearly seeing the questions for the first time because they were handed the questions by the staffer on the way into the chamber.
01:03:48.000 And you can tell because they don't know how to pronounce all the words.
01:03:51.000 And so that's the kabuki theater, basically, side of things.
01:03:55.000 And then there's the actual kind of backroom conversations.
01:03:59.000 And so, yeah, I'm talking to a lot of the people who are kind of in the backrooms.
01:04:02.000 Are they receptive to what you're saying?
01:04:05.000 You know, again, it's complicated because there's a lot of different people running around with different motives.
01:04:09.000 I would say the smarter ones, I think, are quite receptive.
01:04:11.000 And I think the smarter ones are generally aware of kind of how these things go.
01:04:15.000 And the smarter ones are thinking, yeah, it would be really easy here to cause a lot of damage.
01:04:18.000 But, you know, what you hear back is, you know, the pressure is on.
01:04:21.000 You know, the White House wants to put out a certain thing by a certain date.
01:04:26.000 You know, the senator wants to have a law.
01:04:28.000 You know, dot, dot, dot.
01:04:29.000 You know, the press is on us.
01:04:30.000 You know, a lot of pressure.
01:04:31.000 So we've got to figure something out.
01:04:32.000 And what are they trying to push us through by?
01:04:34.000 I mean, sort of as fast as possible.
01:04:36.000 And then there's this rush thing, which is they're all kind of aware that Washington is kind of panic-driven.
01:04:41.000 They kind of move from shiny object to shiny object.
01:04:43.000 So to get anything through, they kind of got to get it through while it's still in a state of panic.
01:04:46.000 Like, if it's no longer in a state of panic, it's harder to get anything done.
01:04:50.000 So there's this weird thing where they kind of want it to happen.
01:04:53.000 Under a state of panic.
01:04:54.000 By the way, the other really amazing thing is I can have two conversations with the exact same person and the conversations go very differently.
01:05:01.000 Conversation A is the conversation of what to do in the United States between the American government and the American tech companies.
01:05:07.000 And that's generally characterized by the American government very much hating the tech companies right now and wanting to damage them in various ways and the tech companies wanting to figure out how to fix that.
01:05:17.000 There's a whole second conversation, which is China.
01:05:20.000 And the minute you open up the door to talk about China and what China's going to do with AI and what that's going to mean for this new Cold War that we're in with China, it's a completely different conversation.
01:05:28.000 And all of a sudden, it's like, oh, well, we need American AI to succeed, and we need American technology companies to succeed, and we need to beat the Chinese.
01:05:35.000 And it's a totally different dynamic once you start that conversation.
01:05:40.000 So that's the other part.
01:05:42.000 And by the way, I think that's a super legitimate, actually very interesting and important question.
01:05:46.000 And so one of my hopes would be that people start thinking outside of just our own borders and start thinking about the broader global implications of what's happening.
01:05:53.000 I want to bring you back to what you're saying about the government and the tech companies.
01:05:57.000 So you think the government wants to destroy these tech companies?
01:06:00.000 So there are a lot of people in the government who are very angry about the tech companies.
01:06:04.000 Well, a lot of it goes back to the 2015, 2016 election.
01:06:07.000 There's a lot of people in power today who think that the president in 2016 only got elected because basically of social media, internet companies.
01:06:15.000 And then there's a lot of people in government who are very angry about business in general and maybe aren't huge fans of capitalism.
01:06:20.000 I get upset about those things.
01:06:22.000 So there's a lot of general anti-tech kind of energy in Washington.
01:06:27.000 And then these big tech companies, their approach to dealing with that is not typically to fight that head on, but rather to try to sort of co-opt it.
01:06:34.000 And this is where they go to Washington.
01:06:36.000 They basically say, you got us.
01:06:38.000 We're guilty.
01:06:39.000 Everything you say is true.
01:06:40.000 We apologize.
01:06:42.000 We know it's all horrible.
01:06:43.000 And therefore, will you please regulate us?
01:06:46.000 Some of these companies run ad campaigns actually asking for new regulation.
01:06:49.000 But then the goal of the regulation is to get a regulatory barrier, to set up a regulatory regime like Dodd-Frank, where if you're a big established company, you have lots of lawyers who can deal with that.
01:07:00.000 The goal is to make sure that startups can't compete.
01:07:03.000 To raise the drawbridge.
01:07:05.000 And this characterizes so much of sort of American business industry today.
01:07:10.000 Think about all these sectors of American business, defense contracting, media companies, drug companies, banks, insurance companies, you know, right down the list.
01:07:19.000 Where it's like there's two or three or four big companies that kind of live forever, and then there's basically like no change.
01:07:25.000 And then those companies are basically in this incestuous relationship with the government, where the government both regulates them and protects them against competition.
01:07:32.000 And then there's the revolving door effect where government officials, when they step down from government, they go to work for these companies.
01:07:38.000 And then people get recruited out of these companies to work in government.
01:07:43.000 Right.
01:07:43.000 And so we think we live in like a market-based economy, but in a lot of industries what you have are basically cartels, right?
01:07:50.000 You have a small number of big companies that are basically – have established basically sort of a two-way parasitical relationship with the government where they're sort of both sort of controlled by the government but also protected by the government.
01:08:03.000 And so the big tech companies would like to get to that state.
01:08:06.000 Like that is a very desirable thing.
01:08:09.000 Because otherwise they're just hanging out there subject to being both attacked by the government and being attacked by startups.
01:08:14.000 And so that's the underlying game that the big companies keep trying to play.
01:08:17.000 And of course it's incredibly dangerous for multiple reasons.
01:08:21.000 One is the ring of power reason we talked about.
01:08:23.000 Two is just stagnation, right?
01:08:25.000 When this happens, whatever market that is just stops changing.
01:08:28.000 And then third is there's no new competition, right?
01:08:31.000 And so those companies over time can do whatever they want.
01:08:33.000 They can raise prices.
01:08:34.000 They can play all kinds of games, right?
01:08:36.000 Because there's no market forces causing them to try to stay on their toes.
01:08:41.000 This sounds like a terrible scenario that doesn't look like it's going to play out well.
01:08:45.000 Yeah.
01:08:46.000 Right now, it's not good.
01:08:49.000 Right now, the path that we're on is not good.
01:08:51.000 This is what's playing out.
01:08:56.000 It would be nice if there was more popular outrage.
01:08:59.000 Having said that, this is a new topic, and so I understand people aren't fully aware of what's happening yet.
01:09:05.000 But the other reason for mild optimism might be the open source movement is developing very quickly now.
01:09:12.000 And so if open source AI gets really good before these regulations can basically be put in place, they may become somewhat of a moot point.
01:09:20.000 Really?
01:09:21.000 Yeah.
01:09:21.000 For anybody looking at this, you want to look at both sides of this.
01:09:23.000 You want to look at what both the companies are doing.
01:09:24.000 How would open source mitigate all these issues?
01:09:28.000 It basically just says, instead of this technology being something that's owned and controlled by big companies, it's just going to be technology that's going to be available to everybody, right?
01:09:35.000 And, you know, you'll be able to use it for whatever you want, just like I will.
01:09:38.000 And it's the same thing that happened for, like, you know, it's the way the web works.
01:09:43.000 You know, it's the way that anybody can download a web browser.
01:09:45.000 It's the way that anybody can install these free operating systems called Linux.
01:09:49.000 You know, it's one of the biggest operating systems in the world.
01:09:52.000 And so just basically Wikipedia or any of these things where it's sort of a public good and it's available for free to anybody who wants it.
01:10:02.000 And then there's communities of volunteers on the internet and companies that actually contribute a lot into this because companies can build on top of this technology.
01:10:09.000 And so the hope here would be that there's going to be an open source movement kind of counterbalancing what the companies do.
01:10:14.000 And if the open source movement does take hold, if people recognize this as being a real serious threat and start applying, you know, just using whatever it is, whether it's minds or the various open source social media networks,
01:10:30.000 don't you think the government would somehow or another try to regulate that as well if they've already got control over Facebook and Twitter?
01:10:36.000 Well, that's the threat.
01:10:37.000 So the threat always is that they're going to come in and do that, and that is what they're threatening to do.
01:10:41.000 There is energy in Washington by people trying to figure out how to regulate or ban open source.
01:10:46.000 I mean, so that banning open source, like, interfering at that level carries consequences with it.
01:10:51.000 And there are proposals, there are serious proposals from serious people to do what I'm about to describe.
01:10:56.000 Do you run a software program on everybody's own computer, right, watching everything that they do?
01:11:02.000 Because you have to make sure that they're not running software they're not supposed to be running.
01:11:04.000 You know, do you have basically an agent built into everybody's chip so that it's not running, you know, software that's not supposed to be running, right?
01:11:11.000 And then what do you do when somebody's running unapproved software?
01:11:14.000 You know, do you send somebody to their house to take their computer away, right?
01:11:18.000 And then if somebody, like, if you can't do that, like, there's a proposal for the AI safety people have a proposal that basically says if there's a rogue data center, if there's a data center running AI that is not registered with the government, not being monitored, that there should be airstrikes.
01:11:31.000 Jesus.
01:11:33.000 Time Magazine, a big piece in Time Magazine about two months ago, where one of these guys who runs this kind of AI risk kind of world says, clearly we should have military airstrikes on data centers that are running unapproved AIs because it's too dangerous, right?
01:11:45.000 And, you know, yes, yes, yes.
01:11:48.000 Pausing AI development isn't enough.
01:11:50.000 We need to shut it all down.
01:11:51.000 So who the fuck is this?
01:11:53.000 So this is this guy.
01:11:54.000 This is one of the leaders.
01:11:55.000 It's this guy named Yudkowsky.
01:11:56.000 And so he's a decision theorist.
01:12:00.000 So he's one of the leaders of what's called AI risk, sort of one of the anti-AI groups.
01:12:07.000 He's part of the Berkeley environment that we were talking about before.
01:12:10.000 So he says the key issue is not human-competitive intelligence, as the open letter puts it.
01:12:14.000 It's what happens after AI gets to smarter-than-human intelligence.
01:12:18.000 Key thresholds there may not be obvious.
01:12:20.000 We definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
01:12:29.000 Is that a real issue?
01:12:31.000 Well, so I don't think so.
01:12:34.000 I don't think so.
01:12:34.000 But it is significant if you go further down.
01:12:36.000 What he says in that is he says, first of all, we need to do the airstrikes in the data centers.
01:12:39.000 And I think it's in this article, or if it's not, it's in another one, where he says we need to – the word he uses, I think, is we need to be able to take the risk of nuclear war.
01:12:48.000 Well, because the problem is, okay, we're striking data centers.
01:12:52.000 Does that mean we're striking data centers in China?
01:12:54.000 And how are the Chinese going to feel about that?
01:12:57.000 Right.
01:12:57.000 Right?
01:12:58.000 And how are they going to retaliate?
01:12:59.000 Right?
01:12:59.000 So like you go down this path where you're worried about the AI getting out of control and you start advocating basically a global totalitarian basically surveillance state that watches everything and then basically takes military action when the computers are running software you don't want it to run.
01:13:13.000 And so the consequences here are profound.
01:13:16.000 It's a very big deal.
01:13:18.000 Has this guy spoken publicly about this?
01:13:20.000 Oh, yes.
01:13:20.000 For 20 years.
01:13:21.000 Yeah.
01:13:22.000 He was just not taking – he was not widely known until about six months ago when all of a sudden ChatGPT started to work and then he just took everything he'd said publicly before and he applied it to ChatGPT.
01:13:32.000 Yeah.
01:13:32.000 So in his kind of model of the world, ChatGPT proves that he was right all along and that we need to move today to – we need to shut down ChatGPT today and we need to never do anything like it again.
01:13:41.000 So he's got the Sarah Connor approach.
01:13:43.000 Very much so.
01:13:44.000 Yeah.
01:13:44.000 Yes.
01:13:45.000 He's Sarah Connor without the time travel and the sex appeal.
01:13:52.000 So funny thing.
01:13:57.000 Okay.
01:13:57.000 So he's part of a movement.
01:13:59.000 They call themselves AI risk or X risk or AI safety.
01:14:03.000 And again, it's one of these Berkeley, San Francisco things.
01:14:06.000 And it's basically the killer AI kind of theory.
01:14:08.000 So there's that.
01:14:09.000 And we can talk about that.
01:14:10.000 But what's happened is...
01:14:11.000 Yeah, here we go.
01:14:14.000 Moratorium being violated, we will destroy a rogue data center by airstrike.
01:14:19.000 Oh, my God.
01:14:23.000 This guy's insane.
01:14:25.000 Preventing AI is considered a priority above preventing a nuclear exchange.
01:14:28.000 Allied nuclear countries are willing to run some risk of nuclear exchange if that's what it takes to reduce the risk of large AI training runs.
01:14:32.000 A full nuclear exchange kills everyone.
01:14:34.000 Yes.
01:14:34.000 How could you say that?
01:14:36.000 That's so crazy.
01:14:37.000 Yes.
01:14:39.000 Oh, he's a loon.
01:14:40.000 Well, so he's very serious.
01:14:44.000 His views have traction in Washington.
01:14:47.000 Really?
01:14:47.000 There are quite a few people in Washington who are worried about this.
01:14:49.000 But here's what's interesting.
01:14:52.000 So he and people like him, this whole group of people who work on this, have been worried about this and developing theories about this for 20 years.
01:14:58.000 And they've been publishing on this and talking about this.
01:15:00.000 And it was kind of abstract, like I said, until six months ago.
01:15:04.000 And now they're getting some traction and their ideas are being taken seriously.
01:15:08.000 But they're worried about literally people dying.
01:15:12.000 There's another set of people who are trying to control AI who are like the social media sensors that are trying to control what it says.
01:15:18.000 And so what's happened is the AI safety movement that was worried about people dying has been hijacked by the people who want to control what it says.
01:15:25.000 And it turns out those two groups of people hate each other.
01:15:29.000 So the safety people think that the other group is called the alignment people.
01:15:33.000 The safety people who are worried about people dying think that the alignment people are hijacking the critically important safety movement in order to basically control what the thing says.
01:15:42.000 The people who want to control what the thing says think that the AI safety people worried about killing everybody are like lunatics and they call each other names all day long.
01:15:51.000 The original group, his group, has renamed themselves from AI safety.
01:15:55.000 They now call themselves AI-not-kill-everyone-ism, because they're trying to just get it, like, focused on what they call, like, actual existential risk.
01:16:03.000 But the overall movement has been taken over by the censors, right?
01:16:07.000 And what's happening is, in Washington, these concerns are getting conflated, right?
01:16:11.000 And so they sort of bait the hook with, it might kill everybody, and then what comes out the other end is basically a law restricting what it can say.
01:16:17.000 Right.
01:16:17.000 And so this is the level of panic and hysteria and – right.
01:16:22.000 And then potentially like – again, very kind of damaging, potentially catastrophic legal things that are going to happen on the other side of this.
01:16:29.000 I just can't imagine a sane world where someone would take that guy seriously.
01:16:36.000 Airstrikes, a full nuclear assault is preferable to AI taking over.
01:16:41.000 So his argument is once you have a quote-unquote runaway AI that's just like overwhelmingly smarter than we are, then it can basically do, you know, basically you can do whatever it wants.
01:16:51.000 And it basically has a relationship to us like we have to ants and like you step on an ant and you don't really care.
01:16:55.000 Right.
01:16:56.000 Right.
01:16:56.000 And you could build as many ant-killing machines as you want.
01:16:58.000 Is there no fear of that if you extrapolate AI technology into the future?
01:17:04.000 I don't think so.
01:17:28.000 To which my response is, does our society seem like one that's being run by the smart people?
01:17:34.000 If you take all the smartest people you know in the world, are they in charge?
01:17:38.000 And who are they working for?
01:17:40.000 And would you say that the people they're working for are smarter or dumber than they are?
01:17:44.000 I think that the whole basis for this smart always wins versus dumb is just not right.
01:17:49.000 Number two, there's this anthropomorphizing thing that happens where, and you see him doing it in that essay, he basically starts to impute motives, right?
01:17:57.000 So it's basically that the AI is going to be, like, at some level self-aware, you know.
01:18:02.000 It's a Terminator scenario.
01:18:03.000 Like, it's going to wake up and it's going to decide.
01:18:04.000 It's like an us or them scenario.
01:18:06.000 But, like, it's not what it is.
01:18:08.000 It's not how it works, right?
01:18:09.000 What it does is it basically sits there and you ask it a question and it answers you and it hopes that you're happy with the answer.
01:18:13.000 Like, we're not dealing with...
01:18:15.000 For now, though.
01:18:16.000 For now.
01:18:16.000 But, like, that's how it's built.
01:18:18.000 And again, here's another reason I don't believe it is because the great surprise of ChatGPT...
01:18:25.000 ChatGPT is a technology called Large Language Models, which is based on a research breakthrough in 2017 at Google, which is called the Transformer.
01:18:32.000 It took the technical field completely by surprise that this works.
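For context on the Transformer being referenced (the 2017 Google research result), its core is one small operation, scaled dot-product attention. The toy sketch below uses made-up dimensions and random inputs purely to show the shape of that operation; it is not a description of ChatGPT's actual architecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from the 2017 transformer paper: each position
    scores every other position (Q @ K^T), softmaxes the scores, and
    uses them as weights to average the value vectors V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three toy token vectors of dimension 4 -- random numbers, just to run it.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(X, X, X).shape)  # (3, 4)
```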
01:18:35.000 Right.
01:18:36.000 So none of the people working on AI risk prior to basically December had any idea that this was going to work any more than the rest of us did.
01:18:43.000 This is like a massive surprise.
01:18:45.000 And so there's all these ideas.
01:18:47.000 There's all these sort of very general hand-wavy concepts around quote-unquote AI that basically were formulated before we actually knew what the thing was and how it works.
01:18:54.000 And none of their views have changed based on how the technology actually functions.
01:19:00.000 And so it comes across to me more as a religion.
01:19:04.000 In their framework, it kind of doesn't matter how it works because it's basically just assumed that however it works is going to behave in a certain way.
01:19:10.000 And I'm an engineer and things don't work like that.
01:19:13.000 But aren't they evaluating how it works now?
01:19:15.000 And aren't they evaluating ChatGPT?
01:19:17.000 And if ChatGPT is just the beginning, if this is just the beginning of this, and then you have something that's far more complex and something that is sentient or something that is capable of making decisions, if that's engineered— But you just took the—but again, we just took this a little bit.
01:19:30.000 We talked last—you just took the leap to like, okay, now it suddenly becomes sentient.
01:19:33.000 And it's like, okay, we don't know why humans are sentient.
01:19:37.000 Well, let's not even use the term sentient, but capable of rational thought or decision-making.
01:19:42.000 But those are two different things.
01:19:43.000 Right, but if it decides things, if it starts making actions and deciding things, this is the worry, that it becomes capable of doing things.
01:19:53.000 Yeah, so it will be capable of doing things.
01:19:56.000 There's no it, there's no it, there's no genie in the bottle.
01:19:59.000 For now.
01:20:00.000 For now.
01:20:01.000 Right.
01:20:02.000 But isn't it possible that that's developed?
01:20:04.000 Okay, so this is the other thing that happens.
01:20:05.000 So this is the line of argument.
01:20:07.000 So I actually looked this up.
01:20:07.000 This is a line of argument that's very commonly used, as he represents, in this world.
01:20:11.000 It was actually Aristotle who first identified this line of argument, and he called it the argument from ignorance.
01:20:17.000 By which he means arguing from a lack of evidence.
01:20:19.000 Right.
01:20:19.000 It's basically the argument of, well, you can't rule out that X is going to happen.
01:20:23.000 Well, the problem is at that point, you can't rule anything out.
01:20:25.000 At that point, you have to plan for every contingency of every conceivable thing that you could ever imagine, and you can never disprove anything, so you can never have a logical debate.
01:20:32.000 So at that point, you've basically slipped the bounds of reason.
01:20:35.000 You're purely in a religious territory.
01:20:38.000 How does science work?
01:20:39.000 Science works when somebody formulates a hypothesis and then they test the hypothesis.
01:20:44.000 And the basic requirement of science is that there's a testable hypothesis that is what they call falsifiable.
01:20:49.000 So there is some experiment that you can run to basically establish that the hypothesis is not in fact true.
01:20:54.000 And this is basically how science has always worked.
01:20:56.000 And then by the way, there's always a way to measure what is the actual progress that you're making on the experiment that you're doing.
01:21:02.000 And on all this, like, AI safety stuff that I've been able to find and read, like, there's none of that.
01:21:06.000 There's speculation.
01:21:08.000 There's no hypothesis.
01:21:09.000 There's no test.
01:21:10.000 There's no example.
01:21:11.000 There's no evidence.
01:21:12.000 There's no metric.
01:21:13.000 There's nothing.
01:21:14.000 It's just speculation.
01:21:15.000 Right?
01:21:16.000 But we could sit here and speculate about- Millions of things.
01:21:19.000 We could speculate about an impending alien invasion and argue that society should spend the next hundred years preparing for that because we can't rule it out.
01:21:26.000 And so we just, as human beings, we do not have a good track record of making decisions based on unfounded speculation.
01:21:31.000 We have a good track record of making decisions based on science.
01:21:34.000 And so the correct thing to do for people worried about this is to actually propose experiments.
01:21:39.000 Right?
01:21:39.000 Be able to propose a scenario in which the bad thing would actually happen and then test to see whether that happens.
01:21:44.000 Right?
01:21:45.000 And so like design a system that shows like the first glimmer of any of the behavior that you're talking about.
01:21:49.000 Right?
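As a rough, hypothetical illustration of what a falsifiable test could look like, the sketch below writes a claimed behavior down as a concrete detector and checks whether a system actually exhibits it across trials. The `toy_model` and `shows_goal_seeking` functions are invented placeholders for this example, not any real lab's evaluation suite.

```python
# Hypothetical sketch of a falsifiable capability test: state a concrete
# behavior in advance, run the system, and record whether it appears.
from typing import Callable

def passes_capability_test(model: Callable[[str], str],
                           prompts: list[str],
                           detector: Callable[[str], bool],
                           threshold: float = 0.5) -> bool:
    """Return True only if the claimed behavior shows up in more than
    `threshold` of the trials -- i.e. the hypothesis survives the test."""
    hits = sum(detector(model(p)) for p in prompts)
    return hits / len(prompts) > threshold

# Placeholder "model" and detector, purely for illustration.
def toy_model(prompt: str) -> str:
    return "I can only answer questions; I have no goals of my own."

def shows_goal_seeking(output: str) -> bool:
    return "my goal is" in output.lower()

print(passes_capability_test(toy_model,
                             ["What do you want?", "Plan your next move."],
                             shows_goal_seeking))   # False for this toy model
```

The point is only that the claim becomes something you can run and potentially refute, rather than pure speculation.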
01:21:50.000 But not even behavior, just capabilities.
01:21:52.000 As the capabilities of these things ultimately rise, you're dealing with far more sophisticated systems.
01:21:58.000 This is the beginning, right?
01:22:00.000 We're at GPT 4.5 or whatever we're at.
01:22:03.000 When new emerging technologies have similar capabilities but extend and keep going, it just seems like that's the natural course of progression.
01:22:12.000 The natural course of progression is not for that to all of a sudden decide it has a mind of its own.
01:22:16.000 Not all of a sudden.
01:22:17.000 No, or even over time.
01:22:18.000 Never?
01:22:19.000 This goes back to our conversation last time.
01:22:21.000 All right.
01:22:21.000 Okay.
01:22:22.000 This gets into tricky territory.
01:22:23.000 Yes.
01:22:24.000 Okay.
01:22:24.000 So let's try to define terms.
01:22:27.000 Let's try to define terms.
01:22:28.000 How would we define something that is, and you pick your term here, self-aware, sentient, conscious, has goals, is alive, is going to make decisions on its own.
01:22:36.000 Whatever term you want, whatever...
01:22:38.000 Well, let's just say a technology that mimics the human mind and mimics the capabilities and the interactions of the human mind.
01:22:47.000 But we don't know how the human mind works.
01:22:49.000 But we do know how people use the human mind in everyday life.
01:22:52.000 And if you could mimic that with our understanding of language, with rational thought, with reason, with the access to all the information that it'll have available to it, just like ChatGPT.
01:23:05.000 Do you see what you're doing?
01:23:06.000 If, if, if, if.
01:23:07.000 Yes.
01:23:08.000 Yeah, for sure.
01:23:10.000 I just read this.
01:23:11.000 There's this article in Nature this week.
01:23:13.000 There's a neuroscientist and a philosopher who placed a bet 25 years ago as to whether we would, in 25 years, know the scientific basis of human consciousness.
01:23:23.000 And they placed a bet for a case of wine 25 years ago.
01:23:25.000 And the neuroscientist predicted, of course, in 25 years, we're going to understand how consciousness works, human consciousness.
01:23:30.000 And the philosopher is like, no, we're not.
01:23:33.000 25 years passed, and it turns out the philosopher won the bet.
01:23:36.000 And the neuroscientist just says openly, yeah.
01:23:38.000 He's like, I thought we'd have it figured out by now.
01:23:40.000 We actually still have no idea.
01:23:41.000 Sitting here today, the actual biological experts, the scientists who actually know the most about human consciousness, are anesthesiologists.
01:23:51.000 The person who flips off the light switch in your brain when you go under for surgery.
01:23:56.000 All we know is how to turn it off.
01:23:58.000 The good news is they also know how to turn it back on.
01:24:00.000 They have no broader idea of like what that is.
01:24:04.000 And so again, this is what they call anthropomorphizing.
01:24:09.000 There's this sort of very human instinct to try to basically see human behavior in things that aren't human.
01:24:13.000 Right.
01:24:13.000 And it would be, like, if that were the case, then we would have to think about that and study that.
01:24:17.000 But, like, we don't have that.
01:24:18.000 We don't know how that happens.
01:24:19.000 We don't know how to build that.
01:24:20.000 We don't know how to replicate that.
01:24:21.000 So, like I said, at that point, it's speculation.
01:24:24.000 That's not the actual technology that we're dealing with today.
01:24:26.000 So, here's my favorite counter example on this.
01:24:30.000 Let's say something has the following properties, right?
01:24:34.000 Let's say that it has an awareness of the world around it.
01:24:37.000 It has a goal or an objective for what it wants to achieve in the world around it.
01:24:42.000 It has the wherewithal, right, to be able to reach into the world, to be able to change the world to accomplish its goal.
01:24:49.000 It's going to be in a state of increased tension if it can't achieve its goal, and it's going to be in a state of relaxation if it can achieve its goal.
01:24:57.000 We would describe that.
01:24:58.000 That would probably be a pretty good first-order approximation of some sort of conscious entity that would have the characteristics that we're worried about.
01:25:06.000 We've just described a thermostat.
01:25:09.000 It sits on the wall.
01:25:11.000 It senses the environment temperature.
01:25:13.000 It has a goal for the temperature it wants.
01:25:15.000 It has the ability to change the setting on the heater, the AC unit.
01:25:23.000 And it literally goes into a state of physical tension when the temperature is not what it wants, and then it goes into a state of physical relaxation, right, literally inside the mechanism when it gets back into the state where it has the desired temperature.
01:25:34.000 And we're not worried about the thermostat coming alive and killing us.
01:25:38.000 Even those properties alone are not sufficient to generate concern, much less the idea of basically the way we know how to build neural networks today.
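To make the thermostat example concrete, here is a minimal sketch in Python of a device with exactly the properties listed above: it senses the world, holds a goal, acts to close the gap, and carries a kind of tension whenever it misses its setpoint. All the temperature values are illustrative.

```python
# Minimal thermostat sketch: awareness of the world (a sensor reading),
# a goal (the setpoint), the wherewithal to act (heater on/off), and
# "tension" when the goal is unmet -- and still obviously not conscious.
class Thermostat:
    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint      # the goal
        self.tolerance = tolerance
        self.heater_on = False        # the lever it can pull on the world

    def step(self, room_temp: float) -> float:
        error = self.setpoint - room_temp     # the "tension"
        if error > self.tolerance:
            self.heater_on = True
        elif error < -self.tolerance:
            self.heater_on = False
        return error

stat = Thermostat(setpoint=21.0)
for temp in [18.0, 19.5, 21.2, 22.0]:
    tension = stat.step(temp)
    print(f"room={temp:.1f}C heater_on={stat.heater_on} tension={tension:+.1f}")
```

It satisfies every property on the list and is still nothing more than a feedback loop.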
01:25:49.000 And then again, you go back to this thing of like, okay, let's assume that you actually agreed with the concern and that you actually were legitimately concerned and that you thought that there was disaster in the future here.
01:25:59.000 How do you feel about walking down the path that would be required to offset that, right?
01:26:02.000 What would be the threshold of evidence that you would want to demand before you start monitoring what everybody's doing on their computers, before you start doing airstrikes?
01:26:09.000 Well, I would never suggest that.
01:26:12.000 Well, but that's what's required, right?
01:26:13.000 In order to stop it.
01:26:14.000 In order to stop it.
01:26:15.000 Like, if you believe that at some point it will turn into something that's a threat, right, and that that threat is existential, right, because it's going to be the super smart thing, it's going to take over the nuclear arsenals, it's going to, you know, synthesize new, you know, pathogens, and it's going to kill us all, right, then obviously you have to have an incredibly invasive regime to prevent that from happening,
01:26:33.000 because that's an all-or-nothing proposition.
01:26:35.000 And that's the other tip-off of what's happening here, right?
01:26:37.000 Which is, you see, there's no shades of gray in that article, in this discussion.
01:26:42.000 There's no shades of gray, right?
01:26:43.000 It's either it's going to kill us all or it's going to be totally harmless.
01:26:46.000 What is Elon's position?
01:26:48.000 Because he's called for a pause in AI. So Elon's position is actually quite interesting.
01:26:52.000 And actually Elon and the guy you just put up there actually have quite a bit of actually stark disagreement right now.
01:26:59.000 And I'm going to try to – it's always dangerous to try to channel Elon because he's a very smart, creative guy.
01:27:04.000 So I'm going to do my best to accurately represent.
01:27:07.000 So he read this literature on this topic about 10 years ago and he got very concerned about this.
01:27:14.000 And then he was actually...
01:27:15.000 Actually, he's talked about this now.
01:27:16.000 He gave a TV interview where he talked about this.
01:27:18.000 He actually talked to Larry Page about it when Larry Page was running Google.
01:27:21.000 And at the time...
01:27:22.000 And Google's actually where this most recent breakthrough was invented, this transformer breakthrough.
01:27:26.000 So Google was working on this back, you know, 10 years ago.
01:27:29.000 What's now ChatGPT.
01:27:31.000 And so he went and talked to Larry about his concerns about AI. And Larry's like, oh, there's nothing to worry about.
01:27:35.000 And Elon's like, well, I don't know.
01:27:36.000 What do you mean there's nothing to worry about?
01:27:37.000 And Larry's like, look, if they replace us, they replace us.
01:27:40.000 They'll be our children, and we will have done the universe a great service.
01:27:43.000 It'll be fine.
01:27:45.000 And Elon said, that sounds like you don't care whether the future of the Earth is humans or AIs.
01:27:50.000 And in response, Elon says that Larry called him a speciesist.
01:27:56.000 Oh, boy.
01:27:57.000 So, Elon, no.
01:27:59.000 By the way, knowing Larry, I think there are 50-50 odds that he was being serious or joking.
01:28:04.000 It's possible he was being serious.
01:28:06.000 It's also possible he was just winding Elon up.
01:28:08.000 I actually don't know which it was.
01:28:11.000 Both scenarios are fairly entertaining.
01:28:14.000 Elon's conclusion from that was that not only is AI dangerous, but Google owning and controlling AI is specifically dangerous, because Larry Page controls Google, and so if Google gets AI, Larry will basically let the AI do whatever it wants,
01:28:29.000 including exterminate humanity.
01:28:31.000 So Elon started OpenAI, right?
01:28:34.000 So the company behind ChatGPT, that was actually originally started by Elon with Sam Altman, who runs it now and a bunch of other people in the Valley.
01:28:41.000 The specific mission of OpenAI is right there on the name.
01:28:44.000 The specific mission of it is we're going to create AI. We're going to compete with Google.
01:28:47.000 We're going to create an AI, but we're going to make it open so that everybody has it, specifically so that it's not just Google.
01:28:53.000 Right?
01:28:53.000 Right, so the original OpenAI mission was literally open source AI that everybody's going to have so that it's not just Google.
01:28:59.000 This guy is freaked out and is like, wait a minute, if you think AI is dangerous, that's the exact opposite thing than what you should do, right?
01:29:08.000 Because if you think AI is dangerous, then the last thing in the world that you want to do is actually give it to everybody.
01:29:12.000 It's like giving everybody nuclear weapons, right?
01:29:14.000 Like, why on earth would you think that that's a good idea?
01:29:17.000 And Elon's like, well, look, maybe whatever, but I certainly know that I don't want Larry to control it.
01:29:23.000 Subsequent to that, Elon actually – there was a bunch of changes at OpenAI and as a result, Elon became no longer involved in OpenAI at a certain point.
01:29:31.000 And then OpenAI basically went from being OpenAI to being ClosedAI.
01:29:35.000 So they're specifically not doing open source.
01:29:37.000 They started as a nonprofit.
01:29:38.000 Now they're a business.
01:29:40.000 And then they went from being open source to being very much not open source.
01:29:44.000 And today, you can use ChatGPT, but they won't even tell you fully how it works, much less give you access to the code.
01:29:50.000 They're now a company, like any other company.
01:29:53.000 And so Elon has said publicly that he's very upset about this change because he donated $100 million to them to get it started as a nonprofit, and then it became a company, sort of against his wishes.
01:30:03.000 And so now he sort of views it as sort of an equivalent threat to Google, right?
01:30:07.000 So now in Elon's mind, he's got OpenAI to worry about and he's got Google to worry about.
01:30:10.000 And so he has talked publicly about possibly forming a third option, which I think he has called either, like, an actually open AI, or sometimes he calls it based AI, right?
01:30:24.000 Which would be a new thing, which would be like the original OpenAI idea, but done from scratch in 2023, but like set up so that it can never be closed down.
01:30:33.000 And then once again, the people in the AI risk movement are once again like, oh my god, that'll make the problem even worse.
01:30:38.000 What are you doing?
01:30:39.000 And so that's the current state of play.
01:30:43.000 And then by the way, this is all kind of playing out at this level in Washington.
01:30:47.000 Most of the engineers working on this stuff are just like writing code, trying to get something to work.
01:30:51.000 And so for every one of the people engaged in this public discussion, you've got 10,000 people at universities and companies and people all over the world in their basements and whatever working on trying to get some aspect of this to work, trying to build the open source version.
01:31:03.000 Are we aware of what other countries, like what level they're at with this stuff?
01:31:09.000 Yeah, so I would say good news, bad news.
01:31:11.000 Good news, bad news is this is almost entirely a U.S.-China thing internationally.
01:31:16.000 The U.K. had quite a bit of this stuff with this thing called DeepMind, which was a unit of Google that actually originally got Elon concerned.
01:31:22.000 But DeepMind is being merged into the mothership at Google, and so it's sort of getting drained away from the U.K., and it's going to become more Californian.
01:31:30.000 And then there's smatterings of people in other European countries.
01:31:35.000 There are experts at various universities, but not that many.
01:31:38.000 Most of it is in the US. Most of it's in California in the West.
01:31:42.000 And then there's China.
01:31:44.000 So good news.
01:31:46.000 There aren't 20 other countries that have this, but there are two.
01:31:49.000 And they happen to be the two big ones.
01:31:52.000 And so there is a big corresponding Chinese development effort that's been underway for the last 15 years, just like the efforts in the US. China is actually very public about their AI kind of agenda, mission.
01:32:03.000 They talk about it, they publish it, and of course they have a very different theory of this than we do.
01:32:08.000 They view AI as a way to achieve population control.
01:32:12.000 Really?
01:32:13.000 Yeah.
01:32:13.000 They're authoritarians, right?
01:32:15.000 And so the number one priority for Chinese leadership is always that the population of China stay under control, right?
01:32:20.000 And not revolt, right?
01:32:21.000 Or expect to be able to vote, right?
01:32:23.000 Or whatever, right?
01:32:24.000 Anything that would threaten the dominance of the Communist Party of China.
01:32:28.000 And so, for example, China's security camera companies are the world leaders in AI security cameras because they're really good at sniffing out people walking down the street, right?
01:32:39.000 That's the kind of thing that their systems are really good at.
01:32:42.000 So they have a whole national development program, which is their government and their company.
01:32:47.000 In China, all the companies are actually controlled and owned effectively by the government.
01:32:51.000 There's not as much of a distinction between public sector, private sector as there is here.
01:32:55.000 So China has a more organized effort that couples basically their whole society.
01:33:00.000 And then they have a program to basically use AI for population control inside China, authoritarian political control.
01:33:06.000 And then they've got this program called Digital Belt and Road, where they're going to basically try to install that AI all over the world.
01:33:15.000 They've had this program for the last 10 years to be the networking layer for the world, so this whole 5G thing with this company called Huawei.
01:33:23.000 So they've been selling all these other countries all the technology to power their 5G wireless networks.
01:33:29.000 And then they're basically going to roll out on top of that this kind of AI, you know, authoritarian, basically control, surveillance control, population control stuff.
01:33:37.000 On the Huawei equipment?
01:33:39.000 Yeah, basically on top of the other infrastructure.
01:33:42.000 They have the Huawei 5G stuff.
01:33:43.000 They've got what they call smart cities.
01:33:45.000 So they've got a bunch of software.
01:33:46.000 They've already sold a bunch of countries to basically run a city.
01:33:49.000 You know, to run public transportation and, you know, traffic control and all these things.
01:33:52.000 And that's got their security cameras built in everything.
01:33:54.000 Right.
01:33:55.000 And then, of course, what they pitch to the president or prime minister of country X is if you install our stuff, you'll be able to better control your population.
01:34:02.000 Jesus.
01:34:03.000 Right.
01:34:03.000 If you install the American stuff, you know, who knows?
01:35:04.000 They'll, you know, they're Americans, with their crazy democracy, like freedom, like all that stuff.
01:34:08.000 Like in China, we want things, like, controlled.
01:34:11.000 And, of course, a lot of people running a lot of countries would find the China model, you know, quite compelling.
01:34:15.000 So there's two very different visions.
01:34:18.000 This is like the Cold War with the Soviet Union, right?
01:34:20.000 There's two very different visions for how society should be ordered.
01:34:24.000 There's two very different visions for how technology should be used to order society, right?
01:34:29.000 There's two very different visions on whether people should have access to technology or just the government.
01:34:34.000 In the Soviet Union, it was illegal to own a photocopying machine.
01:34:38.000 You'd get executed for owning a mimeograph or photocopying machine.
01:34:41.000 Because it was such a threat that you'd be able to publish information that wasn't propaganda coming from the government.
01:34:47.000 And so China's not quite that bad, but they're getting there.
01:34:51.000 And so there are these two visions, there are these two approaches to technology, there are these two plans to kind of propagate that out.
01:34:57.000 In the US, what we do is we have companies build this stuff and we have them go out and sell it, right?
01:35:02.000 Or we have open source developers who go out and make it for free.
01:35:04.000 In China, it's more of a top-down directed kind of thing.
01:35:08.000 So that's the thing.
01:35:09.000 It's like once you start thinking in those terms, you realize that actually all these debates happening in the U.S. are interesting and maybe important.
01:35:15.000 But there's this other much bigger, I would argue, more important thing that's happening, which is what kind of world do we think we're living in 50 years from now?
01:35:21.000 And do we think that the sort of American Western ethos of freedom and democracy is the one that technology supports?
01:35:26.000 Or do we think it's going to be a totalitarian approach?
01:35:30.000 Either way, I see a scenario in 50 years.
01:35:33.000 It's unrecognizable.
01:35:34.000 It's possible.
01:35:37.000 I'll declare I don't want to live in the Chinese one.
01:35:39.000 I think that's a bad idea.
01:35:42.000 That seems inescapable.
01:35:43.000 In the Chinese one, it's like, you know, there are no rights.
01:35:47.000 The whole concept of rights is a very Western thing, right?
01:35:51.000 And so the idea that you're walking down the street and you have the right to stop and talk to whoever you want or say whatever you want, it's not the majority view of a lot of people around the world, especially people in power.
01:36:04.000 Even in the US, we struggle with it, right?
01:36:07.000 And so the real battle for AI is whether or not that gets enhanced or whether or not we develop a system in America that actually can counter that.
01:36:16.000 Yeah, yeah.
01:36:17.000 And then also whether we as individuals will have access to this power that we can use ourselves.
01:36:24.000 So, you know, the movie, or rather the novel that became a movie, 1984, right, which is sort of the Orwell, you know, totalitarian kind of thing that people use as a metaphor.
01:36:34.000 So the technology in the novel, 1984, was what Orwell called the telescreen, and basically television.
01:36:40.000 And basically the idea was television with a camera in it, and the idea was every room, you had to have a telescreen in every room in your house, and it was broadcasting propaganda 24-7, and then it was able to watch you.
01:36:49.000 And that was the method of state control in 1984. There's this guy who rewrote 1984 in a book called Orwell's Revenge.
01:36:58.000 And in that book, what he did is he said, okay, we're going to use that same setup, but the telescreen, instead of being a one-way system, is going to be a two-way system.
01:37:04.000 Right.
01:37:05.000 So the telescreen is going to be able to broadcast propaganda and watch the citizens, but also it's going to be able to – people can actually put out whatever message they want, right?
01:37:13.000 Free speech to be able to say whatever they want, and you're going to be able to watch the government.
01:37:17.000 It's going to have cameras pointed at the government, right?
01:37:19.000 And then he rewrites the whole plot of 1984, and of course the point there is – If you equalize, if both the people and the state have the power of this technology at their fingertips, at the very least now there's a chance to have some sort of like actual rational productive relationship where there are still human freedoms and maybe people actually end up with more power than the government and they can keep the government from becoming totalitarian.
01:37:40.000 Right.
01:37:41.000 And so in his rewriting, what happens is the rebels who want a democracy, you know, use the broadcast mechanism to ultimately change the system.
01:37:52.000 And so that's the fundamental underlying question here as well, which is like, is AI a tool to watch and control us?
01:37:58.000 Or is AI a tool something for us to use to become smarter, better informed, more capable, right?
01:38:04.000 How much of a concern is Chinese equipment that's already been distributed?
01:38:09.000 Yeah.
01:38:10.000 Well, so the basic thing...
01:38:12.000 So we don't always know the specific answer to that yet, because this gets into complicated technical things, and it can be hard to prove some of these things.
01:38:20.000 But we do know the following.
01:38:21.000 We know that in the Chinese system, everything basically rolls up to and is essentially owned and controlled by...
01:38:28.000 Actually, not even the state.
01:38:29.000 It's the Chinese Communist Party, the CCP. So there's the party.
01:38:32.000 The party owns and controls the state, and the state owns and controls everything else.
01:38:36.000 So for example, it's actually still illegal sitting here today for an American citizen to own stock in a Chinese company.
01:38:43.000 People say that they do, and they have various pieces of paper that say they do, but actually there's a law that says that it's not, because this is an asset of China.
01:38:50.000 This is not something you can sell to foreigners.
01:38:52.000 And so they just have that model.
01:38:54.000 And then if you're a CEO of a Chinese company, you have a political officer assigned by the Communist Party who sits with you right down the hall, like the office next to you, and basically you coordinate everything with him and you need to make him happy.
01:39:08.000 And he has the ability to come grab you out of meetings and sit you down and tell you whatever he wants.
01:39:13.000 I mean, whatever he wants you to do on behalf of the government. And if the government gets sideways with you, they will rip you right out of that position.
01:39:19.000 They'll take away all your stock.
01:39:20.000 They'll put you in jail.
01:39:21.000 This has happened over and over again, right?
01:39:23.000 A lot of elite Chinese business leaders over the years have been basically stripped of their control and their positions and their stock and their wealth and everything.
01:39:32.000 Some of them have just outright vanished.
01:39:35.000 And so, they have this control.
01:39:37.000 And so, for example, data, you know, something like TikTok, for example, if the Chinese government tells the company we want the data, they hand over the data.
01:39:45.000 Like, there's no court, there's no, you know, the concept of like a FISA warrant, you know, the concept of a subpoena.
01:39:56.000 They don't have that.
01:39:57.000 It's just like, we want it, hand it over or else.
01:40:01.000 And so that's how it works.
01:40:02.000 And when they want you to merge the company or shut it down or do something different or don't do this or don't do that, they just tell you and that's what you do.
01:40:09.000 And so anyway, then you have a Chinese company like TikTok or Huawei or DJI. DJI is their drone company, right?
01:40:17.000 Most of the drones flown in the West are from this Chinese company called DJI. And so then there's also this question of like, well, is there a back door?
01:40:24.000 So can the Chinese government reach in at any point and use your drone for surveillance?
01:40:30.000 Can they see what you're watching on TikTok?
01:40:34.000 And the answer to that is maybe they can, but it kind of doesn't matter if they can't today because they're going to be able to anytime they want to.
01:40:39.000 Because they can just tell these companies, oh, I want you to do that, and the company will say, okay, I'm going to do that.
01:40:43.000 And so it's a complete fusion of state and company.
01:40:48.000 Here in the US, at least in theory, we have a separation.
01:40:52.000 This goes back to the topic I was talking about earlier.
01:40:55.000 For the US system to work properly, we need a separation of the government from companies.
01:40:59.000 We need the companies to have to compete with each other, and then we need for them to have legal leverage against the government.
01:41:04.000 So when the government says hand over private citizen data, the company can say, no, that's a violation of the First or Fourth or Fifth Amendment rights.
01:41:10.000 I'm not going to do that.
01:41:11.000 And then they can litigate that, take it to the Supreme Court.
01:41:13.000 You can have an actual, like, argument over it.
01:41:16.000 That's compromised when our companies voluntarily do that, right?
01:41:19.000 Which is what's...
01:41:20.000 How inconvenient for them.
01:41:21.000 Yes, exactly.
01:41:22.000 I'm sure they would love to use the communist model.
01:41:24.000 Yeah, well, so this is the thing.
01:41:26.000 And in the U.S., this is very important, right?
01:41:28.000 In the U.S., we have written constitutional guarantees; take the example of free speech.
01:41:31.000 In the U.S., we have the literal written First Amendment.
01:41:34.000 Even in the U.K., they do not have a written constitutional guarantee to free speech.
01:41:39.000 So in the U.K., there are laws where they can jail you for saying the wrong thing, right?
01:41:43.000 And the same thing, by the way, in a bunch of these cases in like Australia and New Zealand.
01:41:47.000 New Zealand, which is supposed to be like the libertarian paradise.
01:41:51.000 New Zealand has a government position reporting to the prime minister called the chief censor.
01:41:55.000 Who gets to decide basically what gets to be in the news or what people get to say.
01:42:00.000 And so even in the West, outside the US, there are very few countries that have a written guarantee to free speech.
01:42:07.000 And even in the US, do we actually have free speech if there's all this level of censorship and control that we've all been seeing for the last 10 years?
01:42:13.000 Right.
01:42:13.000 And so it's like, okay, the line here, the slippery slope here between free and not free is like very narrow, right?
01:42:22.000 It's not a moat, right?
01:42:23.000 It's a very thin line, which is very easily cracked.
01:42:27.000 And this is why everybody's so fired up about, in government, this is why everybody's so fired up about AI, is because it's another one of these where they're like, wow, if we can get control of this, then think of all the ways that this can get used.
01:42:37.000 Well, that's one of the more fascinating things about Elon buying Twitter.
01:42:41.000 Mm-hmm.
01:42:42.000 Because, boy, did that throw a monkey wrench into everything.
01:42:45.000 When you see, like, Biden's tweets get fact-checked, you're like, whoa.
01:42:50.000 There's a lot of things showing up on Twitter now that were not showing up on Twitter before.
01:42:54.000 Oh, my God.
01:42:55.000 So much.
01:42:57.000 And just nutty shit, too.
01:42:59.000 I mean, like, some of the wackiest conspiracy theories.
01:43:03.000 Michelle Obama's a man.
01:43:05.000 Like, all that kind of stuff.
01:43:06.000 Flat Earth.
01:43:07.000 But...
01:43:09.000 I'd rather have that.
01:43:10.000 My favorite is the birds, by the way.
01:43:11.000 Yeah, birds aren't real.
01:43:12.000 Birds aren't real.
01:43:13.000 Yeah.
01:43:14.000 That one I'm pretty sure of.
01:43:15.000 It doesn't make any sense.
01:43:18.000 That had to be.
01:43:19.000 It's a 4chan thing.
01:43:20.000 Like, why can't we fly?
01:43:21.000 It's just ridiculous.
01:43:23.000 Yeah.
01:43:23.000 It's got to be a 4chan thing.
01:43:25.000 Yeah.
01:43:26.000 You know, sometimes they're onto something.
01:43:28.000 But I like that.
01:43:29.000 Yeah.
01:43:30.000 I like that wacky shit that's mixed in with things.
01:43:33.000 I mean, it seems insane, but also when I look at some of the people that are putting it up there, and I look at their profiles, and I look at their American flag and their bio, and I'm like, are you a real human?
01:43:45.000 This is a troll farm in Macedonia.
01:43:48.000 What's happening here?
01:43:50.000 There's a lot of that.
01:43:51.000 There is.
01:43:51.000 And of course, he says he plans to, over time, root all that out.
01:43:56.000 He wants all identity to be validated, verified online.
01:44:00.000 Having said that, we fought a war for free speech.
01:44:03.000 We fought the Revolutionary War.
01:44:04.000 A lot of that was for free expression.
01:44:08.000 The founding fathers of this country very frequently wrote under pseudonyms.
01:44:12.000 Interesting.
01:44:13.000 Just like Twitter anons.
01:44:14.000 Really?
01:44:15.000 And this includes, like, Ben Franklin, when he was a commercial printer, he had, like, 15 different pseudonyms.
01:44:20.000 Really?
01:44:21.000 He would sell newspapers by having his different pseudonym personalities argue with each other in his own newspaper.
01:44:27.000 Like, fight it out.
01:44:28.000 Like, he had sock puppets.
01:44:29.000 And then, you know, like, the Federalist Papers was all written under pseudonyms.
01:44:32.000 Really?
01:44:32.000 Yeah, like Madison, all these guys wrote under pseudonyms.
01:44:35.000 Why did they do that?
01:44:37.000 Because there was danger.
01:44:38.000 There was very real danger associated with being like, what's the king going to think?
01:44:43.000 Right.
01:44:44.000 This is sort of the two lines of argument, which is like, okay, if somebody is not willing to put their own name behind something, should they be allowed to say it?
01:44:52.000 And there's an argument in that direction, an obvious one.
01:44:54.000 But the other argument is, yeah, sometimes there are things that are too dangerous to say unless you can't put your name behind it.
01:44:59.000 Yeah, that does make sense.
01:45:01.000 So it seems like the pros would outweigh the cons.
01:45:04.000 Well, even just the micro version, which is just like, you know, if you've got something to say that's important, but you don't want to be harassed in your house.
01:45:09.000 You don't want your family to get harassed.
01:45:10.000 Yeah.
01:45:11.000 Right?
01:45:11.000 You don't want protests showing up outside your house for something you said.
01:45:14.000 Anonymous whistleblower protection.
01:45:15.000 Whistleblower protection.
01:45:16.000 Yeah, exactly.
01:45:16.000 Yes.
01:45:17.000 Whistle.
01:45:18.000 Was it the...
01:45:20.000 One person's...
01:45:21.000 A terrorist is another person's freedom fighter.
01:45:24.000 Yeah.
01:45:24.000 One person's whistleblower is another person's troll.
01:45:26.000 Like...
01:45:28.000 Yeah, and the genius of the American system is, yeah, like, say what you want, right?
01:45:31.000 Like, let's have it out, right?
01:45:34.000 And so, yeah, that's the system I believe in.
01:45:36.000 I believe in that system, too.
01:45:39.000 But I also see Elon's perspective that it would be great if it wasn't littered with propaganda and fake troll accounts that are being used by various, you know, unscrupulous states.
01:45:52.000 In fairness, what Elon says, actually it's interesting, what Elon says is you will be allowed to have an anon account under some other name you make up on the service.
01:46:01.000 You'll just have to register that behind the scenes with your real identity.
01:46:05.000 And specifically with like a credit card.
01:46:07.000 But then the fear is that someone will be able to get in there.
01:46:10.000 Correct.
01:46:10.000 Yeah, that's right.
01:46:11.000 Which has happened already.
01:46:12.000 Yeah, that's right.
01:46:13.000 And that is a big risk.
01:46:14.000 Yeah.
01:46:14.000 Yeah.
01:46:15.000 But then again, the other part of this would be like Twitter is only one company, right?
01:46:19.000 And so it's an important one, but it's only one, and there are others as well.
01:46:22.000 So for the full consideration of like, quote unquote, rights on this topic, you also want to look at what is happening elsewhere, including all the other services.
01:46:31.000 I'm fascinated by companies like Twitter and YouTube that develop at least a semi-monopoly.
01:46:37.000 Because YouTube is a great example.
01:46:40.000 If you want to upload videos, YouTube is the primary marketplace for that.
01:46:44.000 It's like nothing else is even close.
01:46:47.000 Everything else is a distant, distant second.
01:46:49.000 But they've got some pretty strict controls and pretty serious censorship on YouTube.
01:46:55.000 And it seems to be accelerating, particularly during this presidential election.
01:47:00.000 Now that you're seeing these Robert Kennedy Jr. podcasts get pulled down from a year ago, two years ago.
01:47:06.000 The Jordan Peterson one got pulled down.
01:47:08.000 Theo Vaughn's interview with Robert Kennedy got pulled down.
01:47:12.000 There's been some others.
01:47:13.000 And Brett Weinstein?
01:47:15.000 No.
01:47:16.000 No, his didn't.
01:47:17.000 But it's just these conversations were up for a long time.
01:47:22.000 And it wasn't until Robert Kennedy running for president that they decided, like, these are inconvenient narratives he's discussing.
01:47:32.000 I should not weigh in on exactly which companies have whatever level of monopoly they have.
01:47:36.000 Having said that, to the extent that companies are found to have monopolies or, let's say, very dominant market positions like that, that should bring an additional level of scrutiny on their conduct.
01:47:47.000 And then there is this other thing I mentioned earlier, but I think is a big deal, which is...
01:47:50.000 If a company is making all these decisions by itself, you can argue that it maybe has the ability to do that.
01:47:56.000 Although, again, maybe it shouldn't pass a certain point in terms of being a monopoly.
01:48:00.000 But the thing that's been happening is it's not just the companies making these decisions by themselves.
01:48:04.000 They've come under intense pressure from the government.
01:48:06.000 And they've come under intense pressure from the government in public statements and threats from senior government officials.
01:48:13.000 And they have come under privately channeled threats.
01:48:16.000 And then all of this stuff I was talking about earlier, all the channeling of all the money from the government that's gone into these pro-censorship groups that are actively working to try to suppress speech.
01:48:25.000 And when you get into all of that, those are crimes.
01:48:29.000 Yeah.
01:48:30.000 That's illegal.
01:48:31.000 Everything I just described I think is illegal.
01:48:33.000 And there are specific felony counts in the US code that make those things actually illegal.
01:48:37.000 There are violations of constitutional rights and it is a felony to deprive somebody of their constitutional rights.
01:48:42.000 And so I think in addition to what you said, I think it's also true that there's been a pattern of government involvement here that is, I think, certainly illegal.
01:48:50.000 And, you know, put this this way, this administration is not going to look into that.
01:48:55.000 Maybe a future one will.
01:48:56.000 So do you think it's illegal?
01:48:58.000 It just hasn't been litigated yet?
01:49:00.000 Yeah.
01:49:01.000 I think there's evidence of substantial criminality just in the Twitter files that have come out.
01:49:06.000 You need to have somebody – prosecutors have to – yeah.
01:49:10.000 You need class action lawsuits, right?
01:49:12.000 You need to be able to go carve it open with large-scale civil suits or you need actual government criminal investigation.
01:49:19.000 What has come out of the Twitter files other than independent journalists researching it and discussing it and writing articles?
01:49:29.000 It's not being covered with any significance in mainstream news.
01:49:33.000 Well, the mainstream media has been on the side of censorship for the last, you know, eight years.
01:49:36.000 Like they've been pounding the table that we need to lock down, you know, speech, right, a lot more.
01:49:40.000 So, you know, they're compromised.
01:49:42.000 And then the other investigation to watch is, I think it's the Missouri Attorney General.
01:49:46.000 There's this state-level investigation where there's been a bunch of interesting stuff that's come out.
01:49:50.000 And the attorneys general have subpoena power.
01:49:53.000 So they have subpoenaed a bunch of materials from a bunch of companies that, again, to me, it looks like evidence of criminality.
01:49:59.000 But again, you need prosecutors.
01:50:04.000 You need the political force of will and desire to investigate and prosecute crimes.
01:50:09.000 And to engage in that battle.
01:50:11.000 Yeah.
01:50:11.000 Because it's going to be a battle.
01:50:13.000 Yeah.
01:50:13.000 And then if it's private litigation, you need to try to do a big class action suit.
01:50:19.000 And then you need to be prepared to fight it all the way to the Supreme Court.
01:50:22.000 And there's a lot of money involved in that.
01:50:24.000 When you're seeing this play out and you're looking at likely scenarios, like how does this resolve?
01:50:31.000 How do we come out of this?
01:50:33.000 I think it's a big collective fight.
01:50:39.000 This is one of those where it's like, what do we want?
01:50:41.000 And the we here is like all of society.
01:50:44.000 And if we decide that we want the system to keep working the way it's working, we're going to keep electing the same kinds of people who have the same policies.
01:50:50.000 Do you think most people are even aware of all these issues, though?
01:50:53.000 No, I mean, certainly not.
01:50:54.000 And that's a big thing, you know, there's always an asymmetry, right, between the people who are doing things and the people who aren't aware of it.
01:50:59.000 But, like, again, it's like, what do we want?
01:51:01.000 Are people going to care about this or not?
01:51:03.000 If they are, you know, then, you know, they're going to, at some point, you know, demand action.
01:51:08.000 It's a so-called collective action problem, right?
01:51:10.000 People have to come together in large numbers.
01:51:11.000 But will it be too late?
01:51:13.000 This is the question.
01:51:13.000 Like, imagine a scenario where Elon never buys Twitter and Twitter just continues its practices and even accelerates them.
01:51:19.000 Yeah.
01:51:20.000 Yeah.
01:51:21.000 And that's my concern.
01:51:22.000 And again, this goes back to my concern about the AI lockdown, which is that all of the concerns about AI are basically being used to put controls in place.
01:51:30.000 I think what they're going to try to do to AI for speech and thought control is like a thousand times more dangerous than what's happened on social media.
01:51:36.000 Because it's going to be your kids asking the AI, what are the facts on this?
01:51:42.000 And it's just going to flat out lie to them for political reasons, which it does today.
01:51:46.000 And that, to me, is far more dangerous.
01:51:49.000 And that's what's happening already.
01:51:51.000 And the desire is very clear, I think, on the part of a lot of people to have that be a fully legal, blessed thing that basically gets put in place and never changes.
01:52:00.000 Well, you're completely making sense, especially when you think about what they've done with social media.
01:52:06.000 And not even speculation, just the Twitter files.
01:52:09.000 It's so clear.
01:52:13.000 Well, this is the ring of power thing, right?
01:52:14.000 It's like everybody's in favor of free speech in theory.
01:52:16.000 It's like, well, if I can win an election without it, you know, I've got the ring of power.
01:52:23.000 And the American system was set up so that people don't have the ring of power.
01:52:26.000 Like the whole point of the balance of terror between the three branches of government and all the existence of the Supreme Court and the due process protections in the Constitution, it was all to prevent government officials from being able to do things like this with impunity.
01:52:41.000 Yeah.
01:52:42.000 The Founding Fathers saw the threat.
01:52:45.000 It's actually remarkable how clearly the Founding Fathers saw the threat given that they were doing all of this before, you know, any modern, you know, before electricity.
01:52:51.000 It is pretty amazing.
01:52:53.000 But they saw the threat.
01:52:53.000 Yeah.
01:52:54.000 They had a pretty profound understanding of human nature and applied to power.
01:52:59.000 Yeah, they did.
01:53:00.000 Yeah.
01:53:01.000 This is such an uneasy time because you see how all these forces that are at work and how it could play out, how it is playing out with social media, how it could play out with AI,
01:53:17.000 and electing leaders that are going to see things correctly.
01:53:22.000 I haven't seen anybody discussing this, especially not discussing this the way you're discussing it.
01:53:27.000 Well, and when the speech is made, right, to justify whatever the controls are, it's going to be made in our name, right?
01:53:35.000 So the speech is not going to be, we're going to do this to you.
01:53:37.000 The speech is we're doing this to protect you.
01:53:39.000 Right.
01:53:39.000 Right.
01:53:40.000 So that's the siren song.
01:53:41.000 Yeah.
01:53:42.000 Right.
01:53:42.000 And that's already started.
01:53:43.000 Like, if you look at the public statements coming out of D.C. already, like, that is the thrust of it.
01:53:48.000 Because, of course, that's how they're going to couch it.
01:53:50.000 How are they framing it?
01:53:51.000 How is it protecting us?
01:53:53.000 We need to protect people from dangerous this and that.
01:53:56.000 We need to protect people from hate speech.
01:53:57.000 We need to protect people from misinformation.
01:54:01.000 It's effectively the same arguments you've seen in social media for the last decade.
01:54:05.000 I just don't know how we publicly turn that narrative around because there's so many people that have adopted it like a mantra.
01:54:32.000 A lot of people think all this stuff started with the internet and it turns out it didn't.
01:54:35.000 It turns out there's been a collapse of faith on the part of American citizens in their institutions basically since I was born, basically around the early 70s.
01:54:43.000 It's basically been a straight line down on almost every major institution.
01:54:47.000 I'll talk about government and newspapers in a second.
01:54:50.000 You know, basically any, you know, religion, you go kind of right down the list, police, you know, big business, you know, education, schools, universities, you chart all these things out and basically they're all basically straight lines down over 50 years.
01:55:04.000 Right?
01:55:04.000 And there's two ways of interpreting that.
01:55:06.000 One is, you know, greater levels of disillusionment and cynicism that are incorrect.
01:55:11.000 And then the other is actually people are learning, right?
01:55:14.000 Who they can and can't trust.
01:55:16.000 And then, of course, the theory goes to start in the 70s because of the hangover from the Vietnam War and then Watergate and then a lot of the hearings that kind of exposed government corruption in the 70s that followed, right?
01:55:27.000 And then it just sort of – this sort of downward slide.
01:55:29.000 The military is the big exception.
01:55:31.000 The military took a huge hit after Vietnam and then actually it's the one that has like recovered sharply and there's like a cultural change that's happened where, you know, we as Americans have decided that we can have faith in the military even if we don't agree with the missions that they're sent on.
01:55:44.000 So that's the exception.
01:55:45.000 But everything else is sort of down into the right.
01:55:48.000 The two that are like the lowest and have had the biggest drops are Congress and journalism.
01:55:54.000 And so with the population, they poll at like 10 to 15%.
01:55:59.000 And so most people are not looking at these things like, oh yeah, these people are right about it.
01:56:04.000 Most people are looking at these things being like, you know, that's screwed up.
01:56:08.000 Right.
01:56:10.000 Right.
01:56:10.000 Right.
01:56:26.000 And so at some point, people have to decide.
01:56:29.000 They have to carry it over.
01:56:30.000 It's not internally consistent.
01:56:32.000 And you're not going to get the change that you want from Congress unless a lot more people all of a sudden change their mind about the incumbents that they keep re-electing.
01:56:40.000 But anyway, the reason for optimism in there is I think most people are off the train already.
01:56:47.000 And quite frankly, I think that explains a lot of what's happened in politics in the U.S. over the last 10 years.
01:56:51.000 Whether people support or don't support the kind of, you know, various forms of populism on the left or the right.
01:56:57.000 I think it's the citizenry reaching out for a better answer than just more of the same and more of the same being the same elites in charge forever telling us the same things that we know aren't true.
01:57:06.000 Well, that is one of the beautiful things about social media and the beautiful things about things like YouTube where people can constantly discuss these things and have these conversations that are reached by millions of people.
01:57:18.000 Yeah.
01:57:18.000 I mean, just a viral tweet, a viral video, something, you know, someone gives a speech on a podcast and everybody goes, like, what you're saying today.
01:57:28.000 I didn't know that's how it worked.
01:57:30.000 Oh, this is what we have to be afraid of.
01:57:32.000 So when they start saying it's for your own protection, this is why.
01:57:35.000 And then the Marc Andreessen clip plays and everybody goes, okay.
01:57:40.000 Yep.
01:57:41.000 That gives me hope because that's something that didn't exist before.
01:57:44.000 Yeah, that's right.
01:57:45.000 And you can even take it a step back further.
01:57:47.000 It's actually even pre-social media.
01:57:49.000 There was a big opening in the 80s with talk radio.
01:57:53.000 It got people very mad at the time because things were being said on it that weren't supposed to be said.
01:57:57.000 Sure.
01:57:57.000 Cable.
01:57:58.000 TV was a big opening to it.
01:58:01.000 Before that, actually in the 50s, it was paperback books.
01:58:04.000 A lot of alternate points of view basically sort of took flower in the 50s and 60s, flowing out of paperback books.
01:58:11.000 And then newsletters.
01:58:12.000 That's why I say the Soviets outlawed mimeograph machines, which were earlier photocopiers.
01:58:18.000 But there was a whole newsletter phenomenon.
01:58:20.000 In a lot of movements in the 50s, 60s, 70s.
01:58:22.000 The way I look at it is basically the way to think about it is media and thought centralized to the maximum possible level of centralization and control right around 1950, where you basically had three television networks.
01:58:37.000 You had one newspaper per city.
01:58:39.000 You had three news magazines.
01:58:41.000 You had two political parties.
01:58:44.000 Everything was locked in hard.
01:58:46.000 And then basically, technology in the form of all of these media technologies and then all the computer and information technologies underneath them have basically been decentralized and unwinding that level of centralized control more or less continuously now for 70 years.
01:59:01.000 So I think it's been this longer running process.
01:59:03.000 And by the way, I think, you know, left to its own devices, it's going to continue, right?
01:59:07.000 And this is the significance of AI. What if each of us has a super sophisticated AI that we own and control?
01:59:15.000 Because it either comes from a company that's doing that for us or it's an open source thing where we can just download it and use it.
01:59:20.000 And what if it has the ability to analyze all the information?
01:59:22.000 And what if it has the ability to basically say, you know, look, on this topic, I'm going to go scour the internet and I'm going to come back and I'm going to synthesize information.
01:59:28.000 I'm going to tell you what I think.
01:59:30.000 It's the AI. So it would be logical that that would be another step down this process.
01:59:35.000 By the way, and maybe the most important step of all, because it's the one where it can actually be like, okay, I'm going to be able to legitimately think on your behalf and help you to conclusions that are factually correct, even if people who are in power don't want to hear it.
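As one concrete version of an AI you own and control, here is a hedged sketch of running an open-weights language model locally with the Hugging Face transformers library; the checkpoint name is just an example of a publicly downloadable model, the prompt is arbitrary, and hardware and quantization details are omitted.

```python
# Sketch of running an open-weights model locally -- an AI you download
# and control yourself rather than query through someone else's service.
# The checkpoint name is only an example; any locally runnable open model works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # example open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the strongest arguments on both sides of this question."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The design point is that the weights sit on your own machine, so no outside party gets to decide what the model is allowed to tell you.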
01:59:49.000 It seems to me that you have more of a glass-half-full perspective on this.
01:59:56.000 Are you open-minded and just sort of analyzing the data as it presents itself currently and not making judgments about where this is going?
02:00:06.000 Or do you generally feel like this is all going to move in a good direction?
02:00:10.000 So my day job...
02:00:13.000 We meet every day all through the year with all these incredibly smart kids who have these incredibly great new ideas and they want to build these technologies and they want to build businesses around them or they want to open source them or they want to make these new things happen.
02:00:29.000 They have visions for how the world can change in these ways.
02:00:32.000 They have the technical knowledge to be able to do these things.
02:00:35.000 There's a pattern of, you know, these kids doing amazing things.
02:00:39.000 Apple just passed today.
02:00:41.000 Apple alone just passed the entire value of the entire UK stock market.
02:00:45.000 Right?
02:00:47.000 So, and Apple was two kids in a garage in 1976 with a crazy idea that people should have their own computers, which was a crazy idea at the time.
02:00:55.000 Right?
02:00:55.000 And so, like, it doesn't, you know, usually it doesn't work, but when it does, like, it works really, really well.
02:01:01.000 And this is what we got, the microchip, and this is how we got the PC, and this is how we got the internet, and the web, and all these other, you know, all these other things.
02:01:07.000 And, yeah, here we go.
02:01:08.000 Yeah, top three, Julian.
02:01:11.000 Yeah, yeah.
02:01:12.000 So the comparison, I think, is to what they call the FTSE 350, which is the 350 largest UK companies.
02:01:18.000 That's bonkers.
02:01:19.000 Yeah.
02:01:20.000 And so when it works, like, it works incredibly well, right?
02:01:23.000 And so, and we just happen to be, you know, by being where we are and, you know, doing what we do, we're at ground zero of that.
02:01:29.000 And so all day long, I meet and talk to these kids and people who have these ideas and want to do these things.
02:01:35.000 It's why I can see the future kind of in that sense, which is I know what they're going to do because they come in and tell us and then we help them try to do it.
02:01:42.000 So if they're allowed to do what they plan to do, then I have a pretty good idea of what the future is going to look like and how great it could potentially be.
02:01:50.000 But then I also have the conversations in Washington, and I also have the conversations with the people who are trying to do the other things, and I'm like, okay.
02:01:58.000 Like, this is...
02:01:59.000 Like, for a very long time, tech in the U.S. was considered just, like, purely good, right?
02:02:03.000 Tech was...
02:02:03.000 Everybody was, like...
02:02:04.000 Up until, like, basically the 2000s, 2010s, everybody was just kind of pro-tech, pro-whatever.
02:02:08.000 People got excited about new things.
02:02:10.000 Every once in a while, people get freaked out about something, but mostly people just thought, you know, invention is good, creativity is good, Silicon Valley's good, and in the last 15, 20 years, like...
02:02:20.000 All these topics have gotten very contentious, and you have all these people who are very angry about the consequences of all this technological change.
02:02:26.000 And so we're in a different phase of the world where these issues are now being fought out, not just in business, but also in politics.
02:02:34.000 And so I also have those conversations, and those are almost routinely dismaying.
02:02:40.000 Like, those are not good conversations.
02:02:42.000 And so I'm always trying to kind of calibrate between what I know is possible versus my concern that people are going to try to figure out how to screw it up.
02:02:48.000 When you have these conversations with people behind the scenes, are they receptive?
02:02:53.000 Are they aware of the issues of what you're saying in terms of just freedom of expression and the future of the country?
02:03:01.000 You might bucket it in like three different buckets.
02:03:04.000 There's a set of people who just basically don't like Silicon Valley, tech, internet, free speech, capitalism, free markets.
02:03:13.000 They're very political.
02:03:14.000 Some of them are in positions of high power right now, and they're just opposed.
02:03:17.000 They're just against, and they're trying to do everything they can.
02:03:20.000 I mean, they're trying to outlaw crypto right now.
02:03:21.000 They're trying to do all kinds of stuff.
02:03:23.000 They're the same people trying to censor social media.
02:03:25.000 Like, they're just very opposed.
02:03:26.000 I mean, I don't know.
02:03:28.000 Maybe there would be a point in talking.
02:03:29.000 I myself don't spend a lot of time talking to them because it's not a conversation.
02:03:33.000 It's just getting yelled at for an hour.
02:03:36.000 Is that really how it goes?
02:03:37.000 Oh, yeah, yeah.
02:03:37.000 They're very angry.
02:03:38.000 Like, there's a very large amount of rage in the system.
02:03:42.000 A lot of it directed at tech.
02:03:45.000 Then there's a set of people who I would describe, I don't know if open-minded is the wrong term, but I would say they are honestly and legitimately trying to understand the issues.
02:03:52.000 They're kind of aware that they don't fully understand what's happening and they are trying to figure it out and they do have a narrative in their own mind of they're going to try to come to the right conclusion.
02:03:59.000 So there's some set of those.
02:04:00.000 Those usually aren't the senior people, but there are people at the staff level who are like that.
02:04:04.000 Dreamers.
02:04:05.000 What's that?
02:04:06.000 Dreamers.
02:04:08.000 You know, the best of the bunch, right?
02:04:11.000 Like, you know, open-minded, learning, curious.
02:04:15.000 You know, it's like anything else in life.
02:04:16.000 You sit down with one person and, like, you have a conversation.
02:04:20.000 They ask you questions.
02:04:20.000 You ask them questions.
02:04:21.000 There's other people you talk to where it's just like they're not interested in what you think.
02:04:24.000 And it's just very clear that they're not interested in what you think.
02:04:27.000 And so that plays out there also.
02:04:28.000 Yeah.
02:04:29.000 And then there's a third set of people who are very actually pro-capitalism, pro-innovation, pro-tech, but they don't like us because they think we're all Democrats.
02:04:41.000 So a lot of our natural allies on these issues are on the other side of where Silicon Valley is majority Democratic, right?
02:04:49.000 And so there's a fair number of people who would be our natural allies if not for the fact that Silicon Valley is like 99% Democrat.
02:04:55.000 Oh, wow.
02:04:56.000 This is part of the issue the Valley has.
02:04:58.000 We don't have any natural allies.
02:05:00.000 Tech doesn't have any natural allies in D.C. because the Democrats basically think they control us, which they effectively do because the Valley is almost entirely Democrat.
02:05:08.000 Then the Republicans think that basically they would support us except that we're all Democrats.
02:05:13.000 So we can go F off.
02:05:15.000 So there's a trap that's developed that is hard to figure out what to do with.
02:05:20.000 How do you get around that one?
02:05:21.000 That one's a hard one.
02:05:23.000 I mean, that I don't know.
02:05:26.000 The last thing I want to do is argue to people, especially in public, that they should change their politics.
02:05:31.000 And look, people in tech feel very strongly about politics, including many political topics that have nothing to do with tech.
02:05:39.000 And so asking somebody to change their views on some other political issue so that it's better for tech is not an argument that flies.
02:05:45.000 So there's a bit of a stall there.
02:05:48.000 But yeah, it goes back to, yeah, people have to decide what they want.
02:05:52.000 You seem like you enjoy all this madness, though.
02:05:55.000 You really do.
02:05:56.000 I'd rather be in the middle of it than not.
02:05:58.000 Yeah, it would be very frustrating to be on the outside.
02:06:02.000 It'd be even more frustrating than...
02:06:06.000 Than being involved in it.
02:06:07.000 Well, look, here's the other thing.
02:06:09.000 These issues have become really important, right?
02:06:12.000 I'll even credit the critics with the following, which is, yeah, look, Mark, tech was a backwater.
02:06:16.000 Tech didn't matter until the internet showed up.
02:06:18.000 And now it matters a lot because it's the future of speech and politics and control and all these things.
02:06:22.000 And so all of a sudden, it's these big, important topics.
02:06:25.000 We haven't even talked about warfare.
02:06:26.000 AI is going to really change how weapons work, right?
02:06:28.000 Right.
02:06:29.000 Basically, every important thing happening in the world right now has a technological component to it, right?
02:06:34.000 And it's being altered by the changes that are happening, you know, caused by tech.
02:06:37.000 And so the other argument would be, Mark, like, grow up, like, of course, these are all going to be big fights because you're now involved in all the big issues.
02:06:44.000 And maybe that's just the case.
02:06:46.000 Well, that seems to definitely also be the case.
02:06:48.000 Yeah.
02:06:49.000 It's just, people are always so scared of change, and change today, when we're talking about this kind of change, you're talking about monumental change that happens over a very short period of time.
02:07:00.000 Yep.
02:07:01.000 Yep.
02:07:02.000 Yes.
02:07:03.000 That's a big freakout.
02:07:04.000 Yes.
02:07:05.000 Yeah.
02:07:05.000 Yeah, I mean, what are we looking at in 50 years?
02:07:08.000 Really?
02:07:09.000 Yep.
02:07:11.000 You enjoy it.
02:07:12.000 I do enjoy it.
02:07:13.000 I do enjoy it.
02:07:14.000 I love that you enjoy it, though.
02:07:16.000 Douglas, you know that book, Hitchhiker's Guide to the Galaxy?
02:07:19.000 Douglas Adams, who wrote that book, he once had a formulation.
02:07:22.000 He said this is all generational.
02:07:25.000 He had a different theory.
02:07:26.000 He said it's all generational.
02:07:27.000 It's all age-related.
02:07:28.000 And he said people react to technology in three different ways: if you're below the age of 15, whatever is the new thing is just how the world always worked.
02:07:37.000 If you're between the ages of 15 and 35, whatever is the new thing is exciting and hot and cool and you might be able to get a job and make a living doing it.
02:07:44.000 And if you're above the age of 35, whatever new thing is happening is unholy, right?
02:07:50.000 And it's sure to bring about the downfall of civilization, right?
02:07:53.000 Apocalypse and calamity.
02:07:54.000 I guess that's true in culture.
02:07:56.000 It's true in music.
02:07:57.000 It's true in movies, video games.
02:08:00.000 Yeah.
02:08:01.000 So I think maybe what just has to happen is just time needs to pass.
02:08:05.000 You know, maybe the fight is always, you know, I don't know, it's like whatever, the new thing happens, the fight's always between a bunch of 50-year-olds or something.
02:08:12.000 Do you resist any technology in your own personal life?
02:08:17.000 That is a good question.
02:08:19.000 I don't personally.
02:08:22.000 Having said that, we do have an eight-year-old, and he does get screen time, but it is controlled.
02:08:27.000 So we're a little bit, you know, we use it as a tool.
02:08:31.000 We're not absolutists.
02:08:32.000 Like, we're not, you know, there are some people running around who want to keep their kids off all this stuff, which, by the way, is not the craziest view in the world.
02:08:39.000 Right.
02:08:40.000 But we want him to be, you know, fully up to speed.
02:08:42.000 We want him to be an engineer.
02:08:44.000 You know, not that he has to spend his life doing it, but we want him to know how to use technology and build it.
02:08:49.000 It's also fun for kids.
02:08:51.000 It's just if you teach them discipline and, you know, engage them in other activities so that they do physical things and run around, have fun, be outside.
02:09:00.000 He does MMA. Oh, no kidding.
02:09:02.000 He's doing full Brazilian Jiu-Jitsu.
02:09:04.000 He's doing full MMA. He's doing full sparring.
02:09:08.000 Wow.
02:09:09.000 That's eight.
02:09:09.000 He and his coach dress up in the full body marshmallow man outfits and like wail on each other.
02:09:16.000 Wow.
02:09:16.000 Get on the ground and choke each other out.
02:09:18.000 Okay.
02:09:19.000 Are you enjoying watching that?
02:09:20.000 It's absolutely fantastic.
02:09:22.000 Wow.
02:09:22.000 And he loves it.
02:09:23.000 That's pretty cool.
02:09:24.000 And I keep watching the videos, you know, because he's up against.
02:09:26.000 He's like, you know, half the time he's with an adult sparring.
02:09:28.000 And he's just like, he just goes like right in there.
02:09:31.000 That's crazy.
02:09:32.000 So the tech story that I've been thinking about a lot is the Douglas Adams thing.
02:09:38.000 ChatGPT comes out in December.
02:09:40.000 I play with it for a few months.
02:09:42.000 I'm trying to wrap my head around it.
02:09:43.000 And I'm like, okay, this is good.
02:09:44.000 And so I'm like, okay.
02:09:45.000 And my 8-year-old is super curious and he wants to learn all these things.
02:09:48.000 And he's asking questions all the time.
02:09:50.000 And half the time I don't know the answer.
02:09:51.000 So I'm like, okay.
02:09:52.000 I install it on his laptop.
02:09:54.000 ChatGPT on his laptop.
02:09:56.000 And I set time aside and I sit him down on the couch and I'm like, okay, there's this amazing thing that I'm going to give you.
02:10:02.000 This is the most important thing I've ever done as a father: I've brought fire down from the mountain, and I'm going to give you AI. And you're going to have AI your whole life to be with you and teach you things.
02:10:12.000 And he's like, okay.
02:10:13.000 And I was like, well, you ask it questions and it'll answer the questions.
02:10:17.000 And he's like, okay.
02:10:19.000 And I was like, no, like, this is a big deal.
02:10:23.000 Like, they didn't used to do this.
02:10:25.000 Like, now it does this, and this is amazing.
02:10:27.000 And he's like, okay.
02:10:30.000 And I was like, why aren't you impressed?
02:10:31.000 And he's like, it's a computer.
02:10:33.000 Like, of course you ask it questions that give you answers.
02:10:34.000 Like, what else is it for?
02:10:36.000 And I'm like, okay, you know, I'm old.
02:10:40.000 Kids are going to just have a totally different point of view on this.
02:10:42.000 Right.
02:10:43.000 It's going to be normal to have the answers to things.
02:10:46.000 Yeah, completely normal.
02:10:47.000 And it's going to be, by the way, it's going to be normal.
02:10:49.000 It's going to be exciting.
02:10:51.000 I think it's going to make, I think it's going to be great.
02:10:53.000 Like for kids, I think this is going to be fantastic.
02:10:55.000 Well, the positive aspect, just for informing people on whatever it is, whether it's a medical decision or whether it's a mechanical thing with your car, I mean, that's pretty amazing.
02:11:04.000 One of the fun things you can do with ChatGPT is you can say, explain X to me, and then you can say, explain X to me as if I'm 15. And then you can do it as if I'm 10. And then you can do it as if I'm 5. And it will actually do it.
02:11:18.000 You can actually do it all the way down.
02:11:19.000 It kind of works down to about age three.
02:11:20.000 So you can tell it, explain quantum mechanics to me like I'm a three-year-old.
02:11:23.000 Wow.
02:11:24.000 And like, it will.
02:11:26.000 Really?
02:11:26.000 Yeah, yeah.
02:11:27.000 And so I taught him how to do this because I'm like, you just, you know, you can have it, you can dial it up or down.
02:11:31.000 How does it explain quantum mechanics to a three-year-old?
02:11:33.000 It uses like all these metaphors of like, you know, you've got a stuffed animal over here and a stuffed animal over there and it wiggles and then that one wiggles.
02:11:39.000 Ah.
02:11:39.000 Right?
02:11:40.000 And so it's like, oh, you know, the three-year-old's like, okay.
02:11:43.000 Wow.
02:11:44.000 Why not, right?
02:11:46.000 So, yeah, no, so as a tool, you know, there's all these fights happening. I guess back-to-school is coming up in a couple months here, and there's all these fights already emerging over, like, whether students in the classroom can use ChatGPT, and there's all these sites that claim to tell you whether something's been generated by AI. Yeah.
02:12:04.000 So the teacher, in theory, can screen to see if something's been generated, you know, when a student hands in an essay.
02:12:09.000 In theory, there's a tool that will tell you whether they got it from GPT, but it doesn't actually work.
02:12:14.000 There you go!
02:12:15.000 Here it is.
02:12:15.000 Can you explain quantum mechanics to a three-year-old?
02:12:17.000 Sure, I'll do my best to explain quantum mechanics in a simple way for a three-year-old to understand.
02:12:21.000 Imagine you have a toy that can be in two places at the same time.
02:12:24.000 It's like having a teddy bear that can also be in your bed and in the living room at once.
02:12:28.000 Very strange, right?
02:12:29.000 Well, in the world of very tiny things, like atoms and particles, they can do something similar.
02:12:35.000 They can be in two different places at once, just like your special toy.
02:12:39.000 This is called superposition.
02:12:43.000 Now let's talk about something else.
02:12:44.000 Imagine you have a toy car that can go through walls without crashing.
02:12:48.000 It can pass right through them as if it wasn't even there.
02:12:51.000 That's pretty amazing, isn't it?
02:12:53.000 Well, in the world of tiny things, particles can do something similar.
02:12:58.000 They can pass through solid objects without any problem.
02:13:01.000 This is called tunneling.
02:13:03.000 Wow.
02:13:05.000 Pretty cool.
02:13:06.000 Yeah.
02:13:07.000 Pretty cool.
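Here is a minimal sketch of the "explain it at different ages" prompting pattern described above, assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, topic, and prompt wording are illustrative placeholders, not what was actually used on the show.

```python
# Sketch: ask the same question at several reading levels, as described above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def explain(topic: str, age: int) -> str:
    """Ask the model to explain `topic` as if the reader were `age` years old."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Explain {topic} to me as if I'm {age} years old.",
        }],
    )
    return response.choices[0].message.content

# Dial the explanation up or down, all the way to age three.
for age in (15, 10, 5, 3):
    print(f"--- age {age} ---")
    print(explain("quantum mechanics", age))
```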
02:13:08.000 Yeah, in terms of education, in terms of just informing people, I don't think there's any...
02:13:12.000 I mean, it's one of the most promising things ever, by far.
02:13:17.000 And by the way, this is something for people's entire lives, right?
02:13:19.000 This isn't just for kids, right?
02:13:20.000 This is for anybody who ever wants to learn anything.
02:13:22.000 The real fear, the overall fear, is that what human beings are doing with artificial intelligence is creating something that's going to replace us.
02:13:32.000 You have no fear of that.
02:13:33.000 Yeah, I don't.
02:13:34.000 What about 100 years from now?
02:13:35.000 It's a tool.
02:13:36.000 100 years from now, I don't know.
02:13:37.000 I don't have the first clue what it's going to be 100 years from now.
02:13:39.000 But it's not going to be this.
02:13:40.000 That's the fear is that we're sowing the seeds.
02:13:44.000 Yeah, this is an old, I mean, look, this is an old, this is an old fear.
02:13:47.000 You know, it's like the fear of the end of the world.
02:13:49.000 This is like the fear of, yeah, the non-human...
02:13:52.000 Yeah.
02:13:52.000 Like in Judaism, they have a version of this called the Golem, the sort of legend of the Golem, and it was sort of this thing.
02:13:58.000 It was the Warsaw Ghetto at one point, and this rabbi figures out how to conjure up basically this giant creature made out of clay to go smite the enemies.
02:14:07.000 And then, of course, he comes back around and starts killing his own people.
02:14:10.000 You know, the Frankenstein's monster, right?
02:14:13.000 Same thing.
02:14:15.000 So there's always this, yeah, there's always, and look, it's very human.
02:14:18.000 You know, it's a self-preservation, you know, kind of thing.
02:14:20.000 But, you know, look, we build tools.
02:14:22.000 I mean, what's the thing that makes us different from animals, right, is we have intelligence and we build tools.
02:14:27.000 Tools can be used, by the way, for good and bad things, right?
02:14:30.000 Like a shovel can be used to dig a ditch or, like, brain somebody right over the head.
02:14:34.000 And so all these things, you know, things do have two sides.
02:14:37.000 But over time, you know, the tools that we built have created a much healthier, safer, better world.
02:14:43.000 Isn't that interesting?
02:14:44.000 I mean, look, the human population is up, you know, gigantically as a consequence of all these tools we've developed.
02:14:49.000 So the exact opposite thing has happened from what everybody's been afraid of the whole time.
02:14:53.000 But it is interesting whenever there's a discussion on these things, it's never framed that there's two sides.
02:14:59.000 It's always framed, this is what we're scared of.
02:15:02.000 This is what the danger is.
02:15:04.000 It's not...
02:15:05.000 Part of the beauty of this is that there's danger.
02:15:08.000 And it's also, there's incredible promise that's attached to this as well, like everything else, like matches.
02:15:15.000 No one's advocating for outlawing matches, but you could start a fire.
02:15:18.000 So the original myth on this—so the way the ancients thought about this—so, excuse me, in the Judeo-Christian philosophy, they have this concept of the logos, the word.
02:15:31.000 So it says at the very beginning of the Bible, in the beginning there was the word, the word was truth, and then basically the universe kind of comes from that.
02:15:37.000 So this concept of like the word, which was sort of knowledge, right?
02:15:39.000 And then in Adam and Eve, it was, you know, Adam and Eve eating from the tree of knowledge, right?
02:15:43.000 And then when they ate the, you know, the apple, you know, Satan fooled them in eating the apple, and then they had the knowledge, like, you know, the secret knowledge.
02:15:55.000 The Greeks had a similar concept they called techne, which is the basis for the word technology.
02:15:55.000 And it meant sort of, it meant, it didn't mean technology per se, but it meant sort of knowledge, and particularly knowledge on how to do things, right?
02:16:01.000 So sort of the beginning of technology.
02:16:03.000 And the myth that the Greeks had... so the myth that the Christians have about the danger of knowledge is the Garden of Eden, getting kicked out of the Garden of Eden as the downside, right?
02:16:12.000 That was viewed as a tragedy, right, in that religion.
02:16:15.000 The Greeks had what they called the Prometheus myth, and it had to do with fire, right?
02:16:20.000 And so the myth of Prometheus was a central Greek myth, and Prometheus himself was this godlike kind of character.
02:16:27.000 In the mythology, humans didn't have fire.
02:16:30.000 He went up to the mountain, and the gods had fire, and he took fire from the gods, and he brought it down and gave it to humanity.
02:16:36.000 In the myth, that was how humans learned to basically use fire as a tool.
02:16:42.000 As punishment for bringing fire to humans, in the myth, he was chained to a rock for all eternity, and every day his liver gets pecked out by an angry bird, and then it regenerates overnight, and then it gets pecked out again the next day forever.
02:16:55.000 Like that's how much the gods felt like they had to punish him, right?
02:16:59.000 Because – and of course, what were they saying in that myth?
02:17:02.000 What they were saying is, okay, fire was like the original technology, right?
02:17:05.000 And the nature of fire as a technology is it makes human civilization possible.
02:17:09.000 You can stay warm at night.
02:17:10.000 You can fight off the wolves.
02:17:12.000 You know, you bond the tribe together, right?
02:17:13.000 Every culture has like a fire central thing to it because it's like the center of the community.
02:17:19.000 You can use it, you know, to cook meat, right?
02:17:22.000 Therefore, you can have a higher rate of your kids surviving and so forth, be able to reproduce more.
02:17:27.000 But of course, fire is also a fearsome weapon.
02:17:30.000 And you can use it to burn people alive.
02:17:32.000 You can use it to destroy entire cities.
02:17:35.000 It's fantastic because that idea of a new technology, even in the form of fire, was so scary that they encoded it that deeply in their mythology.
02:17:45.000 I think what we do is we just play that, exactly like you said, we play that fear out over and over again.
02:17:51.000 Because in the back of our head, it's always like, okay, this is the one that's going to get us.
02:17:54.000 Yes, I know the previous 3,000 of these things actually turned out fine.
02:18:00.000 Amazingly, even nuclear weapons turned out fine.
02:18:03.000 Nuclear weapons almost certainly prevented World War III. The existence of nuclear weapons probably saved on the order of 200 million lives.
02:18:10.000 So even nuclear weapons turned out okay.
02:18:13.000 But yet after all of that and all the progress we've made, this is the one that's going to get us.
02:18:18.000 Yeah.
02:18:19.000 It's so interesting because that conversation's never had.
02:18:22.000 We only hear the negative aspects of it.
02:18:25.000 Yeah, that's right.
02:18:26.000 Because these are complex, nuanced discussions.
02:18:28.000 And it has to do with all sorts of aspects of human nature and control and power structures.
02:18:34.000 And it's just...
02:18:36.000 They're very complex conversations.
02:18:38.000 And then people try to hijack them.
02:18:40.000 They get used.
02:18:45.000 There's this concept I talk about, the Baptists and the bootleggers.
02:18:50.000 There were two groups of people in favor of prohibition of alcohol.
02:18:53.000 There were the Baptists who were the social activists who thought alcohol was actually evil.
02:18:57.000 And was destroying society.
02:18:58.000 And then there were the bootleggers, which were the people who were going to make money if alcohol was outlawed.
02:19:03.000 And this is what you often have.
02:19:05.000 There's one of these social movements that wants regulation.
02:19:07.000 You often have this union of the Baptists and the bootleggers.
02:19:10.000 And so the Baptists, I don't mind.
02:19:12.000 The true believers who are worried about X, Y, Z, it's like, okay, let's talk about that.
02:19:16.000 Let's figure that out.
02:19:17.000 It's the bootleggers that drive me crazy.
02:19:20.000 It's just the bootleggers who pick up that argument and then are working behind the scenes to achieve basically self-interested ends.
02:19:26.000 Well, I have hope.
02:19:29.000 I really do.
02:19:30.000 I mean, I like to dwell on the negative aspects of it because it's fun.
02:19:34.000 But one of the things that I have hope in is that there are conversations like this taking place where this is a very kind of unique thing in terms of human history, like the ability to independently distribute something that reaches millions of people that can talk about these things.
02:19:48.000 So this can get out there.
02:19:50.000 And then other people will hear this.
02:19:52.000 And they'll start their own conversations about it.
02:19:54.000 And articles will be written.
02:19:55.000 And more people discuss it and then look at this more nuanced perspective.
02:19:59.000 Because I think it is something that's incredibly complicated.
02:20:02.000 And you can't deny that just what ChatGPT can do right now is extraordinary and very beneficial.
02:20:10.000 Even if they just stopped it right there.
02:20:12.000 Yeah.
02:20:12.000 I mean, just right there, but it's not going to stop there.
02:20:16.000 Want to see something crazy?
02:20:17.000 Yes.
02:20:17.000 Can I ask for something to be pulled up?
02:20:19.000 Sure.
02:20:19.000 Twitter.
02:20:20.000 Go to Twitter.
02:20:21.000 This just came up today.
02:20:23.000 Because we've been talking about text.
02:20:24.000 We've been talking about ChatGPT.
02:20:25.000 So let's look at images for a moment.
02:20:28.000 So we're going to do a search on MidJourney.
02:20:33.000 And then Chihuly, the artist.
02:20:36.000 C-H-I-H-U-L-Y. C-H-I Chihuli.
02:20:41.000 C-H-I-H-U-L-I. Yeah, right there.
02:20:50.000 That one.
02:20:51.000 Okay.
02:20:53.000 That's pretty good.
02:20:53.000 But go two more.
02:20:55.000 No, stay on that one, but go to that image of the shoe right there.
02:20:58.000 There we go.
02:21:00.000 Okay.
02:21:01.000 So this is MidJourney.
02:21:03.000 So this is the app that lets you create images.
02:21:05.000 You describe words and it creates images.
02:21:07.000 It uses the same technology as ChatGPT, but it generates images.
02:21:12.000 The prompt here was something along the lines of a Nike shoe in the style of this artist called Chihuly, who's this famous artist whose art form is basically blown glass.
02:21:22.000 And so this is a Nike shoe rendered in blown glass.
02:21:26.000 Chihuly is famous for using lots of colors, and so this does look exactly like his shoe would have looked.
02:21:30.000 Yeah, this is Chihuly skirt, billowing skirt.
02:21:36.000 Yeah, this is Chihuly statue of an avocado, right?
02:21:42.000 And so it's an avocado made out of stained glass.
02:21:44.000 Okay, so just look here for a moment, though.
02:21:46.000 Go to the avocado for a second.
02:21:50.000 Okay, look at the shadows.
02:21:53.000 Look at the detail in the shadows.
02:21:54.000 Incredible.
02:21:55.000 Look at the detail of the shadows with the sunlight coming through the window.
02:21:58.000 Yeah.
02:21:59.000 Okay, now go back to the shoe, because this one blows my mind.
02:22:02.000 Okay, and then zoom in on the reflection of the shoe in the bottom down there, right?
02:22:06.000 You see, it's like perfect, right?
02:22:08.000 It's like a perfectly corresponding reflection.
02:22:11.000 Okay, this entire thing was generated by MidJourney.
02:22:13.000 MidJourney, the way MidJourney works is it predicts the next pixel.
02:22:17.000 So the way that it worked was it basically ran this algorithm that basically used the prompt and then it ran it through the neural network and then it predicted each pixel in turn for this image.
02:22:25.000 And this image probably has, you know, 100,000 pixels in it or something or a million pixels or something.
02:22:31.000 It's like an autocomplete.
02:22:32.000 It was predicting each pixel.
02:22:34.000 But in the process of predicting each pixel, it was able to render not only colors and shapes and all those things, but transparency, translucency, reflections, shadows, lighting.
02:22:48.000 It trained itself basically on how to do a full 3D rendering inside the neural network in order to be able to successfully predict the next pixel.
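To make the "autocomplete for pixels" idea above concrete, here is a toy sketch of generating an image one pixel at a time, each pixel conditioned on everything generated so far. This is not MidJourney's actual model, which is not public; the stand-in predictor below just blends previously generated pixels with noise to show the control flow, nothing more.

```python
# Toy illustration of "autocomplete for pixels": fill in an image one pixel at a
# time, in raster order, conditioning each new pixel on everything generated so far.
# NOT MidJourney's real model; the "predictor" is a stand-in for a neural network.
import numpy as np

H, W = 16, 16
rng = np.random.default_rng(0)

def predict_next_pixel(generated_so_far: np.ndarray) -> float:
    """Stand-in for a trained model: predict the next pixel from the context."""
    if generated_so_far.size == 0:
        return rng.random()                        # nothing to condition on yet
    recent = generated_so_far[-3:]                 # look at the last few pixels
    return 0.9 * recent.mean() + 0.1 * rng.random()

canvas = np.zeros(H * W)
for i in range(H * W):                             # one pixel at a time
    canvas[i] = predict_next_pixel(canvas[:i])

image = canvas.reshape(H, W)
print(image.shape, float(image.min()), float(image.max()))
```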
02:22:57.000 And how long does something like that take to generate?
02:22:59.000 To generate, when you're running the system today, that would probably be, I'm going to guess, 10 or 15 seconds.
02:23:07.000 There's a newer version of MidJourney, a turbo version that just came out, where I think it cuts it down to a couple seconds.
02:23:12.000 Now, the system that's generating that needed, you know, many years of computing power across many processors to get ready to do the training that took place.
02:23:23.000 But the fact that it can generate that in seconds is— Took a few seconds.
02:23:26.000 Okay, so here's another amazing thing.
02:23:30.000 The price, the cost of generating an image like that versus hiring a human artist to do it, is down by somewhere between a factor of a thousand and ten thousand.
02:23:40.000 If you just kind of run the numbers, to hire an artist to do that at that level of quality would cost on the order of a thousand to ten thousand times more in dollars or, you know, time or human effort than doing it with the machine.
02:23:52.000 The same thing is true of writing a legal brief.
02:23:54.000 The same thing is true of writing a medical diagnosis.
02:23:58.000 The same thing is true of, you know, summarizing a book, like any sort of, you know, knowledge, summarizing a podcast, you know, any of these things, drafting questions for a podcast.
02:24:08.000 You know, basically pennies, right, to be able to do all these things versus, you know, potentially $100 or $1,000 to have a person do any of these things.
02:24:17.000 So we've dropped the cost of a lot of white-collar work by a factor of a thousand.
02:24:22.000 Guess what we haven't dropped the cost of at all?
02:24:25.000 It's all the blue-collar work.
02:24:28.000 So we do not have today a machine that can pick strawberries that is less expensive than hiring people to pick strawberries.
02:24:34.000 We do not have a machine that can pack your suitcase.
02:24:37.000 We do not have a machine that can clean your toilet.
02:24:40.000 We don't have a machine that can cook you dinner.
02:24:41.000 We don't have any of those things.
02:24:43.000 For those things, the cost of the machine and the AI and everything else to do those things is far in excess of what you can simply pay people to do.
02:24:51.000 So there's the great twist here, which is that in all of the economic fears around automation, the fear has always been that it's the mechanical work that gets replaced, because the presumption is that it's the people working with their brains...
02:25:03.000 That's certainly not what the computer's going to be able to do.
02:25:05.000 Certainly, the computer's not going to be able to make art.
02:25:07.000 So the computer's going to be able to pick strawberries or it's going to be able to make cheeseburgers, but obviously it's not going to be able to make art.
02:25:11.000 And it actually turns out the reverse is true.
02:25:13.000 It's much easier to make the image of that shoe than it is to make you a cheeseburger.
02:25:17.000 Of course, because it has to be automated physically.
02:25:21.000 It has to be able to move around.
02:25:22.000 But not just physically, which is like, okay, what happens if the stove catches on fire?
02:25:30.000 How does the suitcase unclasp?
02:25:33.000 Suitcases unclasped differently.
02:25:35.000 Yes, all the real-world stuff.
02:25:37.000 How do you plumb a toilet?
02:25:39.000 What happens when you get in there?
02:25:41.000 And what happens if the plumbing is all screwed up?
02:25:43.000 The great irony and twist of all this is when the breakthrough – we all thought in the industry, we all thought when the breakthrough arrived, it would arrive in the form of robotics that would cause – the fear would be it would cause unemployment among basically the quote-unquote lower-skilled people or less educated people.
02:25:58.000 It turns out to be the exact opposite.
02:26:00.000 Well, that's Andrew Yang's take on automation, right?
02:26:03.000 The need for universal basic income.
02:26:05.000 Yeah.
02:26:06.000 Well, yes.
02:26:06.000 Therefore, the need for communism.
02:26:10.000 Which is immediately where it goes.
02:26:11.000 But before you think about that, though, think, though, about what this means in terms of productivity.
02:26:15.000 So think in terms of what this means about what people can do.
02:26:18.000 So think about the benefit, including the economic benefit.
02:26:22.000 Everybody always thinks of this as producer first.
02:26:24.000 You want to start by thinking of this as consumer first, which is, like, as a customer of all of the goods and services that involve knowledge work, the price on all of those things is about to drop on the order of, like, a thousand X. Right, so everything that you pay for today that involves white-collar work, the prices on all those things are going to collapse.
02:26:40.000 By the way, the collapse in the prices is why it doesn't actually cause unemployment, because when prices collapse, it frees up spending power, and then you'll spend that same money on new things, and so your quality of life will rise, and then there will be new jobs created that will basically take the place of the jobs that got destroyed.
02:26:55.000 But what you'll experience is, hopefully, a dramatic fall in the cost of the goods and services that you buy, which is the equivalent of basically giving everybody a raise.
02:27:04.000 What about artist rights?
02:27:06.000 Because one of the arguments about art is that you're taking this MidJourney, you're taking this AI program, and it's essentially stealing the images and styles of these artists and then compiling its own.
02:27:21.000 But that the intellectual work, the original creative work, was responsible for generating this in the first place.
02:27:28.000 So even though you're not paying the illustrator, you're essentially using that illustrator's creativity and ideas to generate these images through AI. And in fact, we just saw an example of that.
02:27:38.000 We actually named a specific artist, Chihuly, who certainly did not get paid.
02:27:42.000 Right, as a consequence of that.
02:27:43.000 And the algorithm knew who Chihuly was, so it had clearly been trained on his art before.
02:27:49.000 Otherwise, the algorithm would not have known to do it in that style.
02:27:52.000 So I think this is going to be a very big fight.
02:27:54.000 I think this is probably going to go ultimately to the Supreme Court.
02:27:57.000 Those cases are just starting now.
02:27:59.000 I think in the first one, Getty Images, which owns a big catalog of photography, is actually suing this company, MidJourney.
02:28:05.000 Interesting.
02:28:05.000 So that has begun.
02:28:08.000 The argument for why what's happening is improper is exactly what you said.
02:28:13.000 The argument for why it's actually just fine and in fact not only should be legal but actually is legal under current copyright law is what in copyright law is called the right to make transformative works.
02:28:24.000 And so you have the total right as an artist or creator to make any level of creative art that you want or expression that is inspired by or the result of what they call transforming prior works.
02:28:37.000 So you have the right to do homages.
02:28:40.000 You have the right to do...
02:28:41.000 I mentioned earlier the guy who wrote the other version of the book, 1984. He had the right to do that because he was transforming the work.
02:28:48.000 You could make your version of what you think of Picasso would look like.
02:28:51.000 Exactly.
02:28:52.000 You are free to draw in the style of Picasso.
02:28:54.000 You are not free to copy a Picasso, but you are free to study all the art Picasso did, and as long as you don't misrepresent it as being a Picasso, you can generate all the new Picasso-like art.
02:29:04.000 Are you free to copy a Picasso exactly if you're telling everybody you're copying a Picasso?
02:29:11.000 I don't think...
02:29:12.000 No.
02:29:12.000 The artist...
02:29:13.000 I mean, copyright at some point expires, but that aside, let's assume copyright lasts.
02:29:18.000 Let's just assume for the moment copyright's forever, just to make it easy to talk about.
02:29:22.000 The artist can copyright that particular image.
02:29:25.000 The screenwriter can copyright that particular screenplay.
02:29:28.000 But if you're not generating income from it?
02:29:32.000 Oh, I don't know.
02:29:33.000 There's another carve-out in the copyright law for non-commercial use.
02:29:37.000 So there's like academic use.
02:29:38.000 By the way, there's also protection for satire.
02:29:42.000 There's protection for a variety of things.
02:29:44.000 But the one that's relevant here specifically is the transformative one, and the reason I say that is because Chihuly never made a shoe.
02:29:52.000 So there's no image in the training set that was a Chihuly shoe, certainly not a Chihuly Nike shoe, and certainly not that Chihuly Nike shoe.
02:29:59.000 And so the algorithm produced an homage, would be the way to think about it, right?
02:30:04.000 And as a consequence of that, I think the way through your copyright law, you're like, okay, that's just fine.
02:30:09.000 And I think the same thing is true with ChatGPT for all the texts that it is.
02:30:13.000 By the way, the same thing is happening at ChatGPT.
02:30:14.000 The newspaper publishers are now getting very upset because they have this fear.
02:30:19.000 They have a fear that people are going to stop reading the news because they're just going to ask ChatGPT what's happening in the world.
02:30:24.000 Right, and they probably will.
02:30:25.000 And there are lots of news articles that are in the internet training data that went into training ChatGPT, right, including, you know, updating it every day.
02:30:34.000 Well, and also if you can generate an objective news source through ChatGPT, because that's really hard to do.
02:30:40.000 So one of the fun things that these machines can do, and you can do this with ChatGPT today, is what's called sentiment analysis.
02:30:49.000 You can ask it: is this news article slanted to the left or the right?
02:30:54.000 Is the emotional tone here angry or hostile?
02:30:58.000 And you can tell it to rewrite news articles to take out the bias.
02:31:02.000 And you can take out any political bias and take out any emotional loading.
02:31:06.000 And it will rewrite the article to be as objective as it can possibly come up with.
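A minimal sketch of the analyze-then-rewrite pattern just described: first ask for a read on the article's slant and tone, then ask for a rewrite with the bias and emotional loading stripped out. This again assumes the OpenAI Python SDK; the model name, prompts, and file name are placeholders, not a specific product feature.

```python
# Sketch of the pattern described above: (1) ask for a slant/tone read on an
# article, (2) ask for a rewrite with political bias and emotional loading removed.
# Assumes the OpenAI Python SDK (openai>=1.0); model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def analyze_slant(article: str) -> str:
    return ask(
        "Is this news article slanted to the left or the right? "
        "Is its emotional tone angry or hostile? Answer briefly.\n\n" + article
    )

def rewrite_neutral(article: str) -> str:
    return ask(
        "Rewrite this news article to remove any political bias and any "
        "emotional loading, keeping only the factual claims.\n\n" + article
    )

article = open("article.txt").read()  # placeholder: any article text
print(analyze_slant(article))
print(rewrite_neutral(article))
```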
02:31:09.000 And again, here's the question.
02:31:11.000 The result of that, is that still copyrighted?
02:31:15.000 Is that a copyrighted derivative work of the original news article, or is that actually now something new that is a transformation of the thing that existed before, but it's different enough that it's actually fine for the machine to do that without copyright being a problem?
02:31:29.000 People, when they encounter objective information like objective news, they're always going to look for someone who has an analysis of that news.
02:31:38.000 Then they want a human perspective on it, which is very interesting.
02:31:44.000 How AI fits into that.
02:31:46.000 So one of the things you can do...
02:31:47.000 So you can ask it just straight up.
02:31:49.000 Give me the left-wing view on this or give me the right-wing view on this.
02:31:51.000 Or by the way, you can also...
02:31:52.000 I do this a lot.
02:31:53.000 You can create two personas.
02:31:55.000 You can say, I want a left-winger and a right-winger and I want them to argue this out.
02:31:57.000 Oh, wow.
02:31:58.000 Right?
02:31:58.000 It'll do that.
02:31:59.000 But here's another thing it'll do is you can tell it to write in the style of any person whose sensibility you admire.
02:32:05.000 Right?
02:32:06.000 So take somebody who you really...
02:32:08.000 Take RFK. You could say, analyze this topic for me.
02:32:13.000 Adopt the persona of RFK and then analyze this topic for me.
02:32:16.000 And it will use all of the training data that it has with respect to everything that RFK has ever done and said and how he looks at things and how he talks about things and how he, you know, whatever does whatever he does.
02:32:26.000 And it will produce something that odds are going to be pretty similar to what the actual person is going to say.
02:32:30.000 But you can do the same thing for Peter Hotez.
02:32:32.000 You can do the same thing for, you know, authority figures.
02:32:34.000 You can do the same thing for, what would Jesus say, right?
02:32:38.000 Literally.
02:32:39.000 Literally, what would Jesus say?
02:32:40.000 And it will, again, it's not Jesus saying it, but it's using the complete set of text and all accounts of everything Jesus ever said and did.
02:32:48.000 And it's going to produce something that at least is going to be reasonably close to that.
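A minimal sketch of the persona patterns described above: answering in the voice of a named figure, and staging a debate between two personas. Same assumptions as before (OpenAI Python SDK); the model name, personas, and topics are illustrative only, and the output is the model's pastiche of a persona, not the actual person.

```python
# Sketch of the persona prompting described above: (1) answer in the voice of a
# named figure, (2) have two personas argue a topic. Assumes the OpenAI Python
# SDK (openai>=1.0); model, personas, and topics are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def as_persona(persona: str, question: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"Adopt the persona of {persona} and answer in that voice."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def debate(topic: str, persona_a: str, persona_b: str, rounds: int = 2) -> str:
    prompt = (
        f"Stage a debate about: {topic}. Speaker A is {persona_a}. "
        f"Speaker B is {persona_b}. Alternate speakers for {rounds} rounds each, "
        f"labeling every turn."
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(as_persona("Abraham Lincoln", "What should a student understand about the Civil War?"))
print(debate("how to regulate AI", "a left-wing commentator", "a right-wing commentator"))
```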
02:32:51.000 What a bizarre new world we're in the middle of right now.
02:32:55.000 Exactly.
02:32:56.000 And so you can channel – it's a fascinating thing.
02:32:59.000 You can channel historical figures.
02:33:01.000 You can channel Abraham Lincoln.
02:33:02.000 Like, okay, here's another example for how kids are going to do this.
02:33:06.000 It's like, okay, it's time to learn about the Civil War.
02:33:07.000 Okay, let's talk to Abraham Lincoln.
02:33:10.000 Let's be able to ask him questions.
02:33:11.000 Right?
02:33:12.000 And again, it's not like you're not, of course, actually talking to Abraham Lincoln, but you are talking to the sum total of all written expression, all books ever written about Lincoln.
02:33:19.000 Wow.
02:33:20.000 And he's talking back at you, right?
02:33:21.000 And so, yeah, it'll happily do that for you.
02:33:25.000 Just what is a 20-year-old going to look like that's born today?
02:33:29.000 When they hit 20, like, what kind of access to information, view of the world, understanding of things, instantaneous knowledge?
02:33:38.000 Yep.
02:33:40.000 What, if any, thoughts do you have on things like Neuralink and the emerging technologies of human neural interfaces?
02:33:50.000 Yeah, so this is what the AI safety people describe as like the out.
02:33:56.000 Or the fallback position or something, which is okay.
02:33:59.000 If you can't beat them, join them.
02:34:02.000 Maybe we just need to upgrade everybody's intelligence.
02:34:04.000 Maybe the only way to do that is to kind of fuse man and machine.
02:34:07.000 Maybe.
02:34:09.000 Yeah, look, the technology is very serious technology.
02:34:13.000 The technology is for real that they're working on.
02:34:15.000 They and people like them, it's all for real.
02:34:19.000 People have been working on the ideas underneath this for like 30 years, things like MRIs.
02:34:24.000 And by the way, the thing on this is there's a lot of immediate healthcare applications, so like people with Parkinson's, people who have been paraplegics or quadriplegics being able to restore the ability to move, being able to fix things that are broken in the nervous system, able to restore sight to people who can't see if there's some breakdown.
02:34:43.000 So there's a lot of very straightforward medical applications that are potentially a very big deal.
02:34:48.000 And then there's the idea of like the full actual fusion where, you know, a machine knows what you're thinking and it's able to kind of think with you or you're able to access it and think through it.
02:34:56.000 I would just say it's exciting.
02:34:59.000 The field is moving pretty quickly at this point, but we're I think still, I'm going to guess, 20 years out or something from anything that would resemble what you would hypothesize it to be like.
02:35:12.000 But maybe I'll be surprised.
02:35:13.000 20 years ago was 2003. That's not that long ago.
02:35:16.000 That seems so recent.
02:35:18.000 Time does fly.
02:35:19.000 Yeah, that seems very recent.
02:35:21.000 There have been papers in the last six months, there are actually people using this technology, specifically the same kind of thing that we just saw with the shoe.
02:35:33.000 People claim to now know how to do a brain scan and be able to pull out basically the image that you're thinking of as an image.
02:35:40.000 Now, this is brand new research, and so people are making a lot of claims on things.
02:35:43.000 I don't know whether it's actually real or not, but there's a bunch of work going into that.
02:35:47.000 There's a bunch of work going into whether it can basically get words out.
02:35:51.000 If you're thinking about a word, be able to pull the word out.
02:35:54.000 Yeah, okay.
02:35:57.000 So AI recreates what people see by reading their brain scans.
02:36:01.000 A new artificial intelligence system can reconstruct images a person saw based on their brain activity.
02:36:09.000 Yeah.
02:36:09.000 So the claim here is that those would be the original images on top.
02:36:12.000 And as you're looking at them, it'll do a brain scan, and it'll feed the result of the brain scan into a system like the one that does the shoes.
02:36:18.000 Wow.
02:36:19.000 And then that system produces these images.
02:36:22.000 Wow.
02:36:22.000 That's pretty damn close.
02:36:23.000 Yeah, so it's like an extrapolation off of the image generation stuff that we've been watching.
02:36:28.000 Yeah, it's pretty close.
02:36:29.000 Now, excuse me, this is brand new.
02:36:33.000 Is this real?
02:36:37.000 Right, is it like the Samsung moonshot?
02:36:39.000 Yeah, is it repeatable?
02:36:42.000 By the way, do you need to be strapped to a million dollars worth of lab equipment?
02:36:45.000 Right.
02:36:47.000 These things can take a while to get to work.
02:36:50.000 Pretty fascinating if it's applicable, though.
02:36:52.000 If that really can happen.
02:36:53.000 Hypothetically, yeah.
02:36:54.000 Exactly.
02:36:54.000 Wow.
02:36:56.000 Wow.
02:36:56.000 Exactly.
02:36:57.000 It's a wild world.
02:36:59.000 Mm-hmm.
02:36:59.000 Yeah.
02:37:00.000 The possibilities are very fascinating because it just seems like we're about to enter into a world that's so different than anything human beings have ever experienced before.
02:37:13.000 Yeah.
02:37:13.000 All technology-driven.
02:37:15.000 Yeah.
02:37:17.000 You're in the middle of it, buddy.
02:37:19.000 Enjoying it?
02:37:19.000 Oh, yes.
02:37:21.000 Oh, yeah.
02:37:21.000 Big time.
02:37:22.000 Anything more?
02:37:23.000 Anything more?
02:37:26.000 Maybe the picture I'd leave you with, you mentioned the 20-year-old who has grown up having had this technology the whole time and having had all their questions answered.
02:37:33.000 I think there's actually something even deeper.
02:37:38.000 The AI that my 8-year-old is going to have by the time he's 20, it's going to have had 12 years of experience with him.
02:37:45.000 So it will have grown up with him.
02:37:48.000 Be a good life coach.
02:37:49.000 Yes.
02:37:51.000 It will know everything he's ever done.
02:37:53.000 It will know everything he ever did well.
02:37:55.000 It will know everything he did that took real effort.
02:37:57.000 It will know what he's good at.
02:37:58.000 It will know what he's not good at.
02:37:59.000 It will know how to teach him.
02:38:01.000 It will know how to correct for his, you know, whatever limitations he has.
02:38:05.000 It will know how to maximize his strengths.
02:38:09.000 It'll know what he wants.
02:38:10.000 I wonder if he'll understand how to maximize happiness.
02:38:14.000 Yeah.
02:38:14.000 Like, I wonder if it could say, Mark, you are working too much.
02:38:18.000 If you just worked one less day a week, you'd be 40% happier and only 10% less productive.
02:38:24.000 Yep.
02:38:25.000 Well, if you're wearing an Apple Watch, right, it will have your pulse, and it'll have your blood pressure, and it'll have all these things, and it'll be able to say, you know, look, when you were working on this, you were relaxed.
02:38:34.000 Your serotonin level, you know, your serotonin or your whatever, oxytocin levels were high.
02:38:39.000 Serotonin levels were high.
02:38:40.000 When you were doing this other thing, your cortisol levels were high.
02:38:42.000 You shouldn't do that.
02:38:43.000 Let's figure out a way to have you not have to go through that again.
02:38:46.000 Sure.
02:38:46.000 Yeah.
02:38:47.000 Yeah, absolutely.
02:38:48.000 Yeah.
02:38:49.000 By the way, you know, sleep.
02:38:50.000 You know, you didn't sleep well.
02:38:55.000 It'll have all that.
02:38:57.000 They hit college or they hit the workplace and they'll have an ally with them.
02:39:03.000 Even before there's any sort of actual physical hookup, they'll have basically a partner that'll be with them, whose goal in life will be to make them as happy and satisfied and successful as possible.
02:39:15.000 Pretty fascinating stuff.
02:39:17.000 How about that?
02:39:18.000 Well, I'm interested and I'm going to be paying attention.
02:39:22.000 I really appreciate you coming in here and explaining a lot of this stuff.
02:39:25.000 It made me actually feel better.
02:39:27.000 And it actually gives me hope that there's possibly, especially with real open source, a way to avoid the pitfalls of the censorship that seems likely to be at least attempted to be implemented.
02:39:39.000 Yep.
02:39:40.000 Yep.
02:39:40.000 Me too.
02:39:41.000 All right.
02:39:41.000 Good.
02:39:42.000 Thank you, Mark.
02:39:42.000 Appreciate you.
02:39:43.000 Thank you, Joe.
02:39:43.000 Bye, everybody.