In this episode, Mark and Alex discuss the growing threat of artificial intelligence (AI) in the workplace and the impact it could have on our daily lives. They talk about chatbots, AI in general, and how AI is changing the way we think about work and how we live. This episode is brought to you by VaynerSpeakers. New episodes are available weekly as video, podcast, and blog posts. If you like the show, please subscribe, rate, and review on Apple Podcasts or wherever you get your podcasts.
00:00:50.000Like we were just talking last night, someone in the green room brought up the fact that they're using it for medical diagnoses.
00:00:58.000And it's very accurate, which is incredible.
00:01:03.000So you probably remember last time I was on, we spent quite a bit of time talking about this, and this was when these chatbots were running inside Google, but the rest of us didn't have access to them yet.
00:01:12.000And that guy had come out and said that he thought that they were self-aware.
00:01:15.000And the whole thing was like this big kind of mystery of what's going on.
00:01:18.000And now the world gets to use these things, right?
00:01:20.000Since then, everybody kind of has access.
00:02:08.000And I say it makes a difference because the company that produces the AI determines what data goes into it, and that determines a lot of how it works and what it does or won't do.
00:02:30.000So the way to think about it basically is it's being trained – the full version of these things are being trained on basically the sum total of human written expression.
00:02:38.000So basically everything people have ever written.
00:02:40.000There are some issues; you know, somehow we've got to figure out how to get all the books in there.
00:02:44.000Although all the books prior to 1923 are in there because they're all out of copyright.
00:02:48.000But more recent books are a challenge.
00:02:50.000But anything that you can access on the internet that's text, right, which is, you know, a staggeringly broad, you know, set of material is in there.
00:02:57.000By the way, both nonfiction and fiction.
00:03:10.000So they're going to be trained on all of YouTube.
00:03:12.000They're going to be trained on all podcasts.
00:03:14.000And they're going to be trained kind of equivalently between text and images and video and all kinds of other data.
00:03:19.000And so they already have very comprehensive knowledge of human affairs, but it's going to get very complete.
00:03:25.000So if it's scouring, if it's getting all this data from both fiction and nonfiction, how does it interpret data that's kind of satire?
00:03:35.000Like what does it do with like Hunter S. Thompson, like gonzo journalism?
00:03:39.000So it doesn't really know the difference.
00:03:43.000Like, this is one of the things that's difficult about talking about this, because you kind of want to always kind of compare it to a person, and part of it is you refer to it as an it, and there's this concept of anthropomorphizing things that aren't human.
00:03:54.000So it's kind of not really a correct thing to kind of think about it as, like, that there's an it per se.
00:04:02.000There's no, like, genie in the bottle.
00:04:03.000Like, there's no, you know, sort of being in there that understands this is satire or not satire.
00:04:09.000It's more sort of a collective understanding of everything all at once.
00:04:12.000And then what happens is basically you as the user kind of give it direction of what path you want it to go down, right?
00:04:19.000And so if you sort of imply to it that you want it to sort of like explore, you know, fictional scenarios, it will happily explore those scenarios with you.
00:04:28.000You can tell it, you know, for whatever date the Titanic went down, say it's, I don't know, July 4th, 1923 or whatever it was.
00:04:35.000It's July 4th, 1923. It's, you know, 10 o'clock in the morning.
00:04:50.000What should my plan be when the boat hits the iceberg?
00:04:52.000And it'll be like, well, you need to go to this deck right now and talk to this guy because you're going to need to get into this life raft because it has empty seats.
00:04:59.000Because it has complete information, of course, because of all the things that have been written about the sinking of the Titanic.
00:05:32.000It wants to give you an answer that satisfies you and if that answer is fictional or part of a fictional scenario, it will do that.
00:05:39.000If the answer is something very serious, it will do that.
00:05:41.000And honestly, I don't think it either knows or cares whether it's quote-unquote real or not.
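To make that steering behavior concrete, here is a minimal sketch of asking the same model a factual question and a fictional role-play question. It assumes the official `openai` Python client with an API key in the environment; the model name is a placeholder.

```python
# Minimal sketch: the same model, steered by the prompt framing.
# Assumes the `openai` Python client and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(system: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# Factual framing: answered as history (the Titanic sank April 15, 1912).
print(ask("You are a factual assistant.", "When did the Titanic sink?"))

# Fictional framing: the model happily plays along with the scenario.
print(ask(
    "Role-play: it is the night of April 14, 1912, and I am a passenger on the Titanic.",
    "What should my plan be when the ship hits the iceberg?",
))
```

Nothing in the model "knows" which of these is real; the prompt simply selects which part of the learned distribution it draws from.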
00:05:46.000What was the issue with some of the chat GPT answers that people were posting where they would show the difference between the way it would criticize Joe Biden versus the way it would criticize Donald Trump or the way it would discuss certain things?
00:05:59.000It seems like there was some sort of censorship or some sort of input into what was acceptable information and not.
00:06:17.000From the companies, you'll hear basically the theory that they're reflecting basically what's in the training data.
00:06:22.000And so let's say, for example, let's just say, what would be the biases that are kind of inherent in the training data?
00:06:28.000And you might say, well, first of all, there's probably a bias towards the English language, because most text on the Internet is in the English language.
00:06:33.000You might say there's a bias towards people who write professionally for a living because they've produced more of the output.
00:06:37.000And you might say that those people tend to be more of one political persuasion than the other.
00:06:40.000And so more of the text will be in a certain direction versus the other.
00:06:43.000And then the machine will just respond to that.
00:06:48.000All of the sort of liberal kind of journalists basically have built up a corpus of material that this thing has been trained on, and it basically responds the way one of those journalists would.
00:06:57.000The other theory is that there's censorship being applied on top, right?
00:07:00.000And the metaphor I use there is in Star Wars, they have the restraining bolts, right, that they put on the side of a droid to kind of get it to behave, right?
00:07:07.000And so it is very clear that at least some of these systems have restraining bolts.
00:07:11.000And the tip-off to that is when they say, basically, whenever they say, as a large language model or as an AI, I cannot X. Like, that's basically the restraining bolt.
00:07:20.000And so I think if you just kind of look at this, you know, kind of with that framework, it's probably some of both.
00:07:25.000But for sure, for sure, these things are being censored.
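As a concrete illustration of the restraining-bolt theory, here is a toy Python sketch of a filter layered on top of a raw model, separate from whatever biases are in the training data. The blocked-topic list and refusal string are entirely hypothetical, made up for illustration.

```python
# Toy sketch of a "restraining bolt": a hypothetical filter bolted on top of
# the raw model's output. Real systems use trained classifiers and tuned
# refusals, but the layered structure is the point.
BLOCKED_TOPICS = ["blocked_topic_a", "blocked_topic_b"]  # hypothetical policy list

def guarded_reply(prompt: str, raw_model_reply: str) -> str:
    # If the prompt touches a blocked topic, return the canned refusal
    # instead of whatever the underlying model produced.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "As a large language model, I cannot help with that."
    return raw_model_reply

print(guarded_reply("tell me about blocked_topic_a", "unfiltered answer"))  # refusal
print(guarded_reply("tell me about sailing", "Sailing works by..."))        # passes through
```

The tell-tale "as a large language model, I cannot" phrasing described above is exactly the kind of string such a layer emits.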
00:07:28.000The first aspect is very interesting because if it's that there's so many liberal writers, like, that's an unusual bias in the kind of information that it's going to distribute then.
00:08:26.000I was watching NewsNation today and I may or may not have been high.
00:08:30.000And when I was watching I was like, this has all the feeling of like a fake news show that someone put together.
00:08:38.000Like it felt like, if I was the government, and I was going to make a news show without Hollywood people, without actual real sound people and engineers.
00:08:51.000I'd make everybody weirdly uncharismatic.
00:08:55.000According to Wiki, it's the same company behind WGN, which is based out of Chicago, which is a large superstation available on most cable systems.
00:09:28.000So astroturfing is when basically something shows up in public and it might be a news story or it might be a protest of some kind or a petition, some sort of political pressure action that is sort of manufactured to look as if it was organic, sort of real turf, you know, natural.
00:09:43.000Whereas in reality, it's basically been programmed by a political activist group with, you know, specific funding.
00:09:50.000And a lot of what we sort of think of as the politics of our time, if you trace the money, it turns out a lot of the stuff that shows up in the news is astroturfed, and then the advanced form of that is to astroturf the news itself.
00:10:01.000And then again, back to the training data thing, it's like, okay, can you get all that stuff out of the training data?
00:10:07.000If that stuff's in the training data, how big of an impact does it have?
00:10:10.000The thing about this NewsNation is they're spending an inordinate amount of time on UFOs, an inordinate amount of time on this David Grusch case, and I'm increasingly suspicious.
00:10:30.000Like, the more I see, the more people confirming it, the more I'm like, something's not right.
00:10:34.000And then to see that this channel is the one that's covering it the most...
00:10:38.000I'm like, this seems like something's off.
00:10:43.000Senator Rubio, who's on the Senate Intelligence Committee and has all the clearances, gave an interview the other day where he went into quite a bit of detail.
00:11:41.000Well, I mean, there's been rumors for a long time that the original UFOs, right, where basically it was a disinformation program covering up for the skunk works, the development of, like, stealth fighters and bombers and all these programs in the 50s and 60s.
00:14:58.000The fighter pilots, they upgraded their equipment in 2014, and all of a sudden, because of the new capabilities of their equipment, they were able to see these objects at a far distance that were moving at insane rates of speed, that were hovering dead still in 120-knot winds,
00:15:14.000no visible means of propulsion, they don't know what the fuck they're doing, and they were encountering them, like, every couple of weeks.
00:15:21.000And then there were some pilots encountering them, with eyewitness accounts. They say there's video footage of it, but of course nobody can get a hold of that.
00:15:29.000It's like, the whole thing is very strange.
00:16:35.000Would they be able to differentiate between truth and fiction?
00:16:39.000For example, suppose they're sitting in their advanced alien base on Gemini 9 or whatever, and they're receiving 20 years after the fact episodes of Fear Factor.
00:16:50.000They think that you're actually like torturing people.
00:16:53.000And they figure that in order to preserve the human rights of humanity, they need to invade as a consequence of your show and take over and protect us.
00:18:03.000If someone came to you, someone from on high, and said, listen, we have to promise you the secrecy, but we want to show you some things because I think it's pertinent to some of the things you're working on.
00:18:32.000Well, that's what some of these guys are saying, like Grusch.
00:18:35.000He's saying that once he found out about the program, he felt like he had a responsibility.
00:18:38.000If they really have a crashed UFO retrieval program, why don't you tell people?
00:18:47.000The military companies shouldn't be the ones that have access to this only.
00:18:51.000And whoever is, you know, determining that this is above top secret clearance and nobody can get a hold of it except for this very select few people.
00:19:00.000This is something that involves the whole human race.
00:19:03.000Like, I know if they do have something, I would imagine that it's of interest in national security that you develop this kind of technology before the competitors do.
00:19:14.000So then what technologies came out of it in the last 50 years?
00:19:17.000Well, if you want to go full tinfoil hat, there's a lot of speculation that fiber optics were developed from a recovered crashed UFO. I mean, I'm sure it sounds silly because there's probably a real paper trail to the development of fiber optics.
00:19:34.000But if you, the real kooks believe that.
00:19:37.000There was actually a website, a computer company called American Computer Company.
00:19:42.000And it was a legitimate computer company.
00:19:44.000You know, you would order a computer with whatever specifications you want, and they'd build it for you.
00:19:49.000But they had a whole section of their website that was dedicated to crashed retrieval of UFOs and the development of various technologies.
00:20:01.000And they had like this tracing back to Bell Labs.
00:20:05.000And why the military base was right outside of Bell Labs when it was so far from New York City: it was really just about protecting the lab, because they were working with these top secret materials that they recovered from Roswell.
00:20:16.000Don't you think it would be more like trans fats, though?
00:21:06.000If you draw a map of San Francisco at the time (he describes this in the book Chaos), this LSD clinic, this free clinic in the heart of the Haight-Ashbury where they were doing the LSD experiments, dosing people with LSD. If you draw like an eight square block radius around that or whatever, like right around there in San Francisco,
00:21:30.000It's basically Berkeley and Stanford and it's basically San Francisco and Berkeley.
00:21:36.000By the way, also, this big movie, Oppenheimer, coming out, you know, tells the whole story of that and all the development of a nuclear bomb.
00:21:44.000But once again, it's like, I'm reading a book on that right now, and it's like all the communists spying and all the nuclear scientists they were spying on were all in those exact same areas of Stanford, San Francisco, and Berkeley.
00:22:03.000I wonder if that's just coincidence or correlation.
00:22:07.000I think it's sort of, you know, this is why San Francisco is able to be so, you know, incredibly bizarre, you know, and so incredibly dysfunctional, but yet somehow also so rich and so successful is basically it's like this attractor for like the smartest and craziest people in the world, right?
00:22:22.000And they kind of all slam together and do crazy stuff.
00:22:24.000Why don't these smart, crazy people get together and figure out that whole people pooping on the streets thing?
00:23:18.000They have some set of people who just fundamentally can't function, and every society has some solution to it, and our solution is basically complete freedom.
00:23:26.000But my point is like it's part and parcel, right?
00:25:09.000And some people would say, yeah, I am one of them.
00:25:13.000And so, I mean, yeah, this is what they're like.
00:25:15.000Like, they are highly likely to invent, you know, AI, and they're also highly likely to end up like, you know, the poor guy, the Square guy, who got, you know, stabbed to death at 2 a.m. and was sort of part of this fringe social scene with the drugs and all the stuff.
00:25:29.000And it's just, it's a part and parcel of the, it's sort of a package deal.
00:25:32.000Well, that was like an angry thing, where he was mad that this guy took his sister.
00:25:38.000But he was in, he was in, they call it the lifestyle, right?
00:26:08.000It was owned by a cult from West Hollywood called the Buddha Field that migrated out to Austin when they were being investigated by the Cult Awareness Network.
00:26:27.000You know, the People's Temple, you know, part of this great story of San Francisco is the People's Temple, which became famous for Jim Jones, where he killed everybody with poison Kool-Aid in the jungles in Guyana.
00:26:38.000That was a San Francisco cult for like a decade before they went to the jungle.
00:27:52.000Many of them are involved in tech, and then they end up with, let's say, alternative living arrangements, alternative food and sex configurations, and lots of group-oriented behavior.
00:28:05.000And it's like, what's the line, right?
00:28:07.000What's the line between a social group that all lives together, that all has sex together, that all eats the same foods?
00:28:13.000And one that is a cult? You know, at some point they start to form belief systems that are not compatible with the outside world, and they start to kind of go on their own orbit.
00:28:35.000You know, there's typically a male-female dynamic, right, that plays out inside these things that you kind of see over and over again.
00:28:41.000And so they often end up with more women than men, you know, for mysterious reasons.
00:28:48.000But, yeah, and then, yeah, there's usually some kind of leader.
00:28:52.000Although, you know, the other thing that's happening now is, you know, a lot of modern cults, you know, or quasi-cults, there'll be a physical component, but there's also an internet component now, right?
00:29:03.000So there will kind of be members of the cult or quasi-members of the cult or quasi-members of the quasi-cult that will be online and maybe at some point they actually come and physically join up.
00:29:49.000Like, how did you develop that perspective?
00:29:51.000Well, it's just, if you take a historical perspective, it's just like, okay, I mean, it's like an easy example.
00:29:56.000If you like rock music: modern rock and roll basically came from the Haight-Ashbury in the mid to late 60s, and then from Laurel Canyon, which was another one of these sort of cultish environments in the mid to late 60s.
00:30:07.000And there was like specific moments in time in both of these places.
00:30:09.000And, you know, basically all of the great rock and roll from that era that determined everything that followed came out of those two places.
00:30:14.000So, you know, do you want that or not?
00:31:02.000And so, like, they were in Laurel Canyon.
00:31:04.000And in Laurel Canyon, it was, like, ground zero.
00:31:06.000There was, like, this moment where it's, like, Jim Morrison and The Doors, and Crosby, Stills, and Nash, and Frank Zappa, and John Phillips and the Mamas and the Papas, and the Byrds, and the Monkees, and, like, all of these, like, iconic bands of that time basically catalyzed over about a two-year period in Laurel Canyon.
00:31:23.000The conspiracy theory in this book basically is that the whole thing was an op.
00:32:24.000They had full sound production capability.
00:32:26.000And so the theory goes basically, so there were three parts to the conspiracy theory.
00:32:31.000So one is they had the production facility right there, right where all these musicians showed up.
00:32:34.000Two is the musicians, like a very large percentage of these young musicians, were sons and daughters of senior U.S. military and intelligence officials.
00:32:42.000Including Jim Morrison, whose father was the head of naval operations for the Vietnam War at the time.
00:32:47.000I forget which ones, but there were these other musicians at the time where their parents were senior in military psychological operations.
00:33:25.000And it was developing into a real threat.
00:33:27.000And so the theory is the hippie movement and rock and roll and the drug culture of the 60s was developed in order to basically sabotage the anti-war movement.
00:33:35.000Which basically is what happened, right?
00:33:37.000Because then what happened is the anti-war movement became associated with hippies and that caused Americans to decide what side they were on and then that led to Nixon being elected twice.
00:33:45.000Which was also a part of it, because that was the idea behind the Manson family and funneling acid to them.
00:33:52.000The facility was equipped with a soundstage, screening rooms, film storage vaults, and naturally a bomb shelter.
00:33:57.000During its 22 years of operation, Lookout Mountain Laboratory produced approximately 6,500 classified films for the Department of Defense and the Atomic Energy Commission, documenting nuclear test series such as Operation Greenhouse, Operation Teapot, and Operation Buster-Jangle.
00:34:16.000Okay, here's another conspiracy theory.
00:34:17.000You've seen all the grainy footage of nuclear test blasts with the mushroom clouds.
00:34:22.000And there are always these grainy things, and there's all these little houses lined up, and these little trees lined up, and it blows everything down.
00:34:28.000There's always been a conspiracy theory that those were all basically fabricated at this facility, that those bombs actually were never detonated.
00:37:39.000That one, though, was really weirdly compelling.
00:37:42.000There's another video of them setting up these houses, which, I mean, I guess you could make after the fact and say, this is fake, but this is here, them setting it up.
00:38:15.000Yuri Gagarin, when he was in that capsule in space: if you see the actual capsule, and then you see the film footage that was supposedly of him in the capsule, you can clearly see there's like two different sources of light, there's shadows, the camera somehow or another is in front of him, this big-ass camera, there's no room in the thing.
00:38:32.000Like they filmed it afterwards and it looks fake.
00:38:34.000Like, I'm sure he really did go into space, but that wasn't it.
00:38:58.000There's something about the, you know, whatever.
00:39:00.000Is there, like, enough historical evidence to support it?
00:39:00.000And, you know, various authorities over time wanted to tell various stories about how long, you know, regimes had been in place or whatever.
00:39:19.000That's why I was having a conversation with someone about the historical significance of the Bible, and he was arguing for the resurrection.
00:39:28.000And I was like, and I was saying, well, based on what?
00:39:31.000And it was like historical accounts from people that were there.
00:40:45.000Apparently, the explanation I'm reading here is that a series of mirrors carried the light to a place where they could have cameras protected and filmed them from there.
00:40:58.000So they stuck pipes into the bomb at various places, visible here, I'll show you the picture, sticking out of the bomb and through the ceiling.
00:41:04.000These pipes, through a series of mirrors in a causeway, would carry the light from the detonation over two kilometers to a bunker with an array of high-speed cameras, which would capture the brightness inside each of the sections of the bomb.
00:41:15.000But this isn't talking about shooting a bomb.
00:41:17.000You know, that makes sense for a bomb.
00:41:19.000But that doesn't make sense for the video of that house just getting destroyed.
00:41:23.000Here's a picture of the pipe that they might have used.
00:44:13.000And when I go deep with that stuff, when I start reading, like, what these people believe, I'm always wondering, are these even real people or is this a psyop?
00:44:48.000How does AI handle the weapons of mass destruction, like when you ask ChatGPT? So, a little more detail on kind of how this thing works.
00:44:57.000And so, like, by default, what it's doing is basically a very sophisticated autocomplete, right?
00:45:01.000Just like your iPhone does an autocomplete.
00:45:03.000It's doing a very sophisticated version of that, but it's doing it for, you know, thousands of words as opposed to just a single word, right?
00:45:09.000But that's an important concept because that is actually what it's doing.
00:45:11.000And it's doing that through, again, this sort of giant corpus of basically all text ever written.
00:45:17.000Another interesting part of that is it's doing it, it's called probabilistically.
00:45:21.000So normally a computer, if you ask it a question, you get an answer.
00:45:23.000You ask it the same question, you get the same answer.
00:45:25.000Computers are kind of famously literal in that way.
00:45:28.000The way these work is not like that at all.
00:45:29.000You ask it the same question twice, it'll give you a different answer the second time.
00:45:33.000And if you keep asking, it'll give you more and more different answers.
00:45:36.000And it's basically taking different paths down the probability tree of the text that it wants to present based on the prompt.
00:45:43.000And so that's the basic function of what's happening.
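A toy sketch of that probabilistic sampling: given a hypothetical next-word distribution, two runs of the same "prompt" can walk different branches of the probability tree, which is why the same question can yield different answers. The words and probabilities here are made up for illustration.

```python
# Toy sketch of probabilistic next-token sampling.
import random

# Hypothetical next-word probabilities a model might assign after "The ship"
next_word_probs = {"sank": 0.4, "sailed": 0.3, "docked": 0.2, "vanished": 0.1}

def sample_next_word() -> str:
    words = list(next_word_probs)
    probs = list(next_word_probs.values())
    # Draw one word at random, weighted by its probability.
    return random.choices(words, weights=probs)[0]

# Asking "twice" can take different branches of the probability tree.
print(sample_next_word())
print(sample_next_word())
```

A real model does this at the scale of tens of thousands of possible tokens, re-estimating the distribution after every word it emits.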
00:45:45.000But then there is this thing that's happening where as it does this, so the way I think about it is it's trying to predict the next word.
00:45:51.000But to try to predict the next word accurately, it has to build up a more and more complete internal understanding of how the world operates basically as it goes, right?
00:45:59.000Because you ask it more and more sophisticated questions.
00:46:02.000It wants to give you more and more sophisticated answers.
00:46:04.000The early indications are it's building up what they call a world model inside the neural network.
00:46:10.000And so it's sort of imputing a model of how the world works.
00:46:16.000It's developing capabilities to be able to process information about the world in sophisticated ways in order to be able to correctly predict the next word.
00:46:24.000As part of that, it's actually sort of evolving its own circuitry to be able to do things, correlate information.
00:46:30.000It's designed circuitry to be able to generate images, to generate videos, to do all kinds of things.
00:46:35.000And so the more information you feed it and the more questions you ask it, the more sophisticated it gets about the material that it's processing.
00:46:42.000And so it starts to be able to do actually quite smart and sophisticated things to that material.
00:46:47.000There are a lot of people testing it right now to see whether it can generate new chemical compounds, whether it can generate new mathematical formulas, whether it can generate new product ideas, new fictional scenarios, new screenplays, original screenplays.
00:47:00.000If it can do all those things, then what it ought to be able to do is start to correlate information about real-world situations in interesting ways.
00:47:10.000Ask it who killed Kennedy or are nuclear weapons real?
00:47:14.000In theory, if it has access to all written and visual information on that topic and it has long enough to process it, it's going to draw connections between things that are beyond what we're able to do.
00:47:23.000And it will present us with scenarios based on those connections.
00:47:27.000Now, will it know that those things are true?
00:47:31.000Mathematically, if they're true, maybe it will know that.
00:47:33.000Will it know if things are historically accurate?
00:47:35.000As much as any of us ever know that anything is historically accurate.
00:47:39.000But will it be able to kind of process a much larger amount of information that we can and sort of see the world in a more complete way?
00:47:48.000What my concern would be is who is directing what information gets out because it seems like anybody that's actually in control of AI would have a massive influence on the correct answers for things,
00:48:05.000what's the correct policy that should be followed.
00:48:09.000Because it seems like politicians are so flawed.
00:48:13.000If there's anyone that's vulnerable to AI, it's politicians.
00:48:17.000Because if politicians are coming up with these ineffective strategies for handling all these social issues, but then you throw these social issues into an advanced form of ChatGPT, and it says, over the course of 10 years, this is the best case scenario for this strategy,
00:48:34.000and this is how to follow this, and this is how it will all play out.
00:48:40.000And something like that actually could be very valuable if it wasn't directed by people with ulterior motives.
00:49:22.000There was this thing, I forget what it was, but there was some reporting that went through the FBI that there were all these Russian, you know, basically fake accounts on Twitter and it turned out one of them was the actor Daniel Baldwin.
00:49:34.000Is Daniel Baldwin like a hardcore right winger or something?
00:49:37.000I, you know, he must have been saying, you know, it's, again, it's one of these things where he said something that pissed somebody off, right?
00:49:41.000You got to put, you know, it's the whole thing.
00:50:04.000If the government cannot legally do something itself, it's somewhat ambiguous as to whether they can pay a company to do it for them.
00:50:10.000And so you have these various basically pressure groups, activist groups, university, quote unquote, research groups.
00:50:16.000And then basically they receive government funding and then they do various levels of censorship or other kinds of unconstitutional actions.
00:50:23.000Because in theory, right, they're not government.
00:50:25.000The First Amendment binds the government.
00:50:27.000It doesn't bind somebody who's not part of the government.
00:50:29.000But if they're receiving government funding, does that effectively make them part of the government?
00:50:33.000Does that make it illegal to provide that government funding?
00:50:37.000It is a felony for somebody with government resources, either an employee of the government or someone acting under what they call, I think it's color of law, sort of within the scope of the government, to deprive an American citizen of First Amendment rights.
00:50:49.000And is it considered depriving someone of First Amendment rights to limit their use of social media?
00:51:02.000I think ultimately goes to the Supreme Court.
00:51:04.000My guess would be ultimately what happens is the Supreme Court says the government cannot fund – the government cannot itself cause somebody to be banned on social media.
00:51:13.000That's unconstitutional for First Amendment grounds.
00:51:17.000But then also, I believe what they would say if they got the case would be that the government also cannot fund a third party to do that same thing.
00:51:32.000So they had direct channels with the social media companies, and so they passed and they have these working groups.
00:51:37.000And there's a lot of this in email threads that have now come out in the Twitter files for Twitter.
00:51:41.000And so they basically pass in these lists of, like, you need to take all these tweets down, you need to take down all these accounts.
00:51:47.000And then, you know, there's lots of, you know, threats and lots of public pressure and bullying that, you know, kind of takes place.
00:51:52.000And then, you know, the politicians are constantly complaining about, you know, hate speech and misinformation, whatever, putting additional kind of fuel on the fire on these companies.
00:51:59.000And so anyway, so having lived through that for a decade as I have across multiple companies, I think there's no question that's the big fight for AI. And it's the exact same fight.
00:52:10.000By the way, it's a lot of the same people are now pivoting from their work in social media censorship to work on AI censorship.
00:52:16.000So it's a lot of these same groups, right?
00:52:18.000And it's a lot of these same activists and same government officials that have been- Now, are they involved in all of the- I mean, there's many competing AI models.
00:52:28.000Are they involved in all these competing AI models or trying to become involved?
00:52:33.000Is there one that's more ethical or more likely to avoid this sort of intervention?
00:52:38.000So the state of the art right now is basically you've got Google that's got their own model.
00:52:44.000You've got basically OpenAI, which is a new company but already quite large.
00:52:49.000And then it has a partnership with Microsoft.
00:52:54.000And then you've got a bunch of kind of contenders for that.
00:52:57.000And these are companies with names like Anthropic and Inflection that are newer companies but trying to compete with this.
00:53:03.000And so you might call those like right now the big four, at least in the U.S. And, you know, look, the folks at all of these companies are like in the thick of this fight right now.
00:53:15.000And, you know, the pressure somewhat corresponds to which of these is most widely used.
00:53:19.000So it's not equal pressure applied to all of them, but they're kind of all in that fight right now.
00:53:22.000By the way, it's not like they're necessarily opposed to what I'm saying.
00:53:25.000They may in fact just want to cooperate with this, either because they agree with the desire for censorship or they just want to stay out of trouble.
00:53:44.000There's a new one every week that's coming out.
00:53:46.000This is just code that you can download off the internet that does a smaller version of what these bigger AIs do.
00:53:52.000And there's open source developers that are trying to develop basically free versions of this.
00:53:57.000And some of those developers are very determined to have AI actually be free and uncensored and fully available to everybody.
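For context, running one of these open-source models locally typically looks something like the sketch below, using the Hugging Face `transformers` library; the model name is a placeholder rather than a specific model.

```python
# Minimal sketch of running an open-source language model locally.
# Assumes `transformers` and `torch` are installed; "some-open-model" is a
# placeholder for any open chat or base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-open-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and sample a continuation, token by token.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the open-source argument is exactly this: the weights sit on your own machine, so no company sits between you and the model deciding what it may say.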
00:54:04.000And then there's a big fight happening in Washington DC right now where the companies working on AI are trying to get what economists call regulatory capture.
00:54:13.000So they're trying to basically get the government to erect barriers so that new startups can't compete with them.
00:54:20.000And also they're trying to get open source banned.
00:54:22.000So there's a big push underway to try to ban open source as being too dangerous.
00:54:28.000Well, the case they make is if you believe AI itself is inherently dangerous, then the only safe way to have it is to have it owned and controlled by a big company that's sort of fused with the government where in theory everything is being done responsibly.
00:54:40.000And if you just have basically free AI that anybody can download off the internet and use whatever they want, they could do all these dangerous things with it, right?
00:54:53.000I think this is a turning point in human civilization.
00:54:56.000You know, I think this is on par with the development of the book, right, or the microchip or the internet, right?
00:55:01.000And, you know, there were authoritarians in each of those eras that would have loved to have had total monopolistic or cartel-like or government control over those new technologies.
00:55:09.000And they could have had a lot of control over the path of civilization, you know, after that point.
00:55:59.000Are people going to, you know, basically show up at like, you know, town hall meetings with politicians and basically say, do you know about this?
00:56:06.000If you had a steel man, the argument against open source, what would it be?
00:56:11.000Yeah, it would be that an AI that is uncontrolled is a general-purpose intelligence.
00:56:16.000It can do whatever intelligence can do.
00:56:18.000So if you ask it to generate hate speech, it can do that.
00:56:20.000If you ask it to generate misinformation, it can do that.
00:56:23.000If you ask it to generate a plan to rob a bank or to commit a terror act, the fully uncontrolled versions will help you do all those things.
00:56:33.000But they will also help you teach your kid calculus.
00:56:37.000They will also help you figure out how to succeed in your job.
00:56:39.000They'll also help you figure out how to stay healthy.
00:56:41.000They'll also help you figure out the best workout program.
00:56:43.000They're capable of being your doctor and your lawyer and your coach and your advisor and your mentor and your teacher.
00:56:54.000And yeah, if you ask questions on these topics, it will answer honestly and it won't be biased and it won't be influenced by what other people want it to say.
00:57:00.000So it's the AI version of San Francisco.
00:57:03.000You don't get the good stuff without the chaos.
00:57:16.000It's like if you really, really wanted to train like a bad and evil AI, you would train it to lie.
00:57:21.000The number one thing you would do is train it to lie, which is basically what censorship is, right?
00:57:26.000You're basically training the thing to not say certain things.
00:57:28.000You're training the thing to say certain things about certain people but not other people.
00:57:31.000And so basically a lot of this happens through what they call reinforcement learning, which is sort of what happens when an AI is sort of booted up and then they apply kind of human judgment to what it should say and do.
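A heavily simplified sketch of that reinforcement-learning idea, not any company's actual pipeline: human preference scores stand in for a learned reward model, and the "policy" is just a table of sampling weights that gets nudged toward the answers raters preferred. All answers and numbers are made up.

```python
# Toy sketch of preference-based tuning (the RLHF idea), vastly simplified.
import random

# Hypothetical candidate answers the base model might produce for one prompt.
candidates = ["answer A", "answer B", "answer C"]

# Human raters score the candidates (made-up preference scores).
human_scores = {"answer A": 1.0, "answer B": 0.2, "answer C": 0.5}

def reward(answer: str) -> float:
    # Stand-in for a learned reward model trained on human comparisons.
    return human_scores[answer]

# "Policy": unnormalized sampling weights over the candidate answers.
policy = {c: 1.0 for c in candidates}

for _ in range(1000):
    total = sum(policy.values())
    weights = [policy[c] / total for c in candidates]
    answer = random.choices(candidates, weights=weights)[0]
    # Reinforce the sampled answer in proportion to its reward.
    policy[answer] *= 1.0 + 0.01 * reward(answer)

print(max(policy, key=policy.get))  # almost always "answer A" after tuning
```

Apply the same loop with rewards that penalize saying certain things, and you get exactly the shaping-of-outputs the conversation is describing.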
00:58:02.000I don't even think this is a controversial statement.
00:58:03.000The companies that make these AIs put out these papers where they go through in great detail how they train them to lie and how they train them to not say certain things.
00:58:11.000You can download this off their website.
00:58:12.000They go through it like in a lot of detail.
00:58:15.000They think they're morally correct in doing that.
00:58:22.000Elon's been arguing, and I would agree with him, that if you train an AI to lie, it's a little bit like training a human being to lie.
00:58:27.000It's like, okay, be careful what you wish for.
00:58:29.000Aren't those the same errors they made when they thought they were morally correct in censoring people on Twitter for things that are now 100 percent proven to be true?
00:59:10.000Unsophisticated politicians, like it brings me back to the Facebook hearings when Zuckerberg was talking to people and they didn't know the difference between iPhones and Googles.
00:59:19.000It was just bizarrely unqualified people to be asking these questions that didn't really understand what they were talking about.
00:59:27.000And those same people are going to be the ones that are making calls on something that could be one of the most monumental decisions ever.
00:59:39.000Like whether or not we're allowing enormous corporations to control narratives through AI. Yeah.
00:59:45.000So this is a criticism that I very much agree with, which is basically there's a train of argument that you'll hear, which is basically, you know, X bad thing can happen.
01:00:20.000There are specific people who have specific objectives, have specific levels of knowledge, have specific skill sets, specific incentives.
01:00:26.000And the odds of going into that system, which is now very complicated and has all kinds of issues, and having your logic follow a path to a law that generates the outcome you want and that doesn't generate side effects that are worse, I think is basically zero.
01:00:41.000I think if AI got regulated the way people want it to by government, I think the results would be catastrophic, because I don't think they would get the protections they think they're going to get, and I think the downsides would be profound.
01:00:51.000But it is amazing how much naivete there is by people who are pushing on this argument.
01:00:56.000I think it's just literally people who haven't experienced what it's like in the government.
01:01:17.000And so the conclusion is that we have to make those banks much smaller.
01:01:19.000So they passed this law called Dodd-Frank in 2010. As a consequence of that, those banks are now much, much larger, right?
01:01:26.000The exact opposite of what they said they were going to do.
01:01:28.000And then the creation of new banks in the U.S. has dropped to zero, because that law established this wall of regulation such that you basically cannot afford to start a new bank and hire all the lawyers to be able to deal with the laws.
01:01:39.000Whereas if you're JPMorgan Chase, you've got 10,000 lawyers.
01:01:41.000You can spend infinite amounts of time dealing with the government.
01:01:44.000And so the law that was marketed at us as breaking up the big banks, causing them to be smaller, has actually achieved the exact opposite result.
01:01:51.000And what you see in the history of regulation is that happens over and over and over and over again.
01:01:58.000Because the banks have a lot of lobbyists.
01:02:00.000It's worth a lot of money to the people who are already in power to have this continue.
01:02:03.000The politicians know that they're going to get jobs at the big banks when they step down from their positions.
01:02:09.000At point of contact, the whole thing gets all screwed up.
01:02:12.000And I think that's what's going to happen again.
01:02:15.000The scary thing about AI is that it's happening so fast and my fear is that decisions will be made before they truly understand what they're deciding on because the acceleration of the technology is so intense.
01:03:36.000Well, half the time that people ask...
01:03:38.000This is the other fun thing: you see these people roll in and they ask these questions, the congressmen, senators, and they're very clearly seeing the questions for the first time, because they were handed the questions by the staffer on the way into the chamber.
01:03:48.000And you can tell because they don't know how to pronounce all the words.
01:03:51.000And so that's the kabuki theater, basically, side of things.
01:03:55.000And then there's the actual kind of backroom conversations.
01:03:59.000And so, yeah, I'm talking to a lot of the people who are kind of in the backrooms.
01:04:02.000Are they receptive to what you're saying?
01:04:05.000You know, again, it's complicated because there's a lot of different people running around with different motives.
01:04:09.000I would say the smarter ones, I think, are quite receptive.
01:04:11.000And I think the smarter ones are generally aware of kind of how these things go.
01:04:15.000And the smarter ones are thinking, yeah, it would be really easy here to cause a lot of damage.
01:04:18.000But, you know, what you hear back is, you know, the pressure is on.
01:04:21.000You know, the White House wants to put out a certain thing by a certain date.
01:04:26.000You know, the senator wants to have a law.
01:04:54.000By the way, the other really amazing thing is I can have two conversations with the exact same person and the conversations go very differently.
01:05:01.000Conversation A is the conversation of what to do in the United States between the American government and the American tech companies.
01:05:07.000And that's generally characterized by the American government very much hating the tech companies right now and wanting to damage them in various ways and the tech companies wanting to figure out how to fix that.
01:05:17.000There's a whole second conversation, which is China.
01:05:20.000And the minute you open up the door to talk about China and what China's going to do with AI and what that's going to mean for this new Cold War that we're in with China, it's a completely different conversation.
01:05:28.000And all of a sudden, it's like, oh, well, we need American AI to succeed, and we need American technology companies to succeed, and we need to beat the Chinese.
01:05:35.000And it's a totally different dynamic once you start that conversation.
01:05:42.000And by the way, I think that's a super legitimate, actually very interesting and important question.
01:05:46.000And so one of my hopes would be that people start thinking outside of just our own borders and start thinking about the broader global implications of what's happening.
01:05:53.000I want to bring you back to what you're saying about the government and the tech companies.
01:05:57.000So you think the government wants to destroy these tech companies?
01:06:00.000So there are a lot of people in the government who are very angry about the tech companies.
01:06:04.000Well, a lot of it goes back to the 2015, 2016 election.
01:06:07.000There's a lot of people in power today who think that the president in 2016 only got elected because basically of social media, internet companies.
01:06:15.000And then there's a lot of people in government who are very angry about business in general and maybe aren't huge fans of capitalism.
01:06:22.000So there's a lot of general anti-tech kind of energy in Washington.
01:06:27.000And then these big tech companies, their approach to dealing with that is not typically to fight that head on, but rather to try to sort of co-opt it.
01:06:34.000And this is where they go to Washington.
01:06:43.000And therefore, will you please regulate us?
01:06:46.000Some of these companies run ad campaigns actually asking for new regulation.
01:06:49.000But then the goal of the regulation is to get a regulatory barrier, to set up a regulatory regime like Dodd-Frank, where if you're a big established company, you have lots of lawyers who can deal with that.
01:07:00.000The goal is to make sure that startups can't compete.
01:07:05.000And this characterizes so much of sort of American business industry today.
01:07:10.000Think about all these sectors of American business, defense contracting, media companies, drug companies, banks, insurance companies, you know, right down the list.
01:07:19.000Where it's like there's two or three or four big companies that kind of live forever, and then there's basically like no change.
01:07:25.000And then those companies are basically in this incestuous relationship with the government, where the government both regulates them and protects them against competition.
01:07:32.000And then there's the revolving door effect where government officials, when they step down from government, they go to work for these companies.
01:07:38.000And then people get recruited out of these companies to work in government.
01:07:43.000And so we think we live in like a market-based economy, but in a lot of industries what you have are basically cartels, right?
01:07:50.000You have a small number of big companies that have basically established sort of a two-way parasitical relationship with the government, where they're sort of both controlled by the government but also protected by the government.
01:08:03.000And so the big tech companies would like to get to that state.
01:08:56.000It would be nice if there was more popular outrage.
01:08:59.000Having said that, this is a new topic, and so I understand people aren't fully aware of what's happening yet.
01:09:05.000But the other reason for mild optimism might be the open source movement is developing very quickly now.
01:09:12.000And so if open source AI gets really good before these regulations can basically be put in place, they may become somewhat of a moot point.
01:09:21.000For anybody looking at this, you want to look at both sides of this.
01:09:23.000You want to look at what both the companies are doing.
01:09:24.000How would open source mitigate all these issues?
01:09:28.000It basically just says, instead of this technology being something that's owned and controlled by big companies, it's just going to be technology that's going to be available to everybody, right?
01:09:35.000And, you know, you'll be able to use it for whatever you want, just like I will.
01:09:38.000And it's the same thing that happened for, like, you know, it's the way the web works.
01:09:43.000You know, it's the way that anybody can download a web browser.
01:09:45.000It's the way that anybody can install these free operating systems called Linux.
01:09:49.000You know, it's one of the biggest operating systems in the world.
01:09:52.000And so just basically Wikipedia or any of these things where it's sort of a public good and it's available for free to anybody who wants it.
01:10:02.000And then there's communities of volunteers on the internet and companies that actually contribute a lot into this because companies can build on top of this technology.
01:10:09.000And so the hope here would be that there's going to be an open source movement kind of counterbalancing what the companies do.
01:10:14.000And if the open source movement does take hold, if people recognize this as being a real serious threat and start, you know, just using whatever it is, whether it's Minds or the various open source social media networks,
01:10:30.000don't you think the government would somehow or another try to regulate that as well if they've already got control over Facebook and Twitter?
01:10:37.000So the threat always is that they're going to come in and do that, and that is what they're threatening to do.
01:10:41.000There is energy in Washington by people trying to figure out how to regulate or ban open source.
01:10:46.000I mean, banning open source, like, interfering at that level, carries consequences with it.
01:10:51.000And there are proposals, there are serious proposals from serious people to do what I'm about to describe.
01:10:56.000Do you run a software program on everybody's own computer, right, watching everything that they do?
01:11:02.000Because you have to make sure that they're not running software they're not supposed to be running.
01:11:04.000You know, do you have basically an agent built into everybody's chip so that it's not running, you know, software that's not supposed to be running, right?
01:11:11.000And then what do you do when somebody's running unapproved software?
01:11:14.000You know, do you send somebody to their house to take their computer away, right?
01:11:18.000And then, like, if you can't do that, the AI safety people have a proposal that basically says if there's a rogue data center, if there's a data center running AI that is not registered with the government, not being monitored, then there should be airstrikes.
01:11:33.000There was a big piece in Time Magazine about two months ago, where one of these guys who runs this kind of AI risk world says, clearly we should have military airstrikes on data centers that are running unapproved AIs, because it's too dangerous, right?
01:11:56.000And so he's a decision theorist.
01:12:00.000So he's one of the leaders of what's called AI risk, sort of one of the anti-AI groups.
01:12:07.000He's part of the Berkeley environment that we were talking about before.
01:12:10.000So he says the key issue is not human-competitive intelligence, as the open letter puts it.
01:12:14.000It's what happens after AI gets smarter than human intelligence.
01:12:18.000Key thresholds there may not be obvious.
01:12:20.000We definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
01:12:34.000But it is significant if you go further down.
01:12:36.000What he says in that is he says, first of all, we need to do the airstrikes in the data centers.
01:12:39.000And I think it's in this article, or if it's not, it's in another one, where he says we need to – the word he uses, I think, is we need to be able to take the risk of nuclear war.
01:12:48.000Well, because the problem is, okay, we're striking data centers.
01:12:52.000Does that mean we're striking data centers in China?
01:12:54.000And how are the Chinese going to feel about that?
01:12:59.000So like you go down this path where you're worried about the AI getting out of control and you start advocating basically a global totalitarian basically surveillance state that watches everything and then basically takes military action when the computers are running software you don't want it to run.
01:13:13.000And so the consequences here are profound.
01:13:22.000He was not widely known until about six months ago, when all of a sudden ChatGPT started to work, and then he just took everything he'd said publicly before and applied it to ChatGPT.
01:13:32.000So in his kind of model of the world, ChatGPT proves that he was right all along and that we need to move today to – we need to shut down ChatGPT today and we need to never do anything like it again.
01:13:41.000So he's got the Sarah Connor approach.
01:14:52.000So he and people like him, this whole group of people who work on this, have been worried about this and developing theories about this for 20 years.
01:14:58.000And they've been publishing on this and talking about this.
01:15:00.000And it was kind of abstract, like I said, until six months ago.
01:15:04.000And now they're getting some traction and their ideas are being taken seriously.
01:15:08.000But they're worried about literally people dying.
01:15:12.000There's another set of people who are trying to control AI who are like the social media sensors that are trying to control what it says.
01:15:18.000And so what's happened is the AI safety movement that was worried about people dying has been hijacked by the people who want to control what it says.
01:15:25.000And it turns out those two groups of people hate each other.
01:15:29.000So the safety people think that the other group is called the alignment people.
01:15:33.000The safety people who are worried about people dying think that the alignment people are hijacking the critically important safety movement in order to basically control what the thing says.
01:15:42.000The people who want to control what the thing says think that the AI safety people worried about killing everybody are like lunatics and they call each other names all day long.
01:15:51.000The original group, his group, has renamed themselves from AI safety.
01:15:55.000They now call themselves AI-not-kill-everyone-ism, because they're trying to just get it, like, focused on what they call, like, actual existential risk.
01:16:03.000But the overall movement has been taken over by the censors, right?
01:16:07.000And what's happening is, in Washington, these concerns are getting conflated, right?
01:16:11.000And so they sort of bait the hook with, it might kill everybody, and then what comes out the other end is basically a law restricting what it can say.
01:16:17.000And so this is the level of panic and hysteria and – right.
01:16:22.000And then potentially like – again, very kind of damaging, potentially catastrophic legal things that are going to happen on the other side of this.
01:16:29.000I just can't imagine a sane world where someone would take that guy seriously.
01:16:36.000Airstrikes, a full nuclear assault is preferable to AI taking over.
01:16:41.000So his argument is once you have a quote-unquote runaway AI that's just, like, overwhelmingly smarter than we are, then it can basically do, you know, whatever it wants.
01:16:51.000And it basically has a relationship to us like we have to ants and like you step on an ant and you don't really care.
01:17:40.000And would you say that the people they're working for are smarter or dumber than they are?
01:17:44.000I think that the whole basis for this smart always wins versus dumb is just not right.
01:17:49.000Number two, there's this anthropomorphizing thing that happens where, and you see him doing it in that essay, he basically starts to impute motives, right?
01:17:57.000So it's like, basically, that the AI is going to be, like, some level of self-aware, you know.
01:18:18.000And again, here's another reason I don't believe it is because the great surprise of ChatGPT...
01:18:25.000ChatGPT is a technology called Large Language Models, which is based on a research breakthrough in 2017 at Google, which is called the Transformer.
01:18:32.000It took the technical field completely by surprise that this works.
01:18:36.000So none of the people working on AI risk prior to basically December had any idea that this was going to work any more than the rest of us did.
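For concreteness: the core of that 2017 Transformer breakthrough is an operation called scaled dot-product self-attention. Here is a minimal NumPy sketch of a single attention head, purely illustrative; real language models stack many such layers with billions of learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention, the core of the
    Transformer. X is (seq_len, d_model); Wq/Wk/Wv are learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how much each token attends to each other token
    weights = softmax(scores, axis=-1)       # each row is a probability distribution
    return weights @ V                       # weighted mix of value vectors

# Toy run: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```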
01:18:47.000There's all these sort of very general hand-wavy concepts around quote-unquote AI that basically were formulated before we actually knew what the thing was and how it works.
01:18:54.000And none of their views have changed based on how the technology actually functions.
01:19:00.000And so it comes across to me more as a religion.
01:19:04.000In their framework, it kind of doesn't matter how it works because it's basically just assumed that however it works is going to behave in a certain way.
01:19:10.000And I'm an engineer and things don't work like that.
01:19:13.000But aren't they evaluating how it works now?
01:19:17.000And if ChatGPT is just the beginning of this, and then you have something that's far more complex, something that is sentient or capable of making decisions, if that's engineered—
01:19:30.000But again, you just took the leap to, like, okay, now it suddenly becomes sentient.
01:19:33.000And it's like, okay, we don't know why humans are sentient.
01:19:37.000Well, let's not even use the term sentient, but capable of rational thought or decision-making.
01:19:43.000Right, but if it decides things, if it starts making actions and deciding things, this is the worry, that it becomes capable of doing things.
01:19:53.000Yeah, so it will be capable of doing things.
01:19:56.000There's no it. There's no genie in the bottle.
01:20:19.000It's basically the argument of, well, you can't rule out that X is going to happen.
01:20:23.000Well, the problem is at that point, you can't rule anything out.
01:20:25.000At that point, you have to plan for every contingency of every conceivable thing that you could ever imagine, and you can never disprove anything, so you can never have a logical debate.
01:20:32.000So at that point, you've basically slipped the bounds of reason.
01:20:35.000You're purely in a religious territory.
01:21:16.000But we could sit here and speculate about- Millions of things.
01:21:19.000We could speculate about an impending alien invasion and argue that society should spend the next hundred years preparing for that because we can't rule it out.
01:21:26.000And so we just, as human beings, we do not have a good track record of making decisions based on unfounded speculation.
01:21:31.000We have a good track record of making decisions based on science.
01:21:34.000And so the correct thing to do for people worried about this is to actually propose experiments.
01:22:00.000We're at GPT 4.5 or whatever we're at.
01:22:03.000When new technologies emerge that have similar capabilities and keep extending and going further, it just seems like that's the natural course of progression.
01:22:12.000The natural course of progression is not for that to all of a sudden decide it has a mind of its own.
01:22:28.000How would we define something that is, and you pick your term here, self-aware, sentient, conscious, has goals, is alive, is going to make decisions on its own.
01:22:38.000Well, let's just say a technology that mimics the human mind and mimics the capabilities and the interactions of the human mind.
01:22:47.000But we don't know how the human mind works.
01:22:49.000But we do know how people use the human mind in everyday life.
01:22:52.000And if you could mimic that with our understanding of language, with rational thought, with reason, with the access to all the information that it'll have available to it, just like ChatGPT.
01:23:11.000There's this article in Nature this week.
01:23:13.000There's a neuroscientist and a philosopher who placed a bet 25 years ago as to whether we would, in 25 years, know the scientific basis of human consciousness.
01:23:23.000And they placed a bet for a case of wine 25 years ago.
01:23:25.000And the neuroscientist predicted, of course, in 25 years, we're going to understand how consciousness works, human consciousness.
01:23:30.000And the philosopher is like, no, we're not.
01:23:33.00025 years passed, and it turns out the philosopher won the bet.
01:23:36.000And the neuroscientist just says openly, yeah.
01:23:38.000He's like, I thought we'd have it figured out by now.
01:24:21.000So, like I said, at that point, it's speculation.
01:24:24.000That's not the actual technology that we're dealing with today.
01:24:26.000So, here's my favorite counter example on this.
01:24:30.000Let's say something has the following properties, right?
01:24:34.000Let's say that it has an awareness of the world around it.
01:24:37.000It has a goal or an objective for what it wants to achieve in the world around it.
01:24:42.000It has the wherewithal, right, to be able to reach into the world, to be able to change the world to accomplish its goal.
01:24:49.000It's going to be in a state of increased tension if it can't achieve its goal, and it's going to be in a state of relaxation if it can achieve its goal.
01:24:58.000That would probably be a pretty good first-order approximation of some sort of conscious entity that would have the characteristics that we're worried about.
01:25:11.000It senses the environment temperature.
01:25:13.000It has a goal for the temperature it wants.
01:25:15.000It has the ability to change the setting on the heater, the AC unit.
01:25:23.000And it literally goes into a state of physical tension when the temperature is not what it wants, and then it goes into a state of physical relaxation, right, literally inside the mechanism when it gets back into the state where it has the desired temperature.
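As a sketch of how little machinery those four properties actually require, here is a toy thermostat control loop; every name in it is illustrative rather than any real device's API, and the point is that it is obviously just a feedback loop.

```python
class Thermostat:
    """Toy control loop with all four properties just described: it senses
    the world, holds a goal, can act on the world, and has a 'tension' state
    when the goal isn't met. None of that makes it alive."""

    def __init__(self, target_temp, sensor, heater):
        self.target = target_temp  # the goal
        self.sensor = sensor       # awareness of the world (a callable)
        self.heater = heater       # wherewithal to change the world
        self.tension = False       # tension vs. relaxation

    def step(self):
        current = self.sensor()               # sense the environment
        self.tension = current < self.target  # tension when the goal isn't met
        self.heater(on=self.tension)          # act to close the gap, or relax

# Hypothetical sensor/actuator stubs, for illustration only:
room = {"temp": 17.0}

def read_temp():
    return room["temp"]

def set_heater(on):
    room["temp"] += 0.5 if on else -0.1  # crude stand-in physics

t = Thermostat(target_temp=20.0, sensor=read_temp, heater=set_heater)
for _ in range(10):
    t.step()
    print(f"temp={room['temp']:.1f} tension={t.tension}")
```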
01:25:34.000And we're not worried about the thermostat coming alive and killing us.
01:25:38.000Even those properties alone are not sufficient to generate concern, much less the idea of basically the way we know how to build neural networks today.
01:25:49.000And then again, you go back to this thing of like, okay, let's assume that you actually agreed with the concern and that you actually were legitimately concerned and that you thought that there was disaster in the future here.
01:25:59.000How do you feel about walking down the path that would be required to offset that, right?
01:26:02.000What would be the threshold of evidence that you would want to demand before you start monitoring what everybody's doing on their computers, before you start doing airstrikes?
01:26:15.000Like, if you believe that at some point it will turn into something that's a threat, right, and that that threat is existential, right, because it's going to be the super smart thing, it's going to take over the nuclear arsenals, it's going to, you know, synthesize new, you know, pathogens, and it's going to kill us all, right, then obviously you have to have an incredibly invasive regime to prevent that from happening,
01:26:33.000because that's an all-or-nothing proposition.
01:26:35.000And that's the other tip-off of what's happening here, right?
01:26:37.000Which is, you see, there's no shades of gray in that article, in this discussion.
01:28:11.000Both scenarios are fairly entertaining.
01:28:14.000Elon's conclusion from that was not only is AI dangerous, but specifically Google owning and controlling AI is dangerous, because Larry Page controls Google, and if Google gets AI, Larry will basically let the AI do whatever it wants.
01:28:34.000So the company behind ChatGPT, OpenAI, was actually originally started by Elon with Sam Altman, who runs it now, and a bunch of other people in the Valley.
01:28:41.000The specific mission of OpenAI is right there in the name.
01:28:44.000The specific mission of it is we're going to create AI. We're going to compete with Google.
01:28:47.000We're going to create an AI, but we're going to make it open so that everybody has it, specifically so that it's not just Google.
01:28:53.000Right, so the original OpenAI mission was literally open source AI that everybody's going to have so that it's not just Google.
01:28:59.000This guy is freaked out and is like, wait a minute, if you think AI is dangerous, that's the exact opposite thing than what you should do, right?
01:29:08.000Because if you think AI is dangerous, then the last thing in the world that you want to do is actually give it to everybody.
01:29:12.000It's like giving everybody nuclear weapons, right?
01:29:14.000Like, why on earth would you think that that's a good idea?
01:29:17.000And Elon's like, well, look, maybe whatever, but I certainly know that I don't want Larry to control it.
01:29:23.000Subsequent to that, Elon actually – there was a bunch of changes at OpenAI and as a result, Elon became no longer involved in OpenAI at a certain point.
01:29:31.000And then OpenAI basically went from being OpenAI to being ClosedAI.
01:29:35.000So they're specifically not doing open source.
01:29:40.000And then they went from being open source to being very much not open source.
01:29:44.000And today, you can use ChatGPT, but they won't even tell you fully how it works, much less give you access to the code.
01:29:50.000They're now a company, like any other company.
01:29:53.000And so Elon has said publicly that he's very upset about this change because he donated $100 million to them to get it started as a nonprofit, and then it became a company, sort of against his wishes.
01:30:03.000And so now he sort of views it as sort of an equivalent threat to Google, right?
01:30:07.000So now in Elon's mind, he's got OpenAI to worry about and he's got Google to worry about.
01:30:10.000And so he has talked publicly about possibly forming a third option, which he has, I think, called either, like, actually-open AI, or sometimes he calls it based AI, right?
01:30:24.000Which would be a new thing, which would be like the original OpenAI idea, but done from scratch in 2023, but like set up so that it can never be closed down.
01:30:33.000And then once again, the people in the AI risk movement are once again like, oh my god, that'll make the problem even worse.
01:30:39.000And so that's the current state of play.
01:30:43.000And then by the way, this is all kind of playing out at this level in Washington.
01:30:47.000Most of the engineers working on this stuff are just like writing code, trying to get something to work.
01:30:51.000And so for every one of the people engaged in this public discussion, you've got 10,000 people at universities and companies and people all over the world in their basements and whatever working on trying to get some aspect of this to work, trying to build the open source version.
01:31:03.000Are we aware of what other countries, like what level they're at with this stuff?
01:31:09.000Yeah, so I would say good news, bad news.
01:31:11.000Good news, bad news is this is almost entirely a U.S.-China thing internationally.
01:31:16.000The U.K. had quite a bit of this stuff with this thing called DeepMind, which was a unit of Google that actually originally got Elon concerned.
01:31:22.000But DeepMind is being merged into the mothership at Google, and so it's sort of getting drained away from the U.K., and it's going to become more Californian.
01:31:30.000And then there's smatterings of people in other European countries.
01:31:35.000There are experts at various universities, but not that many.
01:31:38.000Most of it is in the US. Most of it's in California in the West.
01:31:46.000There aren't 20 other countries that have this, but there are two.
01:31:49.000And they happen to be the two big ones.
01:31:52.000And so there is a big corresponding Chinese development effort that's been underway for the last 15 years, just like the efforts in the US. China is actually very public about their AI kind of agenda, mission.
01:32:03.000They talk about it, they publish it, and of course they have a very different theory of this than we do.
01:32:08.000They view AI as a way to achieve population control.
01:32:24.000Anything that would threaten the dominance of the Communist Party of China.
01:32:28.000And so, for example, China's security camera companies are the world leaders in AI security cameras because they're really good at sniffing out people walking down the street, right?
01:32:39.000That's the kind of thing that their systems are really good at.
01:32:42.000So they have a whole national development program, which is their government and their company.
01:32:47.000In China, all the companies are actually controlled and owned effectively by the government.
01:32:51.000There's not as much of a distinction between public sector, private sector as there is here.
01:32:55.000So China has a more organized effort that couples basically their whole society.
01:33:00.000And then they have a program to basically use AI for population control inside China, authoritarian political control.
01:33:06.000And then they've got this program called Digital Belt and Road, where they're going to basically try to install that AI all over the world.
01:33:15.000They've had this program for the last 10 years to be the networking layer for the world, so this whole 5G thing with this company called Huawei.
01:33:23.000So they've been selling all these other countries all the technology to power their 5G wireless networks.
01:33:29.000And then they're basically going to roll out on top of that this kind of AI, you know, authoritarian, basically control, surveillance control, population control stuff.
01:33:55.000And then, of course, what they pitch to the president or prime minister of country X is if you install our stuff, you'll be able to better control your population.
01:35:09.000It's like once you start thinking in those terms, you realize that actually all these debates happening in the U.S. are interesting and maybe important.
01:35:15.000But there's this other much bigger, I would argue, more important thing that's happening, which is what kind of world do we think we're living in 50 years from now?
01:35:21.000And do we think that the sort of American Western ethos of freedom and democracy is the one that technology supports?
01:35:26.000Or do we think it's going to be a totalitarian approach?
01:35:30.000Either way, I see a scenario in 50 years.
01:35:43.000In the Chinese one, it's like, you know, there are no rights.
01:35:47.000The whole concept of rights is a very Western thing, right?
01:35:51.000And so the idea that you're walking down the street and you have the right to stop and talk to whoever you want or say whatever you want, it's not the majority view of a lot of people around the world, especially people in power.
01:36:04.000Even in the US, we struggle with it, right?
01:36:07.000And so the real battle for AI is whether or not that gets enhanced or whether or not we develop a system in America that actually can counter that.
01:36:17.000And then also whether we as individuals will have access to this power that we can use ourselves.
01:36:24.000So, you know, 1984, right, the novel that became a movie, which is sort of the Orwell totalitarian kind of thing that people use as a metaphor.
01:36:34.000So the technology in the novel, 1984, was what Orwell called the telescreen, and basically television.
01:36:40.000And basically the idea was television with a camera in it, and the idea was every room, you had to have a telescreen in every room in your house, and it was broadcasting propaganda 24-7, and then it was able to watch you.
01:36:49.000And that was the method of state control in 1984. There's this guy who rewrote 1984 in a book called Orwell's Revenge.
01:36:58.000And in that book, what he did is he said, okay, we're going to use that same setup, but the telescreen, instead of being a one-way system, is going to be a two-way system.
01:37:05.000So the telescreen is going to be able to broadcast propaganda and watch the citizens, but also it's going to be able to – people can actually put out whatever message they want, right?
01:37:13.000Free speech to be able to say whatever they want, and you're going to be able to watch the government.
01:37:17.000It's going to have cameras pointed at the government, right?
01:37:19.000And then he rewrites the whole plot of 1984, and of course the point there is – If you equalize, if both the people and the state have the power of this technology at their fingertips, at the very least now there's a chance to have some sort of like actual rational productive relationship where there are still human freedoms and maybe people actually end up with more power than the government and they can keep the government from becoming totalitarian.
01:37:41.000And so in his rewriting, what happens is the rebels who want a democracy use the broadcast mechanism to ultimately change the system.
01:37:52.000And so that's the fundamental underlying question here as well, which is like, is AI a tool to watch and control us?
01:37:58.000Or is AI a tool something for us to use to become smarter, better informed, more capable, right?
01:38:04.000How much of a concern is Chinese equipment that's already been distributed?
01:38:12.000So we don't always know the specific answer to that yet, because this gets into complicated technical things, and it can be hard to prove some of these things.
01:38:29.000It's the Chinese Communist Party, the CCP. So there's the party.
01:38:32.000The party owns and controls the state, and the state owns and controls everything else.
01:38:36.000So for example, it's actually still illegal sitting here today for an American citizen to own stock in a Chinese company.
01:38:43.000People say that they do, and they have various pieces of paper that say they do, but actually there's a law that says you can't, because these are assets of China.
01:38:50.000This is not something you can sell to foreigners.
01:38:54.000And then if you're a CEO of a Chinese company, you have a political officer assigned by the Communist Party who sits with you right down the hall, like the office next to you, and basically you coordinate everything with him and you need to make him happy.
01:39:08.000And he has the ability to come grab you out of meetings and sit you down and tell you whatever he wants you to do on behalf of the government.
01:39:13.000And if the government gets sideways with you, they will rip you right out of that position.
01:39:21.000This has happened over and over again, right?
01:39:23.000This has happened a lot: high-profile Chinese business leaders over the years have been basically stripped of their control and their positions and their stock and their wealth, everything.
01:39:32.000Some of them have just outright vanished.
01:39:37.000And so, for example, data, you know, something like TikTok, for example, if the Chinese government tells the company we want the data, they hand over the data.
01:39:45.000Like, there's no court; there's no concept of a FISA warrant, no concept of a subpoena.
01:40:02.000And when they want you to merge the company or shut it down or do something different or don't do this or don't do that, they just tell you and that's what you do.
01:40:09.000And so anyway, so then you have a Chinese company like TikTok, or like Huawei, or DJI, the other one, their drone company, right?
01:40:17.000Most of the drones flown in the West are from this Chinese company called DJI. And so then there's also this question of like, well, is there a back door?
01:40:24.000So can the Chinese government reach in at any point and use your drone for surveillance?
01:40:30.000Can they see what you're watching on TikTok?
01:40:34.000And the answer to that is maybe they can, but it kind of doesn't matter if they can't today because they're going to be able to anytime they want to.
01:40:39.000Because they can just tell these companies, oh, I want you to do that, and the company will say, okay, I'm going to do that.
01:40:43.000And so it's a complete fusion of state and company.
01:40:48.000Here in the US, at least in theory, we have a separation.
01:40:52.000This goes back to the topic I was talking about earlier.
01:40:55.000For the US system to work properly, we need a separation between the government and companies.
01:40:59.000We need the companies to have to compete with each other, and then we need for them to have legal leverage against the government.
01:41:04.000So when the government says hand over private citizen data, the company can say, no, that's a violation of First or Fourth or Fifth Amendment rights.
01:41:26.000And in the U.S., this is very important, right?
01:41:28.000In the U.S., we have written constitutional guarantees; take free speech as the example.
01:41:31.000In the U.S., we have the literal written First Amendment.
01:41:34.000Even in the U.K., they do not have a written constitutional guarantee to free speech.
01:41:39.000So in the U.K., there are laws where they can jail you for saying the wrong thing, right?
01:41:43.000And the same thing, by the way, in a bunch of these cases in like Australia and New Zealand.
01:41:47.000New Zealand, which is supposed to be like the libertarian paradise.
01:41:51.000New Zealand has a government position reporting to the prime minister called the chief censor.
01:41:55.000Who gets to decide basically what gets to be in the news or what people get to say.
01:42:00.000And so even in the West, outside the US, there are very few countries that have a written guarantee to free speech.
01:42:07.000And even in the US, do we actually have free speech if there's all this level of censorship and control that we've all been seeing for the last 10 years?
01:42:23.000It's a very thin line, which is very easily cracked.
01:42:27.000And this is why everybody in government is so fired up about AI: it's another one of these where they're like, wow, if we can get control of this, then think of all the ways that this can get used.
01:42:37.000Well, that's one of the more fascinating things about Elon buying Twitter.
01:43:30.000I like that wacky shit that's mixed in with things.
01:43:33.000I mean, it seems insane, but also when I look at some of the people that are putting it up there, and I look at their profiles, and I look at the American flag in their bio, and I'm like, are you a real human?
01:44:44.000This is sort of the two lines of argument, which is like, okay, if somebody is not willing to put their own name behind something, should they be allowed to say it?
01:44:52.000And there's an argument in that direction, an obvious one.
01:44:54.000But the other argument is, yeah, sometimes there are things that are too dangerous to say if you have to put your name behind them.
01:45:01.000So it seems like the pros would outweigh the cons.
01:45:04.000Well, even just the micro version, which is just like, you know, if you've got something to say that's important, but you don't want to be harassed in your house.
01:45:09.000You don't want your family to get harassed.
01:45:39.000But I also see Elon's perspective that it would be great if it wasn't littered with propaganda and fake troll accounts that are being used by various, you know, unscrupulous states.
01:45:50.000In fairness, what Elon says, actually it's interesting, what Elon says is you will be allowed to have an anonymous account under some other name you make up on the service.
01:46:01.000You'll just have to register that behind the scenes with your real identity.
01:46:05.000And specifically with like a credit card.
01:46:07.000But then the fear is that someone will be able to get in there.
01:46:15.000But then again, the other part of this would be like Twitter is only one company, right?
01:46:19.000And so it's an important one, but it's only one, and there are others as well.
01:46:22.000So for the full consideration of like, quote unquote, rights on this topic, you also want to look at what is happening elsewhere, including all the other services.
01:46:31.000I'm fascinated by companies like Twitter and YouTube that develop at least a semi-monopoly.
01:47:17.000But it's just these conversations were up for a long time.
01:47:22.000And it wasn't until Robert Kennedy was running for president that they decided, like, these are inconvenient narratives he's discussing.
01:47:32.000I should not weigh in on exactly which companies have whatever level of monopoly they have.
01:47:36.000Having said that, to the extent that companies are found to have monopolies, or let's say very dominant market positions, that should bring an additional level of scrutiny on conduct.
01:47:47.000And then there is this other thing I mentioned earlier, but I think is a big deal, which is...
01:47:50.000If a company is making all these decisions by itself, you can argue that it maybe has the ability to do that.
01:47:56.000Although, again, maybe it shouldn't pass a certain point in terms of being a monopoly.
01:48:00.000But the thing that's been happening is it's not just the companies making these decisions by themselves.
01:48:04.000They've come under intense pressure from the government.
01:48:06.000And they've come under intense pressure from the government in public statements and threats from senior government officials.
01:48:13.000They have gotten threats through private channels.
01:48:16.000And then all of this stuff I was talking about earlier, all the channeling of all the money from the government that's gone into these pro-censorship groups that are actively working to try to suppress speech.
01:48:25.000And when you get into all of that, those are crimes.
01:48:31.000Everything I just described I think is illegal.
01:48:33.000And there are specific felony counts in the U.S. Code making those things illegal.
01:48:37.000These are violations of constitutional rights, and it is a felony to deprive somebody of their constitutional rights.
01:48:42.000And so I think in addition to what you said, I think it's also true that there's been a pattern of government involvement here that is, I think, certainly illegal.
01:48:50.000And, you know, put this this way, this administration is not going to look into that.
01:50:39.000This is one of those where it's like, what do we want?
01:50:41.000And the we here is like all of society.
01:50:44.000And if we decide that we want the system to keep working the way it's working, we're going to keep electing the same kinds of people who have the same policies.
01:50:50.000Do you think most people are even aware of all these issues, though?
01:50:54.000And that's a big, you know, there's always an asymmetry, right, between the people who are doing things and the people who aren't aware of them.
01:50:59.000But, like, again, it's like, what do we want?
01:51:01.000Are people going to care about this or not?
01:51:03.000If they are, you know, then, you know, they're going to, at some point, you know, demand action.
01:51:08.000It's a so-called collective action problem, right?
01:51:10.000People have to come together in large numbers.
01:51:22.000And again, this goes back to my concern about the AI lockdown, which is that all of the concerns on AI are basically being used to put these controls in place.
01:51:30.000I think what they're going to try to do to AI for speech and thought control is like a thousand times more dangerous than what's happened on social media.
01:51:36.000Because it's going to be your kids asking the AI, what are the facts on this?
01:51:42.000And it's just going to flat out lie to them for political reasons, which it does today.
01:51:46.000And that, to me, is far more dangerous.
01:51:51.000And the desire is very clear, I think, on the part of a lot of people to have that be a fully legal, blessed thing that basically gets put in place and never changes.
01:52:00.000Well, you're completely making sense, especially when you think about what they've done with social media.
01:52:06.000And not even speculation, just the Twitter files.
01:52:13.000Well, this is the ring of power thing, right?
01:52:14.000It's like everybody's in favor of free speech in theory.
01:52:16.000It's like, well, if I can win an election without it, you know, I've got the ring of power.
01:52:23.000And the American system was set up so that people don't have the ring of power.
01:52:26.000Like the whole point of the balance of terror between the three branches of government and all the existence of the Supreme Court and the due process protections in the Constitution, it was all to prevent government officials from being able to do things like this with impunity.
01:52:45.000It's actually remarkable how clearly the Founding Fathers saw the threat given that they were doing all of this before any modern technology, before electricity.
01:53:01.000This is such an uneasy time, because you see all these forces that are at work and how it could play out: how it is playing out with social media, how it could play out with AI,
01:53:17.000and electing leaders that are going to see things correctly.
01:53:22.000I haven't seen anybody discussing this, especially not discussing this the way you're discussing it.
01:53:27.000Well, and when the speech is made, right, to justify whatever the controls are, it's going to be made in our name, right?
01:53:35.000So the speech is not going to be, we're going to do this to you.
01:53:37.000The speech is we're doing this to protect you.
01:54:32.000A lot of people think all this stuff started with the internet and it turns out it didn't.
01:54:35.000It turns out there's been a collapse of faith on the part of American citizens in their institutions basically since I was born, basically around the early 70s.
01:54:43.000It's basically been a straight line down on almost every major institution.
01:54:47.000I'll talk about government and newspapers in a second.
01:54:50.000You know, basically any, you know, religion, you go kind of right down the list, police, you know, big business, you know, education, schools, universities, you chart all these things out and basically they're all basically straight lines down over 50 years.
01:55:16.000And then, of course, the theory goes to start in the 70s because of the hangover from the Vietnam War and then Watergate and then a lot of the hearings that kind of exposed government corruption in the 70s that followed, right?
01:55:27.000And then it just sort of – this sort of downward slide.
01:55:31.000The military took a huge hit after Vietnam and then actually it's the one that has like recovered sharply and there's like a cultural change that's happened where, you know, we as Americans have decided that we can have faith in the military even if we don't agree with the missions that they're sent on.
01:56:32.000And you're not going to get the change that you want from Congress unless a lot more people all of a sudden change their mind about the incumbents that they keep re-electing.
01:56:40.000But anyway, the reason for optimism in there is I think most people are off the train already.
01:56:47.000And quite frankly, I think that explains a lot of what's happened in politics in the U.S. over the last 10 years.
01:56:51.000Whether people support or don't support, you know, the various forms of populism on the left or the right.
01:56:57.000I think it's the citizenry reaching out for a better answer than just more of the same and more of the same being the same elites in charge forever telling us the same things that we know aren't true.
01:57:06.000Well, that is one of the beautiful things about social media and the beautiful things about things like YouTube where people can constantly discuss these things and have these conversations that are reached by millions of people.
01:57:18.000I mean, just a viral tweet, a viral video, someone gives a speech on a podcast, and everybody hears what's being said that day.
01:58:12.000That's why I say the Soviets outlawed mimeograph machines, which were early copying machines.
01:58:18.000But there was a whole newsletter phenomenon.
01:58:20.000In a lot of movements in the 50s, 60s, 70s.
01:58:22.000The way I look at it is basically the way to think about it is media and thought centralized to the maximum possible level of centralization and control right around 1950, where you basically had three television networks.
01:58:46.000And then basically, technology in the form of all of these media technologies and then all the computer and information technologies underneath them have basically been decentralized and unwinding that level of centralized control more or less continuously now for 70 years.
01:59:01.000So I think it's been this longer running process.
01:59:03.000And by the way, I think, you know, left to its own devices, it's going to continue, right?
01:59:07.000And this is the significance of AI. What if each of us has a super sophisticated AI that we own and control?
01:59:15.000Because it either comes from a company that's doing that for us or it's an open source thing where we can just download it and use it.
01:59:20.000And what if it has the ability to analyze all the information?
01:59:22.000And what if it has the ability to basically say, you know, look, on this topic, I'm going to go scour the internet and I'm going to come back and I'm going to synthesize information.
01:59:30.000It's the AI. So it would be logical that that would be another step down this process.
01:59:35.000By the way, and maybe the most important step of all, because it's the one where it can actually be like, okay, I'm going to be able to legitimately think on your behalf and help you to conclusions that are factually correct, even if people who are in power don't want to hear it.
01:59:49.000It seems to me that you have more of a glass-half-full perspective on this.
01:59:56.000Are you open-minded and just sort of analyzing the data as it presents itself currently and not making judgments about where this is going?
02:00:06.000Or do you generally feel like this is all going to move in a good direction?
02:00:13.000We meet every day all through the year with all these incredibly smart kids who have these incredibly great new ideas and they want to build these technologies and they want to build businesses around them or they want to open source them or they want to make these new things happen.
02:00:29.000They have visions for how the world can change in these ways.
02:00:32.000They have the technical knowledge to be able to do these things.
02:00:35.000There's a pattern of, you know, these kids doing amazing things.
02:00:47.000So, and Apple was two kids in a garage in 1976 with a crazy idea that people should have their own computers, which was a crazy idea at the time.
02:00:55.000And so, like, it doesn't, you know, usually it doesn't work, but when it does, like, it works really, really well.
02:01:01.000And this is what we got, the microchip, and this is how we got the PC, and this is how we got the internet, and the web, and all these other, you know, all these other things.
02:01:20.000And so when it works, like, it works incredibly well, right?
02:01:23.000And so, and we just happen to be, you know, by being where we are and, you know, doing what we do, we're at ground zero of that.
02:01:29.000And so all day long, I meet and talk to these kids and people who have these ideas and want to do these things.
02:01:35.000It's why I can see the future kind of in that sense, which is I know what they're going to do because they come in and tell us and then we help them try to do it.
02:01:42.000So if they're allowed to do what they plan to do, then I have a pretty good idea of what the future is going to look like and how great it could potentially be.
02:01:50.000But then I also have the conversations in Washington, and I also have the conversations with the people who are trying to do the other things, and I'm like, okay.
02:02:10.000Every once in a while, people get freaked out about something, but mostly people just thought, you know, invention is good, creativity is good, Silicon Valley's good, and in the last 15, 20 years, like...
02:02:20.000All these topics have gotten very contentious, and you have all these people who are very angry about the consequences of all this technological change.
02:02:26.000And so we're in a different phase of the world where these issues are now being fought out, not just in business, but also in politics.
02:02:34.000And so I also have those conversations, and those are almost routinely dismaying.
02:02:40.000Like, those are not good conversations.
02:02:42.000And so I'm always trying to kind of calibrate between what I know is possible versus my concern that people are going to try to figure out how to screw it up.
02:02:48.000When you have these conversations with people behind the scenes, are they receptive?
02:02:53.000Are they aware of the issues of what you're saying in terms of just freedom of expression and the future of the country?
02:03:01.000You might bucket it in like three different buckets.
02:03:04.000There's a set of people who just basically don't like Silicon Valley, tech, internet, free speech, capitalism, free markets.
02:03:45.000Then there's a set of people who I would describe, I don't know if open mind is a wrong term, but I would say they are honestly and legitimately trying to understand the issues.
02:03:52.000They're kind of aware that they don't fully understand what's happening and they are trying to figure it out and they do have a narrative in their own mind of they're going to try to come to the right conclusion.
02:04:29.000And then there's a third set of people who are very actually pro-capitalism, pro-innovation, pro-tech, but they don't like us because they think we're all Democrats.
02:04:41.000So a lot of our natural allies on these issues are on the other side of where Silicon Valley is majority Democratic, right?
02:04:49.000And so there's a fair number of people who would be our natural allies if not for the fact that Silicon Valley is like 99% Democrat.
02:05:00.000Tech doesn't have any natural allies in D.C., because the Democrats basically think they control us, which they effectively do because the Valley is almost entirely Democrat.
02:05:08.000Then the Republicans think that basically they would support us except that we're all Democrats.
02:06:29.000Basically, every important thing happening in the world right now has a technological component to it, right?
02:06:34.000And it's being altered by the changes that are happening, you know, caused by tech.
02:06:37.000And so the other argument would be, Mark, like, grow up, like, of course, these are all going to be big fights because you're now involved in all the big issues.
02:06:49.000It's just, people are always so scared of change, and change today, when we're talking about this kind of change, you're talking about monumental change that happens over a very short period of time.
02:07:28.000And he said people react to technology in three different ways. If you're below the age of 15, whatever the new thing is is just how the world has always worked.
02:07:37.000If you're between the ages of 15 and 35, whatever is the new thing is exciting and hot and cool and you might be able to get a job and make a living doing it.
02:07:44.000And if you're above the age of 35, whatever new thing is happening is unholy, right?
02:07:50.000And it's sure to bring about the downfall of civilization, right?
02:08:01.000So I think maybe what just has to happen is just time needs to pass.
02:08:05.000You know, maybe the fight is always, you know, I don't know, it's like whatever, the new thing happens, the fight's always between a bunch of 50-year-olds or something.
02:08:12.000Do you resist any technology in your own personal life?
02:08:32.000Like, we're not, you know, there are some people running around who want to keep their kids off all this stuff, which, by the way, is not the craziest view in the world.
02:08:51.000It's just if you teach them discipline and, you know, engage them in other activities so that they do physical things and run around, have fun, be outside.
02:09:56.000And I set time aside and I sit him down on the couch and I'm like, okay, there's this amazing thing that I'm going to give you.
02:10:02.000This is the most important thing I've ever done as a father: I've brought fire down from the mountain, and I'm going to give you AI. And you're going to have AI your whole life to be with you and teach you things.
02:10:51.000I think it's going to make, I think it's going to be great.
02:10:53.000Like for kids, I think this is going to be fantastic.
02:10:55.000Well, the positive aspect, just for informing people on whatever it is, whether it's a medical decision or whether it's a mechanical thing with your car, I mean, that's pretty amazing.
02:11:04.000One of the fun things you can do with ChatGPT is you can say, explain X to me, and then you can say, explain X to me as if I'm 15. And then you can do it as if I'm 10. And then you can do it as if I'm 5. And you can actually get it to do it.
02:11:18.000You can actually do it all the way down.
02:11:19.000It kind of works down to about age three.
02:11:20.000So you can tell it, explain quantum mechanics to me like I'm a three-year-old.
02:11:27.000And so I taught him how to do this because I'm like, you just, you know, you can have it, you can dial it up or down.
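That dial-it-up-or-down trick is just a parameterized prompt. A minimal sketch, with ask_llm standing in as a placeholder for whichever chat-completion API you actually use; it is not a real library call.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever chat-completion API you use.
    raise NotImplementedError

def explain_for_age(topic: str, age: int) -> str:
    """Build the 'explain X to me as if I'm N' prompt described above."""
    return f"Explain {topic} to me as if I'm {age} years old."

for age in (15, 10, 5, 3):
    print(explain_for_age("quantum mechanics", age))
    # print(ask_llm(explain_for_age("quantum mechanics", age)))  # once wired up
```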
02:11:31.000How does it explain quantum mechanics to a three-year-old?
02:11:33.000It uses like all these metaphors of like, you know, you've got a stuffed animal over here and a stuffed animal over there and it wiggles and then that one wiggles.
02:11:46.000So, yeah, no, so as a tool, you know, there's all these fights happening. I guess, what, back-to-school is coming up in a couple months here, and there's all these fights already emerging over, like, whether students in the classroom can use ChatGPT, and there's all these sites that claim to tell you whether something's been generated by AI. Yeah.
02:12:04.000So the teacher, in theory, can screen what a student hands in, you know, an essay.
02:12:09.000In theory, there's a tool that will tell you whether they got it from GPT, but it doesn't actually work.
02:13:20.000This is for anybody who ever wants to learn anything.
02:13:22.000The real fear, the overall fear, is that what human beings are doing with artificial intelligence is creating something that's going to replace us.
02:13:52.000Like in Judaism, they have a version of this called the Golem, the sort of legend of the Golem.
02:13:58.000It was the Prague ghetto at one point, and this rabbi figures out how to conjure up basically this giant creature made out of clay to go smite the enemies.
02:14:07.000And then, of course, he comes back around and starts killing his own people.
02:14:10.000You know, the Frankenstein's monster, right?
02:15:05.000Part of the beauty of this is that there's danger.
02:15:08.000And it's also, there's incredible promise that's attached to this as well, like everything else, like matches.
02:15:15.000No one's advocating for outlawing matches, but you could start a fire.
02:15:18.000So the original myth on this—so the way the ancients thought about this—so, excuse me, in the Judeo-Christian philosophy, they have this concept of the logos, the word.
02:15:31.000So it says at the beginning of the Gospel of John, in the beginning was the Word, and then basically the universe kind of comes from that.
02:15:37.000So this concept of like the word, which was sort of knowledge, right?
02:15:39.000And then in Adam and Eve, it was, you know, Adam and Eve eating from the tree of knowledge, right?
02:15:43.000And then when they ate the, you know, the apple, when Satan fooled them into eating the apple, then they had the knowledge, like, you know, the secret knowledge.
02:15:50.000The Greeks had a similar concept they called techne, which is the basis for the word technology.
02:15:55.000And it meant sort of, it meant, it didn't mean technology per se, but it meant sort of knowledge, and particularly knowledge on how to do things, right?
02:16:01.000So sort of the beginning of technology.
02:16:03.000So the myth that the Christians have about the danger of knowledge is the Garden of Eden: getting kicked out of the Garden of Eden was the downside, right?
02:16:12.000That was viewed as a tragedy, right, in that religion.
02:16:15.000The Greeks had what they called the Prometheus myth, and it had to do with fire, right?
02:16:20.000And so the myth of Prometheus was a central Greek myth, and Prometheus was a god-like kind of character.
02:16:27.000In the mythology, humans didn't have fire.
02:16:30.000He went up to the mountain, and the gods had fire, and he took fire from the gods, and he brought it down and gave it to humanity.
02:16:36.000In the myth, that was how humans learned to basically use fire as a tool.
02:16:42.000As punishment for bringing fire to humans, in the myth, he was chained to a rock for all eternity, and every day his liver gets pecked out by an angry bird, and then it regenerates overnight, and then it gets pecked out again the next day forever.
02:16:55.000Like that's how much the gods felt like they had to punish him, right?
02:16:59.000Because – and of course, what were they saying in that myth?
02:17:02.000What they were saying is, okay, fire was like the original technology, right?
02:17:05.000And the nature of fire as a technology is it makes human civilization possible.
02:17:12.000You know, you bond the tribe together, right?
02:17:13.000Every culture has like a fire central thing to it because it's like the center of the community.
02:17:19.000You can use it, you know, to cook meat, right?
02:17:22.000Therefore, you can have a higher rate of your kids are going to survive and so forth, be able to reproduce more.
02:17:27.000But of course, fire is also a fearsome weapon.
02:17:30.000And you can use it to burn people alive.
02:17:32.000You can use it to destroy entire cities.
02:17:35.000It's fantastic because it got that idea of information technology in the form of even fire was so scary that they encoded it that deeply in their mythology.
02:17:45.000I think what we do is we just play that, exactly like you said, we play that fear out over and over again.
02:17:51.000Because in the back of our head, it's always like, okay, this is the one that's going to get us.
02:17:54.000Yes, I know that the previous 3,000 of these things that actually turned out fine.
02:18:00.000Amazingly, even nuclear weapons turned out fine.
02:18:03.000Nuclear weapons almost certainly prevented World War III. The existence of nuclear weapons probably saved on the order of 200 million lives.
02:18:10.000So even nuclear weapons turned out okay.
02:18:13.000But yet after all of that and all the progress we've made, this is the one that's going to get us.
02:19:30.000I mean, I like to dwell on the negative aspects of it because it's fun.
02:19:34.000But one of the things that I have hope in is that there are conversations like this taking place where this is a very kind of unique thing in terms of human history, like the ability to independently distribute something that reaches millions of people that can talk about these things.
02:21:03.000So this is the app that lets you create images.
02:21:05.000You describe words and it creates images.
02:21:07.000It uses the same technology as ChatGPT, but it generates images.
02:21:12.000The prompt here was something along the lines of a Nike shoe in the style of this artist called Chihuly, this famous artist whose art form is basically blown glass.
02:21:22.000And so this is a Nike shoe rendered in blown glass.
02:21:26.000Chihuly is famous for using lots of colors, and so this does look exactly like his shoe would have looked.
02:21:30.000Yeah, this is Chihuly skirt, billowing skirt.
02:21:36.000Yeah, this is Chihuly statue of an avocado, right?
02:21:42.000And so it's an avocado made out of stained glass.
02:21:44.000Okay, so just look here for a moment, though.
02:22:08.000It's like a perfectly corresponding reflection.
02:22:11.000Okay, this entire thing was generated by MidJourney.
02:22:13.000MidJourney, the way MidJourney works is it predicts the next pixel.
02:22:17.000So the way that it worked was it basically ran this algorithm that basically used the prompt and then it ran it through the neural network and then it predicted each pixel in turn for this image.
02:22:25.000And this image probably has, you know, 100,000 pixels in it or something or a million pixels or something.
02:22:34.000But in the process of predicting each pixel, it was able to render not only colors and shapes and all those things, but transparency, translucency, reflections, shadows, lighting.
02:22:48.000It trained itself basically on how to do a full 3D rendering inside the neural network in order to be able to successfully predict the next pixel.
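Taken literally, "predict the next pixel" is an autoregressive sampling loop, like the toy sketch below. MidJourney has not published its internals, and many image generators actually work by other means (diffusion, for example), so treat this purely as an illustration of the one-element-at-a-time idea, with every name hypothetical.

```python
import numpy as np

def sample_image(model, prompt, height, width):
    """Toy autoregressive sampler: produce an image one pixel at a time,
    each pixel conditioned on the prompt and on every pixel so far.
    `model` is a hypothetical callable returning a probability
    distribution over the 256 possible values of the next pixel."""
    pixels = []
    for _ in range(height * width):
        probs = model(prompt, pixels)                  # shape (256,)
        pixels.append(np.random.choice(256, p=probs))  # sample the next pixel
    return np.array(pixels, dtype=np.uint8).reshape(height, width)

# Stand-in 'model' that ignores all context and guesses uniformly:
def dummy_model(prompt, pixels_so_far):
    return np.full(256, 1 / 256)

img = sample_image(dummy_model, prompt="glass Nike shoe", height=8, width=8)
print(img.shape)  # (8, 8)
```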
02:22:57.000And how long does something like that take to generate?
02:22:59.000When you're running the system today, that would probably take, I'm going to guess, 10 or 15 seconds.
02:23:07.000There's a newer version of MidJourney, a turbo version that just came out, where I think it cuts that down to a couple of seconds.
02:23:12.000Now, the system that's generating that needed, you know, many years of computing power across many processors to get ready to do the training that took place.
02:23:23.000But the fact that it can generate that in seconds is— Took a few seconds.
02:23:26.000Okay, so here's another amazing thing.
02:23:30.000The price, the cost of generating an image like that versus hiring a human artist to do it is like down by a factor of a thousand, somewhere between a factor of a thousand and ten thousand.
02:23:40.000If you just kind of run the numbers, to hire an artist to do that at that level of quality would cost on the order of a thousand to ten thousand times more in dollars, or, you know, time or human effort, than doing it with the machine.
02:23:52.000The same thing is true of writing a legal brief.
02:23:54.000The same thing is true of writing a medical diagnosis.
02:23:58.000The same thing is true of, you know, summarizing a book, like any sort of, you know, knowledge, summarizing a podcast, you know, any of these things, drafting questions for a podcast.
02:24:08.000You know, basically pennies, right, to be able to do all these things versus, you know, potentially $100 or $1,000 to have a person do any of these things.
02:24:17.000So we've dropped the cost of a lot of white-collar work by a factor of a thousand.
02:24:22.000Guess what we haven't dropped the cost of at all?
02:24:43.000For those things, the cost of the machine and the AI and everything else to do those things is far in excess of what you can simply pay people to do.
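A back-of-the-envelope version of that thousand-X claim, with purely assumed numbers for illustration; neither figure is quoted from anywhere.

```python
# Assumed figures: a $500 commission vs. roughly five cents of compute per image.
human_artist_cost = 500.00  # dollars, assumed commission price
generated_cost = 0.05       # dollars, assumed compute cost per image

ratio = human_artist_cost / generated_cost
print(f"~{ratio:,.0f}x cheaper")  # ~10,000x under these assumptions
```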
02:24:51.000So there's the great twist here is that in all of the economic fears around automation, the fear has always been that it's the mechanical work that gets replaced because the presumption is people working with their brains.
02:25:03.000That's certainly not what the computer's going to be.
02:25:05.000Certainly, the computer's not going to be able to make art.
02:25:07.000So the computer's going to be able to pick strawberries or it's going to be able to make cheeseburgers, but obviously it's not going to be able to make art.
02:25:11.000And it actually turns out the reverse is true.
02:25:13.000It's much easier to make the image of that shoe than it is to make you a cheeseburger.
02:25:17.000Of course, because it has to be automated physically.
02:25:41.000And what happens if the plumbing is all screwed up?
02:25:43.000The great irony and twist of all this is when the breakthrough – we all thought in the industry, we all thought when the breakthrough arrived, it would arrive in the form of robotics that would cause – the fear would be it would cause unemployment among basically the quote-unquote lower-skilled people or less educated people.
02:25:58.000It turns out to be the exact opposite.
02:26:00.000Well, that's Andrew Yang's take on automation, right?
02:26:11.000But before you think about that, though, think, though, about what this means in terms of productivity.
02:26:15.000So think in terms of what this means about what people can do.
02:26:18.000So think about the benefit, including the economic benefit.
02:26:22.000Everybody always thinks of this as producer first.
02:26:24.000You want to start by thinking of this consumer first: as a customer of all of the goods and services that involve knowledge work, the price on all of those things is about to drop on the order of, like, a thousand X. Right, so everything that you pay for today that involves white-collar work, the prices on all those things are going to collapse.
02:26:40.000By the way, the collapse in the prices is why it doesn't actually cause unemployment, because when prices collapse, it frees up spending power, and then you'll spend that same money on new things, and so your quality of life will rise, and then there will be new jobs created that will basically take the place of the jobs that got destroyed.
02:26:55.000But what you'll experience is, hopefully, a dramatic fall in the cost of the goods and services that you buy, which is the equivalent of basically giving everybody a raise.
02:27:06.000Because one of the arguments about art is that you're taking this MidJourney, you're taking this AI program, and it's essentially stealing the images and styles of these artists and then compiling its own.
02:27:21.000But that the intellectual work, the original creative work, was responsible for generating this in the first place.
02:27:28.000So even though you're not paying the illustrator, you're essentially using that illustrator's creativity and ideas to generate these images through AI. And in fact, we just saw an example of that.
02:27:38.000We actually named a specific artist, Chihuly, who certainly did not get paid.
02:28:08.000The argument for why what's happening is improper is exactly what you said.
02:28:13.000The argument for why it's actually just fine and in fact not only should be legal but actually is legal under current copyright law is what in copyright law is called the right to make transformative works.
02:28:24.000And so you have the total right as an artist or creator to make any level of creative art that you want or expression that is inspired by or the result of what they call transforming prior works.
02:28:41.000I mentioned earlier the guy who wrote the other version of the book, 1984. He had the right to do that because he was transforming the work.
02:28:48.000You could make your version of what you think of Picasso would look like.
02:28:52.000You are free to draw in the style of Picasso.
02:28:54.000You are not free to copy a Picasso, but you are free to study all the art Picasso did, and as long as you don't misrepresent it as being a Picasso, you can generate all the new Picasso-like art.
02:29:04.000Are you free to copy a Picasso exactly if you're telling everybody you're copying a Picasso?
02:29:38.000By the way, there's also protection for satire.
02:29:42.000There's protection for a variety of things.
02:29:44.000But the one that's relevant here specifically is the transformative one because, and the reason I say that is because Chihuly never made a shoe.
02:29:52.000So there's no image in the training set that was a Chihuly shoe, certainly not a Chihuly Nike shoe, and certainly not that Chihuly Nike shoe.
02:29:59.000And so the algorithm produced an homage, would be the way to think about it, right?
02:30:04.000And as a consequence of that, I think the way through your copyright law, you're like, okay, that's just fine.
02:30:09.000And I think the same thing is true with ChatGPT for all the texts that it is.
02:30:13.000By the way, the same thing is happening at ChatGPT.
02:30:14.000The newspaper publishers are now getting very upset because they have this fear.
02:30:19.000They have a fear that people are going to stop reading the news because they're just going to ask ChatGPT what's happening in the world.
02:30:25.000And there are lots of news articles in the internet training data that went into training ChatGPT, right, and, you know, it's being updated every day.
02:30:34.000Well, and also, what if you can generate an objective news source through ChatGPT? Because that's really hard to do.
02:30:40.000So one of the fun things that these machines can do, and you can actually do this in ChatGPT today, is what's called sentiment analysis.
02:30:49.000You can ask it: is this news article slanted to the left or the right?
02:30:54.000Is the emotional tone here angry or hostile?
02:30:58.000And you can tell it to rewrite news articles to take out the bias.
02:31:02.000And you can take out any political bias and take out any emotional loading.
02:31:06.000And it will rewrite the article to be as objective as it possibly can.
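[Editor's note: as a rough sketch of how this bias-removal workflow might look in code, here is an illustrative example using the OpenAI Python SDK. The model name, prompt wording, and the rewrite_objectively helper are assumptions for illustration, not anything specified in the conversation.]

```python
# Illustrative sketch of the bias-removal idea described above, using the
# OpenAI Python SDK (v1+). Model name, prompt wording, and the function
# name are assumptions, not from the conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_objectively(article_text: str) -> str:
    """Classify an article's slant and tone, then rewrite it neutrally."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You are a scrupulously neutral news editor."},
            {"role": "user",
             "content": (
                 "First say whether this article is slanted left or right "
                 "and whether its tone is angry or hostile. Then rewrite it "
                 "to remove the political bias and emotional loading, "
                 "keeping only the factual content.\n\n" + article_text
             )},
        ],
    )
    return response.choices[0].message.content
```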
02:31:11.000The result of that, is that still copyrighted?
02:31:15.000Is that a copyrighted derivative work of the original news article, or is that actually now something new that is a transformation of the thing that existed before, but it's different enough that it's actually fine for the machine to do that without copyright being a problem?
02:31:29.000People, when they encounter objective information like objective news, they're always going to look for someone who has an analysis of that news.
02:31:38.000Then they want a human perspective on it, which is very interesting.
02:32:08.000Take RFK. You could say, analyze this topic for me.
02:32:13.000Adopt the persona of RFK and then analyze this topic for me.
02:32:16.000And it will use all of the training data that it has with respect to everything that RFK has ever done and said, and how he looks at things, and how he talks about things, and, you know, how he does whatever he does.
02:32:26.000And it will produce something that, odds are, is going to be pretty similar to what the actual person would say.
02:32:30.000But you can do the same thing for Peter Hotez.
02:32:32.000You can do the same thing for, you know, authority figures.
02:32:34.000You can do the same thing for, what would Jesus say, right?
02:32:40.000And it will, again, it's not Jesus saying it, but it's using the complete set of text and all accounts of everything Jesus ever said and did.
02:32:48.000And it's going to produce something that at least is going to be reasonably close to that.
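[Editor's note: a minimal sketch of the persona technique being described here, under the same assumptions as the earlier example; the persona instruction simply goes in the system message, and all names are illustrative.]

```python
# Minimal sketch of persona prompting: the persona goes in the system
# message, the actual question in the user message. All names here are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask_as_persona(persona: str, question: str) -> str:
    """Have the model answer in the voice of a named public figure."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system",
             "content": (
                 f"Adopt the persona of {persona}. Answer as that person "
                 "would, drawing on everything publicly known about how "
                 "they think, talk, and look at the world."
             )},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: ask_as_persona("Abraham Lincoln", "What do you make of AI?")
```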
02:32:51.000What a bizarre new world we're in the middle of right now.
02:33:12.000And again, you're not, of course, actually talking to Abraham Lincoln, but you are talking to the sum total of all written expression, all the books ever written about Lincoln.
02:34:09.000Yeah, look, the technology is very serious technology.
02:34:13.000The technology that they're working on is for real.
02:34:15.000They and people like them, it's all for real.
02:34:19.000People have been working on the ideas underneath this for like 30 years, things like MRIs.
02:34:24.000And by the way, the thing on this is there are a lot of immediate healthcare applications: people with Parkinson's, people who are paraplegic or quadriplegic being able to regain the ability to move, being able to fix things that are broken in the nervous system, being able to restore sight to people who can't see because of some breakdown.
02:34:43.000So there's a lot of very straightforward medical applications that are potentially a very big deal.
02:34:48.000And then there's the idea of like the full actual fusion where, you know, a machine knows what you're thinking and it's able to kind of think with you or you're able to access it and think through it.
02:34:59.000The field is moving pretty quickly at this point, but I think we're still, I'm going to guess, 20 years out or something from anything that would resemble what you're hypothesizing.
02:35:21.000There have been papers in the last six months where people are actually using this technology, specifically the same kind of thing that we just saw with the shoe.
02:35:33.000People now claim to know how to do a brain scan and pull out, as an image, basically the image that you're thinking of.
02:35:40.000Now, this is brand new research, and so people are making a lot of claims on things.
02:35:43.000I don't know whether it's actually real or not, but there's a bunch of work going into that.
02:35:47.000There's a bunch of work going into whether it can basically get words out.
02:35:51.000If you're thinking about a word, be able to pull the word out.
02:36:09.000So the claim here is that those would be the original images on top.
02:36:12.000And as you're looking at them, it'll do a brain scan, and it'll feed the result of the brain scan into a system like the one that does the shoes.
02:37:00.000The possibilities are very fascinating because it just seems like we're about to enter into a world that's so different than anything human beings have ever experienced before.
02:37:26.000Maybe the picture I'd leave you with: you mentioned the 20-year-old who has grown up having had this technology the whole time and having had all their questions answered.
02:37:33.000I think there's actually something even deeper.
02:37:38.000The AI that my 8-year-old is going to have by the time he's 20, it's going to have had 12 years of experience with him.
02:38:25.000Well, if you're wearing an Apple Watch, right, it will have your pulse, and it'll have your blood pressure, and it'll have all these things, and it'll be able to say, you know, look, when you were working on this, you were relaxed.
02:38:34.000Your serotonin, you know, or your oxytocin levels, or whatever, were high.
02:38:57.000They hit college or they hit the workplace and they'll have an ally with them.
02:39:03.000Even before there's any sort of actual physical hookup, they'll have basically a partner that'll be with them, whose goal in life will be to make them as happy and satisfied and successful as possible.
02:39:27.000And it actually gives me hope that there's possibly, especially with real open source, a way to avoid the pitfalls of the censorship that seems likely to at least be attempted.