00:04:46.000I have a movie that I throw by people whenever I want to find out whether or not I want to listen to anything they have to say about movies.
00:10:42.000Allowed or some kind of supplementation.
00:10:45.000Where's the line when you start to talk about the future of martial arts, the future of sport?
00:10:50.000If you can control the levels so that they're healthy, I mean, isn't that the reason that they're not allowed is because if abused, they become unhealthy?
00:10:59.000They damage long-term well-being of the person?
00:11:03.000Look, if that was the case, we wouldn't allow fighting.
00:11:07.000Because fighting is more damaging than steroids, for sure.
00:11:30.000You will hurt someone, potentially even...
00:11:33.000Look, there's going to be a time where someone dies in a mixed martial arts event.
00:11:37.000And if that's someone who was the victor, who did not die, was on steroids, it is going to be a huge national tragedy and a massive disaster for the sport, for everything, if that ever does happen.
00:13:01.000We allow creatine, we allow supplements. There are certain things that can slightly elevate your testosterone, slightly elevate your growth hormone.
00:13:12.000We allow sauna and ice baths and all these things that have been shown to enhance recovery, but that's too much.
00:14:43.000And when you watch that documentary and you realize, oh, well, the real negative consequences of taking steroids are that it shuts down your endocrine system.
00:14:54.000So it stops your body's natural production.
00:14:57.000Of testosterone and growth hormone and hormones.
00:15:02.000And for young people, that can be very devastating.
00:15:04.000And it can lead to depression and suicidal thoughts and all sorts of really bad things when your testosterone shuts down.
00:15:11.000But as far as like death, boy, I mean, there's...
00:15:17.000People are prescribed pain pills every day of the week, and fighters that are on injuries that have gotten surgery, they're prescribed pain pills every day of the week, and those pain pills kill people left and right.
00:16:29.000So, you know, it's an interesting possibility where in moderation you'll be able to allow steroids in future athletics, with the argument that if done in moderation you can actually create healthier athletes.
00:16:43.000Yeah, that's a real argument for the Tour de France.
00:16:46.000The Tour de France, they say that you actually are better off and healthier taking steroids and EPO than you are doing it without it because it's so unbelievably grueling on the body.
00:20:16.000But I don't want people to give in to the impulse.
00:20:18.000I think fighting is something that you should do correctly.
00:20:24.000There's principles that you should follow to fight correctly.
00:20:28.000It doesn't mean that you shouldn't take chances.
00:20:31.000But you know there's moments like Ricardo Lamas, when he fought Max Holloway, and they just stood in the center of the ring for the last few seconds of the fight, and Max Holloway pointed down at the ground.
00:20:48.000He's like, come on, right here, right here.
00:20:49.000And they just started swinging haymakers.
00:21:32.000He's beaten everybody in front of him at featherweight.
00:21:35.000The idea that this one moment where they decided to throw out all his skill and technique and just swing for the bleachers in the middle of the octagon.
00:21:53.000The Olympics bring that out, when the thing that you don't think should happen, or can't possibly happen, or is not wise, happens, and people just throw everything away.
00:22:52.000Well, in terms of its ability to change lanes and its ability to drive without you doing anything, I just put my hand on the wheel and hold it there, and it does all the work.
00:23:02.000So, because, like, one or two people listen to this podcast...
00:23:05.000I want to take this opportunity and tell people, if you drive a Tesla, whether you listen to this now or a year from now, two years from now, Tesla or any other car, keep your damn eyes on the road.
00:23:18.000So, whatever you think the system is able to do, you will have to still monitor the road.
00:23:40.000No, this is your level of expertise, obviously.
00:23:42.000I mean, I'm not throwing down with you on this.
00:23:44.000No, I think it's really important in this transitionary phase, whatever the car company, whatever the system, that we don't overtrust the system.
00:31:46.000I don't remember if we brought this up last time, but I just remembered seeing this video where you're playing guitar while you were driving.
00:33:38.000And they have just a lot more information.
00:33:40.000So if you're going to build artificial intelligence systems, so machine learning systems that learn from huge amounts of data, camera is the way to go.
00:34:16.000And there's a lot of other debates, but that's one of the core ones.
00:34:19.000It's basically, for camera, if you go camera like you do in the Tesla: there's seven cameras in your Tesla, three looking forward, the rest all around, and so on, one looking inside.
00:34:51.000Yeah, it's a very large amount of data.
00:34:54.000So you're talking about over 500,000 vehicles have Autopilot.
00:35:00.000450,000, I think, have the new version of Autopilot, Autopilot 2, which is the one.
00:35:06.000You're driving, and all of that is data.
00:35:08.000So all of those, all the edge cases, what they call them, all the difficult situations that occur, is feeding the machine learning system to become better and better and better.
00:35:19.000And the open question is, how much better does it need to get to the human level of performance?
00:35:26.000One of the big assumptions of us human beings is that we think that driving is actually pretty easy, and we think that humans suck at driving.
00:35:39.000You think like driving, you know, you stay in the lane, you stop at the stop sign, it's pretty easy to automate.
00:35:45.000And then the other one is you think like humans are terrible drivers, and so it'll be easy to build a machine that outperforms humans at driving.
00:35:54.000Now, I think there's a lot of flaws behind that intuition.
00:35:59.000We take for granted how hard it is to look at the scene, like everything you just did, picking up and moving around some objects. It's really difficult to build an artificial intelligence system that does that.
00:36:11.000To be able to perceive and understand the scene enough to understand the physics of the scene, like all these objects, like how to pick them up, the texture of those objects, the weight, to understand glasses folded and unfolded, open water bottle, all those things is common sense knowledge that we take for granted.
00:36:31.000But there is no artificial system in the world today, nor will there be for perhaps quite a while that can do that kind of common sense reasoning about the physical world.
00:37:49.000If there's a person who doesn't have the right of way who begins crossing, we're going to either maintain speed or speed up potentially if we want them to not cross.
00:37:59.000So that game there, to get that right.
00:38:04.000And if that, God forbid, leads to a fatality, for us as a society to rationally reason about that and think about that.
00:38:16.000I mean, a fatality like that could basically bankrupt a company.
00:38:19.000There's a lawsuit going on right now about an accident in Northern California with Tesla.
00:38:30.000What were the circumstances of that one?
00:38:33.000So there was, I believe, in Mountain View, a fatality in a Tesla, where it...
00:38:39.000This is a common problem for all lane-keeping systems, like Tesla Autopilot, is there was a divider in the highway, and basically the car was driving along the lane, and then the car in front moved to an adjacent lane,
00:39:52.000The only information they have is hands on the steering wheel, and they were saying that for, like, half a minute leading up to the crash, the hands weren't on the steering wheel or something like that.
00:40:04.000Basically trying to infer whether the person was paying attention or not.
00:40:07.000But we don't have the information on exactly where their eyes were.
00:40:11.000You can only make guesses as far as I know, again.
00:40:15.000So the question is, this is the eyes on the road thing, because I think I've heard you on a podcast saying you're tempted to sort of look off the road at your new Tesla, or at least become a little bit complacent.
00:40:28.000Yeah, the worry is that you just rely on the thing, that you would relax too much.
00:40:34.000But what would that relaxation lead to?
00:40:38.000If you weren't, you know, when you're driving, I mean, we've discussed this many times on the podcast that the reason why people have road rage, one of the reasons, is because you're in a heightened state, because cars are flying around you and your brain is prepared to make split-second decisions and moves.
00:40:56.000And the worry is that you would relax that because you're so comfortable with that thing driving.
00:41:02.000Everybody that I know that's tried that, they say you get really used to it doing that.
00:41:06.000You get really used to it just driving around for you.
00:41:08.000So the question is what happens when you get used to it?
00:44:44.000I mean, the thing is, in a lot of the things he does, which I admire greatly from any man or woman innovator, it's just boldly, fearlessly pursuing new ideas or jumping off the cliff and learning to fly on the way down.
00:45:01.000I mean, no matter what happens, he'll be remembered as one of the great innovators of our time.
00:45:06.000Whatever you say about him. Maybe, in my book, Steve Jobs was as well.
00:45:11.000Even if you criticize him, saying perhaps he didn't contribute significantly to the technological development of the company or the different ideas they pursued.
00:45:18.000Still, his brilliance was in the products: the iPhone, the personal computer, the Mac, and so on.
00:45:29.000And yes, in this space of autonomous vehicles, of semi-autonomous vehicles, of driver assistance systems, it's a pretty tense space to operate in.
00:45:41.000There's several communities in there that are very responsible but also aggressive in their criticism.
00:45:48.000So in the automotive sector, obviously, since Henry Ford and before, there's been a culture of safety, of just great engineering.
00:45:58.000These are like some of the best engineers in the world in terms of large-scale production.
00:46:03.000You talk about Toyota, you talk about Ford, GM. These people know how to do safety well.
00:46:08.000And so here comes Elon with Silicon Valley ideals that throws a lot of it out the window and says we're going to revolutionize the way we do automation in general.
00:46:20.000We're going to make software updates to the car once a week, twice a week, over the air, just like that.
00:46:27.000That makes people, the safety engineers and human factors engineers, really uncomfortable.
00:46:33.000Like, what do you mean you're going to keep updating the software of the car?
00:46:43.000Because the way in the automotive sector you test the system, you come up with the design of the car, every component, and then you go through really rigorous testing before it ever hits the road.
00:46:54.000The idea from the Tesla side is, they basically test the software in shadow mode, but then they just release it.
00:47:03.000So essentially the drivers become the testers.
00:47:07.000And then they regularly update it to adjust if any issues arise.
00:47:13.000That makes people uncomfortable because there's not a standardized testing procedure; there's not, at least, a feeling in the industry of rigor, because the reality is we don't know how to test software with the same kind of rigor that we've tested automotive systems in the past.
00:47:32.000So I think it's extremely exciting and powerful to approach automotive engineering, at least in part, with a software engineering perspective.
00:47:47.000So just doing what's made Silicon Valley successful.
00:47:50.000So updating regularly, aggressively innovating on the software side.
00:47:54.000So your Tesla over the air, while we're sitting here, could get a totally new update.
00:47:59.000With a flip of a bit, as Elon Musk says, it can gain all new capabilities.
00:48:07.000That's really exciting, but that's also dangerous.
00:48:17.000So, the apps on your phone fail all the time.
00:48:23.000We're, as a society, used to software failing, and we just kind of reboot the device or restart the app.
00:48:29.000The most complex software systems in the world today, if we think outside of nuclear engineering and so on, are too complex to really thoroughly test.
00:48:42.000So thorough, complete testing, proving that the software is safe is nearly impossible on most software systems.
00:48:51.000That's nerve-wracking to a lot of people because there's no way to prove that the new software update is safe.
00:49:05.000Do you know how they create software, they update it, and then they test it on something?
00:49:12.000How much testing do they do, and how much do they do before they upload it to your car?
00:49:18.000Yeah, so I don't have any insider information, but I have a lot of sort of public available information, which is they test the software in shadow mode, meaning they see how the new software compares to the current software by running it in parallel on the cars and seeing if there's disagreements,
00:49:36.000like seeing if there's any major disagreements and bringing those up and seeing what...
00:49:42.000By parallel, I'm sorry, do you mean both programs running at the same time?
00:49:48.000One, the original update, yes, at the same time, the original update actually controlling the car, and the new update is just...
00:49:59.000Making the same decisions without them being actuated.
00:50:03.000Without actually affecting the vehicle's dynamics.
00:50:06.000And so that's a really powerful way of testing.
00:50:09.000I think the software infrastructure that Tesla has built allows for that.
00:50:14.000And I think other companies should do the same.
00:50:16.000That's a really exciting, powerful way to approach not just automation, not just autonomous vehicles or semi-autonomous vehicles, but just safety.
00:50:25.000It's basically: all the data that's on the cars, you bring it back to a central point where you can use the edge cases, all the weird situations in driving, to improve the system, to test the system, to learn, to understand where the car is used,
00:50:43.000misused, how it can be improved and so on.
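A minimal sketch of the shadow-mode testing idea described above, in Python. Everything here (the Frame and Control types, the tolerances) is a hypothetical illustration, not Tesla's actual implementation: the deployed policy drives the car while a candidate policy runs in parallel, and only their disagreements are logged for review.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Frame:
    frame_id: int
    # camera images, radar returns, etc. would live here

@dataclass
class Control:
    steering: float   # steering angle, radians
    throttle: float   # 0..1

def disagree(a: Control, b: Control,
             steer_tol: float = 0.05, throttle_tol: float = 0.10) -> bool:
    """Flag frames where the two policies differ beyond a tolerance."""
    return (abs(a.steering - b.steering) > steer_tol or
            abs(a.throttle - b.throttle) > throttle_tol)

def drive_loop(frames: List[Frame],
               deployed: Callable[[Frame], Control],
               shadow: Callable[[Frame], Control]) -> List[Tuple[int, Control, Control]]:
    """Run both policies on every frame; only `deployed` is ever actuated.
    Disagreements are collected for offline review."""
    flagged = []
    for frame in frames:
        active = deployed(frame)    # this output controls the car
        candidate = shadow(frame)   # computed in parallel, never actuated
        if disagree(active, candidate):
            flagged.append((frame.frame_id, active, candidate))
    return flagged
```

The value of the pattern is that candidate software sees real-world driving at fleet scale without ever touching the controls.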
00:51:32.000Because it's a huge amount of data, right?
00:51:35.000I think in the recent Autonomy Day a couple of weeks ago, they had this big Autonomy Day where they demonstrated the vehicle driving itself on a particular stretch of road.
00:51:46.000They showed off that, you know, they're able to query the data, basically ask questions of the data. The example they gave is a bicycle on the back of a car.
00:51:58.000And they're able to say, well, when the bicycle is on the back of a car, that's not a bicycle.
00:52:05.000And they're able to now look back into the data and find all the other cases, the thousands of cases that happened all over the world, in Europe and Asia, in South America and North America and so on, and pull all those elements and then train the perception system of Autopilot to be able to better recognize those bicycles as part of the car.
00:52:27.000So every edge case like that, they go through saying, okay, the car freaked out in this moment.
00:52:33.000Let me find moments like this in the rest of the data and then improve the system.
00:52:39.000So this kind of cycle is the way to deal with problems, with failures of the system.
00:52:48.000It's to say, every time the car fails at something, say, is this part of a bigger set of problems?
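As a rough illustration of that query-and-retrain cycle, here is a hedged Python sketch. `fleet_db.query` and `detector.fine_tune` are invented stand-ins for whatever internal tooling actually exists, not any real Tesla API.

```python
# Hypothetical sketch of the edge-case mining loop described above.
# `fleet_db` and `detector` are invented stand-ins, not a real API.

def mine_and_retrain(fleet_db, detector, trigger="bicycle_on_vehicle"):
    # 1. Query the fleet's logged data for frames matching the edge case,
    #    e.g. a bicycle mounted on the back of a car that the perception
    #    system wrongly detected as a free-standing bicycle.
    hard_examples = fleet_db.query(label=trigger)

    # 2. Correct the labels: the bicycle is part of the vehicle ahead,
    #    not an independent road user.
    corrected = [(frame, "vehicle_with_bike_rack") for frame in hard_examples]

    # 3. Fold the corrected examples back into training and fine-tune,
    #    so the whole fleet improves from one discovered failure.
    detector.fine_tune(corrected)
    return detector
```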
00:55:30.000And using your app, sort of get from point A to point B. But out of the cars that are semi-autonomous, where there is an autonomous program but you do have to keep your hands on the wheel and pay attention to the road, what are the leaders?
00:55:43.000Besides Tesla, there's Tesla and who else is doing it?
00:57:09.000Well, okay, so semi-autonomous, we have to be careful because the Waymo cars, the quote-unquote fully autonomous cars, are currently semi-autonomous.
00:57:19.000That's the highest level of semi-autonomous, right?
00:57:23.000Yeah, I guess it's not even a highest level, it's a principle, it's a philosophy difference.
00:57:28.000Because they're saying we're going to do full autonomy, we're just not quite there yet.
00:57:33.000Most other companies are doing semi-autonomous, better called driver assistance systems. They're saying, we're not interested in full autonomy; we just want a driver assistance system that helps you steer the car.
00:57:46.000So let's call those semi-autonomous vehicles or driver assistance systems.
00:57:51.000There's several leaders in that space.
00:57:54.000One car we're studying that's really interesting is the Cadillac Super Cruise system.
00:58:00.000So GM has a system, it's called Super Cruise, that I think is the best comparable system to Autopilot today.
00:58:11.000The key differentiator there is, there's a lot of little elements, but the key differentiator is there's a driver monitoring system.
00:58:18.000So there's a camera that looks at you and tells you if your eyes are on the road or not.
00:58:22.000And if your eyes go off the road for, I believe, more than six seconds, it starts warning you and says you have to get your eyes back on the road.
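The timer logic behind a warning like that is simple to sketch. A minimal, hypothetical Python version, assuming the roughly six-second threshold mentioned above; this is not GM's actual code, and the upstream camera classifier that decides eyes-on-road per frame is taken as given.

```python
EYES_OFF_LIMIT_S = 6.0  # approximate threshold described above; hypothetical

class GazeMonitor:
    """Warn when the driver's gaze has been off the road for too long.
    A camera-based classifier upstream decides eyes_on_road each frame."""

    def __init__(self, limit_s: float = EYES_OFF_LIMIT_S):
        self.limit_s = limit_s
        self.off_road_since = None  # timestamp when gaze left the road

    def update(self, eyes_on_road: bool, now: float) -> bool:
        """Return True when a warning should be issued this frame."""
        if eyes_on_road:
            self.off_road_since = None  # reset the timer
            return False
        if self.off_road_since is None:
            self.off_road_since = now   # gaze just left the road
        return (now - self.off_road_since) > self.limit_s

monitor = GazeMonitor()
assert not monitor.update(eyes_on_road=False, now=0.0)  # timer just started
assert monitor.update(eyes_on_road=False, now=7.0)      # 7s off road: warn
```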
00:58:32.000That's one of the big disagreements, for example, between me and many experts in the field on one side, and Elon and the Tesla approach on the other: that there should be a driver monitoring system.
00:58:44.000Why does Elon feel like there shouldn't be?
00:58:47.000I think his focus, the Tesla's focus, is on just improving the system so fast and so effectively that it doesn't matter what the driver does.
00:59:06.000They operate like that with many of the ideas they work on.
00:59:11.000They sort of boldly proceed forward to try to make the car extremely safe.
00:59:17.000Now the concern there is you have to acknowledge the psychology of human beings.
00:59:21.000Unless the car is perfect, or under our definition perfect, which is much better than human beings, then you have to be able to make sure that people are still paying attention to help the car out when it fails.
00:59:39.000And for that, you have to have driver monitoring.
01:01:45.000I don't know if that's the CT6, but the one we're looking at is the CT6. Yeah, that's the 2018 CT6. Yeah.
01:01:55.000But they want to add it to their full fleet.
01:01:58.000Does that have the same amount of cameras as the Tesla system does?
01:02:02.000No, and it has a very different philosophy as well in another way, which is it only works on very specific roads, on interstate highways.
01:02:11.000There's something called ODD, Operational Design Domain.
01:02:15.000So they define that this thing, Super Cruise System, only works on this particular set of roads and they're basically just major highways.
01:02:24.000The Tesla approach, basically what Elon jokingly referred to as ADD, right, is that it works basically anywhere.
01:02:34.000So if you try to turn on your autopilot, you can basically turn it on anywhere where the cameras are able to determine either lane markings or the car in front of you.
01:02:42.000And so that's a very different approach, saying you can basically make it work anywhere or, in the Cadillac case, make it work on only specific kinds of roads.
01:02:50.000So you can test the heck out of those roads.
01:02:54.000So you can use, actually, LIDAR to map the full roads so you know the full geometry of all the interstate highway system that it can operate on.
01:03:04.000Does it also coordinate with GPS so it understands where, like, bumps in the road might be or hills?
01:03:45.000Yeah, but I'd rather you blow out your tire than, I mean, the kind of fatality that happened in the Mountain View with Tesla, I believe, is slightly construction-related.
01:03:57.000So, I mean, there's a lot of safety-critical events that happen around construction-related stuff.
01:04:01.000I would like it if that stupid Tesla could figure out the hole in the ground, though, so I didn't have to blow a tire out.
01:04:40.000Because when you swerve, you now introduce, as opposed to just braking the vehicle, swerving into another lane means you might create a safety situation elsewhere.
01:04:51.000You might put somebody else in danger.
01:05:26.000But in terms of construction zones, in terms of other weird things that change the dynamics, the geometry of the road, that's difficult to get right.
01:05:36.000So Cadillac's doing a version of it, but it sounds like it's a little bit less involved, less comprehensive.
01:05:44.000Maybe there's a better way of describing it.
01:05:45.000Yeah, and less, I would say, it's more safety-focused.
01:05:50.000It's a sort of, what's the right word to use here?
01:05:54.000It's more cautious in its implementation.
01:05:58.000So GM, again, has a tradition for better or for worse.
01:06:22.000It's hard to talk about without actually experiencing the system.
01:06:25.000What's more important than driver monitoring and any of the details we talk about is how the whole thing feels, the whole thing together, how it's implemented, the whole interface.
01:06:35.000The Cadillac system is actually done really well in the sense that there's a clarity to it.
01:06:41.000There's a green color and a blue color and you know exactly when the system is on and when it's off.
01:06:46.000That's one of the big things people struggle with in other cars: it's just confusing, drivers not being able to understand when the system is on or off.
01:09:09.000I don't want to speak too much to the details, but they have lane keeping systems.
01:09:13.000They're basically systems that keep you in the lane.
01:09:16.000That is similar to what, in spirit, Autopilot is supposed to do, but is less aggressive in how often you can use it and so on.
01:09:23.000If you look at the actual performance, how often the system is able to keep you in the lane, Autopilot is currently the leader in that space.
01:09:32.000And they're also the most aggressive innovators in that space.
01:09:38.000They're really pushing it to improve further and further.
01:09:41.000And the open question, the worrying question, is if it improves much more, are there going to be effects like complacency, like people will start texting more, will start looking off-road more?
01:09:55.000It's a totally open question and nobody knows the answer to it really.
01:10:00.000And there's a lot of folks, like I mentioned, in the safety engineering and human factors community, these psychology folks who have roots in aviation, and there's been 70 years of work that looks at vigilance.
01:10:14.000If I force you to sit here and monitor for something weird happening, like radar operators in World War II had to watch for the dot to appear.
01:10:26.000If I sit you behind that radar and make you do it, after about 15 minutes, but really 30 minutes, your rate of being able to detect any problems will go down significantly.
01:10:56.000You have to be, you know, there has to be a dance of attention.
01:10:59.000We don't have a mode for watching autonomous things, right?
01:11:04.000If you consider historically the kind of modes that people have for observing things, we don't really have a mode for making sure that an autonomous thing does its job.
01:12:20.000And obviously there's politics. I think the FAA is supposed to supervise, and there's a close relationship between Boeing and the FAA. There's questions around that; there are better experts on that than me.
01:12:32.000But on the software side, it is worrying, because it was essentially a single software update that was meant to help prevent the airplane from stalling.
01:12:43.000So, if the nose is tilting up, increasing the chance of stalling, it's going to automatically point the nose down of the airplane.
01:12:54.000And the pilots, in many cases, as far as I understand, weren't even informed of this update, right?
01:13:00.000They weren't even told this is happening.
01:13:02.000The idea behind the update is that they're not supposed to really know.
01:13:06.000It's supposed to just manage the flight for you, right?
01:13:09.000The problem happened with the angle of attack sensor.
01:13:13.000So the sensor that tells you the actual tilt of the plane.
01:13:17.000And there's a malfunction in that sensor, as far as I understand, in both planes.
01:13:21.000And so the plane didn't actually understand its orientation.
01:13:25.000So the system started freaking out and started pointing the nose down aggressively.
01:13:29.000And the pilots were like trying to restabilize the plane and couldn't.
01:13:32.000So shortly after liftoff, they just crashed.
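To make the failure mode concrete, here is a deliberately oversimplified Python sketch of trim logic that trusts a single angle-of-attack sensor, followed by the redundant cross-check that would catch the disagreement. This is an illustration of the logic flaw as described above, not Boeing's actual MCAS code; all thresholds and numbers are invented.

```python
AOA_STALL_THRESHOLD_DEG = 15.0  # invented stall-warning threshold

def trim_command(aoa_deg: float) -> float:
    """Single-sensor design: nose-down trim whenever the one sensor reads high."""
    return -2.5 if aoa_deg > AOA_STALL_THRESHOLD_DEG else 0.0

# If that one sensor fails high, the system keeps commanding nose-down
# even though the airplane's true attitude is fine:
true_aoa, failed_sensor_reading = 3.0, 40.0
assert trim_command(failed_sensor_reading) < 0  # erroneous nose-down on bad data

def trim_command_redundant(aoa_left: float, aoa_right: float,
                           max_disagreement_deg: float = 5.0) -> float:
    """Safer design: cross-check two sensors, disengage on disagreement."""
    if abs(aoa_left - aoa_right) > max_disagreement_deg:
        return 0.0  # sensors disagree: hand control back to the pilots
    return trim_command((aoa_left + aoa_right) / 2.0)
```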
01:13:53.000One way is to be sort of a little bit Luddite.
01:13:55.000I use the term carefully and just be afraid and say, you know what, we should really not allow so many software updates.
01:14:02.000The other one is sort of embracing it and redefining what it means to build safe AI systems in this modern world with updates multiple times a week.
01:14:17.000I think updates, regular updates, so combining the two cultures but really letting good software engineering lead the way is the way to go.
01:14:27.000I wish other companies were competing with Tesla on this.
01:14:31.000On the software side, Tesla is far ahead of everyone else in the automotive sector.
01:14:56.000So most cars are not able to do over-the-air.
01:15:00.000As far as I know, no cars are able to do major over-the-air updates except Tesla vehicles.
01:15:06.000They do over-the-air updates to the entertainment system.
01:15:10.000Like, you know, if your radio is malfunctioning.
01:15:13.000But in terms of the control of the vehicle, you have to go to the dealership to get an update.
01:15:17.000Tesla is the only one that over-the-air, like it can multiple times a week do the update.
01:15:23.000I think that should be a requirement for all car companies.
01:15:26.000But that requires that they rethink the way they build cars.
01:15:30.000That's really scary when you manufacture over a million cars a year at Toyota and GM. To say, especially to old-school Detroit guys and gals that are like legit car people, we need to hire some software engineers,
01:15:47.000It's a totally, you know, I don't know how often you've been to Detroit, but there's a culture difference between Detroit and Silicon Valley.
01:15:54.000And those two have to come together to solve this problem.
01:15:57.000So, to have the adult responsibility of Detroit, of how to do production well, manufacturing, how to do safety well, how to test the vehicles well, and the bold, crazy, innovative spirit of Silicon Valley, which Elon Musk in basically every way represents.
01:16:14.000I think that will define the future of AI in general.
01:16:22.000Interacting with AI systems, even outside the automotive sector, raises these questions of safety, of AI safety, of how we supervise the systems, how we keep them from misbehaving, and so on.
01:16:35.000There's a concern about those systems being vulnerable to third-party attacks.
01:16:45.000I think there is a whole discipline called adversarial machine learning in AI, which looks at, for basically any kind of system you can think of, how you can feed it examples.
01:16:57.000How you can add a little bit of noise to the system to fool it completely.
01:17:02.000So there's been demonstrations on Alexa, for example, where you can feed noise into the system that's imperceptible to us humans and make it believe you said anything.
01:17:17.000So, fool the system into thinking, so ordering extra toilet paper, I don't know.
01:17:22.000And the same for cars, you can feed noise into the cameras to make it believe that there is or there isn't a pedestrian, that there is or there isn't lane markings.
01:17:47.000You can construct a situation where a pedestrian can wear certain types of clothing or put up a certain kind of sign where they disappear from the system.
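The standard textbook version of this attack is the fast gradient sign method (FGSM): nudge every pixel a tiny step along the loss gradient, imperceptibly to a human, and the classifier's prediction can flip. A minimal PyTorch sketch, purely illustrative and aimed at a generic differentiable classifier, not at any specific product:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.
    `model` is any differentiable classifier; `epsilon` bounds the
    per-pixel change, kept small enough to be invisible to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that *increases* the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```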
01:17:54.000I have to ask you this because now I just remember this.
01:17:56.000You'd be the perfect person to talk about this.
01:17:58.000I'm not sure if you remember this case, but there was a guy named Michael Hastings.
01:18:01.000Michael Hastings was a journalist and he was, I believe, in Iraq or Afghanistan.
01:18:08.000He was somewhere overseas and he was stuck there because of this volcano that erupted in, I believe, Iceland.
01:18:49.000And Michael Hastings was fearing for his life because he thought that they were going to come and get him because these people were very, very angry at him.
01:18:58.000He wound up driving his car into a tree going like 120 miles an hour.
01:19:04.000And the car exploded and the engine went flying.
01:19:07.000And people that were the conspiracy theorists were saying they believed that that car had been rigged to work autonomously or that someone, some third party bad person decided to,
01:19:22.000or good person depending on your perspective, decided to drive that guy's car into a fucking tree at 120 miles an hour.
01:19:29.000Do you think that that, and this is 2011?
01:20:56.000And I just think it's a very difficult technical challenge that if hacking happens… It would be at a different level than hacking the AI systems.
01:21:50.000I'm just asking you because you're actually an expert.
01:21:52.000I mean, it's very rare that you get an expert in autonomous vehicles and you get to run a conspiracy theory by them to see if they can just put a stamp on it being possible or not.
01:22:01.000Let me just say that Alex Jones is officially not allowed to say MIT scientist says.
01:22:07.000Which is exactly what he's going to try to do.
01:23:52.000The systems back then, though, were far more primitive, correct?
01:23:58.000Yeah, but it's really, again, the attack vectors here.
01:24:03.000So the way you hack these systems has more to do with the software, the low-level software that can be primitive, than the high-level AI stuff.
01:24:11.000Right, but my issue with it was there's no cameras on the outside of the vehicle like there are on a Tesla of today, which has autonomous driving as an option.
01:24:52.000And so obviously it becomes amenable, susceptible to...
01:24:58.000Bugs that can be exploited to hack the code.
01:25:02.000And so people are worried legitimately so that these security attacks would lead to these kind of, well, at the worst case, assassinations, but really sort of just basic attacks, basic hacking attacks.
01:25:20.000I think that's something that people in the automotive industry and certainly Tesla is really working hard on and making sure that everything is secure.
01:25:29.000There's going to be, of course, vulnerabilities always, but I think they're really serious about preventing them.
01:25:35.000But in the demonstration space, you'll be able to demonstrate some interesting ways to trick the system in terms of computer vision.
01:25:44.000This all boils down to the fact that these systems, the ones that are camera-based, are not as robust to the world as our human eyes are.
01:25:55.000So like I said, if you add a little bit of noise, you can convince it to see anything.
01:26:00.000To us humans, it'll look like the same road, like the same three pedestrians.
01:26:03.000Could you draw like a little person on the camera lens?
01:26:31.000So with the next-gen system, you're going to have to bring that Cadillac into the dealership, and they're going to have to update the software.
01:26:42.000And the question was, so that's an exciting, powerful capability, but then the Boeing case, the flip side, is, you know, it can significantly change the behavior of the system.
01:27:00.000Especially with a lot of, I mean, that number, whatever it is, it's like 300 combined, 300 plus people dead, maybe even 400. I mean, I don't even know how to think about that number.
01:27:26.000It's a lot of burden, and it's one of the reasons it's one of the most exciting things to work on, actually, is the code we write has the capability to save human life, but the terrifying thing is it also has the capability to take human life.
01:27:43.000And that's a weird place to be as an engineer, where, directly, a little piece of code, you know, I write thousands of lines a day.
01:27:52.000You know, basically notes you're taking could eventually lead to somebody dying.
01:27:58.000Now, is there, I don't know anything about coding, but do you have like, is there a spell check for coding?
01:28:04.000Yeah, so it's kind of called debugging.
01:28:47.000And so, the way people were looking at it, that was like a frivolous suggestion.
01:28:56.000And that it was ridiculous to try to get someone who was 50 years old, who doesn't have any education in computers at all, to change their job from being a coal miner to learning how to code.
01:29:07.000So they started saying it to politicians and people mocking it.
01:29:10.000But then what Twitter alleged was that what was going on was it was being connected to white supremacy and anti-Semitism and a bunch of different things, like people were saying learn to code and they were putting a bunch of these other phrases in.
01:29:26.000My suggestion would be, well, that's a different fucking thing.
01:29:29.000Now you have a problem with Nazis and white supremacists, but the problem is with Nazis and white supremacists.
01:29:36.000When someone is just saying learn to code, mocking this ridiculous...
01:29:42.000Idea that you're going to teach, you know, that's a legitimate criticism of someone's perspective, that you're going to get a coal miner to learn how to fucking do computer coding.
01:32:01.000I tend to believe, again, this might be my naive nature, is that they don't have bias and they're trying to manage this huge flood of tweets and what they're trying to do is not to remove conservatives or liberals and so on.
01:32:23.000They're trying to remove people that lead to others leaving the conversation.
01:32:32.000So they want more people to be in the conversation.
01:33:14.000There's some progressive people or liberal people that post all sorts of crazy shit, and they don't get banned at the same rate.
01:33:22.000It's really clear that someone in the company, whether it's up for manual review, whether it's at the discretion of the people that are employees, when you're thinking about a company that's a Silicon Valley company, you are...
01:33:35.000Without doubt, you're dealing with people that are leaning left.
01:33:40.000There's so many that lean left in Silicon Valley.
01:33:44.000The idea that that company was secretly run by Republicans is ridiculous.
01:33:48.000They're almost all run by Democrats or progressive people.
01:33:52.000So at the leadership level, there's a narrow-mindedness that permeates all of Silicon Valley, you're saying?
01:33:59.000Well, the question is – I think there's a leaning left that permeates Silicon Valley.
01:34:07.000I mean, I think if you had a poll of the people that work in Silicon Valley, where their political leanings are, I think it would be, by far, left.
01:34:14.000I think it would be the vast majority.
01:34:16.000Does that mean that affects their decisions?
01:34:33.000There's absolutely people that work there that lean.
01:34:36.000And there's been videos where they've captured people that were Twitter employees talking about it, talking about how you do that, how you find someone who's using Trump talk or saying "sad" at the end of things, certain characteristics they look for.
01:34:53.000There's been videos of, what is that, Project Veritas, where that guy and his employees got undercover footage of Twitter employees talking about that kind of stuff.
01:35:02.000The question is how much power do those individuals have?
01:35:04.000How many individuals are there like that?
01:35:07.000Are those people exaggerating their ability and what they do at work?
01:35:12.000Or are they talking about something that used to go on but doesn't go on anymore?
01:36:11.000The thought of being open-minded and acting in that ethic is probably one of the most important things that we could go forward with right now because things are getting so greasy.
01:36:27.000And we're at this weird position that I don't recall ever in my life there being such a divide between the right and the left in this country.
01:37:42.000I'm more concerned with the state of the economy and the way we trade with the world than I am with certain social issues that the Democrats embrace.
01:37:50.000So I'll lean that way, even though I do support gay rights, and I do support this, and I do support all these other progressive ideas.
01:38:13.000And the question is, this is where the role of AI comes in.
01:38:16.000Does the AI that recommends what tweets I should see, what Facebook messages I should see, is that encouraging the darker parts of me or the Steven Pinker better angels of our nature?
01:38:32.000Because if the AI trains purely on clicks, it may start to learn when I'm in a bad mood and point me to things that might be upsetting to me.
01:38:46.000And so escalating that division and escalating this vile thing that can be solved most likely with people training a little more jiu-jitsu or something.
01:39:24.000I mean, the anti-vax arguments on Facebook, I don't know if you ever dip into those waters for a few minutes and watch people fight back and forth in fury and anger.
01:39:35.000It's another one of those things that becomes an extremely lucrative thing.
01:39:41.000It's a subject for any social media empire.
01:39:45.000If you're all about getting people to engage, and that's where the money is in advertising, getting people to click on the page, and the ads are on those pages, you get those clicks, you get that money.
01:39:54.000If that's how the system is set up, and I'm not exactly sure how it is because I don't really use Facebook, but that's what it benefits.
01:40:00.000I mean, that's what it gravitates towards.
01:40:04.000So, and when we think about concern for AI systems, we talk about sort of Terminator, I'm sure we'll touch on it, but I think of Twitter as a whole as one organism.
01:40:14.000That is the thing that worries me the most, is the artificial intelligence that is very kind of dumb and simple, simple algorithms that are driving the behavior of millions of people.
01:40:25.000And together, the kind of chaos that we can achieve...
01:40:30.000I mean, that algorithm has incredible influence on all society.
01:40:33.000Twitter, our current president is on Twitter.
01:40:58.000And that, if you think about the long term, if you think about as one AI organism, that is a super intelligent organism that we have no control over.
01:41:07.000And I think it all boils down, honestly, to the leadership.
01:41:11.000To Jack and other folks like him, making sure that he's open-minded, that he goes hunting, that he does some jiu-jitsu, that he eats some meat and sometimes goes vegan.
01:41:24.000He just did a 10-day silent retreat.
01:41:29.000Where you don't talk at all for 10 days.
01:45:55.000A shout out to Sarah Block, a judo lady with a black belt in jiu-jitsu as well that was willing to put up with like hundreds or thousands of throws.
01:46:31.000Your brain should be exhausted by the end of it too because you're visualizing the whole thing.
01:46:36.000You're like going through it, you're imagining how your opponent would respond. It's really strengthening your imagination while you're also doing the drilling.
01:47:07.000They drill like crazy, and they do a lot of live drills, and they do a lot of pathway drills, where they'll do a whole series of movements, and then the escape, and then the reversal.
01:47:18.000These are long pathways, so that when you're actually in a scrap and you're rolling, you recognize it.
01:50:27.000Eddie invented a series of pathways from mission control to set up various techniques, arm bars, triangles, all these different things.
01:50:35.000But there had been people that had toyed with doing high guard, like Nino Schembri.
01:50:40.000He did a lot of rubber guard-esque stuff.
01:50:44.000There was a lot of things that people did, but Eddie has his own pathway and his own system.
01:50:49.000And then there's a lot of guys that branch off from that system, like Jeremiah.
01:50:53.000Like Vinny Magalhães, that have their own way that they prefer to set various techniques up.
01:50:59.000But what's really good about that, if you have the flexibility, is that when you're on the bottom, not only is it not a bad place to be, but you could put someone in some real trouble.
01:51:10.000When you have that ability, you're holding onto your ankle and using your leg, which is the strongest fucking limb in your body, right?
01:51:18.000Pulling down on someone with your leg, clamping down with your arm, and then you get your other leg involved.
01:51:43.000I remember it being, you know when somebody does a nice move on you, especially like a lower rank, your first reaction is like, oh, this would never, like you're annoyed.
01:52:31.000You're holding the back of the foot across the back of the neck, and so your shin is underneath someone's throat, and then you're pushing that shin with your other heel while you're squeezing with your arm.
01:53:37.000Like, so, if you go to the real big guy, like I'm rolling with a 240-pound guy, I'm not going to get to that spot.
01:53:42.000Like, I better have a good guard, otherwise I can't do anything, right?
01:53:46.000When someone's bigger than you and stronger than you, I mean, that's what Royce Gracie basically proved to the world.
01:53:52.000Like, as long as you have technique, it doesn't matter where you are.
01:53:55.000But if you only have top game, which a lot of people do, a lot of people only have top game, you know, you're kind of fucked if you wind up on your back.
01:54:04.000We see that a lot with wrestlers in MMA. As wrestlers, they can get on top of you and they'll fuck you up.
01:54:09.000They'll strangle you, they'll take you back, they'll beat you up from the mount, but they don't have nearly the same game when they're on their back.
01:54:17.000And then there's guys like Luke Rockhold, who's like an expert at keeping you on your back.
01:54:21.000He's one of those guys, when he gets on top of you, you're fucked.
01:58:36.000I wonder if he's ever considered MMA. I know there was some talk about it, but I wonder if he ever really...
01:58:43.000I think at this point, he is basically a no, but there are a few terrifying people, especially on the Russian side, that I think the heavyweight division in the UFC should be really worried about.
01:58:58.000I don't know if you heard about the Russian tank, the 22-year-old from Dagestan.
02:01:49.000Well, you know, that was sort of evident, and the mindset behind them was sort of evident at the end of that fight with Conor, where they went crazy and he jumped into the crowd.
02:03:12.000I think security could have been handled far better and will be in the future to prevent things like that from happening where people just jumped into the cage.
02:05:25.000I know you're gonna shut this down, as most fans do, but I... If he drops everything and goes to, like, Siberia to train, I would love to see him and Khabib, too.
02:08:20.000It exists purely in software and in hardware.
02:08:25.000And in ones and zeros, and that this is a new form of life, and this is the inevitable rise of a sentient being.
02:08:34.000I mean, I think if we don't get hit with the asteroid within a thousand years, or whatever the timeframe is, someone is going to figure out how to make a thing that just walks around and does whatever it wants and lives like a person, and that's not outside the realm of possibility.
02:08:52.000And I think that if that does happen, that's artificial life.
02:08:58.000And it's probably going to be better than what we are.
02:09:00.000I mean, what we are is basically, if you go back and look at, you know, 300,000, 400,000 years ago, when we were some Australopithecus-type creature: how many of them would ever look at the future and go, I hope I never get a Tesla?
02:09:17.000The last thing I want is a fucking phone.
02:09:18.000The last thing I want is air conditioning and television.
02:09:21.000The last thing I want is to be able to talk in a language that other people can understand and to be able to call people on the phone.
02:10:13.000Let's talk about sentience and creating artificial life, but I think even those life forms, even those systems need to have the darker parts.
02:11:00.000You had to have that jaguar there in order to inspire you to make enough safety so that your kids can grow old enough that they can get information from all the people that did survive as well and they can accumulate all that information and create air conditioning and automobiles and guns and keep those fucking jaguars from eating your kids.
02:11:18.000This is what had to take place as a biological entity.
02:11:22.000But once you surpass that, once you become this thing that doesn't need emotion, doesn't need conflict, doesn't need to be inspired, never gets lazy.
02:11:32.000It doesn't have these things that we have built into us as a biological system.
02:11:36.000If you looked at us as wetware operating software.
02:11:44.000It's software designed for cave people.
02:11:46.000And we're just trying to force it into cars and force it into cubicles.
02:11:52.000But part of the problem with people and their unhappiness is that all of these human reward systems that have been set up through evolution and natural selection, these instincts to stay alive, are no longer relevant in today's society.
02:12:07.000So they become road rage, they become extracurricular violence, they become depression, they become all these different things that people suffer from.
02:16:00.000That has to be an option if we are here, right?
02:16:02.000If we can't see any others out there, and even though there's the Fermi Paradox and there's all this contemplation that if they do exist, maybe they can't physically get to us, or maybe they're on a similar timeline to us.
02:16:17.000Also, it's also possible, as crazy as it might sound, that this is as good as it's ever gotten anywhere in the world.
02:16:34.000There's 15-armed caterpillar people that live on some other fucking planet and they just toss their own shit at each other and they never get any work done.
02:16:44.000But even if that's true, even if this beauty that we perceive requires evil to battle, and requires seemingly insurmountable obstacles you have to overcome, and then through this you achieve beauty.
02:17:06.000That beauty is in the eye of the beholder, for sure.
02:17:10.000Objectively, the universe doesn't give a fuck if Rocky beats Apollo Creed in the second movie.
02:19:00.000I appreciate human beings in this life, and their contributions.
02:19:05.000And as I get older, particularly over the last few years, I started doing a lot of international travel.
02:19:12.000I fucking appreciate the shit out of all these people that are living in this different way, with weird language and shit, weird-smelling foods.
02:19:20.000And I like to think, what would it be like if I grew up here?
02:19:23.000These are just people, but they're in this weird sort of mode.
02:20:45.000I'd like to get to a sense of how you think about, and maybe I can talk about where the technology is, of what that artificial intelligence looks like in 20 years, in 30 years, that will surprise you.
02:21:00.000So you have a sense that it has a human-like form.
02:21:03.000No, I have a sense that it's going to take on the form the same way the automobile has.
02:21:08.000If you go back and look at it, C.T. Fletcher has a beautiful old patinaed pickup truck.
02:23:11.000They created a system called GPT-2, which does language modeling.
02:23:15.000This is something in machine learning where you basically, unsupervised, let the system just read a bunch of text and it learns to generate new text.
02:23:23.000And they've created this system called GPT-2 that is able to generate very realistic text.
02:23:46.000It paints a picture of a world in five, ten years plus where most of the text on the internet is generated by AI. And it's very difficult to know who's real and who's not.
02:23:58.000And one of the interesting things, I'd be curious from your perspective to get what your thoughts are.
02:24:02.000What OpenAI did is they didn't release the code for the full system.
02:24:06.000They only released a much weaker version of it publicly.
02:24:12.000So they felt that it was their responsibility to hold back.
02:24:17.000Prior to that date, everybody in the community, including them, had open-sourced everything.
02:24:22.000But they felt that now, at this point, part of it was for publicity.
02:24:26.000They wanted to raise the question, is, when do we hold back on these systems?
02:24:33.000When they're so strong, when they're so good at generating text, for example, in this case, or at deep fakes, at generating fake Joe Rogan faces.
02:24:43.000Jamie just did one with me on Donald Trump's head.
02:25:59.000Because you happen, your podcast happens to be one of the biggest data sets in the world of people talking in really high quality audio with high quality 1080p for most, for a few hundred episodes of people's faces.
02:26:36.000So you could basically make Joe Rogan say anything.
02:26:40.000Yeah, I think this is just one step before they finagle us into having a nuclear war against each other so they could take over the earth.
02:26:47.000What they're going to do is they're going to design artificial intelligence that survives off of nuclear waste.
02:26:52.000And so then they encourage these stupid assholes to go into a war with North Korea and Russia, and we blow each other up, but we leave behind all this precious...
02:27:02.000Radioactive material that they use to then fashion their new world.
02:27:06.000And we come a thousand years from now and it's just fucking beautiful and pristine with artificial life everywhere.
02:27:23.000Imagine if they did do that, they would have to have started with him in the 70s.
02:27:28.000I mean, he's been around for a long time and talking about being president for a long time.
02:27:32.000Maybe electronics have been playing the long game and they got him to the position.
02:27:36.000And then they're going to use all this...
02:27:38.000On the grand scale of time, the 70s isn't really the long game.
02:27:41.000Well, you know about that internet research agency, right?
02:27:41.000You know about that, that's the Russian company that's responsible for all these different Facebook pages where they would make people fight against each other.
02:29:27.000I think once people figure out how to manipulate that effectively and really create like an army of fake bots that will assume stances on a variety of different issues and just argue...
02:30:04.000I mean, there could be effects that we're not anticipating totally.
02:30:06.000There might be some ways in virtual reality we can authenticate our identity better.
02:30:12.000So it'll change the nature of communication, I think.
02:30:16.000The more you can generate fake text, then the more we'll distrust the information online and the way that changes society is totally an open question.
02:31:25.000It says, a move that threatens to push many of our most talented young brains out of the country and onto campuses in the developing world.
02:31:34.000Research by Oxford University warns that the UK would have spent nearly $1 trillion on post-Brexit infrastructure.
02:31:41.000That's crazy that that's all done by an AI that's like spelling this out in this very convincing argument.
02:31:47.000The thing is, the way it actually works algorithmically is fascinating because it's generating it one character at a time.
02:31:57.000You don't want to discriminate against AI, but as far as we understand, it doesn't have any understanding of what it's doing, of any ideas it's expressing.
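For reference, sampling from the publicly released smaller GPT-2 model takes only a few lines with the Hugging Face transformers library (a later convenience wrapper, not OpenAI's original release code). One detail worth hedging from the exchange above: GPT-2 generates one token at a time, where a token is a subword piece rather than a single character, but the autoregressive idea is the same.

```python
# Sampling from the small, publicly released GPT-2 via Hugging Face
# transformers. The prompt text is an arbitrary example.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Recycling is good for the world."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Autoregressive decoding: each new token is sampled from the
    # distribution the model predicts given everything before it.
    output = model.generate(input_ids, max_length=60, do_sample=True,
                            top_k=50, pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```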
02:33:11.000Like, if they're scared of it, they're the people that make it, and they're called OpenAI.
02:33:16.000I mean, this is the idea behind the group where everybody kind of agrees.
02:33:19.000That you're going to use the brightest minds and have this open source so everybody can understand it and everybody can work at it and you don't miss out on any genius contributions.
02:33:54.000Because they're directly going to be able to improve this now.
02:33:57.000Like, if we can generate basically 10 times more content of your face saying a bunch of stuff, what do we do with that?
02:34:08.000If Jamie all of a sudden on the side develops a much better generator and has your face, does an offshoot podcast essentially, fake Joe Rogan experience, what do we do?
02:34:40.000If they have all these people that are...
02:34:42.000Like, that little sentence that led to that enormous paragraph in that video was just a sentence that showed a certain amount of outrage and then it let the AI fill in the blanks.
02:34:54.000You could do that with fucking anything.
02:34:57.000Like, you could just set those things loose.
02:34:59.000If they're that good and that convincing and they're that logical...
02:36:21.000A million people asked me to talk about UBI. Are you still a supporter of UBI? I think we're probably going to have to do something.
02:36:31.000The only argument against UBI, in my eyes, is human nature.
02:36:37.000The idea that we could possibly take all these people that have no idea where their next meal is coming from and eliminate that and always have a place to stay.
02:36:48.000And then from there on, you're on your own.
02:36:50.000But that's what universal basic income essentially covers.
02:36:53.000It covers food, enough for food, right?
02:36:57.000It's not like you could just live high on the hog.
02:37:00.000But you gotta wonder what the fuck the world looks like when we lose millions and millions and millions of jobs almost instantly due to automation.
02:37:11.000Yeah, it's a really interesting question, especially with Andrew Yang's position.
02:37:16.000So there's a lot of economics questions on UBI. I think the spirit of it, just like I agree with you, we have to do something.
02:37:23.000Yeah, the economics seem kind of questionable, right?
02:40:23.000You have to both reward the crazy, broke entrepreneur who dreams of creating the next billion-dollar startup that improves the world in some fundamental way.
02:40:36.000Elon Musk has been broke many times creating that startup.
02:40:40.000And you also have to empower the people who just lost their job because their data entry job, some basic data manipulation, data management, was just replaced by a piece of software.
02:41:33.000I think the questions there aren't about an enemy. First of all, there's no enemy, but it certainly isn't AI or automation, because I think AI and automation will help make a better world.
02:41:50.000You sound like a spokesperson for AI and automation.
02:41:54.000And for UBI. I think we have to give people financial freedom to learn, like lifelong learning and flexibility to find meaningful employment.
02:42:19.000Giving people just money enough to survive doesn't make them happy.
02:42:22.000And if you look at any dystopian movie about the future, Mad Max and shit, it's like, what is it?
02:42:27.000Society's gone haywire, and people are like ragamuffins running through the streets, and everyone's dirty, and they're shooting each other and shit, right?
02:42:34.000And that's what we're really worried about.
02:42:36.000We're really worried about some crazy future where the rich people live in these, like...
02:42:42.000protected high-rises with helicopters circling over them, and down at the bottom it's desert chaos.
02:42:52.000Providing some backing, some kind of welfare program, is a part of that.
02:42:57.000But also much more seriously looking at our broken education system throughout.
02:43:01.000I mean, it's just not blaming AI or technology, which are all inevitable developments that I think will make a better world.
02:43:09.000But saying we need to do lifelong learning, education, make it a lifestyle, invest in it, not the stupid rote memorization that we do.
02:43:22.000That's sort of the way mathematics and engineering and chemistry and biology, the sciences, and even art are approached in high school and so on.
02:43:30.000But looking at education as a lifelong thing, finding passion, and that should be the big focus, the big investment.
02:43:40.000It's investing in the knowledge and development of young people, and of everybody.
02:43:45.000So it's not learn to code, it's just learn.
02:43:49.000I couldn't agree more, and I also think you're always going to have a problem with people just not doing a really good job of raising children and screwing them up. There are a lot of people out there that have terrible, traumatic childhoods.
02:44:06.000Trying to fix that with universal basic income, just saying, oh, we're going to give you $1,000 a month, I hope you're going to be happy, that's not going to work.
02:44:14.000We have to figure out how to fix the whole human race.
02:44:18.000And I think there's very little effort put into thinking about how to prevent so much shitty parenting and how to prevent so many kids growing up in bad neighborhoods and poverty and crime and violence.
02:44:35.000That's where a giant chunk of all of the momentum of this chaos that a lot of people carry with them into adulthood comes from.
02:44:44.000It comes from things beyond their control when they're young.
02:44:47.000And that is the struggle at the core of our society, at the core of our country, that's bigger than...
02:45:46.000And the more we can figure out how to make it a better place for these people that got a shitty roll of the dice, that grew up in poverty, that grew up in crime, that grew up with abusive parents, the more we can figure out how to help them.
02:46:21.000I mean, if you're growing up right now, and you're in West Virginia in a fucking coal town, and everyone's on pills, and it's just chaos and crime and face tattoos and fucking getting your teeth knocked out.
02:48:03.000The whole world is filled with these fucked up apes that are piloting the spaceship, and you're waking up in the middle of thousands of years of history.
02:48:35.000We're going to live in our mansions and fly around in our planes?
02:48:39.000And I think through the decades now, we've been developing a sense of empathy that allows us to understand that Elon Musk, Joe Rogan, and somebody in Texas, somebody in Russia, somebody in India, all suffer the same kind of things.
02:49:00.000And I think technology has a role to help there, not hurt.
02:49:05.000But we need to first really acknowledge that we're all in this together and we need to solve the basic problems of humankind as opposed to investing in sort of keeping immigrants out or blah,
02:53:03.000If you can get this right, if you're honest with me, do you think there'll ever be a time where human beings, as you know them, don't experience war?
02:53:52.000Well, it's essential in biological form.
02:53:55.000But why would it be essential in something that gets created, something that can innovate at, what is it, the rate where they think once AI is sentient it can get 10,000 years of work done in a very short amount of time?
02:54:08.000That's random words that Sam Harris has come up with, and I'm going to talk to him about this.
02:54:17.000Oh, Kurzweil also has similar ideas, but Sam Harris does it as a thought experiment: say a system can improve itself, you know, in a matter of seconds. Then, just as a thought experiment, you can think about it improving exponentially,
02:54:34.000becoming 10,000 times more intelligent in a matter of a day.
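To make the compounding concrete, here is a back-of-the-envelope sketch of the thought experiment. The one-improvement-cycle-per-minute rate is purely an illustrative assumption (neither Harris nor Kurzweil specifies it); the point is only that small recursive gains at machine timescales compound absurdly fast.

```python
target_gain = 10_000      # "10,000 times more intelligent" from the conversation
cycles_per_day = 24 * 60  # assume one self-improvement cycle per minute

# Per-cycle gain needed to compound to the target within a single day:
per_cycle = target_gain ** (1 / cycles_per_day)
print(f"required gain per cycle: {100 * (per_cycle - 1):.2f}%")  # ~0.64%

# Forward direction: tiny steps, compounded recursively, explode.
gain = per_cycle ** cycles_per_day
print(f"compounded over one day: {gain:,.0f}x")  # 10,000x
```

A system that makes itself roughly two-thirds of a percent better every minute hits the 10,000x figure within a day; that is the whole force of the exponential argument.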
02:55:23.000And so if you're dumb enough to turn that thing on, and all of a sudden this artificial life form that's infinitely smarter than any person that's ever lived, and has to deal with these little dumb monkeys that want to pull the plug?
02:55:37.000You idiots can never figure out how to operate on air.
02:55:40.000You're so stupid with your burning of fossil fuels and choking up your own environment, because you're all completely financially dependent upon these countries that provide you with this oil, and this is how your whole system works; it's all intertwined and interconnected, and no one wants to move away from it because you make enormous sums of money from it.
02:58:09.000I've recently witnessed, because of this Tesla work, because of just the passion I've put out there about particularly automation, that there have been a few people, brilliant men and women, engineers and leaders,
02:58:25.000including Elon Musk, who've been sort of attacked, almost personally attacked, really, by critics from the sidelines.
02:58:34.000So I just wanted to, if I may, close by reading the famous excerpt from Teddy Roosevelt.
02:58:47.000It's not the critic who counts, not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.
02:58:55.000The credit belongs to the man who's actually in the arena, whose face is marred by dust and sweat and blood, who strives valiantly, who errs, who comes short again and again, because there's no effort without error and shortcoming,
02:59:12.000but who does actually strive to do the deeds, who knows great enthusiasms, the great devotions, who spends himself in a worthy cause, who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails,
02:59:28.000at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.