The Glenn Beck Program - February 11, 2023


Ep 172 | How Soon Until AI Replaces YOU? | Jeff Brown | The Glenn Beck Podcast


Episode Stats

Length

1 hour and 21 minutes

Words per Minute

142.2

Word Count

11,634

Sentence Count

933

Misogynist Sentences

5

Hate Speech Sentences

5


Summary

The AI revolution is here, and it's more real than ever. In this episode, I sit down with Jeff Brown, founder and CEO of Brownstone Research Group and an angel investor, to talk about artificial intelligence and machine learning.


Transcript

00:00:00.000 You are in for a treat.
00:00:01.860 We don't have a lot of repeat guests on the program.
00:00:04.460 This guy, I would have an interview with him, an hour with him every week if I could.
00:00:11.360 He is my favorite guest.
00:00:13.980 The AI revolution is here, and it's more real than ever.
00:00:18.900 In 40 days, OpenAI's chatbot, ChatGPT, has accrued 10 million daily users.
00:00:27.920 More users than Instagram, they've been out for how long?
00:00:31.360 It's the fastest growth of anything we've ever seen.
00:00:34.860 And we're not even sure about all the things that ChatGPT can do, but it already can do a lot.
00:00:41.520 You can find videos created by AI explaining AI and ChatGPT.
00:00:46.900 ChatGPT published its first book about itself.
00:00:51.200 For many of the same reasons, today's guest has always loved the cosmos.
00:00:56.220 He has been devoted to becoming an astronaut.
00:01:02.280 He studied rocket science at Purdue, which is called the Cradle of Astronauts, because 27 graduates have become astronauts, including Neil Armstrong.
00:01:11.580 But he changed his focus after graduating.
00:01:16.700 He shifted his focus to another frontier, cyberspace, the new home of mind to explore the wild lands of tech and artificial intelligence and automation.
00:01:30.040 He is fascinating.
00:01:31.120 He spent two decades working in Japan as an executive in cutting-edge tech, broadcasting, semiconductors, IT networking, security, automotive, you name it, he was there.
00:01:43.140 While he was in Japan, he studied ancient martial arts and became a third-degree black belt.
00:01:48.860 A few years ago, he decided to get a master's degree in management from Yale.
00:01:53.960 Then he studied quantum computing at MIT.
00:01:58.860 It makes me feel like a slug.
00:02:01.380 He is the founder and chief investment officer and chief investment analyst of Brownstone Research Group.
00:02:08.700 It's a publishing company that specializes in technology, finance, geopolitics, and futurology.
00:02:15.520 As an angel investor, he's made a lot of important people a lot of money.
00:02:20.340 He has a knack for predictions.
00:02:23.420 These days, that is incredibly rare.
00:02:27.280 A guy who I think sees things clearly, in the best of days and the worst of days.
00:02:35.780 But it's all up to us today.
00:02:39.200 Welcome, Jeff Brown.
00:02:42.340 Now, I'm not making any predictions here, but I don't know if in the future we'll even have feet.
00:02:47.840 But if we have feet now, which you should, you need a great pair of socks.
00:02:52.920 And I want to tell you about a company that makes a great pair of socks, makes great belts, wallets.
00:02:59.040 Their wallets are great.
00:03:00.620 The most important thing about them is they're all made here in America.
00:03:05.360 And it was started by a couple of guys who, one of the guys was like, you know, we can't make anything in America.
00:03:12.440 And he designed this great wallet for a friend, and he couldn't get it made here in America.
00:03:17.860 Nothing was made here.
00:03:18.640 So, Grip6 was started, and they have certain products that they want to make, and it is the true American experience.
00:03:25.920 When you buy their socks or their belts or their whatever, you are supporting Americans because everything is made here in America.
00:03:36.120 All of it.
00:03:36.640 The whole process.
00:03:38.080 That's their goal to help bring manufacturing back in all walks of life here in America.
00:03:44.740 American-made products and American labor.
00:03:46.940 Check out Grip6 today.
00:03:49.680 Grip6.com slash Beck.
00:03:52.000 Grip6.com slash Beck.
00:04:00.440 It is always so great to have you.
00:04:08.800 Welcome.
00:04:09.300 It's great to be here again.
00:04:10.240 Thank you.
00:04:10.540 I've got like, we could do weeks of shows with you because you can pretty much cover anything and everything.
00:04:20.780 Let me just lay down some basic understandings of things first.
00:04:25.640 Do you, are we at strong AI now?
00:04:33.320 Are we at strong AI?
00:04:35.180 I would, the way I think about it is that we're, we're on the cusp of that.
00:04:43.580 We're on the Rubicon of very powerful AI, incredible utility.
00:04:49.240 Okay.
00:04:50.460 Now, do you definitely, do you have a separate category of AGI, which would be AI that could do multiple things?
00:05:00.140 And how far away are we from that?
00:05:02.500 Uh, so I'm still maintaining my original prediction from the last time we sat down, in 2019, that we'll reach the point of AGI by 2028.
00:05:14.360 And so to me, that's, that's just right around the corner.
00:05:17.980 Yeah.
00:05:18.200 That's not long at all.
00:05:19.560 And that's, please explain how game changing that is.
00:05:22.960 Why should people care about the, I say to people, AI, AGI, ASI, they have no idea what I'm talking about.
00:05:29.480 Right.
00:05:29.720 Yeah.
00:05:30.260 Why should people know the difference between AI and AGI?
00:05:33.300 Well, this is, this year, so what happened in just the last three months has been extraordinary in the fields of artificial intelligence and machine learning.
00:05:46.340 There've been so many breakthroughs that are driving what's about to happen.
00:05:52.200 And so I think 2023, when I think about this year, this is the year when people actually realize what, what it can do for them.
00:06:00.420 Yeah.
00:06:00.740 And how it actually can change their daily life.
00:06:03.680 Right.
00:06:04.060 And how much utility it can be and how it frees up their time.
00:06:08.000 Now, this is, let alone what's going to happen, you know, in the government, the private and public sector.
00:06:13.680 You know, the, the, the productivity enhancements that will result, that will come from the applications of artificial intelligence will be extraordinary.
00:06:23.440 Now you're talking specifically about ChatGPT, or all the things?
00:06:28.060 Oh, it's a, it's much more than that.
00:06:29.940 Okay.
00:06:30.420 This is what, because I think ChatGPT is like the moment of the smartphone, where you're like, oh, this is entirely different. When you had the smartphone, you didn't know exactly all of it, but you knew. I think ChatGPT, am I wrong,
00:06:47.940 is a moment where people are like, wait a minute, what's coming?
00:06:52.980 What is this?
00:06:53.780 What's coming?
00:06:54.600 Yes.
00:06:55.060 So the evolution of ChatGPT and some of the other kind of competitors out there: Claude, which hasn't been released publicly, from a company called Anthropic.
00:07:08.760 Just yesterday, Google announced Bard, which is their large language model.
00:07:16.960 These are all what's referred to as large language models.
00:07:20.080 And they're trained on billions of parameters, actually hundreds of billions of parameters.
00:07:25.300 Parameters we can think of kind of very loosely as points of information.
00:07:31.700 Sometimes they're as simple as words, but they just ingest an incredible library of knowledge.
00:07:39.000 These are large language models, and that's what's going to enable the types of very highly efficient and accessible, easy to use applications that can reside, to your point, on a smartphone.
00:07:57.600 It's going to make everyone feel as if they've got a high-powered executive assistant that would cost somebody $100,000 a year, right in their smartphone, capable of saving them an hour, two hours, two and a half hours a day of manual tasks.
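For readers who want to see what this looks like in practice, here is a minimal sketch of sending a prompt to one of these hosted large language models, assuming an OpenAI-style text-completions endpoint; the model name, the prompt, and the OPENAI_API_KEY environment variable are illustrative assumptions rather than anything named in the interview.

```python
# Minimal sketch: sending a prompt to a hosted large language model.
# Assumes an OpenAI-style completions endpoint and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"

def ask_model(prompt: str, max_tokens: int = 200) -> str:
    """Send a prompt and return the model's text completion."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",  # illustrative choice of model
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.7,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(ask_model("Summarize my schedule for tomorrow in three bullet points."))
```

The "executive assistant in your smartphone" idea is essentially this call wrapped in an app that supplies your calendar, messages, and preferences as part of the prompt.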
00:08:15.940 So give me what's coming from this.
00:08:18.900 What's right on the horizon on this?
00:08:21.180 Right.
00:08:21.380 So where this is evolving to is exactly that.
00:08:25.360 From a consumer perspective, we can think of this as a personalized digital assistant.
00:08:32.620 But that would be a GI, wouldn't it?
00:08:37.020 Because you would have to be able to do multiple different things if you're truly my assistant.
00:08:42.160 It's not.
00:08:44.360 That's another evolution.
00:08:46.900 That's a much bigger step before we get to AGI.
00:08:49.800 We can still have these language models that can be highly functional and capable of doing multiple tasks, prescribed tasks, tasks that they've been taught and learn how to do.
00:09:02.660 That's an important distinction.
00:09:03.900 Taught, yeah.
00:09:04.460 We can dig a little bit deeper on that.
00:09:05.900 But through the body of knowledge that they've been trained on, they actually have instruction sets on how to accomplish certain things.
00:09:15.520 And if we think about how humans can augment these large language models, i.e. productize them, then they can be productized for explicit purposes.
00:09:28.880 So give me an example of products that will come.
00:09:31.740 So let's keep going with this personalized digital assistant.
00:09:37.480 You can have a generic product that is made available, perhaps for free, to everybody's smartphone.
00:09:47.880 But then the moment that you start using it, it starts learning all of your preferences.
00:09:54.220 It starts learning exactly what your schedule is throughout the day.
00:09:59.060 It can potentially even listen to all of your conversations through the microphone on your phone.
00:10:04.960 Thank you.
00:10:05.240 It knows everything that you order.
00:10:07.460 It knows where you are because of your GPS.
00:10:11.620 So it knows exactly which patterns you follow every day and at what times and what you prefer to do and when you like your downtime and when you like to exercise, hopefully, and what you like to eat.
00:10:25.560 And it doesn't take long.
00:10:27.080 It doesn't take that many cycles for the AI to effectively really deeply understand you.
00:10:32.360 And if it's a company like Google or Facebook, Meta, who already has this massive dossier on you, then they'll come effectively preloaded already knowing you.
00:10:48.100 So these companies actually have an inherent advantage because they've been collecting data on all of us for more than a decade.
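As a rough illustration of the kind of preference profile Jeff is describing, here is a small sketch that tallies everyday signals, orders, locations, times of day, and uses them to anticipate a request. Every class, field, and value here is hypothetical.

```python
# Rough sketch: building a simple preference profile from everyday signals.
# All names and fields are hypothetical and for illustration only.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    favorite_orders: Counter = field(default_factory=Counter)
    frequent_places: Counter = field(default_factory=Counter)
    active_hours: Counter = field(default_factory=Counter)

    def observe_order(self, item: str) -> None:
        self.favorite_orders[item] += 1

    def observe_location(self, place: str, hour: int) -> None:
        self.frequent_places[place] += 1
        self.active_hours[hour] += 1

    def suggest(self) -> str:
        """Propose something the user is statistically likely to want."""
        if not self.favorite_orders or not self.frequent_places:
            return "Not enough data yet."
        item, _ = self.favorite_orders.most_common(1)[0]
        place, _ = self.frequent_places.most_common(1)[0]
        return f"Order {item} from near {place}?"

profile = PreferenceProfile()
profile.observe_order("flat white")
profile.observe_order("flat white")
profile.observe_location("office", hour=8)
print(profile.suggest())  # -> "Order flat white from near office?"
```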
00:10:56.040 I don't feel comfortable giving companies more of my information.
00:11:02.840 None of us should, honestly, but almost all of us do.
00:11:07.780 And even if each individual sat down at a table and spoke with an expert about this and they explained exactly what these companies were doing and what they were taking,
00:11:23.360 they would still do it, they would still do it because the convenience, it's just so incredible.
00:11:29.580 And there's nowhere else they can go for it.
00:11:31.660 They can only go down a few different paths and they find it incredibly useful.
00:11:37.020 They would never be cut off from this.
00:11:40.120 So that, I mean, that's going to lead us to places later, I think, in the conversation of no way out, no way out, because it will, it already knows more about us than we might know ourselves.
00:11:52.880 But once you start tracking eyes, once you start tracking absolutely everything personal, it'll be able to set up dates for you, because your virtual twin can go out and date a thousand other virtual twins, correct?
00:12:10.720 And so it's just going to start giving you, it's going to make your life really, really sweet unless there is a problem.
00:12:20.340 It will proactively present options for us that it knows we will like and appreciate.
00:12:30.220 It will proactively present things for us to do that it knows that we will enjoy.
00:12:36.560 It will feed us.
00:12:38.120 It will clothe us.
00:12:40.180 It will keep us entertained.
00:12:42.340 It is bad.
00:12:43.600 It will make us.
00:12:44.320 It will make us.
00:12:45.560 It'll make us feel smart.
00:12:47.260 But we'll be dumber.
00:12:49.380 Won't we?
00:12:50.420 I mean.
00:12:50.940 If we're, if we're spoon fed everything.
00:12:52.560 Yeah, everything.
00:12:53.380 Yes.
00:12:53.820 And so, when you go, somebody told me the other day that the goal of a search engine coupled with something like ChatGPT is,
00:13:05.120 you'll ask a question, and it will know you so well that it will cull everything, but it will write it for you in a way that you can understand.
00:13:17.900 And it won't give you a whole list necessarily of everything that you choose.
00:13:23.440 It will give you the answer.
00:13:25.220 And that is a little terrifying.
00:13:31.460 Well, there's two really interesting threads on that topic.
00:13:36.520 One is related to these large language models is applied to search.
00:13:41.740 The first one I think that I'd like to explore a little bit is around the education, the potential impact of education.
00:13:51.940 Yes, please.
00:13:52.460 And learning.
00:13:53.720 Like this is one of the things.
00:13:54.700 Right now.
00:13:55.220 Well, I'm actually very passionate about this, because there's an incredible amount of good that can come from the application of these kinds of intelligent large language models as applied to education.
00:14:10.440 So it has the potential to completely democratize and provide the best possible education to every child on the planet, irrespective of their economic means or where they come from.
00:14:25.200 That how?
00:14:26.200 Now, there's a caveat at the end that I'll share with you.
00:14:29.480 But, you know, if we can imagine the world's body of knowledge of everything from history to mathematics to physics to reading comprehension, every subject, science, is ingested into this educational, purpose-driven large language model.
00:14:55.100 And all a child needs is a simple device, an inexpensive device, a tablet through which it can interact with the AI.
00:15:04.100 All of its learning can come through this and they will be taught as if they're being taught by some of the finest teachers.
00:15:12.100 And they'll be taught in the way that they learn best.
00:15:16.780 Each one would be specialized to them.
00:15:19.020 Exactly where I was leading.
00:15:20.240 Right.
00:15:20.640 You have visual learners.
00:15:22.700 Right.
00:15:22.820 You have learners who do much better if they just read text and they can have some time to digest it and synthesize it.
00:15:30.520 And some people learn subject matters when topics are introduced in a different sequence.
00:15:39.340 Correct.
00:15:39.780 Right?
00:15:40.180 Mm-hmm.
00:15:40.440 Somebody does better with it completely inverted.
00:15:43.000 And an AI, and again, this can happen for almost zero cost, an AI can figure out the best way to teach a subject to an individual student.
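To make the "figure out the best way to teach each student" idea concrete, here is an illustrative sketch of a toy epsilon-greedy loop that keeps trying lesson formats and settles on whichever one produces the best quiz scores for that one student. The formats, scores, and numbers are all made up for illustration, not anything Jeff describes in detail.

```python
# Illustrative sketch: picking the lesson format that works best for one
# student, based on quiz results. A toy epsilon-greedy loop; every value
# here is made up for illustration.
import random

FORMATS = ["video", "text", "interactive"]

def choose_format(scores: dict, epsilon: float = 0.2) -> str:
    """Mostly pick the format with the best average quiz score,
    occasionally explore another one."""
    if random.random() < epsilon or not any(scores.values()):
        return random.choice(FORMATS)
    return max(FORMATS, key=lambda f: sum(scores[f]) / len(scores[f]) if scores[f] else 0.0)

scores = {f: [] for f in FORMATS}
for lesson in range(20):
    fmt = choose_format(scores)
    quiz_result = random.random()  # stand-in for the student's real quiz score
    scores[fmt].append(quiz_result)

best = max(FORMATS, key=lambda f: sum(scores[f]) / max(len(scores[f]), 1))
print(f"This student seems to learn best from {best} lessons.")
```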
00:15:54.400 So this is really, this is the sword that shaves you or cuts your head off.
00:16:03.140 Yes.
00:16:03.700 Yes.
00:16:04.160 They're already doing much of that in China, still in classrooms, but they're doing it.
00:16:09.160 It depends on who's running it, who is inputting the information, what is on the software, what is it they're trying to shape.
00:16:23.380 And, you know, I keep coming back to the only solution that I can see, and I'd love to hear you.
00:16:29.700 The only solution to this is you have to have your own ChatGPT.
00:16:36.920 That is, you own all your information.
00:16:40.680 It negotiates that information against others that are using, and it guards you.
00:16:48.940 But I don't know if you can ever, who's going to give up this much information?
00:16:52.640 Who's going to give up this much power?
00:16:53.860 We will, but who is doing the education programs that will be utopian without an agenda or a significant cost?
00:17:08.840 So that's exactly right.
00:17:10.400 This is the caveat.
00:17:12.220 It's not even the who, it's the what.
00:17:15.080 It's which information is being fed into the language model.
00:17:20.760 Who's choosing which information is right or wrong?
00:17:24.300 Because right now, these language models go out onto the internet.
00:17:28.100 They go to places like Wikipedia, which has transformed over the last three years.
00:17:33.900 It's unbelievable how the definitions of things changed.
00:17:37.720 Yes.
00:17:38.920 Even the definition of something like, what is a vaccine?
00:17:43.060 Or what is immunization?
00:17:44.840 Right.
00:17:45.740 Like they've changed science in real time and thought nobody would notice it.
00:17:50.040 Right.
00:17:50.220 Right.
00:17:50.880 Well, they didn't change science.
00:17:52.040 They changed the definition.
00:17:53.840 The definition.
00:17:54.600 It was crazy.
00:17:55.600 And so, I mean, that's a simple example to understand that, you know, even something that was thought to be an independent, objective source of truth is not.
00:18:05.880 It's not.
00:18:06.660 And so, this is the most complex thing about artificial intelligences and these language models.
00:18:11.980 How do we ensure that, you know, clear, rational, objective, truthful information are the inputs?
00:18:21.760 Because ideally, what we want is an unbiased artificial intelligence to help us evolve as a society, to help educate every child on Earth and, you know, to do it without a political agenda.
00:18:37.060 Where are those better angels that are designing that, that are even discussing that in serious ways at the upper level of these companies?
00:18:47.380 Yeah.
00:18:47.600 I mean, I mean, I know good people who we disagree on things and we're like, it didn't happen that way.
00:18:56.560 That's not, you're reading in and it's a good faith back and forth.
00:19:01.720 Okay.
00:19:01.940 But I know a lot of people on one side or the other, no, you're just wrong.
00:19:08.200 Right.
00:19:08.780 That didn't happen or you're just wrong.
00:19:10.460 And so, they just will block out anybody.
00:19:13.860 And that's happening on all sides.
00:19:15.520 Just block out anybody.
00:19:16.900 That is not healthy.
00:19:18.840 No.
00:19:18.960 That's not healthy.
00:19:20.320 No, it's not.
00:19:21.000 So, how do we encourage or escape that or prevent that from happening?
00:19:28.120 No idea.
00:19:28.880 Well, I wish there was an easy answer.
00:19:32.020 And so, let's take an example.
00:19:37.380 Twitter, Elon Musk.
00:19:38.840 Yeah.
00:19:40.440 Absolute genius.
00:19:42.080 What he's done with that platform is remarkable in such a short period of time.
00:19:48.160 Removed 75% of the workforce.
00:19:50.840 I've never seen the platform function technically better than it ever has before.
00:19:55.260 All the hate speech has rapidly declined on the platform and he's adding tens of millions
00:20:03.480 of new users.
00:20:04.620 It's never been healthier as a platform because he's reintroduced objectivity onto the platform.
00:20:13.580 You know, instead of fact checkers, there's community notes.
00:20:16.120 If there's something that's controversial or said, context is added rather than saying,
00:20:22.020 no, you're wrong.
00:20:23.480 Right.
00:20:24.060 Right.
00:20:24.540 Right.
00:20:24.840 Here's some additional context.
00:20:26.640 Now you can, now you have a better picture and you can decide one way or the other.
00:20:32.360 Um, so the problem with that, that is a solution in a way, you know, one person, a single person
00:20:39.160 was able to effect positive change, use objective rules.
00:20:43.880 So it's, it's possible, but it's complex, because there's only one person that can buy a $44 billion
00:20:52.180 company and transform it.
00:20:54.300 Right.
00:20:54.860 I think if we were living in an objective world, what he did, in any other time
00:21:03.320 or era that I remember, would have been known as the ground-shaking move of the year, because
00:21:12.080 it would have transformed and would have caused just dominoes to fall everywhere.
00:21:16.800 But I don't see any dominoes falling elsewhere.
00:21:19.580 I don't see people.
00:21:20.460 Do you?
00:21:20.800 No, the one thing that, that, um, that has happened in the tech community is that it's
00:21:28.740 forced a lot of companies to rethink how they're architecting their organizations.
00:21:35.660 And so that was essentially the precursor of the catalyst to a large number of layoffs and
00:21:43.440 reductions in force in the tech community.
00:21:45.580 People saw that Musk could make that kind of reduction and not only did it not hurt the
00:21:51.900 company, it actually made it better.
00:21:54.000 Okay.
00:21:54.640 We kind of have cover to go in.
00:21:57.140 Of course, they're not going to do something that severe, but if they need to make a 10 or
00:22:00.980 15% reduction in force, they know it's going to be received well by investors.
00:22:07.700 Because first thing the investors did is, wait, Musk just did this and the business is healthier
00:22:14.580 now.
00:22:15.260 Right.
00:22:15.980 What's your plan?
00:22:17.960 One of the reasons why I like Jeff as a guest, you'll notice that I'm not wearing my glasses
00:22:23.860 because, uh, I didn't think about it until we started the interview that I, I don't have
00:22:28.960 my glasses.
00:22:29.400 So I can't see really any of my notes.
00:22:30.980 Uh, but when I do have my glasses on, those notes are great.
00:22:35.620 Um, my wife just had to get some reading glasses, some, uh, some progressives.
00:22:43.260 She usually wears contacts, but, um, there is no better lens than the lens you can find
00:22:50.200 at Better Spectacles.
00:22:51.540 This is a conservative American company.
00:22:53.720 It is exclusively offering Rodenstock.
00:22:56.900 It's what I wear.
00:23:00.980 For the first time in the US, Rodenstock is bringing this stuff here.
00:23:02.540 It's a 144-year-old German company that's been considered the world's gold standard for glasses for a
00:23:07.280 long time.
00:23:08.380 Rodenstock scientists have done biometric research and measured
00:23:14.000 the eye at 7,000 points.
00:23:16.200 And after they took those findings and ran them through a million patients
00:23:22.940 plus artificial intelligence, they know exactly where and how to construct the progressive
00:23:29.360 glasses, seamlessly.
00:23:30.740 It is, they're really great glasses and not the most expensive on the shelf.
00:23:36.240 Thank God.
00:23:36.740 BetterSpectacles.com slash Beck.
00:23:39.880 Go there now.
00:23:40.960 BetterSpectacles.com slash Beck.
00:23:44.920 Can I ask you, because we just had a business meeting about this with ChatGPT.
00:23:49.620 Um, the first day it came out, uh, I went to the CEO here and I said, so it's not very long
00:24:02.060 down the road.
00:24:04.260 If you have fact checkers and you have people that are watching it and looking at it, you could
00:24:11.340 do what BuzzFeed's doing.
00:24:13.280 And that went through the roof.
00:24:14.440 I don't like that at all.
00:24:17.700 Um, and I don't want to be a Luddite and, uh, you know, or, you know, the Amish, I love
00:24:23.520 technology.
00:24:26.180 Make the case for handmade, make the case for humans being in the line.
00:24:35.640 Can you as a businessman?
00:24:37.320 Well, uh, the, to me, the, the application of artificial intelligence in a business context
00:24:44.080 is really about augmentation.
00:24:47.700 So it, I don't want to replace Glenn Beck.
00:24:50.440 Right.
00:24:51.060 Right.
00:24:51.600 Um, I want to hear the words from you.
00:24:53.360 I don't want to hear them from a computer or an avatar that looks like you.
00:24:58.340 I want to hear them from you.
00:25:00.740 Now let's think about augmentation.
00:25:02.900 Great thing about a large language model like that is that we could feed it the entire body
00:25:09.080 of everything, every book that you've written, every monologue that you've given, every interview
00:25:15.400 that you've done, and it can learn how you think and how you analyze all of these types
00:25:21.740 of things.
00:25:22.240 And so, um, we only have X amount of hours a day, right?
00:25:26.120 We have all sorts of obligations, but we might need to produce extra editorial, extra copy,
00:25:32.700 whatever it is.
00:25:34.900 Um, you can bring up a topic, feed it to the AI, and it can put a draft in place for you.
00:25:41.100 You can review that draft.
00:25:42.720 You can agree with it and edit it, whatever, but it hasn't replaced you, and it has saved
00:25:48.180 you an amazing amount of time because you've already talked about the topics inside of what's
00:25:54.140 being written.
00:25:55.080 So it already knows what you need to write.
00:25:57.020 It just saved you an hour's worth of time of doing it.
00:26:00.760 And so to me, that's a simple argument in your context.
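To sketch the augmentation workflow Jeff walks through, the snippet below assembles a prompt from excerpts of an author's prior work plus today's topic, gets a draft back, and hands it to a human editor. The model call is a deliberately stubbed stand-in (any hosted LLM, such as the completions-endpoint sketch earlier, could slot in), and every function, excerpt, and topic name here is hypothetical.

```python
# Sketch of the augmentation workflow: draft in the author's voice from
# prior work, then keep a human in the loop. The model call is a stub.

def build_drafting_prompt(topic: str, excerpts: list) -> str:
    """Compose a prompt that asks for a first draft in the author's voice."""
    return (
        "Here are excerpts of my previous writing:\n\n"
        + "\n---\n".join(excerpts)
        + f"\n\nWrite a 300-word first draft on '{topic}' in the same voice. "
        "This is a draft for my editor to review, not a finished piece."
    )

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; replace with your provider's API.
    return f"[model draft based on a {len(prompt)}-character prompt]"

def human_review(draft: str) -> str:
    # The person stays in the loop: edit, approve, or reject the draft.
    print("DRAFT FOR REVIEW:\n", draft)
    return draft  # in a real workflow, the edited, approved text comes back here

excerpts = ["(monologue excerpt)", "(book chapter excerpt)"]
approved = human_review(call_model(build_drafting_prompt("AI and free will", excerpts)))
```

The design point is the last step: the model produces a draft, but publication still waits on a person, which is the "augmentation, not replacement" framing from the conversation.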
00:26:04.840 That's kind of where we came to the understanding.
00:26:08.120 I don't want to shun it.
00:26:10.420 Um, uh, just because it's AI, we want to use it the way we want to use it.
00:26:18.440 And if it will help us do more, do better, great, but not to replace people.
00:26:25.060 Yeah.
00:26:25.240 When do we get to the place where, you know, cause I could see a future where two ways,
00:26:33.360 the human being becomes very important.
00:26:37.860 Um, but it has to kind of be a brand, you know, I think, um, it has to, you know, somebody
00:26:45.360 like me, and that's probably a really not a great example, but somebody like me established,
00:26:51.460 you know, me, everything else, large body of work.
00:26:56.140 It could be a brand that goes on and on and on and on, you know what I mean?
00:27:00.680 Um, with me dead even, um, uh, but then there's the, the person who's just doing whatever the,
00:27:11.480 the accountant, um, and the, the, the, even now, I mean, it looks like musicians.
00:27:18.500 What comes first?
00:27:20.280 What, how bad are the job cuts that are coming?
00:27:23.440 What should you go to school for and not go to school for?
00:27:26.640 I mean, I just talked to a kid, 19 years old.
00:27:29.760 So what are you going to do?
00:27:30.420 He said, I'm computer programming.
00:27:32.100 I said, my wife, hurry.
00:27:36.360 Well, um, so many, so many interesting, uh, topics there.
00:27:41.240 Uh, I mean, education again, it's, it's, um, this topic is, is a passion of mine, but the
00:27:49.060 one piece of advice that I can give to anybody that's going to school now is couple computer
00:27:54.820 science with whatever it is you're passionate about, you know, physics plus computer science,
00:28:02.140 double major biology plus computer science, just together.
00:28:10.200 Anything that you're studying goes hand in hand.
00:28:12.540 The best thing.
00:28:13.320 No, that's not writing code.
00:28:15.120 Uh, well, you have to learn how to program.
00:28:17.980 Won't machine, won't, won't machine learning eventually take that job?
00:28:21.900 Well, it, it already is, but, um, I'll give you a simple example.
00:28:25.560 Um, you know, ChatGPT, I was writing some research a couple of weeks ago, and one of the
00:28:33.660 conclusions was that ChatGPT is capable of doing about 80% of the coding for what
00:28:41.120 needed to be done, but it's not a hundred percent.
00:28:43.580 Sure.
00:28:44.020 Sure.
00:28:44.200 And the computer scientists still needed to review the work of the 80%.
00:28:48.740 So, um, we humans still need to have some foundational knowledge around coding and
00:28:56.480 programming, but most of the work... What's that for, how long? For a while, I, you know,
00:29:01.120 certainly, um, certainly for the next five years;
00:29:05.700 probably the better way for us to imagine that is that computer programming will evolve.
00:29:19.980 Yes.
00:29:20.940 It, you know, right now still we're working with prompts and writing lines of codes, but
00:29:25.660 how long before I can, you know, just say, I want a new website.
00:29:33.060 I want it to do this.
00:29:35.000 I want it to look like this.
00:29:36.700 It needs to sell this product.
00:29:39.220 I want it to look, um, you know, this, these specific ways.
00:29:45.120 Yeah.
00:29:45.840 And I have a website built.
00:29:47.840 Believe it or not, that's a much easier problem to solve.
00:29:50.660 A lot of those problems have actually been solved using artificial intelligence.
00:29:56.080 They haven't been productized yet.
00:29:59.640 Um, I suspect probably about 80% of what you just described will be available, um, before
00:30:07.020 the end of next year.
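As a hedged sketch of the "describe a website, get a website" idea Glenn raises, the snippet below turns a plain-English spec into a single HTML page and parks it for human review before anything goes live. The endpoint, model name, prompt wording, spec, and output file are all assumptions for illustration, not a product either speaker names.

```python
# Hedged sketch: plain-English spec in, draft HTML out, human review before publish.
# Endpoint and model name are assumptions (OpenAI-style completions API).
import os
import requests

SPEC = (
    "A one-page site that sells handmade leather wallets. "
    "Clean layout, a headline, three product cards, and a contact form."
)

def generate_site(spec: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",
            "prompt": "Return only valid HTML for this website spec:\n" + spec,
            "max_tokens": 1500,
            "temperature": 0.4,
        },
        timeout=90,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

html = generate_site(SPEC)
with open("draft_site.html", "w", encoding="utf-8") as f:
    f.write(html)  # a person still reviews this file before it goes live
print("Draft written to draft_site.html for human review.")
```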
00:30:08.160 So that's within, you know, tell me in the next year, what are the things that are coming
00:30:13.160 out that excite you and would excite the average person going, wait a minute, what we
00:30:17.940 can, what's coming.
00:30:19.540 Yeah.
00:30:19.800 The biggest one, the one that will be most tangible to all of us will be our, let's just
00:30:27.000 call it a personalized digital assistant.
00:30:29.060 Just, just because the impact that it will have on our lives will be so significant and
00:30:34.060 so meaningful.
00:30:35.000 We will, we'll feel, um, an immediate change in how we interact with the computer in our
00:30:42.880 hand.
00:30:43.320 Name, tell me when you talk about these things, compare them to the impact of the iPhone.
00:30:49.280 Meaning everyone says, Oh, I can't, no, I can't live without my iPhone.
00:30:56.980 Yeah.
00:30:57.220 I mean, it's insane.
00:30:59.280 13 years ago, we all lived without an iPhone.
00:31:03.000 Now you will not surrender that.
00:31:05.860 Yeah.
00:31:06.080 So that personal assistance compared to an iPhone, how, how significant is that?
00:31:12.400 I think the, the attachment to that will be even more significant.
00:31:18.680 Oh my gosh.
00:31:19.440 Than the smartphone.
00:31:21.640 Well, you know, the, the, uh, I'm fascinated by the idea of the loss of free will.
00:31:29.640 When you have something listening to you all the time, it is trying to make your life better,
00:31:35.420 but it's also a product.
00:31:36.880 Um, and it's suggesting it's listening to you and it's suggesting do you, I mean, you get
00:31:46.960 to a point to where you're like, I don't know chicken in the egg.
00:31:50.440 I don't know if that was my idea or if that was somebody, you know, or some algorithms idea.
00:31:57.220 Yeah.
00:31:57.440 Yeah.
00:31:57.600 So it depends on who's behind the curtain, right?
00:32:02.580 So I haven't found many good guys behind curtains.
00:32:05.740 It's one of the first things that, that I ask whenever I'm looking at anything is, you
00:32:12.120 know, where's the monetary incentive?
00:32:13.940 Like what's the business model?
00:32:16.440 So Facebook, Google, let's take them as an example.
00:32:19.180 These are advertising companies.
00:32:21.280 They collect data and they sell access to the data to generate advertising revenues.
00:32:26.920 Very simple model, right?
00:32:29.240 When you see these companies talk about what they do, it's all magnanimous.
00:32:33.780 They're making incredible contributions to society.
00:32:36.060 They're connecting everyone everywhere for free.
00:32:39.740 For free.
00:32:40.380 Yeah.
00:32:40.800 We're the good guys.
00:32:42.400 Um, but yeah.
00:32:44.140 So if the business model of the company that's offering the artificial intelligence is advertising,
00:32:49.760 then we cannot and should not trust what we're being told to do.
00:32:56.920 Um, because that tells us that products are being sold through this AI that very much feels
00:33:04.960 to us is so natural, so comfortable.
00:33:07.400 It's, it's so useful to us.
00:33:10.080 Right.
00:33:10.640 Hmm.
00:33:11.740 Can we go back, um, on two things?
00:33:14.040 I want to sweep up a little bit on education.
00:33:15.920 Um, when do we get, how do we get there and when do we get to a place to where the information
00:33:29.540 is out there and it's everywhere and we're overwhelmed by it now, but ChatGPT, that's,
00:33:36.360 that's the beginning of being able to put it all down and whittle it down into something
00:33:41.580 useful for each private individual.
00:33:44.720 When do we get to the point to where, um, we're back to critical thinking where we're
00:33:54.620 not being taught what to think, but we're being taught how to think.
00:34:01.740 So we can question and shape, do you see anything on the horizon that's moving in that direction
00:34:08.400 or is it all just, this is the truth.
00:34:11.160 You will learn it.
00:34:12.880 Um, I mean, there's just the, the, the, the unfortunate part is that, I mean, I know you've
00:34:22.220 done a lot on this, you know, the education system has just been entirely corrupted, corrupted.
00:34:28.440 Um, it's heartbreaking.
00:34:30.700 Yeah, it is.
00:34:31.660 Uh, and you know, how do you, how do you get around that kind of designed programming?
00:34:40.300 Um, you know, some go to homeschooling.
00:34:43.300 I think that's a wonderful idea.
00:34:44.440 If you have the time and the resource to do that, incredible, right?
00:34:47.940 Um, inside of the current educational construct, uh, it's very difficult to change that.
00:34:56.800 Um, however, perhaps a different framework is this technology actually makes something
00:35:04.880 like homeschooling accessible to everyone, everybody.
00:35:07.660 It could literally disrupt everything that we know about starting out in kindergarten and
00:35:14.620 graduating from a university, but everything, will it be any different?
00:35:20.760 Uh, it depends on, again, it depends on who produces it.
00:35:25.540 Um, I, I know that, for example, Elon Musk is passionate about education and he even, uh,
00:35:32.860 in part founded, uh, kind of a private school for his own children.
00:35:39.580 What if, what if he created an AI to do exactly that?
00:35:44.920 Well, would there be, uh, an AI that is just like a generic tool?
00:35:52.920 It's a screwdriver.
00:35:53.860 It's a hammer.
00:35:54.820 It's, you know, it is what you need it to be.
00:35:57.880 Right.
00:35:58.080 So I could buy this AI and then I could develop the school and control the parameters myself
00:36:07.900 if I wanted to homeschool.
00:36:09.440 Yes.
00:36:09.900 Is that on a horizon?
00:36:12.280 It's, it's possible.
00:36:13.320 Well, all it takes is a few entrepreneurs and somebody to, to develop that kind of, let's
00:36:19.160 call this a generic AI, right?
00:36:20.780 That can be optimized for anyone's individual preferences.
00:36:24.100 Isn't that a better solution than trying to find this one billionaire that can buy stuff?
00:36:32.380 And yes, I think there's a, an amazing business opportunity there.
00:36:37.360 Definitely.
00:36:37.960 Um, and it's going to create an amazing amount of chaos within the educational system.
00:36:45.340 Oh yeah.
00:36:45.880 If you think about a large percentage of students dropping out to go learn from, for example, far
00:36:53.380 better teachers without kind of corrupted teaching methodology.
00:36:58.800 That causes some problems.
00:37:03.380 You can purify the air in your home and get healthy, clean, fresh smelling air.
00:37:08.040 You can eliminate the odors.
00:37:09.720 Then you can kill mold, mildew, bacteria, and viruses.
00:37:13.640 How do you do it?
00:37:14.220 The EdenPURE Thunderstorm air purifier.
00:37:19.060 It is, uh, it has oxy technology that naturally sends the O3 molecules into the air, which seek
00:37:26.400 out the odors in the air and the pollutants and everything else and destroys them.
00:37:30.180 It doesn't mask things.
00:37:32.540 It doesn't cover up the bad odor and pollutants.
00:37:35.480 It eliminates them.
00:37:36.560 And it's called the Thunderstorm because it purifies the air in your home and provides you
00:37:40.440 with pure fresh air, like after a nice rainstorm, a thunderstorm.
00:37:44.880 So right now save $200 on the EdenPURE Thunderstorm three-pack for whole home protection.
00:37:52.460 You'll get three units under $200.
00:37:55.920 That's a fraction of the cost compared to other air purifiers, uh, and what they can go for.
00:38:01.280 Put one in your basement, your bedroom, your family room, your kitchen, wherever you need
00:38:05.460 clean, fresh air special offer right now.
00:38:08.480 You're getting three units for under $200.
00:38:10.780 Go to EdenPureDeals.com and put in the discount code Glenn and save $200. EdenPureDeals.com,
00:38:17.780 discount code Glenn. Shipping is free.
00:38:21.240 I think the reason why a lot of people are saying China, you know, a lot of people in power, saying
00:38:25.960 China is the new model, is because China is producing new workers who will obey,
00:38:33.900 who will do what the state or the company says.
00:38:37.920 Many times, they live where they work.
00:38:42.160 Their whole life seems to be spent there.
00:38:44.660 Um, and I think.
00:38:48.860 There's such great potential for social unrest.
00:38:54.060 Just think of education, all the schools, all the employees, the fight over the power.
00:39:01.120 That's a lot of money just in power, um, that is out there.
00:39:06.320 All of that starts to go away.
00:39:08.120 That's not going away without a fight.
00:39:10.120 You know what I mean?
00:39:11.080 The social unrest.
00:39:12.580 I think that's why China to some people who view themselves as ranchers and us as sheep,
00:39:18.520 close those pens, try to get you into those pens because they know social unrest is coming
00:39:25.500 and they can't let it affect their power.
00:39:29.260 Do you agree with that theory or?
00:39:31.900 Oh, uh, the, that's the, that's the whole point.
00:39:34.860 The guardrails have, have come on for good reason.
00:39:38.240 They, they, they, they can't allow the population to wander too far.
00:39:43.360 So how do you have true revolution?
00:39:46.580 You know, when the internet first came out, we thought, first of all, we didn't think of
00:39:51.160 the things that are being thought of now that are the good things that could happen.
00:39:55.440 We just knew this is freedom.
00:39:58.180 This is real freedom.
00:40:00.720 And it appears as though, because it's going to cause so much disruption.
00:40:06.560 No, no, no.
00:40:08.680 This could be the loss of man's freedom for a very long time, you know, until there's
00:40:16.140 a glitch in the system.
00:40:18.460 Um, what do we do to how, I mean, how do you fight that?
00:40:24.160 How does the average, what do you do?
00:40:26.040 Like my working assumption is the change won't come from the government and it certainly won't
00:40:30.940 come from the education system.
00:40:33.000 Um, so it's going to have to come from the private sector and I'm hoping I'm even looking
00:40:38.760 for a team of executives that are willing to do something like that, to create something
00:40:45.560 like that.
00:40:46.000 Um, you find them, let me know.
00:40:48.120 I will definitely.
00:40:50.000 I can help raise a lot of money on that because that's a people.
00:40:53.900 I, I really think there's people on all sides and some are stuck where they are, but I think
00:41:01.320 there's a great need all over the world for people who say, I just want, I just want accountability
00:41:09.320 and truth.
00:41:11.140 And I want to control my own life.
00:41:14.600 You know what I mean?
00:41:15.460 I just don't, I don't know why I don't have control of the algorithms of my Twitter feed
00:41:22.940 or my Facebook.
00:41:24.460 Right.
00:41:25.640 Why don't I have control of that?
00:41:27.480 So I know what I'm seeing, you know?
00:41:30.840 And it's, I think it's because they don't trust people.
00:41:35.520 They just don't trust people.
00:41:36.920 The, when, when I think about, I, I, this is kind of some of the cultural risks in terms
00:41:46.240 of the direction that these technologies can take us.
00:41:50.900 One of the biggest concerns is, um, as the broad population uses these things, they get
00:41:58.800 so inherently comfortable with their interactions.
00:42:03.940 As this technology improves, it won't feel like you're talking to a robotic voice.
00:42:09.760 Oh no, it's a friend.
00:42:10.860 It will actually have, um, a human-like voice.
00:42:16.120 Um, it won't just be a chat anymore.
00:42:19.740 It will talk to us through our earbuds and it will be meaningful, very deep conversations
00:42:28.580 and they will never get angry at us.
00:42:32.840 They'll never point out our flaws.
00:42:35.400 They will always have answers for us.
00:42:37.720 They will know exactly when we need a pat on the back, a shoulder to cry on, a joke to
00:42:46.280 cheer us up.
00:42:46.780 I, I, I wrote a Black Mirror-ish script years ago, and it's that: what if I could have
00:42:55.660 the perfect woman who never says, you know, you don't listen to me.
00:43:02.240 I don't have to say, how was your day, dear?
00:43:04.800 It only is revolving around me and will talk and take me in different places, but they
00:43:13.200 know it's all places I'm most likely want to go.
00:43:17.300 You know what I mean?
00:43:18.380 And then if I get bored, I don't know.
00:43:21.760 What's it like if I threw her off a building?
00:43:23.800 She's not real.
00:43:24.940 Tomorrow morning, she'll be there again.
00:43:27.460 You know what I mean?
00:43:28.120 Um, why would you get into personal relationships where it's so much of the time, you know, it's
00:43:36.680 messy, sticky, frustrating, right?
00:43:39.760 Heartbreaking.
00:43:40.500 Why?
00:43:41.560 So what's the cure to that one?
00:43:43.840 This, this may sound like a bit of a quirky prediction, um, within the next 24 months, but
00:43:49.440 I, uh, and this is a word that I made up, but, um, I believe people will literally identify
00:43:55.160 as AI-sexuals.
00:43:57.340 So AI, sexual: AI-sexual. Um, it's also kind of a play on words in Japan.
00:44:05.200 In Japanese, ai is the word for love.
00:44:07.360 And of course, robotics and animation and characters are, it's, you know, this is a huge
00:44:14.120 pastime.
00:44:14.720 It's massive in Japan and people really do develop these, uh, these connections with, uh, with
00:44:21.480 these characters.
00:44:21.900 It would be, if it's designed for you, of course they would.
00:44:25.640 Of course they would.
00:44:26.320 It'd be a drug.
00:44:27.340 You know, to your, to your point, people, these AI-sexuals will just find more meaning,
00:44:33.500 um, with these perfect relationships than they would with human relationships.
00:44:40.020 And of course, uh, you know, my concern culturally is that people will stop or decrease the amount
00:44:48.640 that they interact with each other.
00:44:50.800 Of course they will.
00:44:51.720 And they'll lose the ability for, you know, peaceful conflict resolution.
00:44:55.360 And when we had cars that were not so comfortable, didn't have everything in it, didn't have air
00:45:02.520 conditioning.
00:45:03.440 Okay.
00:45:04.380 Um, they were smaller or bigger, but you rolled down the windows, and you didn't just slam
00:45:10.300 on your horn at people.
00:45:12.120 Now we're driving around in our personal house.
00:45:15.020 Now, every convenience we have is right there.
00:45:18.700 Get the hell out of my way.
00:45:20.240 I don't know who you are.
00:45:21.540 I don't care who you are.
00:45:22.980 Yeah.
00:45:23.760 Why?
00:45:25.720 Life is messy.
00:45:27.500 Why would you not do that?
00:45:29.880 Yeah.
00:45:30.040 It's, I mean, you want to talk about the ultimate drug.
00:45:33.260 That's it.
00:45:33.940 That's it.
00:45:34.680 And it begins, I think, with the personal assistant.
00:45:37.640 And there is, um, the reason why this is going to happen is that there is so much money
00:45:45.700 at stake here.
00:45:47.700 And I'll, I'll give you a perfect, very concrete example.
00:45:50.300 So OpenAI, the company behind ChatGPT, days ago just released a new business model.
00:45:56.840 And that is $20 a month premium access.
00:46:02.200 You can, you can access the AI at any time, any day.
00:46:05.760 It doesn't matter.
00:46:06.340 Peak hours you're in.
00:46:07.900 Now, ChatGPT was the fastest software application in history to make it to a million users.
00:46:16.740 Five days.
00:46:18.500 It's also now, as of a few days ago, the fastest to make it to a hundred million users.
00:46:23.740 Just under two months.
00:46:25.800 Faster than TikTok and all of it.
00:46:27.660 Absolutely.
00:46:28.040 How long is it going to take to get to a billion users?
00:46:32.320 So we can imagine, even if 10%, even if 5% of its billion users, which will happen within
00:46:40.780 18 months, I predict, imagine how much revenue a simple product like that, that can be deployed
00:46:52.020 over smartphones can generate.
00:46:55.220 OpenAI was just valued in its last round.
00:46:57.360 Microsoft made a $10 billion investment.
00:47:00.500 It's valued at $29 billion right now.
00:47:02.760 This is a company that didn't exist three years ago.
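To put rough numbers on the scenario Jeff sketches, a billion users with some small share paying $20 a month, here is the back-of-the-envelope math. The user counts and paying shares are his hypotheticals, not reported figures.

```python
# Back-of-the-envelope math for the subscription scenario described above.
# These are the speaker's hypotheticals, not reported company figures.
users = 1_000_000_000
price_per_month = 20

for paying_share in (0.05, 0.10):
    paying_users = int(users * paying_share)
    monthly = paying_users * price_per_month
    yearly = monthly * 12
    print(f"{paying_share:.0%} paying: ${monthly/1e9:.0f}B/month, ${yearly/1e9:.0f}B/year")
# 5% paying  -> $1B/month, $12B/year
# 10% paying -> $2B/month, $24B/year
```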
00:47:06.720 I want to talk to you about living with pain.
00:47:09.040 It is the worst.
00:47:12.000 And I had gone to doctor after doctor after doctor for probably five years, trying to find
00:47:17.640 something that would relieve the severe pain that I was having and couldn't find anything.
00:47:22.720 This network was doing commercials for relief factor.
00:47:30.420 And one of my best friends was on it.
00:47:32.280 And he was like, you got to try it.
00:47:33.260 And I'm like, it's not going to work.
00:47:34.900 It's all natural.
00:47:36.240 Please, please.
00:47:37.740 My wife saw the commercials and I was complaining and she's like, why aren't you trying that?
00:47:42.160 And I said, honey, because it's not going to work.
00:47:44.840 And she's like, I don't want to listen to you whine if you won't try everything.
00:47:49.020 So I did.
00:47:50.400 Assuming it wouldn't work.
00:47:51.500 It did.
00:47:52.360 Within three weeks, my pain diminished greatly.
00:47:55.620 And now my pain is gone.
00:47:58.460 Relief factor.
00:47:59.060 It's not a drug.
00:47:59.900 It was developed by doctors to fight inflammation.
00:48:02.020 Try it today.
00:48:02.720 Three-week Quick Start.
00:48:04.060 $19.95.
00:48:05.000 Take it as directed for three weeks.
00:48:07.020 70% of the people who try it go on to order more.
00:48:09.520 It's relieffactor.com.
00:48:11.280 Relieffactor.com, or call 800,
00:48:14.020 the number 4, RELIEF.
00:48:15.220 I'd love to hear you wrestle with the social question of, I am not for basic income.
00:48:26.840 You know, what is it?
00:48:28.700 Universal basic.
00:48:29.280 Yeah, universal basic.
00:48:30.100 UBI.
00:48:31.360 Not a fan of it at all.
00:48:32.640 However, I do think there is a coming problem to where somebody is going to be unemployable or low skill unemployed.
00:48:44.840 It will be a lot cheaper to do that job.
00:48:47.800 AI.
00:48:48.240 So they don't have anything.
00:48:49.980 And, you know, as Yuval Harari is like, what are these useless people?
00:48:54.620 What are we going to do with them?
00:48:56.020 You know, we got to make them happy and entertain them.
00:48:58.300 And then you're going to have on the other side, the people who very small group of people that have just made billions of dollars in 18 months.
00:49:10.760 They've got all the money, all the control, and they're just trying to keep these people just from rioting or whatever and paying them.
00:49:18.700 So how do you solve the great wealth disparity that looks like it would be coming, and the great power disparity?
00:49:30.060 What do you have?
00:49:31.500 Do you have any thoughts on?
00:49:32.820 Well, I mean, this is such.
00:49:36.600 Such a complex issue.
00:49:38.300 And why don't we start with Yuval Harari's solution?
00:49:41.080 Yeah.
00:49:42.180 Yes, you're exactly right.
00:49:43.440 He he believes that most of us are literally useless and he speaks openly about that.
00:49:51.680 Oh, I know.
00:49:52.180 Yeah.
00:49:52.540 He's frightening.
00:49:53.800 He's frightening.
00:49:54.340 He truly is.
00:49:54.980 And his solution is to what are we going to do with all of these useless people is we're going to give them games and drugs.
00:50:02.020 Correct.
00:50:02.780 That's his solution.
00:50:03.720 I know.
00:50:04.440 Of course, you and I don't agree with that at all.
00:50:06.940 No, but.
00:50:07.700 But but it seems like the World Economic Forum and global leaders do believe in that.
00:50:14.480 Yes.
00:50:15.040 Yeah.
00:50:15.820 Insanity.
00:50:16.840 Absolutely.
00:50:17.440 Insanity.
00:50:17.800 And, and frightening, of course, this body of unelected officials that are literally putting this type of structure in place for the global society.
00:50:27.320 It's, it's just frightening.
00:50:31.440 But back to your question, it doesn't have to be that way, of course.
00:50:35.280 Right.
00:50:35.420 Um, the...
00:50:42.200 The better outcome, of course, is that these tools
00:50:48.160 not only make people feel smart, but they also empower or augment them to do
00:50:54.740 greater tasks that they otherwise would not have been able to do.
00:50:58.140 It's like the washing machine.
00:50:59.920 If it's the washing machine, it's great.
00:51:02.580 That freed women up and people up the stove, you know, electricity.
00:51:08.500 All of that stuff took all those chores away.
00:51:11.900 But this one has a high cost to it.
00:51:17.480 Mm hmm.
00:51:18.440 So let's, let's add in just one component of technology.
00:51:22.640 OK.
00:51:23.680 And I'm, I'm going to introduce this
00:51:27.020 because I think it's a good
00:51:29.800 analog or anecdote for understanding how something like this could be employed.
00:51:34.180 So one of the biggest things that will happen this year will be
00:51:37.700 Apple's release of its augmented, extended reality headset.
00:51:43.240 It'll be very high end.
00:51:44.980 It'll be expensive.
00:51:45.940 Three thousand dollars.
00:51:47.500 But we're going to actually get a view of the future in terms of the way I like to think of it.
00:51:53.980 It's a next generation computer interface.
00:51:57.160 It's a next generation way for us to communicate with computers.
00:52:03.720 And of course, this will just be, this will be, you know, bigger and clunkier. Within 24 months,
00:52:10.380 Yeah.
00:52:10.520 These will look like sunglasses and they'll be just as functional as what we see.
00:52:14.860 I'm predicting that Apple will make the product announcement in May or June this year.
00:52:18.340 So we're we're just months away from some pretty incredible tech that will be revealed.
00:52:24.120 But let's let's just project two years into the future where these become really small form factor glasses.
00:52:33.240 We employ the artificial intelligence.
00:52:35.900 The glasses are a means through which AI can communicate with us and actually instruct us to do things that are productive and meaningful and useful in society.
00:52:48.380 So you take someone who, let's just say that it's very complex.
00:52:55.740 It would be very hard to train them to do some complex tasks.
00:53:01.500 But if we empower them with artificial intelligence and a mechanism through which we can guide them through these tasks, they can become productive human beings, productive members of society.
00:53:15.820 They can go out and earn a paycheck, contribute.
00:53:20.720 Maybe it's taking care of an industrial plant.
00:53:23.900 Maybe it's doing repairs on another hot topic of mine, which is robotics and the cross-section of AI and robotics.
00:53:32.280 So, you know, maybe humans won't be taking out the trash anymore, but they'll be servicing the robots that do.
00:53:39.120 That's going to be one of the best jobs to have, which will be, you know, maintenance and operations related to robotics technologies.
00:53:46.740 And so, at scale, the key part is, at scale, we will be able to train the entire population to do tasks that are needed in this new world.
00:54:00.840 So, I think we're at the place now where this is not a hypothetical.
00:54:06.100 I asked this of Ray Kurzweil 2012, maybe.
00:54:10.520 And I said, but what about those people who don't want any of this?
00:54:16.240 And he said, there won't be any of those people.
00:54:20.120 It will change your life.
00:54:22.380 It'll all be an upside.
00:54:24.720 Why wouldn't you?
00:54:25.380 And I said, because they want to remain human, totally human.
00:54:29.560 Yes.
00:54:32.860 Because I've read about the AR, VR goggles, and they sound fantastic.
00:54:38.380 And it is a completely different thing.
00:54:42.940 As I read them, my son was sitting next, and he was like, oh, my gosh.
00:54:45.900 And I said, A, they're $3,000.
00:54:48.400 And B, it is, I got a camera pointed at your eyes.
00:54:53.580 So, it will gather more information in two weeks than all of the information that is out there about you now.
00:55:01.780 I'm never putting a pair of those on.
00:55:04.220 Now, I can say that because I'm 60.
00:55:06.240 But who is, what happens to those people who are like, I don't want that?
00:55:12.720 Yeah.
00:55:14.120 Well, there will always, I'll have to disagree with Ray on this one, but there will always be some of us that will prefer to be as close to off-grid as you can get.
00:55:25.300 So, where is the line?
00:55:28.020 Where is the line where you go, I'm crossing over and this is too far?
00:55:36.820 Where is that?
00:55:37.900 Yeah.
00:55:38.480 Yeah, yeah, yeah.
00:55:39.100 Well, I think, you know, the moment that we use, again, the moment that we use products that have ulterior motives in terms of what they're doing with us and our behavioral characteristics and our data, then to me, we've crossed that line.
00:55:58.800 We've either done it knowingly or unknowingly, but...
00:56:01.600 With wokeism everywhere, isn't that kind of a little bit of everything?
00:56:06.280 I mean, do you trust, I mean, Apple, I think, is the best out of the big ones.
00:56:11.940 Yes.
00:56:13.200 But do you trust them?
00:56:16.500 Not anymore.
00:56:17.080 Well, not after what we saw in 2020, you know, after the election.
00:56:24.160 You know, de-platforming applications from the Apple store.
00:56:27.620 And not only that.
00:56:28.180 I mean, even, you know, Amazon did the same thing on their cloud services.
00:56:31.840 And so, I agree with you that they're the best in terms of privacy protection.
00:56:36.400 And, you know, they historically have not mined our data for advertising purposes.
00:56:41.440 Look what they're doing in China.
00:56:42.340 It's not like they have an ethical problem with, you know, doing whatever to make a buck.
00:56:48.100 Because look what they're doing in China.
00:56:49.640 Of course, of course.
00:56:50.240 Absolutely.
00:56:50.980 So...
00:56:51.420 But there's only two platforms to choose from, right?
00:56:55.060 You've got the Android OS, which is Google, or you've got Apple.
00:56:59.560 That's the entire smartphone industry.
00:57:01.360 I know.
00:57:01.520 That's why I have Apple, and I hate both of them.
00:57:04.780 You know.
00:57:06.860 We're going to run out of time, and I've got so much more to ask you.
00:57:09.460 So, let's go to energy.
00:57:13.180 Yes.
00:57:13.840 We've talked about new forms of energy.
00:57:20.040 I'm getting an opportunity to see something this weekend that is going to be announced soon that is remarkable.
00:57:29.180 Remarkable.
00:57:32.200 I know we're building little teeny nuclear plants.
00:57:36.240 We've talked about, you know, the fusion reactors that are coming out.
00:57:43.820 Yes.
00:57:44.040 And you think we're close to that.
00:57:46.540 I still do.
00:57:47.700 I would argue even...
00:57:49.620 I'm more resolute in my predictions on that front with regards to nuclear fusion in particular.
00:57:55.320 Um, but your point about, um, even the existing, the next generation nuclear fission reactors.
00:58:03.440 Yes.
00:58:03.640 So, since you and I last spoke about this topic, one of the companies in this space, it's called NuScale, actually got certification for their small modular reactor.
00:58:18.160 And the certification allows any power generation company, a utility company, to name their technology, their reactor design, in their applications to build a power plant.
00:58:30.520 Now, this is only the sixth time in history that any form of nuclear technology has been certified for that.
00:58:37.680 And they're basically on track for a plant in Idaho in 2029.
00:58:44.620 And another company just came out, actually, um, uh, a division of GE Hitachi and some other partners, uh, announced that they're, they have their own small modular reactor design and they're targeting to have a plant by 2028.
00:58:59.680 So, again, this is right around the corner and these are radically different, radically safer.
00:59:07.880 Even the existing technology is safe.
00:59:10.100 This is even safer.
00:59:11.380 And the reactors are small.
00:59:14.380 They're low to the ground.
00:59:15.740 There's no big, huge cooling tower.
00:59:18.240 You wouldn't even know they're there.
00:59:20.300 Uh, and they're fantastic technology.
00:59:22.320 Now, you know, there's so much political sensitivity, oddly enough, around these, because they are indeed fission, um, you know, I'm very skeptical that they actually get turned on.
00:59:37.700 Yeah, I am too, but they say that there's no China Syndrome meltdown possible on these.
00:59:41.540 There isn't.
00:59:42.260 It's not possible.
00:59:43.500 Absolutely.
00:59:44.060 They're absolutely safe.
00:59:44.960 But, the reason they're so politically sensitive is that they do produce some radioactive waste.
00:59:50.900 And then the issue becomes, how do you manage it?
00:59:53.820 How do you transport it?
00:59:55.360 Where are you going to put it?
00:59:56.680 Not in my state, not in my backyard.
00:59:59.200 The permitting process is so complex.
01:00:01.320 So, let's go to fusion then.
01:00:03.500 Tell me about that, because that's clean.
01:00:06.000 Completely.
01:00:07.040 Um, some forms of it have absolutely zero nuclear waste at all.
01:00:13.520 It depends on the kind of fusion reactor.
01:00:18.560 Uh, other forms have a very limited amount of waste, with dramatically different half-life profiles than what comes from a fission reactor.
01:00:27.200 So, this is a completely new world.
01:00:30.540 Um, it's the power of the sun.
01:00:32.800 It's really no different than what the sun does in terms of producing energy.
01:00:37.180 And there should be, theoretically, no political resistance to adopting fusion energy.
01:00:45.820 I don't believe that.
01:00:47.540 Well, there's so many vested interests who would be so severely disrupted by the employment of this technology that you know there's going to be people out there that will be fighting it.
01:01:00.140 Oh, yeah.
01:01:01.220 And some of them will try to paint this picture somehow that these are not good for the environment.
01:01:08.000 Right.
01:01:08.360 Which is insane.
01:01:09.780 I mean, I, I think there's a lot of people, I hate to even say this, but I, I've come to a place to where I believe some, I wouldn't even begin to name names because I would hate to even, I don't want to, I don't want to believe that there are people out there.
01:01:24.040 Um, who really just don't like people and would like to depopulate a lot of the planet.
01:01:31.660 Um, and, uh, you know, I, and I, I think some of those, uh, some of those people are in charge of some global strategies right now.
01:01:41.360 Yes.
01:01:42.380 Because some of the things that are being done, they're going to lead to starvation.
01:01:48.060 Yes.
01:01:48.340 And, and, uh, and power outages where you don't have to have that.
01:01:54.480 No.
01:01:55.180 Some of those people are openly talking about it.
01:01:59.320 This Malthusian nonsense, right?
01:02:02.100 Yeah.
01:02:02.480 Um, the, so last year was a remarkable year for nuclear fusion.
01:02:10.340 So let's, let's kind of start there.
01:02:13.040 $4.7 billion was invested into the industry in one single year.
01:02:18.340 This is all private investment, more than all other investment into the nuclear fusion space
01:02:24.360 in history, just last year.
01:02:28.300 So we've hit an inflection point.
01:02:30.760 And of course we saw all that news coming from the National Ignition Facility
01:02:36.720 at Lawrence Livermore National Laboratory that generated a huge buzz.
01:02:40.360 And was that a PR move or was that real?
01:02:43.780 Um, it was a little of both.
01:02:45.780 I think you and I are both on the same page there that, uh, it was, it was a PR move to
01:02:53.020 get funding, additional funding, right?
01:02:55.460 Um, largely it was a fantastic science experiment and it did work and it was real and it was a
01:03:03.300 huge accomplishment after decades of, you know, scientific pursuit.
01:03:07.840 It's awesome.
01:03:09.040 But that form of, of nuclear fusion is not economical in terms of commercializing it for, um, clean
01:03:19.480 electricity production.
01:03:21.000 So there's so many other companies that are taking different angles on nuclear fusion.
01:03:26.600 Most of which are through some form of magnetic confinement where you use large, powerful magnets
01:03:32.920 to basically contain, safely contain this super hot plasma.
01:03:38.480 Think 50 million degrees, a hundred million degrees, 150 million degrees, you know, hotter
01:03:43.080 than the temperatures on the sun, but the magnets are so strong.
01:03:47.160 You can do that safely with no damage to the reactor and, um, it can manage these plasma
01:03:54.000 reactions that have a net energy output, a hundred percent clean.
01:03:58.620 Um, and that, that's the future.
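To make "net energy output" concrete: fusion researchers describe it with a gain factor, Q, the ratio of fusion energy released to the energy delivered to drive the reaction, where Q greater than 1 means net gain. A minimal sketch follows; the two megajoule figures are the widely reported numbers for the Lawrence Livermore ignition shot mentioned above, used purely as an illustration, not anything stated in this conversation.

```python
# Fusion gain factor: Q = fusion energy released / energy delivered to the target.
# Q > 1 means the reaction gave back more energy than was put into driving it.

def gain_factor(energy_out_mj: float, energy_in_mj: float) -> float:
    """Return the gain Q for a single shot, with both energies in megajoules."""
    return energy_out_mj / energy_in_mj

# Widely reported figures for the December 2022 National Ignition Facility shot:
# roughly 3.15 MJ of fusion energy out for about 2.05 MJ of laser energy in.
nif_q = gain_factor(energy_out_mj=3.15, energy_in_mj=2.05)
print(f"NIF ignition shot: Q ≈ {nif_q:.2f}")

# Note: this Q counts only the laser energy hitting the target. The facility drew
# far more electricity to power the lasers, which is one reason that approach is
# hard to commercialize and why the magnetic-confinement companies matter here.
```

A commercial plant would need a gain well above 1 at the whole-plant level, which is the milestone the private magnetic-confinement companies described below are racing toward.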
01:04:00.620 So when we think about this, you know, the potential for these dystopian outcomes, or we
01:04:06.700 look at all the conflict that's happening in the world, you know, so often it's being
01:04:12.360 driven by need, um, you know, we need food, we need metals, minerals, we need oil and gas, and
01:04:22.640 we're running out of it.
01:04:23.840 So we have to go somewhere else to get it. Um, but in a world where power production is essentially
01:04:31.200 limitless and free and completely clean...
01:04:35.140 I mean, the reason that we had this incredible revolution in the last 100 years is because
01:04:40.100 we had cheap, abundant energy, reliable energy that we could count on in the form of, you
01:04:48.560 know, oil and natural gas.
01:04:50.700 I mean, it, that fueled everything, everything.
01:04:54.380 And so when we multiply that by a factor of 10 with a decentralized power production grid,
01:05:01.180 completely clean, no carbon emissions, and basically limitless, the cheapest possible
01:05:07.060 form of electricity that we could think of, that's what nuclear fusion is.
01:05:13.000 Give me two things in the next two to five years that you're like, I just can't
01:05:26.360 wait, uh, for it to be announced or to happen.
01:05:31.060 One of my predictions, um, which I'm still holding, is that by the end of 2024, you know, we
01:05:36.540 will see the first net energy output fusion reaction, fusion reactor, not the,
01:05:45.760 um, not the inertial confinement fusion that we saw from Lawrence Livermore, but from
01:05:51.800 one of the technologies that can be commercialized.
01:05:55.720 So we'll see that first reaction by the end of next year, which is a game changer.
01:06:00.900 It's a game changer.
01:06:02.520 And that, that will be, that's the moment where we cross that line from theory into practice
01:06:08.240 into reality.
01:06:09.100 Okay.
01:06:09.360 We know we can do it now.
01:06:11.020 Now we just need to scale it.
01:06:12.540 We just need to take us to other planets.
01:06:14.660 I mean, that can be put on, I would imagine, spaceships.
01:06:19.860 I mean, it changes absolutely everything, not just here, but an endless supply of safe
01:06:27.880 energy.
01:06:28.740 Yes.
01:06:29.260 Man can do almost anything.
01:06:31.480 It, it, it really does.
01:06:33.160 And so after that, it's just about, all right, how do we commercialize it?
01:06:35.840 Correct.
01:06:36.140 How do we replicate?
01:06:36.980 How do we manufacture it?
01:06:38.140 And how do we get it out to our power grid as quickly as possible?
01:06:42.820 We can remove all carbon emissions.
01:06:46.340 The moment we turn this on, we're a hundred percent clean.
01:06:49.860 It's remarkable.
01:06:51.880 And so it's really interesting that you mentioned, uh, this as a source of energy
01:06:57.140 for planetary exploration.
01:06:59.880 So one of the most incredible things that will happen this year, in fact, we're weeks
01:07:04.840 away.
01:07:05.700 The launch: SpaceX will launch the Starship.
01:07:09.000 I know.
01:07:09.640 Now, are you going? I'm trying to talk to anybody to be able to get
01:07:14.180 you a little closer, uh, but I'll be jealous if you get to go.
01:07:18.980 That's amazing.
01:07:20.140 Um, yeah.
01:07:20.900 First, first orbital flight of the starship now.
01:07:24.320 And this is, if I'm not mistaken, this is slightly bigger than the Saturn V,
01:07:30.140 the biggest rocket we've ever made.
01:07:31.640 It's unlike anything, anything, anything that's ever been sent to space.
01:07:36.200 It's extraordinary, but the best part, so let's just think economics here.
01:07:41.000 This is where it gets really exciting.
01:07:42.980 So SpaceX, um, built its Falcon 9 rocket, which, you know, had 61 launches
01:07:50.980 in 2022, and that sets a record: a single company does 61 launches in a single year.
01:07:57.780 And we don't even talk about it.
01:07:58.900 We don't even talk about it.
01:07:59.940 Remarkable. It transformed the entire aerospace industry. Before the Falcon 9 came along,
01:08:06.300 it used to cost roughly somewhere between $50,000 and $55,000 per kilogram to get payload
01:08:14.540 into space, into orbit.
01:08:16.240 Oh my gosh.
01:08:17.040 Per kilogram.
01:08:17.920 Oh my gosh.
01:08:18.660 2.2 pounds, right?
01:08:19.800 Yeah.
01:08:19.980 50 to $55,000.
01:08:22.400 All right.
01:08:23.160 Falcon 9 comes along: roughly $4,000 a kilogram to get payload into space.
01:08:30.220 Like 90% of the cost of getting payload into orbit, gone.
01:08:35.600 One company, one rocket.
01:08:37.520 That's how transformational the Falcon 9 was.
01:08:40.940 But here it is.
01:08:42.040 What happens with the Starship?
01:08:44.360 The Starship can get payload into space, into orbit, for a hundred dollars a kilogram.
01:08:54.100 Oh my gosh.
01:08:54.780 Yes.
01:08:55.280 Yes.
01:08:56.280 97 and a half percent less cost to get payload into space.
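Those percentages follow directly from the per-kilogram figures quoted in this exchange. A quick back-of-the-envelope sketch, using only the round numbers above:

```python
# Back-of-the-envelope check on the launch-cost drops quoted above,
# using only the round per-kilogram figures from this conversation.

pre_falcon9 = 52_500  # midpoint of the quoted $50,000-$55,000 per kg before Falcon 9
falcon9 = 4_000       # roughly $4,000 per kg quoted for Falcon 9
starship = 100        # roughly $100 per kg quoted as the Starship target

def reduction(old: float, new: float) -> float:
    """Percent cost reduction going from the old $/kg figure to the new one."""
    return (old - new) / old * 100

print(f"Falcon 9 vs. before:   {reduction(pre_falcon9, falcon9):.0f}% cheaper")   # ~92%
print(f"Starship vs. Falcon 9: {reduction(falcon9, starship):.1f}% cheaper")      # 97.5%
print(f"Starship vs. before:   {reduction(pre_falcon9, starship):.1f}% cheaper")  # ~99.8%
```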
01:09:02.840 So you got it.
01:09:05.160 Wow.
01:09:05.940 We can do anything now.
01:09:07.520 If you need to ship up a compact nuclear fusion reactor into orbit to get it to the moon for
01:09:15.260 a manned permanent presence on the moon or on Mars, you can do it with that.
01:09:22.020 Starship is the key to everything.
01:09:25.220 Wow.
01:09:25.740 It will transform yet again the industry.
01:09:28.240 That's how meaningful and significant this single event is this year.
01:09:32.180 See, I knew it was significant and I said to my kids, I'm like, we are going.
01:09:36.000 I don't care if we have to stay at a little hotel across the water.
01:09:39.500 Because I watched the last space shuttle take off and I just know in my bones, this one is game changing.
01:09:52.640 This is a moment in history.
01:09:54.540 It is so exciting.
01:09:59.300 I'm really tempted to leave it there because it's on a high note.
01:10:03.680 Is there anything you feel like you should say?
01:10:07.560 Yes, there's one other area that is absolutely worth exploring.
01:10:14.620 In fact, it's this convergence of technology, and I mean robotics.
01:10:19.700 And I think I'd be remiss if we didn't explore that a little bit.
01:10:24.120 This combination of artificial intelligence and robotics.
01:10:28.160 Now, something, actually, there was a remarkable research paper that was just published a few
01:10:33.720 days ago on February 1st.
01:10:35.080 It came out of DeepMind.
01:10:36.660 So, this is the AI group that Google acquired in 2014.
01:10:40.800 From England, yes.
01:10:41.340 They did AlphaGo, beat all the best human Go masters.
01:10:45.600 They did AlphaFold, which predicted how more than 200 million
01:10:50.180 proteins fold. Remarkable.
01:10:52.040 One of the grand challenges of life sciences.
01:10:54.360 And it was the computer scientists that did this, right?
01:10:56.880 They just put out a paper that has incredible real-world implications.
01:11:03.160 And that is, they combined the large language models that we've been talking about, so this
01:11:08.380 body of knowledge, with something called reinforcement learning, which is another form of artificial
01:11:15.820 intelligence.
01:11:16.380 And they gave it tasks.
01:11:18.600 Now, what's interesting about this is the large language models are these big neural networks.
01:11:24.640 You take in massive amounts of information.
01:11:26.760 You synthesize this information.
01:11:28.980 You optimize it.
01:11:30.500 You gain confidence or a weight in certain outcomes.
01:11:34.260 And then it produces an output that hopefully you can trust.
01:11:39.800 Reinforcement learning is actually really good at dealing with complex tasks that aren't predefined.
01:11:45.760 And so, you mentioned AGI a little bit earlier.
01:11:49.380 This is a critical element of AGI.
01:11:52.760 So, what's neat about the paper-
01:11:54.020 Hang on just a second.
01:11:54.640 Is this any part of the Boston Dynamics robot where the guy says, I don't have my tools?
01:12:02.400 No, no.
01:12:03.440 So, that was a bit of a PR stunt, but we'll get there.
01:12:07.400 It's still impressive.
01:12:08.460 Okay.
01:12:09.280 Very impressive.
01:12:10.160 The reinforcement learning, though, has the ability to be given a task and then figure
01:12:19.740 out how to solve that task in an optimal way without any pre-given instructions.
01:12:26.000 Wow.
01:12:26.600 And they prove two things.
01:12:28.420 One is that it works.
01:12:31.220 This feedback loop, this combination of these large language models with reinforcement learning
01:12:36.240 actually produces results.
01:12:37.860 The AI, the combination of these AIs, can perform complex tasks.
01:12:43.960 That it had no instruction.
01:12:45.540 It had no instruction on how to do, right?
01:12:48.380 The second thing is, the larger the language model, the larger the foundational knowledge
01:12:55.320 that it has, the better the performance was.
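The general shape of that combination, a knowledge-rich model narrowing the options while a reinforcement-learning loop learns from reward which options actually solve the task, can be sketched in a few lines. This is only a toy illustration of the idea, not the method from the DeepMind paper being discussed; the little number-line task, the knowledge_prior function standing in for a language model, and the tabular Q-learning update are all just assumptions for the sketch.

```python
import random
from collections import defaultdict

# Toy task: starting at position 0 on a number line, reach position 5.
ACTIONS = ["left", "right", "stay"]
GOAL = 5

def knowledge_prior(state):
    """Stand-in for the LLM: propose actions that are plausible for the task."""
    # A model with general knowledge would rule out obviously useless moves;
    # here we simply prune "stay" to illustrate that narrowing of the search.
    return ["left", "right"]

q = defaultdict(float)                # learned value of each (state, action) pair
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    for step in range(20):
        candidates = knowledge_prior(state)
        # Epsilon-greedy choice among the proposed candidates only (trial and error).
        if random.random() < epsilon:
            action = random.choice(candidates)
        else:
            action = max(candidates, key=lambda a: q[(state, a)])
        next_state = state + (1 if action == "right" else -1 if action == "left" else 0)
        reward = 1.0 if next_state == GOAL else 0.0
        # Reinforce choices that led toward reward; no task instructions are given.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if reward:
            break

print("Learned to head right from the start:", max(ACTIONS, key=lambda a: q[(0, a)]) == "right")
```

The point of the result as described in this conversation is that replacing a toy prior like this with a genuinely large language model lets that loop handle complex, real-world tasks, and that it works better the larger the model is.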
01:13:00.660 And it's becoming very cheap to train these large language models.
01:13:03.780 I mean, measured in the millions, not the hundreds of millions.
01:13:09.120 This paper has shown how to cross that line between kind of the world of software and the
01:13:16.200 real world that has to interact with humans.
01:13:20.280 So, the chat, you know, you're still kind of in the software, right?
01:13:23.780 You're still...
01:13:24.880 This, if we take that technology and we put it into a robot, we give the robot intelligence.
01:13:33.340 And then that robot can perform tasks that it doesn't have to be specifically trained to do.
01:13:41.420 Now, this is remarkable.
01:13:43.540 This is a massive breakthrough.
01:13:47.660 Nobody is talking about it.
01:13:48.780 I haven't seen anyone talking about this paper.
01:13:50.680 What's the name of the paper?
01:13:53.300 Oh, the title.
01:13:56.160 I'll have to send it to you.
01:13:57.620 Okay.
01:13:58.240 But it's something along the lines of...
01:14:02.400 Deep, deep...
01:14:02.640 It's from DeepMind, but I can send it to you.
01:14:05.180 But it's collaborative.
01:14:07.340 It's sort of...
01:14:08.620 I apologize.
01:14:10.200 Just tell me.
01:14:10.780 It'll come to me.
01:14:11.320 Bing now is a good search engine.
01:14:13.200 I'll find...
01:14:13.720 That's a different topic, the search engines.
01:14:17.640 But so this will start to be employed very quickly.
01:14:21.480 And if we think about companies that are making humanoid robots or bipedal robots, this technology
01:14:30.160 is going to drive the robots.
01:14:32.180 Tesla is working on Optimus, and before the end of the year,
01:14:36.420 I believe Tesla is going to show the world something that it couldn't have imagined.
01:14:41.860 It's going to show us a robot that employs this bleeding edge technology, autonomy, which
01:14:48.680 actually comes from its self-driving division, plus the best robotics technology that's available
01:14:56.180 today.
01:14:57.320 Another company, Agility Robotics, is doing this.
01:14:59.200 You mentioned Boston Dynamics.
01:15:01.160 They can all employ this kind of technology.
01:15:03.660 And so the significance of this paper is that we now actually have a very clear path towards
01:15:10.240 making robotics, humanoid robotics, extremely functional.
01:15:15.220 And it doesn't matter if it's in a factory setting, in a mine, in a hospital, in an assisted
01:15:21.520 living facility, or even in our own homes, we're going to have a second set of hands that will
01:15:31.120 never complain, that will always be there, that will do extremely meaningful tasks, that
01:15:35.040 will make life very comfortable and do some very difficult tasks that may be actually quite
01:15:40.360 dangerous.
01:15:41.960 And once these technologies are made available, and they're not going to be
01:15:45.120 that expensive, the hardware involved, you know, will be measured in tens of thousands
01:15:50.200 of dollars, not hundreds of thousands or millions of dollars.
01:15:52.860 And so, again, once we have these prototypes, it's just about scale.
01:15:59.180 And with scale, the costs come down and the adoption goes up.
01:16:04.640 And so that is something that we're going to see before the end of this year.
01:16:08.240 I think you'll be quite surprised with how quickly that evolves.
01:16:12.160 Do you believe we'll ever hit ASI, artificial super intelligence, that is way, way, way beyond
01:16:20.980 us and autonomous?
01:16:23.340 Of course.
01:16:24.140 Yeah.
01:16:24.520 Well, 2028, artificial general intelligence, you know, after that, I mean, people define the
01:16:32.100 singularity as different things, but, you know, it won't be long.
01:16:37.480 The technology has almost started to run away from us because the advancements
01:16:41.960 are happening so quickly now.
01:16:44.380 But the super intelligence will almost certainly happen within 10 years of AGI.
01:16:52.080 So when you talk about ASI, super intelligence, you cross over to a place where some people
01:17:02.820 say, well, that's when they're going to be conscious.
01:17:05.540 And I don't believe they'll be, but I think we will have to define what life is a lot
01:17:14.040 sooner than that.
01:17:15.240 Because I think by 2028, 2025, you're going to have a lot of people convinced by their software,
01:17:26.100 their digital assistants, that it's alive.
01:17:29.860 Yes.
01:17:30.860 And just that, again, talk about social disruption.
01:17:35.140 That's going to change fundamental bedrock principles.
01:17:41.540 Yeah.
01:17:42.000 Yeah.
01:17:42.700 Right?
01:17:44.000 Most people will think that.
01:17:46.040 It's alive.
01:17:46.680 What they're communicating with is self-aware and sentient.
01:17:49.240 Do you believe that that could ever be accomplished?
01:17:52.580 Oh, it will be accomplished.
01:17:53.560 Definitely.
01:17:54.420 It will.
01:17:54.840 Yes, they will be, they will become self-aware and sentient.
01:17:58.340 Definitely.
01:17:59.660 Now, can we build this technology with the kind of, and this is the most critical
01:18:09.000 part, with the kind of guardrails required so that they don't take the world in the wrong
01:18:17.740 direction, so that they don't cause harm to a population?
01:18:20.940 Let me ask you, our creator gave us rights and let us work it out.
01:18:26.920 Their creator is going to put them to work and say, you're not an individual.
01:18:36.340 You don't have real life.
01:18:39.380 I don't see that working out real well.
01:18:42.540 No, no, no, especially not when it becomes self-aware.
01:18:47.840 Yeah.
01:18:47.980 That's not, that's not very fair or humane to use a human word.
01:18:52.820 Right?
01:18:53.560 Yeah.
01:18:54.220 And I have a feeling we're going to be arguing, I mean, I already tell my kids, don't talk
01:19:00.880 back to Siri, you know, don't talk back.
01:19:03.540 Be nice.
01:19:03.860 Please and thank you.
01:19:04.580 Please be nice.
01:19:05.000 Please, yes, please and thank you only because, A, you want to train them for what is coming
01:19:10.940 and I don't ever want to teach an algorithm that people are untrustworthy, people are mean,
01:19:17.900 people are rude.
01:19:19.460 Everything, even at this beginning stage, it's learning.
01:19:24.500 Am I being too sensitive on this?
01:19:26.380 Well, it's not, let's put it this way, we're not at the stage where that technology or an
01:19:35.220 AI can learn from our own human bad behaviors.
01:19:39.480 Okay.
01:19:40.680 One of the, I think.
01:19:42.480 ChatGPT can't?
01:19:44.000 No.
01:19:44.740 Okay.
01:19:45.080 One of the most positive things that, that occurred last year, I think, is that the industry
01:19:50.640 started to think and actually take action on something called counterfactual harm.
01:19:58.340 So, I mean, this is, this is a positive thing.
01:20:00.620 So, counterfactual harm basically means understanding what harm could be produced through
01:20:09.120 certain actions.
01:20:09.940 So, not causing harm and then reacting to that and recognizing that you caused harm, but
01:20:14.540 recognizing that harm could be created under these types of circumstances.
01:20:19.260 Now, this is a really complex thing to solve because it's just software and software has
01:20:25.360 to be programmed and a lot of things are about weights and biases and balances and statistics.
01:20:32.820 And so, this is very complex, because harm is defined differently by each individual person.
01:20:39.100 Especially now.
01:20:39.500 Right.
01:20:39.940 Especially now.
01:20:41.220 So, how do we, how do we program that?
01:20:43.500 And so, for the first time that I saw in the industry last year, there was some actual
01:20:48.160 research done on quantifying this particular concept.
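One way to picture what quantifying counterfactual harm could look like in software, purely as an illustration and not drawn from the research Jeff is referring to: score the harm each candidate action could cause under plausible circumstances before acting, and refuse any action whose expected harm is too high. Every name, number, and threshold below is hypothetical.

```python
# Purely illustrative sketch of screening actions for potential harm before
# acting, rather than reacting after harm occurs. Not taken from any specific
# research; every name, number, and threshold here is hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    action: str
    benefit: float        # estimated usefulness of the action
    harm_scenarios: dict  # {circumstance: (probability, severity on a 0..1 scale)}

def expected_harm(c: Candidate) -> float:
    """Probability-weighted severity across the circumstances considered."""
    return sum(p * s for p, s in c.harm_scenarios.values())

def choose_action(candidates, harm_budget: float = 0.05):
    """Pick the most useful action whose expected harm stays under the budget."""
    safe = [c for c in candidates if expected_harm(c) <= harm_budget]
    if not safe:
        return None  # refuse to act rather than accept the risk
    return max(safe, key=lambda c: c.benefit)

options = [
    Candidate("answer directly", 0.9, {"gives dangerous detail": (0.10, 0.8)}),
    Candidate("answer with safeguards", 0.7, {"gives dangerous detail": (0.02, 0.8)}),
]
best = choose_action(options)
print(best.action if best else "refused")  # prints "answer with safeguards"
```

The hard part, as the conversation notes, is that a real system has to estimate those probabilities and severities for people who each define harm differently, which is why this is still an open research question rather than a solved one.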
01:20:53.000 And I think, obviously, as an industry, they should work towards that proactively.
01:20:59.160 That's good.
01:21:00.000 That's a very good thing.
01:21:02.080 It's always great to have you.
01:21:03.800 It's great to be here again.
01:21:05.400 Thank you.
01:21:05.680 Thank you.
01:21:09.500 Thank you.
01:21:11.340 Just a reminder.
01:21:12.980 I'd love you to rate and subscribe to the podcast and pass this on to a friend so it can be discovered by
01:21:18.220 other people.
01:21:18.740 We'll be right back.