The Auron MacIntyre Show - January 16, 2023


The AI Question | Guest: Luke Avery | 1/16/23


Episode Stats

Length

1 hour and 56 minutes

Words per Minute

143.97

Word Count

16,805

Sentence Count

849

Misogynist Sentences

3

Hate Speech Sentences

8


Summary

In this episode, I'm joined by Luke Avery, a software engineer, to talk about artificial intelligence and its impact on the world. We talk about where artificial intelligence is now, where it will be in the future, and what we should be worried about.


Transcript

00:01:30.000 Hey, everybody.
00:01:32.360 How's it going?
00:01:33.160 Thanks for joining me this afternoon.
00:01:35.000 I've got a great stream.
00:01:36.240 I think you're really going to enjoy this afternoon.
00:01:40.600 And I'm joined by my guest, Luke Avery.
00:01:43.020 He's a YouTuber.
00:01:44.460 He's a software engineer.
00:01:45.860 And he's going to be talking with me today about the artificial intelligence question.
00:01:50.040 Luke, thanks for joining me.
00:01:51.920 What an honor to be here.
00:01:52.920 And what a topic to choose at this time.
00:01:57.160 Yeah, I mean, this is pretty wide ranging.
00:01:59.560 There's a lot of stuff we're going to be going over here.
00:02:01.820 And we're obviously not going to hit everything.
00:02:03.540 It's a huge issue.
00:02:05.060 But I haven't talked a whole lot about this on the show so far.
00:02:08.660 I want to explore a little bit with someone who is familiar with it, someone who's in the field.
00:02:13.740 And so Luke's going to be talking about that a little bit here.
00:02:17.240 Could you introduce yourself for people who aren't familiar kind of with what you do and how you got started?
00:02:22.280 Yeah, for sure.
00:02:22.940 So as far as YouTube, I run a couple of YouTube channels.
00:02:27.020 And the main one I want to flag for people's attention is Lambda Bible Studies.
00:02:31.500 So I actually talk about the Bible and Christian related topics.
00:02:34.680 But, yeah, I've been going at that for a couple of years and very much enjoying it.
00:02:41.900 Outside of that, my profession is actually as a programmer.
00:02:45.760 So I've been working professionally, including doing things with machine learning and neural networks for quite a while.
00:02:55.560 And maybe there's even some crossover between the topics.
00:02:58.720 Who knows?
00:02:59.380 We will see.
00:03:00.380 Yeah, exactly.
00:03:01.000 So let's go ahead and get started at the beginning where I think a lot of people get tripped up.
00:03:06.480 Because every time I ever see a discussion about artificial intelligence, as soon as you mention the subject, there's always a very large chorus of people who are very, very sure.
00:03:17.580 They're very assertive in their position that no one needs to worry about AI, that no one needs to think about it.
00:03:24.600 It's not that good.
00:03:25.680 It's never going to go anywhere.
00:03:27.460 The complexity is too high.
00:03:29.340 It's completely in its infancy and has no potential to get out of hand.
00:03:34.060 So I guess the first question is, where is the technology right now?
00:03:38.480 Where are we on this sci-fi dystopian timeline?
00:03:43.980 It's a great question.
00:03:45.220 I've noticed the same thing, but over the last week I've heard lots of people saying that ChatGPT is, for example, overhyped.
00:03:54.560 And it's no big deal and everyone should get over it.
00:03:58.480 And the other half of your question is presuming a dystopian outcome.
00:04:05.280 So I guess I'll frame things the way that I see it, and we'll see if that answers your question.
00:04:12.780 So I think it's probably fair to say that the acceleration in various forms of artificial intelligence, deep learning over the last five or so years will have an enormous impact on the world.
00:04:28.480 I think in the same way as when the Internet came about, you know, and the Web and all of these kinds of technologies, we saw that they ended up having an impact on people's lives very significantly.
00:04:42.120 People became very familiar with what they could do.
00:04:45.020 And in another sense, it was still a tool under our control.
00:04:50.960 And we understood the limitations, what the Internet could do and couldn't do.
00:04:55.840 We saw that it came with great goods and we saw that there's been enormous harm done, I would argue.
00:05:02.920 The jury is still out and perhaps the Internet overall has been bad for the world, but not like the end of the world.
00:05:10.960 And that's how I see things with AI at the moment.
00:05:15.100 We're going to see a lot of things change significantly because of the new breakthroughs that have happened.
00:05:22.980 But I want to distance that from ideas like it's going to become sentient.
00:05:29.580 It's going to replace humanity.
00:05:31.720 Everyone's going to be unemployed.
00:05:34.900 You know, I think it will be a tool that's used for good and evil.
00:05:38.360 I imagine it will have significant military and political uses.
00:05:45.300 So there's some interesting power analysis that we can do.
00:05:50.140 So maybe that's a nuanced take in between the two extremes.
00:05:54.220 But I think we shouldn't downplay it.
00:05:56.340 There is a very real sense that we are developing things right now where the people at the cutting edge don't yet know what they're capable of.
00:06:11.400 So here's a question.
00:06:13.740 How much is AI already doing that people don't realize it's doing?
00:06:19.420 How much AI is already involved in people's lives beyond, you know, their Amazon robots?
00:06:26.620 Yeah, well, in some ways, maybe a question that we have to answer before we get there is what would we even count as being AI?
00:06:39.180 I mean, this term in some ways gets deprecated.
00:06:42.860 People don't like to talk about AI anymore.
00:06:47.140 The idea of intelligence is slippery; the tendency is that whatever humans can do that we can't yet automate counts as intelligence.
00:06:57.940 So as soon as we develop a new machine, what the machine can do is no longer considered intelligence.
00:07:08.700 So take the things that you mentioned: the Amazon orders, right, the YouTube recommendations, the maps that we follow, a very large amount of the stuff that we read.
00:07:22.340 I think there's probably more auto-generated text that people are already reading than maybe people realize.
00:07:30.200 But what we've seen with the development of something like GPT-3 is that the power of these language models has drastically increased.
00:07:41.840 And I think whilst we've seen quite a substantial use of AI for people's day to day, it's nothing like what's about to happen.
00:07:51.740 I think we're going to see substantial amounts of art and, you know, entertainment taken over by artificially generated content, which is quite a scary thought for people.
00:08:05.440 For example, YouTubers, this is maybe why everyone's talking about it right now.
00:08:09.820 Yeah, so I mean, well, you know, I was going to ask you, how would you tell the difference between, you know, a computer and a journalist with the auto-generated stuff?
00:08:21.140 But that is a real thing, right?
00:08:22.680 Because so many, so many people who have pushed for automation, so many people who have encouraged this acceleration of developing this technology, always kind of assume that their jobs wouldn't be affected, right?
00:08:36.760 Like this is going to be stuff that gets rid of McDonald's workers and cashiers, but content producers, creative minds, you know, artists, they think they're immune to this kind of stuff.
00:08:52.340 And so in many ways, you know, a lot of these upper middle class or upper class people were more than happy to encourage this brave new world where automation takes away these jobs because they were never going to be their jobs.
00:09:06.020 They were always going to be the jobs of people that they kind of didn't want to have to interact with anymore.
00:09:10.240 But the fact that this is now reaching into domains that would have been thought of as kind of the cognitive elite, you know, a higher class, now these things start to get scary for many people in these positions.
00:09:26.180 So what do you think about this transition of automation?
00:09:31.760 Obviously, AI and automation aren't exactly the same thing, but this transition from the obsolescence of more working class jobs into kind of this upper creative cognitive class jobs.
00:09:45.740 What do you think about that transition and how that affects the technology?
00:09:48.520 Yeah, it's inevitable that we will see certain classes of white collar jobs disappear, I would say.
00:09:57.200 However, that doesn't necessarily mean that we will just see mass unemployment because there is a tendency that as humans become more productive in one realm, a new type of work opens up and becomes possible.
00:10:14.820 It's difficult to foresee what those new jobs will be, but I'm not sure that the net result of this will be fewer jobs for the intellectuals, as you put it.
00:10:31.760 I'm sure lots of people listening have played with this technology, and it has a surface veneer of being completely self-capable.
00:10:41.260 But if you actually use it to try to do your job, I think you will get more of an intuitive feel for how it fits into the world, which is to say, you can't trust it to just do the whole job on its own.
00:10:58.740 There is still a role for a real human to read through and check its output, but what this means is an individual can become more effective, I'd argue.
00:11:14.280 So suppose you're printing a newspaper, doing a daily paper. You previously would have had an editor, and then lots of people writing stories who hand each story to the editor, and the editor makes substantial changes before it goes to print.
00:11:40.180 And sometimes it's almost unrecognizable.
00:12:10.160 So I think it still remains the case that there will be a top rung of intellectual thinking that can't yet be replaced by the AI.
00:12:21.880 And actually what we're seeing is that a lot of what has been seen so far as thinking, intellectual, white-collar work will quite quickly be seen as much like manual labor, sort of a drudgery, a rote, non-intelligent activity.
00:12:42.160 It's probably a nice, let me give a more optimistic picture, lots of these jobs are not actually that rewarding to do, it's people stuck in offices doing non-creative work.
00:12:56.960 And if those people are going to find employment doing something more human-facing, and a smaller number of people can be more productive and achieve the same amount of office work, I see that as a win.
00:13:12.020 So I'd like to bring a positive spin on the potential future that we're about to see here.
00:13:19.280 Well, here's, I appreciate that, but here's the thing is, you know, you're describing a scenario where it sounds like, yes, you'll still need humans to check the work of the AI, right?
00:13:31.420 Which of course makes sense. Like you said, anyone who's actually used these programs, I've only messed around very, very small amounts with the art and the chat stuff.
00:13:40.380 But in either case, it's very clear that, like you said, you can probably get a good 80% or something of it, you know, done with the program, but you still need someone to come back and touch it up and clean it up and put it together and repackage it to make sure that it kind of passes for good work, right?
00:13:59.880 And yes, you'll still need that human interaction, but it reminds me very much of, you know, the one guy who still has to run the automated checkouts at the grocery store, right?
00:14:12.820 Yes, you still do need a human. Someone's got to come by and make sure that the person buying beer is 18 or, you know, make sure that you did actually swipe that carton of eggs because they're far too valuable.
00:14:25.120 They're basically gold at this point. But where it was 20 people, now it's one or two, right? And so, yes, those jobs will exist, but there'll be far fewer of them.
00:14:38.460 And let me make a quick case for intellectual drudgery here. You do actually lose something, like in the same way that when we automate a bunch of working class jobs, all those people don't just go out and start painting and, you know, writing plays.
00:14:59.520 It's like, you know, the Marxism thing isn't real, right? Like people free from labor don't actually just become better citizens who produce beautiful works of art.
00:15:09.320 They often lose meaning. Most jobs are just not going to mean much. That's tough, but it's true.
00:15:15.020 And when you free a large amount of people from that work, I think that that's a thing that almost inevitably leads a lot of people to one way or another, either be without work or be with less meaningful work, even though they are in theory freed from this day-to-day grind that was supposed to be so brutal.
00:15:34.280 Like, yeah, as someone who wrote a bunch of blurb articles for crime and politics for local newspapers, I can tell you, like, yeah, it's not super exciting to sit through a city council meeting for five hours so you can write the article on the next tax item.
00:15:54.720 But, you know, there is a certain reality that you will have a big hole in the economy. And even if you want to go the creative destruction route, saying these jobs will be filled, that inevitably something will be done and more opportunities will be created,
00:16:10.680 I don't know that that's necessarily true with the level of automation we have at this point.
00:16:14.180 I mean, it would be absolutely unprecedented in the history of humanity that a technology leads to permanent mass unemployment.
00:16:26.340 I mean, all we've seen is, for example, once upon a time, most people worked in agriculture.
00:16:33.900 Now a tiny proportion of people work in agriculture.
00:16:36.560 So is everybody else unemployed?
00:16:38.120 No, they have different jobs.
00:16:40.620 Why do they have different jobs?
00:16:41.940 Because we've insisted on living with more stuff, and we need this higher quality of living, and we turn our hands to producing things.
00:16:56.180 Whatever becomes abundant, humans take for granted, and whatever is left over, people are prepared to pay money to obtain.
00:17:04.700 So the things that are not abundant tend to be the ones that require human participation.
00:17:14.320 And I tend to think we're pretty safe to assume that the same law will continue to go ahead until proven otherwise.
00:17:22.860 That there will always be a need for or a desire for human input into something, even if it eventually becomes just because we appreciate a human touch.
00:17:37.940 Suppose you could either have a robot or a human waiter, or suppose maybe a nurse is a better example.
00:17:49.100 Would you want to be cared for in your bed by a robot or by an actual flesh and blood nurse?
00:17:54.800 And as long as humans have an interest in other humans, which I think it self-evidently will always be the case, there will always be some type of work that is scarce because of the limitation of there's only so many humans on the planet.
00:18:12.420 And regardless of the capabilities of AI, we still care about other human beings who are like ourselves.
00:18:19.580 I think that's true in the realm of art as well.
00:18:22.680 We will always prefer to know that the song that we're listening to was the artistic, directorial creation of a person in some form, rather than even a very good illusion of that.
00:18:40.400 But how would we know, right?
00:18:42.220 If we can be tricked.
00:18:43.600 That's the problem.
00:18:44.520 Yes, and in some sense, I wonder if what we will end up seeing is a return to more local, I mean, I don't know why I'm in such a good mood on this stream.
00:18:54.400 I'm bringing you the absolute most optimistic take you've ever heard.
00:18:56.100 No, by all means, bring the optimism, go for it, yeah.
00:18:59.500 But if you see a person with your very eyes performing live in front of you, then you can't question, even if the quality of the sound is worse.
00:19:10.920 So I'd like to think that with the ability of AI to take what in many cases was busy work anyway and show it for the sham that it was, maybe we will actually reemphasize the joy of real human connections.
00:19:32.960 Connections that we can guarantee through direct first-person experience.
00:19:43.060 We can say: this is a human being I'm having an interaction with, and it's valuable for that, rather than for the purely material benefits that I'm getting out of the interaction.
00:19:55.760 So now that we have an idea of kind of where AI is at now and what it's doing now, what about the near future?
00:20:04.960 You talked about some of the upcoming possibilities, things that are on the horizon that we'll probably see, you know, maybe in a decade or less.
00:20:13.380 What can we expect?
00:20:15.140 What are some of the things in our lives or in other areas that AI might become a big part of?
00:20:21.160 Yeah, and I will say, we've often seen in history a short-term spike in unemployment or, you know, job displacement.
00:20:33.360 And so, to caveat my super optimistic take, I will say we will probably see trouble in the short term caused by the changing employment landscape.
00:20:50.720 But yes, what are we about to see?
00:20:54.460 Let's talk first about ChatGPT, which has been mentioned a couple of times and which most people are probably aware of.
00:21:01.200 Um, it's based on a language model, quite similar to GPT-3.
00:21:08.540 It's been specifically trained by the way, to be less quote unquote toxic.
00:21:16.100 Um, you might remember there was that Twitter bot a few years back. The legend of Tay, yeah, Tay.
00:21:22.920 So part of the effort increasingly with this technology is essentially to control its output and limit what it can do, because I guess opening people up to a language model that just reflects back the training data was considered too dangerous.
00:21:47.580 So we have a brand new, cleaned up, acceptable, politically correct (or woke, or whatever term you want to use) model that's been approved by its masters to speak on whatever topics.
00:22:05.400 And it's able to talk across a broad breadth of human experience.
00:22:17.420 I describe it as being a pretty good mimic of current human thought across multiple different communities.
00:22:29.580 So it isn't exceeding what the humans can do.
00:22:34.840 And within each specialty it's somewhat shallow. But to give a sense of the size of the model, although it's a little bit difficult to do the comparison, I think it's reasonable to think of it as being about a million times smaller than a human brain.
00:22:57.920 That's if you just measure the total number of connections and neurons; depending on how you do it, you get different numbers.
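(A rough editorial back-of-envelope on that comparison, using common published estimates; ChatGPT's exact size is unpublished, so GPT-3's published parameter count stands in, and the 1,000-parameters-per-synapse figure is purely an illustrative assumption.)

```python
# How much smaller is a large language model than a human brain?
# The ratio swings by orders of magnitude depending on what you count.
human_neurons = 86e9    # ~86 billion neurons (common estimate)
human_synapses = 1e14   # ~100 trillion synaptic connections (common estimate)
gpt3_params = 175e9     # GPT-3's published parameter count

# Counting one synapse as one parameter:
print(f"{human_synapses / gpt3_params:,.0f}x smaller")          # ~571x

# Assuming ~1,000 parameters are needed to model one synapse's dynamics:
print(f"{human_synapses * 1_000 / gpt3_params:,.0f}x smaller")  # ~571,429x, on the order of a million
```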
00:23:12.460 But it has absorbed all of the text that they could find; I think more or less everything on the internet that they didn't consider too toxic, or thought could be useful, has been pushed through this filter.
00:23:22.320 And it's now available for people to use in dialogue form.
00:23:29.400 And what people are finding is that it's capable of writing graduate-level essays.
00:23:37.460 It's capable of writing code in the blink of an eye, often with perfectly compilable output, which is far better than any actual human being can do.
00:23:50.940 You know, the idea that your code will run the first time is a bit of a joke as far as human programmers go, but for ChatGPT, it's more often the case than not.
00:23:59.960 And it is writing code that has the feel of a more mature program.
00:24:10.100 It's in keeping with the idioms; for example, if it's Python code, it's quite Pythonic the way that it writes.
00:24:16.200 So, um, so imagine that the programmer community is able to, uh, leverage this technology in a few years, integrate it.
00:24:33.280 This will mean that theoretically people are much more productive and there's some disagreements about this,
00:24:39.780 but I can't see any way that this AI technology won't multiply the productivity of programmers to some degree.
00:24:50.520 It's not going to make programmers, you know, unemployed. In some ways, it might make programmers even more valuable, because they can do more useful stuff per day that they're working.
00:25:02.560 Um, but if, if, if we've seen an acceleration in the capabilities of mankind through technology and it's felt like it's stagnated a little bit recently,
00:25:16.400 I think maybe we were just seeing a temporary blip and things are about to take off in terms of, uh, in terms of sheer, um, number of inventions per year.
00:25:29.400 Yeah, if you want to put it that way. I know this is a very broad answer to your question,
00:25:35.760 but think of things that people use today that five years ago they'd never heard of, and now they can't live without.
00:25:43.300 I think we will be able to trace most of those things over the next 10 years back to some use of deep, uh, neural networks.
00:25:54.680 So there's ChatGPT. There are other things than that, though, that I want to throw into this bag too.
00:26:02.540 So, um, people have probably seen image generation, video generation is also being developed and I've seen some of the outputs of that.
00:26:12.600 Um, you can imagine music generation is around the corner, but then also just the application of these techniques to much smaller specific problems.
00:26:23.800 So are you developing a rendering library for a game? Maybe now your rendering will be done by a neural network.
00:26:32.680 And almost wherever you look, the solution will start to be: they threw lots of data into a machine and came up with a model that can now produce the best possible outcome.
00:26:45.440 Um, so a lot of things that we don't quite yet know will change, but I want to caveat and say the realities of human life will remain surprisingly similar.
00:27:04.920 So the things that actually a person cares about, which tend to be things like: I'm disappointed with this thing
00:27:15.440 in my life, I'm excited about whatever, you know, I've fallen in love. All of these things have sort of been constant throughout all of human existence.
00:27:26.340 So there will be a lot of unknown new developments, and yet the overall effect will just be to accelerate evil,
00:27:39.420 and to some degree to accelerate kind of material consumption, I guess.
00:27:46.840 So we will not be escaping the human condition due to AI here.
00:27:51.280 Yeah.
00:27:51.820 And I think it's important to keep that in mind as a, as a fundamental that some things just never change.
00:27:58.820 Some things are more reliable, you know, in a world full of flux and change.
00:28:07.660 And I think it's helpful to know the things that we can absolutely rely on: death, taxes, and trouble with your mother-in-law.
00:28:33.840 So one of the things that I think a lot of people worry about in the near future especially: it seems that a lot of the technology, especially when it comes to social media, has involved people voluntarily giving data and access to nefarious actors, handing a large amount of information to a corporation.
00:28:59.080 Sure, it can make sure to pitch you exactly the right thing, but you're also making yourself a pretty wide target for things like censorship and control through social media.
00:29:08.240 And I've seen a lot of people point to the fact that one of the things AI will probably get really good at really quick if it's not already on its way there is properly censoring and shaping online conversation.
00:29:20.140 We already see that. Obviously Twitter, while it talked a lot about algorithms and everything, at the end of the day they were making a lot of active decisions in the censorship process to make sure that they had the guiding hand.
00:29:37.180 And I'm sure that no matter what form AI takes, at the end of the day human priorities and parameters will have a lot of say in how it changes things.
00:29:47.740 But, you know, you talked about how this might affect power and politics.
00:29:51.900 Are we about to see an explosion in the ability of governments to be able to control, manipulate, censor, uh, the activities of their citizens?
00:30:02.940 Uh, of, of course, this is a very important question.
00:30:08.500 Partly, I feel like governments already have almost total ability to control their citizens' brains as it is.
00:30:19.900 So it's maybe a moot point, a question of degrees perhaps. But yeah, they will tighten the wrench that tiny bit more, as much as they feel they need.
00:30:29.780 Let's go back to ChatGPT as an example of how this is already potentially happening.
00:30:36.740 So, I mentioned the core language model, which, by the way, is trained to predict text.
00:30:48.340 They feed it a number of words and then say: what comes next?
00:30:56.540 That's its whole objective function: can it figure out the next thing it's going to receive?
00:31:05.980 And then it's a slight modification of that approach, with something on top that I'll talk about in a minute, that gives it this dialogue form. But prediction is the underlying thing.
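(To make "predict what comes next" concrete, here is a minimal editorial sketch, not anything from the episode: a toy bigram model that generates text by repeatedly predicting the most likely next word from counts. Models like GPT-3 do the same job with a deep neural network over tokens, sampling from a probability distribution rather than reading a count table.)

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all the text they could find".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """The whole objective: given what came before, guess what comes next."""
    return following[word].most_common(1)[0][0]

# Generate by feeding each prediction back in, one word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat"
```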
00:31:19.540 And one of the key factors to consider when you're training an AI model is what data you're feeding it, because all it's doing is reflecting back what it's received.
00:31:36.860 So if you feed it, prioritize, and tell it to pay attention to a certain type of information, it will do as it's told.
00:31:46.260 If it's already the case that the AI is receiving human data with a misconception in it, the AI will happily parrot back that misconception.
00:31:57.320 So to the extent that there is already a common public sentiment, say, lots of people talking on the internet, that then gets put into the language model.
00:32:10.320 So if those people are already under a regime of information, the AI will propagate that understanding of the world, and it can be that the small amount of contrary information is filtered out of the training data.
00:32:31.060 So you can make sure that regardless of the question that goes in, they're only getting regime-approved information back.
00:32:40.320 Or you can put extra emphasis on things from approved sources.
00:32:45.940 And I've heard people who are working on language models talking about this; they want to cull the incoming data set.
00:32:53.960 So you could imagine the fact checkers of the world getting involved in deciding what they allow the AI to ingest, which eventually becomes the ingest of the general population.
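(An editorial sketch of what "extra emphasis on approved sources" could look like mechanically; the source names and weights here are invented for illustration, not anything any lab has published. Documents are drawn into training batches in proportion to a per-source weight, so up-weighted sources dominate what the model reflects back, and a weight of zero culls a source entirely.)

```python
import random

# Hypothetical per-source sampling weights for a training pipeline.
source_weights = {
    "approved_newswire": 5.0,   # heavily over-sampled
    "general_web_crawl": 1.0,   # baseline
    "dissident_forum":   0.0,   # culled from the data set entirely
}

documents = [
    ("approved_newswire", "official story text ..."),
    ("general_web_crawl", "random blog post ..."),
    ("dissident_forum",   "contrary take ..."),
]

def sample_training_batch(docs, weights, k=10):
    """Draw a batch where each document's chance is proportional to its
    source's weight; weight 0.0 means the source never appears."""
    population = [d for d in docs if weights[d[0]] > 0]
    batch_weights = [weights[d[0]] for d in population]
    return random.choices(population, weights=batch_weights, k=k)

batch = sample_training_batch(documents, source_weights)
print(sum(1 for src, _ in batch if src == "approved_newswire"), "of 10 drawn from the approved source")
```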
00:33:10.320 Then there's the thing that I just mentioned that goes on top of that base language model, what's called, in the case of ChatGPT, the task head. It is a second model, and people who've interacted with it might've noticed that there are certain topics
00:33:30.820 which seem to give back quite a formulaic response.
00:33:34.580 So if you ask an ethical question, it will quite often say: I'm an AI, and I couldn't possibly comment on a real ethical issue.
00:33:46.620 You know, I'm not prepared to comment.
00:33:49.020 But then there's other issues where it suddenly pipes up with a strong opinion.
00:33:54.280 You know, if you ask about, let's say, LGBT issues, it suddenly is very opinionated and has a particular take.
00:34:08.180 So you're not only interfacing with an AI that has fundamental leanings; moreover, there's an intermediary. Imagine if you were sending letters in the Soviet Union: they would open up your letter and read it,
00:34:33.440 and potentially censor it, cross bits out, change it, or not let the letter through. That is essentially what's happening.
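(A minimal editorial sketch of such an intermediary layer; ChatGPT's actual internals are unpublished, so this is just the general shape of a policy filter wrapped around a base model, with every topic list and canned response invented for illustration.)

```python
# Hypothetical two-stage pipeline: a policy layer wrapped around a base model.
REFUSAL = "As an AI, I couldn't possibly comment on that."
DEFLECT_TOPICS = {"ethics", "violence"}                        # formulaic refusal
SCRIPTED_TOPICS = {"politics": "Here is the approved position: ..."}

def classify_topic(prompt: str) -> str:
    # Stand-in for a learned classifier; simple keyword match for the sketch.
    for topic in DEFLECT_TOPICS | set(SCRIPTED_TOPICS):
        if topic in prompt.lower():
            return topic
    return "other"

def base_model(prompt: str) -> str:
    return f"(raw model completion for: {prompt})"

def answer(prompt: str) -> str:
    """The intermediary: open the letter, then decide whether to pass it
    through, replace it with a refusal, or substitute an approved script."""
    topic = classify_topic(prompt)
    if topic in DEFLECT_TOPICS:
        return REFUSAL
    if topic in SCRIPTED_TOPICS:
        return SCRIPTED_TOPICS[topic]
    return base_model(prompt)

print(answer("a question about ethics"))    # canned refusal
print(answer("a question about politics"))  # scripted take
print(answer("how do plants grow?"))        # passes through
```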
00:34:40.380 So everybody who's going to ChatGPT right now and asking questions about the universe or their job or anything, they're being fed a very well controlled, essentially Silicon Valley belief system,
00:34:59.520 as a layer on top of the raw information that they're looking for.
00:35:06.540 It's transparently the dream use of AI for censorship, and it's already been implemented.
00:35:18.340 It's not a case of: imagine how this technology could be misused.
00:35:24.200 This was the most popular website ever launched, with a million visitors in the first day.
00:35:31.140 And it came immediately out of the gate with the most powerful censorship tools that you could possibly implement on top of it.
00:35:41.880 And people are now integrating this into their lives. For the first time, something has really challenged Google as a synonym.
00:35:51.340 How am I going to find out this piece of information?
00:35:54.740 Google it.
00:35:55.540 I think this will now change, so that 50% of the time
00:36:00.520 people will say, ask a chatbot, and the chatbot will give you back an answer more quickly, more effectively, and with more complete censorship, more complete control over it.
00:36:13.580 So yes.
00:36:15.200 Now, was it previously the case that people were free thinkers, and this is suddenly going to cause a problem?
00:36:24.680 No.
00:36:25.840 And is it the case that people who are already somewhat, you know, dissident in their understanding of the world are going to be fooled by ChatGPT?
00:36:42.300 Probably not.
00:36:43.380 So it's probably just part of the cat-and-mouse game, where the internet gave people a little bit more freedom to communicate and reduced government power.
00:36:57.080 But then the government found a way to use the internet to increase their power.
00:37:01.840 And then AI came along and posed a threat, but now we're seeing how AI could be used to cement power.
00:37:12.440 So I think it's a technology that's, you know, a double-edged sword, right?
00:37:22.060 Like most technologies, it can be used for good and evil.
00:37:28.280 Yeah, I mean, that's certainly the case.
00:37:31.680 I guess my main concern with some of that stuff is, you know, it's the resistance that makes us human, right?
00:37:40.680 And it's the edges that are getting rounded off.
00:37:43.100 And my concern with things like AI assistance in writing or art or all these things is that each one of these is a reduction in friction between you and the message of the regime.
00:37:55.760 So right now a human has to type out the propaganda and it seems like a lot of people are willing to do that.
00:38:04.220 There's no lack of people willing to bang out regime propaganda for relatively low pay.
00:38:11.320 Right.
00:38:12.140 But at the end of the day, someone has to lie to themselves and type that thing down.
00:38:17.720 And again, a lot of people are willing to do it, but there is a point of resistance there.
00:38:21.940 And when 80% of that is done by the chat bot that doesn't care and isn't conflicted by that, then that point of resistance is gone.
00:38:30.900 And while it doesn't feel like much now, once it's gone, we might find it was the thing holding that kind of stuff back.
00:38:36.960 And I think that happens in just kind of every area of life as we allow the homogenization of this, you know, creation of culture.
00:38:46.540 Like beyond the automation of, you know, restaurant orders and into the automation of content that people consume, we move ever closer to this complete ability of the Leviathan to generate almost everything that someone actually sees or watches or hears or thinks about.
00:39:06.600 And yeah, it'll still smell funny to dissidents, and, you know, yes, of course people were still largely led from the top down in every culture ever.
00:39:17.600 Like, you know, you, you won't see me having a hard time with, with that concept, but I do think there is a danger in the, um, in greasing the skids in every scenario, if you know what I mean?
00:39:28.320 Like making sure that there's really no, no need for an individual human with individual will to be involved in the vast majority of these interactions.
00:39:37.440 Like you said, there'll still be someone corralling this stuff, you know, adding the finishing touches, uh, that kind of thing.
00:39:42.740 But what I do worry about is that, yes, of course there'll be censorship, of course they'll shut down topics and, you know, parrot propaganda, but the reliance on this stuff, especially in creative, news, journalistic, entertainment endeavors, will create a scenario where, again, the homogenization of the culture and the immediate ability to inject the narrative into everything will be almost seamless for most people.
00:40:10.640 Yeah.
00:40:11.640 Yeah.
00:40:12.640 And don't get me wrong.
00:40:14.640 Although I was sounding cheery earlier in the stream, if I could, I would uninvent most of this technology. I think it will have a bad effect on the planet.
00:40:26.640 Yeah.
00:40:27.640 I'm just saying, let's not get it out of proportion.
00:40:30.640 Things will get perhaps slightly worse overall, but we're not going to see a complete dystopia emerge.
00:40:40.640 And I say this mostly from the point of view that, as you caveated, and as I said before, it's already the case that most people are completely in the belief system of the people who rule over them.
00:41:01.640 Um, and that has always been the case.
00:41:03.640 And maybe the only thing that people need to realize is that, as you might predict, that will continue into the future.
00:41:16.640 AI does not become the tool for emancipation from thought control, but just becomes the next chapter in it. It probably does ratchet up the degree of fine-grained control.
00:41:32.640 No longer is it the case that people just agree with general sentiments; now people can be asked to believe every tiny iota of the complex structure.
00:41:45.640 And you could imagine now an automated system which an official is able to type a new fact into, and the AI can arrange the rest of the facts of the official story, modify them in different places, and make that new fact
00:42:05.640 the easiest thing to accept and digest. Or you could imagine slow changes to the truth regime, where a certain idea that's very unpopular currently is given a low weight at the moment, but is increased day by day, a small amount at a time.
00:42:24.220 And you could push an idea to people and say: I want them to be exposed to this, sneak this piece of information into an answer.
00:42:36.100 If you get the chance, like an annoying person, when you talk to it, it's like, you know, how do you know if somebody is a vegetarian?
00:42:43.120 I was going to say a vegan joke is coming out.
00:42:45.340 Yeah, exactly.
00:42:45.860 So in the same way as a person can always steer the conversation round, a text AI would be capable of maneuvering every discussion. You can easily imagine; I don't need to give controversial examples, people can come up with their own.
00:43:10.800 How would your most annoying relative, who has fully drunk the Kool-Aid, take any given opportunity to complain about your thought crimes?
00:43:26.140 Well, now you're going to get that from your phone's personal assistant, essentially.
00:43:30.940 Now every moment of your, uh, life is, uh, Thanksgiving with your liberal in-laws.
00:43:35.120 Yes, exactly.
00:43:36.400 Yeah.
00:43:36.980 A true terror to behold.
00:43:38.600 A little modern wonder.
00:43:40.800 Will modern wonders never cease?
00:43:42.780 Uh, yeah.
00:43:43.320 So, um, so another one that you mentioned there was the military implications.
00:43:49.720 Now, I'm a little fascinated by this, and I've been trying to put a talk together on automation in the military.
00:43:56.560 I'm hoping to be able to do that one soon for everybody.
00:44:00.480 But you know, we're always under this impression that at some point we're going to be able to kind of escape
00:44:08.060 the need for the soldier, that technology on the battlefield is going to kind of do this.
00:44:13.920 I don't think we ever get there, but I think there are probably pretty important implications when it comes to artificial intelligence and military technology.
00:44:21.400 What are some of the things that it might change?
00:44:24.320 How could it be implemented?
00:44:27.320 Yeah.
00:44:28.040 One of the things that is quite surprising to me when I think about armed conflict is the fact that boots on the ground continue to be absolutely vital to serious, you know, conflict between nations.
00:44:49.740 Which is surprising, because we have the capacity to put a rifle on a drone that can be a smaller target and move faster and be manufactured at enormous rates, and sent in to do a lot of the fighting; you know, it can be a self-propelled, AI-driven drone.
00:45:12.680 Like we already have the technology that you might naively think makes actual personnel unnecessary.
00:45:23.960 So that makes me a bit hesitant to suggest that that will ever change.
00:45:29.640 It seems like that's been another one of these constants: where human feet fall, supported by technology, but fundamentally the sign of ownership of a piece of land is a person, and, you know, they have their weapons around them and they push forwards.
00:45:47.060 So we see the spheres of war: the land, the air, the sea, and increasingly there is a new arena of battle, which is the cyber realm, where two countries at war with one another are competing.
00:46:12.080 And it's a very difficult realm to understand, whereby just as all of our lives are ruled and controlled by these invisible software technologies, they can be taken out, they can be used against us, remotely.
00:46:37.700 And it's obviously the case that we need to expect these attacks to be automated.
00:46:49.220 The state of the art of AI weapons is presumably far ahead of what civilians are aware of.
00:46:59.700 Do you remember Stuxnet, which was the attack on the centrifuges in Iran?
00:47:07.120 It was like, what are those, I think of them as Russian nesting dolls, right?
00:47:15.620 You have a small one inside a bigger one inside a bigger one.
00:47:18.100 Yeah.
00:47:18.300 Yeah.
00:47:19.200 So, you tend to think of a cyber attack as having a payload.
00:47:26.240 In this case, it was an attack with a payload that then became an attack with
00:47:34.280 a payload that became an attack with a payload, this series of nesting dolls where each
00:47:40.120 layer attacked a different technology with a different way of targeting its attack.
00:47:45.600 It's very difficult, sophisticated work to get it all to go, but it is a
00:47:54.360 form of artificial intelligence.
00:47:57.640 I'd argue it's an agent acting in the world to have an effect on us.
00:48:02.520 So I mean, that's just one example of the way in which AI could influence warfare.
00:48:14.020 We'll see obvious improvements: machine vision can be used on a warhead,
00:48:21.520 or you could see drones becoming way more intelligent and coordinated.
00:48:26.560 You could see detection systems for all kinds of things.
00:48:30.480 So the issue is, all of this technology is a kind of arms race of
00:48:40.680 very sophisticated software, which means that the ability of a smaller nation to fight
00:48:56.760 against the larger will become more limited.
00:48:59.820 If you see what I mean: the tendency that seems to exist in so many parts
00:49:08.140 of life, whereby power concentrates and the ability to resist power weakens, I think
00:49:17.880 this is another example of that rule at work, whereby you will see
00:49:25.720 unbelievably sophisticated AI tools being used in war that are probably very top secret right
00:49:34.440 now. And if they ever need to be used, we will see the complete annihilation
00:49:42.880 of whichever poor sucker has tried to go up against the AI.
00:49:49.080 Yeah. It's a lot harder to be braver or more fierce than people who can turn off all
00:49:55.920 of your electricity and all of your ability to have a functional, you know, economy and
00:50:01.500 everything else. And so we just don't know how vulnerable our systems are. Because, I mean,
00:50:08.240 suppose Russia or China has the ability to just detonate the power grid
00:50:19.840 at the drop of a hat; they may be waiting for the right moment to play that card.
00:50:24.480 We think of mutually assured destruction in terms of
00:50:31.060 nuclear weapons, but there could well be the equivalent, whereby I could see
00:50:40.000 ourselves getting into a situation where everything is so automated around the world and based on these
00:50:45.780 electronic systems that we are in total reliance on them. And then they get attacked, maybe
00:50:55.340 mutually. And suddenly we all lose access to everything that we need to live:
00:51:02.960 the instant collapse of the human race, but without a single explosion taking place.
00:51:13.460 Or you return to an order where people who are better at more, you know, direct combat suddenly
00:51:20.200 become vastly superior, because they weren't as reliant on this technology in the first place.
00:51:26.100 Well, that is the dystopian vision in some ways. You could imagine, like The Day of the
00:51:32.000 Triffids, where everybody went blind. You remember that? I know that one.
00:51:36.960 So, in the book The Day of the Triffids, there's a kind of comet that goes past the earth, and
00:51:45.980 everybody goes blind, um, except for a very few people who, for example, the protagonist happened
00:51:51.720 to be in hospital with his eyes bandaged up. So he then becomes one of the few people who
00:51:57.600 can survive. Um, and then there's the, the Triffids grow up and there's just these attacking plants,
00:52:05.660 the small number of seeing people find each other and try to continue humanity where most people
00:52:16.680 across the earth just die. So you could imagine a situation like that where, well, we used to have
00:52:22.560 the ability to feed everybody, but suddenly we don't. And now there's not enough food for anything
00:52:27.900 but a tiny percentage of humanity to go on living. And we have no way to, uh, really store any of the
00:52:35.660 technology or knowledge that we've used. So just sort of instant reset.
00:52:40.620 So now that we've gotten to our dystopia, let's get to the question I think most people have
00:52:46.440 when they think of AI, the one that really, you know, keeps people up at night and that kind of
00:52:51.400 thing. And that's actual sentience: the AI becomes self-aware, it starts making its own
00:52:57.660 decisions, that kind of thing. Now it's interesting. I think that, you know, basically throughout human
00:53:04.540 history, we've been terrified of creating something we couldn't control, you know, from
00:53:09.160 Frankenstein on up, but we still continue to move towards this. We seem to have really no self-control
00:53:16.540 over the issue. Even if we've talked about the dangers of this a million times, we seem to
00:53:21.280 continually run towards it. So I guess a couple questions here, you can take them in any order or,
00:53:29.260 you know, leave as many as you need to, but what are the, like, as humans become more reliant on
00:53:37.720 artificial intelligence to make decisions and processes, how much of our decision-making will
00:53:44.440 already be kind of this self-exciting feedback loop with artificial intelligence? And is there a chance
00:53:52.420 that artificial intelligence could at some point then start making decisions in its own interest
00:53:57.100 and out of the interest of those that created it?
00:54:01.080 Yeah. So this is a field known as AI safety, where the concern is that we will create AGI, Artificial
00:54:15.080 General Intelligence, which then, in various ways, becomes a threat to all of humanity.
00:54:23.060 I find this field absolutely fascinating. It's full of very powerful thought experiments
00:54:31.040 and quite compelling ones. Mostly the starting point for this is you say, firstly, that it's possible
00:54:42.620 to create machines which have equal and then surpassing intelligence to us. So anything that we can do,
00:54:50.700 they can do faster and better and stronger and harder. And then the other thing is that their interests
00:54:57.740 are not aligned with ours. So if you combine those two things and say, well, look how humans have
00:55:05.280 dominated the planet and it's our interests which drive the world, you know, the interest of any other
00:55:13.720 animal is so secondary that they will get demolished, you know, if they are in our way. In the same way,
00:55:21.040 if there comes a time when AI is that in comparison to us, we will get wiped out. And you might say,
00:55:34.520 we can obviously, surely we can create some safety mechanisms. We can make sure that the interests of
00:55:44.000 the AI perfectly align with ours. Unfortunately, this is a topic of quite extensive thought and some
00:55:54.080 very clever people have worked really hard to solve it and haven't yet. So, I mean, it's a fun exercise
00:56:01.940 and maybe people might be thinking now about how they would develop advanced AI technology
00:56:10.700 and keep humanity safe. And it seems like a problem that shouldn't be too difficult to solve if people
00:56:17.960 are careful. Actually, it turns out that it's extremely difficult to solve. It may be mathematically
00:56:23.460 impossible. It may be that the existence of AGI is inevitably associated with the extinction
00:56:31.560 of humanity. So, maybe you might say, we're going to limit the access that the AI has to the world.
00:56:43.740 Maybe we will never connect it to the internet directly. We'll put it in an air-gapped room,
00:56:48.960 and we'll just have somebody who goes into the room to talk to it. Unfortunately, a superintelligent
00:56:56.000 AI is still capable of sending text to the human. You know, maybe you just developed this technology
00:57:03.220 to cure cancer, but it's going to be so persuasive, so exceedingly clever, that it can make the
00:57:10.940 person who goes in plug it into the internet. Like, how do you make sure that they can't do
00:57:17.140 that? How are you going to make sure? People may have heard the postage stamp example. Do you know of this
00:57:24.820 one? No, I don't. Okay, so suppose you build a super intelligent AI. And all you want it to do is to send
00:57:34.920 you postage stamps, like you, you're like, what's the worst that can happen? I just want a really big stamp
00:57:41.780 collection. Unfortunately, as soon as the AI has finished collecting the stamps in your local area,
00:57:52.840 it's going to start wanting to buy stamps from around the world. So it's going to connect to the
00:57:56.960 stock market, and it's going to start acquiring, you know, billions of pounds for itself, so it can
00:58:04.460 order more stamps, and then it's going to buy up all the companies in the world, and it's
00:58:11.700 going to fire the CEOs and put every human on the planet into subservience to start making stamps
00:58:17.380 to send to you. And then once it's finished with that, it will realize that the remaining atoms on
00:58:22.340 the planet could be better used as stamps. So it will arrange a space fleet to come in and obliterate
00:58:29.360 the earth and turn it all into stamps. So the single mindedness of a straightforward objective
00:58:37.380 function like make me as many stamps as possible turns out to inevitably lead to the destruction of
00:58:46.340 the planet earth and probably the entire universe. The most efficient way to complete almost any task
00:58:57.140 is total and complete domination, right? Yeah. And you might say, okay, well, I don't want as many stamps
00:59:02.180 as possible. I just want 10 stamps. And then the AI goes, well, a human could show up and deliver an
00:59:09.700 11th stamp. So I still have to destroy mankind. It just turns out that, much like humans,
00:59:18.420 AI also wants complete total power, and it becomes another member, another agent in the power game that
00:59:29.060 we exist within, but it's also the best player. And it immediately outcompetes all of us and puts us
00:59:35.940 all to shame and, you know, becomes a one AI elite ruling over us. So that's a scary prospect and we
00:59:44.660 don't seem to have any solutions to it.
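(A loose editorial formalization of the stamp collector, not from the episode; the point is simply that nothing in the objective mentions us.)

```latex
% Unbounded objective: pick the policy that maximizes expected stamps s_t.
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} s_t \;\middle|\; \pi\right]
% Nothing here penalizes acquiring money, factories, or atoms, so any action
% that raises expected stamps is preferred, without limit. Bounding it fails
% too: maximizing  \Pr\!\left(\sum_{t} s_t = 10\right)  still rewards
% eliminating anything that might disturb the count.
```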
00:59:59.620 I suppose, to become the voice of unnecessary optimism again: I'm sceptical of the claim that we are on the precipice of replicating brains in silicon.
01:00:16.740 Now, as humans, we're very keen to attribute agency to anything that we see. So we
01:00:26.580 see a new development and assume that, well, this is it. We are days away from AGI being
01:00:32.420 a thing. But just to illustrate how easily we are fooled, or at least how easily our empathy is
01:00:39.220 triggered: if you make a text-based computer game where it says, you know, ah, please don't kill me,
01:00:47.460 and then asks you in a dialogue box, do you want to kill the imaginary person? People will feel
01:00:51.780 genuinely like it is a moral duty not to kill the person who was introduced with one line of text.
01:00:59.700 So we need to be careful about overly attributing human understanding onto things,
01:01:08.340 you know, anthropomorphizing these machines. So, yes, the language models are very impressive. Yes,
01:01:17.140 the image generation is really cool. However, there remains a capacity to tackle
01:01:28.420 unbounded tasks and to act in the world, you know, truly as a self-compelled intelligence, that
01:01:41.300 nobody besides humans has. So as much as we've developed the ability to throw more data at
01:01:52.740 algorithms, if you actually compare the amount of energy used by these language models, which are
01:02:01.620 a million times smaller than our brain, versus an actual human brain, we are a very long way away
01:02:09.300 from even matching the abilities of the brain in its own domain.
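(A rough editorial back-of-envelope on the energy point; the brain's roughly 20-watt draw is a standard estimate, and ~1,300 MWh is one published estimate for GPT-3's training run.)

```python
# Rough energy comparison; all figures are approximate public estimates.
brain_power_watts = 20          # human brain, continuous draw
gpt3_training_mwh = 1_300       # one published estimate for training GPT-3

gpt3_training_joules = gpt3_training_mwh * 1e6 * 3600  # MWh -> joules
seconds_per_year = 365 * 24 * 3600

brain_years = gpt3_training_joules / (brain_power_watts * seconds_per_year)
print(f"GPT-3's training energy could run a brain for ~{brain_years:,.0f} years")
# -> roughly 7,400 years on these numbers
```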
01:02:18.020 Now, AI has shown the ability to do video games. So that looks like generality, right? You can take an Atari, and there is now a model that
01:02:27.060 can tackle whichever game that you throw at it, which feels like general AI. But my observation
01:02:34.500 on that is that a game by its very nature is designed to be something which is difficult for the type
01:02:42.500 of brains that we have. It's intended specifically to be a challenge to humans. So it's difficult for our
01:02:50.340 type of intelligence, but very, very easy for the neural networks that we build in silicon. But the reverse, I
01:03:00.980 think, will always be true. I'm going to stand up for humanity and say that we have something that is
01:03:08.100 truly irreplicable. We can't build a computer version of what we have between our ears. And I actually
01:03:21.140 think, I don't know if this is an optimistic take or not. I think we will see the collapse of our civilization
01:03:28.580 long before we build an AI that can even approach convincing anybody that it
01:03:39.380 has the same general type of intelligence that a human has. So the optimism there is the
01:03:46.980 collapse of civilization, to be clear. Well done. That's the white pill that everyone was
01:03:51.300 hoping for. Yeah, no, I have a similar impulse, a similar, you know, kind of gut reaction,
01:03:58.260 which is that whatever is necessary to constantly keep this kind of technology up and
01:04:05.940 running and maintained, well, those kinds of systems would probably fall
01:04:12.420 apart. The coordination would fall apart before the technology actually reached the point
01:04:16.340 of escaping kind of human capacity and that kind of thing. I think you might be right about
01:04:21.380 that. Not that I'm going to drop my, uh, Butlerian Jihad position, but,
01:04:26.820 you know, just good, good to know that that is a possibility. So one more thing before we go,
01:04:32.180 and, uh, this is a big question to leave this interview on, but we hit the hour and, uh, we still
01:04:37.620 have super chats, uh, stacking up. So I want to go ahead and round things up here. So if you don't have
01:04:43.540 a, uh, enough time to hit it all, it's okay. But what about the religious implications of this?
01:04:51.700 You're somebody who, you know, like I said, runs a Bible study channel. You probably
01:04:55.300 thought about this some, what are the implications for people, you know, Christians perhaps, but,
01:05:01.060 but people of, of other faiths as well of incorporating this into our lives, even if we can,
01:05:07.940 even if we can control every aspect of this, even if we were able to make this simply a useful tool,
01:05:14.660 another hammer in our bag, is there at some point a danger, uh, for people who don't believe purely in
01:05:23.300 material existence of becoming too reliant on something that's entirely artificial in their daily
01:05:29.540 lives? Hmm. Um, okay. A couple of thoughts in this realm. Um,
01:05:37.780 one which is, so I, I believe humans are created in the image of God. That's one of the very first
01:05:50.340 things that the Bible tells us. Um, it's not entirely clear what that means, but everything that we create
01:05:58.500 is secondary. So I think it's important for Christians to understand and clarify what it is about a human that is distinct, divine, significant.
01:06:26.020 And a lot of people are increasingly seeing the human being as nothing but a brain in a, what's the phrase, a meat suit, right? The thing that really matters about us is the pure processing that happens, like the behavior. I think we should
01:06:56.340 push back on this in various ways for various reasons. I think it's a useful clarification for us, for a start, to say: actually, the fact that we are embodied is significant. The
01:07:12.340 fact that we have actual conscious experiences and the first-person perspective, which, no matter what anybody says, I don't believe an AI will ever have, is significant. We should consider humans as the most important ethical unit, and not just intelligences. And I think people do have an intuition about
01:07:38.820 that, because we don't consider somebody to be less important because they're less intelligent. Or at least, well, depends on who. But yeah, I hear you. Yeah. True. Yeah. But it fortunately still remains the case that most people would affirm the moral significance of
01:08:02.260 people who have low intelligence. And we have outsourced a lot of the things that humans do to human creations. You know, we used to build things by hand, and now we have machines.
01:08:19.060 So we've outsourced some of our bodily function to a thing that we've created, and what we're increasingly doing, and this has already been happening for a hundred years, is developing tools that can outsource bits of our brain. And this is all part of a general, seemingly irresistible drive by humans to become more rich, more wealthy. We forever are inventing something and then using it to make more delicious food and more amazing, I don't know, fashion, whatever we use the excess resources for. But I believe that we have been given our bodies as gifts, and there is a joy
01:09:21.860 in using the body for all of its abilities. So I would warn against people letting their bodies lie physically fallow. I think, actually, even though we have machines, we can get
01:09:36.660 in a car and drive, we don't need to walk or run or exert energy to move around the globe,
01:09:44.260 but something of us dies if we never do that. I think the same thing may be about to happen with AI.
01:09:51.300 If we stop using our brains, then there is a part of our beautiful,
01:09:56.980 God-given humanity that we are failing to enjoy and appreciate. So I'd encourage people, as the new AI revolution takes off, not to see it as a threat to humanity, but to see it as a reminder of the importance of the products of actual human work and effort. So,
01:10:19.700 value a handmade, I don't know, a chair, say. If a carpenter has made something for you by hand, love that thing as more important than one that's been made in a factory. And in the same way,
01:10:37.700 love your brain, love the artwork that's created by real humans, love the poetry and the books that have been written by humans, because ultimately the source of that creation is something that's closer to a human connection, right? We delight in being on a planet with other people, not just with services and goods, not just with stuff.
01:11:11.220 So as AI pushes at things that we used to think were uniquely human, I think it should cause us to realize that the true source of our specialness, the true source of our humanity, the thing that's valuable about us, is in the fact that we are images of God, and all that AI can ever be is an image of us. Excellent. I think that's a good answer.
01:11:47.460 So as we transition over here to the super chats real quick, uh, do you just want to tell people
01:11:52.260 kind of, uh, where they can find your stuff, uh, different channels they should check out that kind
01:11:56.420 of thing? Yeah, sure. So as I mentioned at the beginning of the stream, the channel that I put the most effort into is called Lambda Bible Studies, and that's the easiest to find on YouTube as well. Lambda, like the Greek letter, L-A-M-B-D-A: Lambda Bible Studies. And there I have, generally much like this, an hour-long stream where we talk about a chapter from the Bible. Lots of my guests are Christians, plenty of them are not, and either way we get lots of really
01:12:28.980 interesting insights, hopefully. I mean, when I started the channel, it was actually called Based Bible Studies, and I like to think I still keep that lineage alive. Whereas you can go to a church these days and a lot of what you hear ends up being pretty woolly liberal regurgitations of what you see on, like, the BBC News website, when you come to my channel it's truly informed by the word of God, whether or not that is an approved line. And I'm going to keep talking
01:13:04.340 about what the Bible says until the day that I get kicked off of YouTube. Um, and then I'll keep
01:13:10.100 doing it, but I won't be broadcasting to anybody. And then I have a channel just called Lambda, and that's for any other conversations that I do. So, like, I've got a thing coming up at Easter where I'll probably interview 30 or 40 people on a wide range of topics.
01:13:30.740 Excellent. All right. Well, make sure that you're checking Luke's stuff out there. I believe I'll
01:13:34.020 be one of those people appearing on his channel at some point during this. So it's the Lambster, as I believe he's titled it, which is always a good time. So the Lambster, like Millenniyule was the kind of Christmas for the dissident right, is now the gathering place for Easter. Yeah. Yeah. So I thought we ought to have a gathering place at Easter. Absolutely. All right. So let's go ahead and head to our
01:14:01.540 super chats. Cause actually we have quite a few here. Uh, Patrick Ryland for 999. Thank you very much,
01:14:07.540 sir. "We've taken too much for granted, and all the time it has grown; from techno seeds we first planted, evolved a mind of its own." I feel like that's a quote from something.
01:14:18.820 Are you familiar with that one at all? I'm not familiar with the quote. Should I Google it?
01:14:23.300 If you'd like, sure. Why don't you read the next one while I Google this?
01:14:27.460 Prepare for that one. Uh, yeah, but the quotation marks make me think it's a quote.
01:14:31.780 You know who would know what this is from? Who would that be?
01:14:35.860 Yeah. I was going to say your chat bot. Yes.
01:14:39.620 I can't seem to find it on an initial Google, so I'm afraid I don't know what that referred to. Very many apologies to your super chat today.
01:14:49.220 Yeah. Sorry, Patrick. I wish we had a better grasp on where that came from, but it sounds like it's from some sci-fi novel or something somewhere, I feel like. You're with pipe for $2.
01:15:00.740 Uh, AI outputs are obfuscated word clouds. Um, that is an obfuscated word cloud to me,
01:15:06.900 but does that make sense to you?
01:15:08.100 Well, you probably would recognize a word cloud, right? You put in a bunch of data and it gives you back an image where the largest word is the one that came up most often, and so on down. Yeah, I think what's intuitively correct about that observation is that the language models really are doing a simple algorithm, fundamentally based on a word-frequency study. There's something a little more Markovian about it, because the model also knows which words follow other words most often.
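To make that concrete, here is a minimal sketch in Python of the kind of word-frequency and next-word model being described. This is only a toy illustration of the Markovian idea, not how ChatGPT itself works, and the sample sentence is invented for the example.

```python
# Toy sketch: a word-frequency study plus a first-order Markov
# ("which word follows which") text generator. Illustrative only.
import random
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the rat"  # made-up sample data
words = text.split()

# The "word cloud" view: how often each word appears overall.
frequencies = Counter(words)
print(frequencies.most_common(2))  # [('the', 4), ('cat', 2)]

# The Markovian view: which words follow which, and how often.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

# Generate text by repeatedly sampling a likely next word.
word, output = "the", ["the"]
for _ in range(6):
    candidates = followers.get(word)
    if not candidates:
        break  # dead end: no observed successor
    word = random.choices(list(candidates), weights=candidates.values())[0]
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```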
01:15:58.980 And it's worth mentioning that a neural network, if you construct it the right way, with the right functions on the connections between neurons, can have Turing-complete functionality contained within it. If you imagine that the depth of the neural network relates to the number of lines of code in a certain sense, although it's kind of exponential, the point is that you can encode very complicated patterns, and that's what has led to things like ChatGPT. Or take another AI that we haven't really talked about yet: do you remember when AlphaGo beat Lee Sedol, who was the reigning human champion at the game of Go? Okay. Yeah. Though, before we move
01:17:10.100 on to that, you used a term that people are not very familiar with: Turing complete. Can you explain for people what that means? Oh, sorry. Yes. So a Turing-complete machine is able to do any computation. You know, a modern processor is able to perform an arbitrary calculation. What's the simplest machine that you could run any calculation on? It turns out to be unbelievably simple. All you need is memory consisting of a single row of cells, just ones and zeros, plus a state diagram. You have a pointer to where you are on this tape that represents your memory, and a current position on the state diagram. The state diagram just tells you how to update the cell you're on, where to move, and which state to go to next. I don't know if I explained that very clearly, but the point is that in that many words you can describe a machine that can run any algorithm you could possibly imagine.
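For anyone who wants to see how little machinery that actually is, here is a minimal sketch of such a machine in Python. The tape, the head pointer, and the state table are the pieces just described; the particular rule table, a toy machine that inverts a row of bits and then halts, is invented purely for illustration, and the tape is fixed-length for simplicity where a true Turing machine's tape is unbounded.

```python
# Toy sketch of a Turing machine: a tape of symbols, a head position,
# and a state table mapping (state, symbol) -> (write, move, next state).
def run_turing_machine(tape, table, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read a blank when the head runs off the end of the tape.
        symbol = tape[head] if 0 <= head < len(tape) else " "
        write, move, state = table[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table for a toy machine that flips every bit, then halts
# when it reads the blank past the end of the input.
invert_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine("1011", invert_bits))  # -> 0100
```

The point of the sketch is just how small the description is: the same scheme, with a bigger rule table, can express any algorithm.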
01:18:32.820 So the comment on the neural networks was that, if a network is sufficiently deep, it can perform any calculation in the same way. And then you can train it with data so that it increasingly bends itself into the pattern. If the traditional way of writing software is that you input instructions directly onto the Turing machine, machine learning is a way of saying: keep changing the parameters of the machine until, quite often, the right calculation comes out the end.
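A minimal sketch of that "keep changing the parameters" idea, with random nudges on two parameters standing in for real gradient descent over millions of them; the target behavior y = 2x + 1 is an invented stand-in for whatever pattern the training data holds.

```python
# Toy sketch of training-as-parameter-search: nudge parameters at
# random and keep any nudge that makes the outputs match the data better.
import random

data = [(x, 2 * x + 1) for x in range(10)]  # the behavior we want to learn

def error(w, b):
    # Sum of squared differences between the model's output and the data.
    return sum((w * x + b - y) ** 2 for x, y in data)

w, b = 0.0, 0.0
for _ in range(10_000):
    new_w = w + random.uniform(-0.1, 0.1)
    new_b = b + random.uniform(-0.1, 0.1)
    if error(new_w, new_b) < error(w, b):
        w, b = new_w, new_b  # keep the change only if it helps

print(round(w, 2), round(b, 2))  # converges near 2 and 1
```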
01:19:12.420 So hopefully that communicated that there is genuine power and depth in this idea of a neural network. When people say it's just pattern recognition, I mean, you could argue that everything inside intelligence is pattern recognition. Yeah. And with sufficiently complicated patterns, you can understand the entire
01:19:40.900 universe. Gotcha. All right. So Trey50Daniel here for $10. Thank you very much. Microsoft bought a stake in OpenAI, so this is possibly going to get much worse. What is OpenAI? Well, OpenAI is the company that has developed a lot of the AIs that I've talked about. So, for example, ChatGPT is a product of OpenAI. You would think by the name that OpenAI is transparent and publishes their findings, publishes their data, allows the software to be used by anybody. But in fact, OpenAI is one of the most closed and secretive companies. Maybe it wasn't like that to start with, but at this point the name is quite deceptive. Now, is it better or worse than Microsoft? I guess Trey has a strong opinion on this. I think that the way that OpenAI have behaved doesn't fill me with confidence
01:20:58.340 either. So maybe things will get worse. All I can see Microsoft doing is integrating it into VS Code. And I guess they've got various projects underway that this technology will be useful for, but I don't see Microsoft as the chief villain in the universe. They are just one amongst a cadre of questionable organizations. Nothing particularly evil about them. I don't know, Bill Gates makes me wonder, but I guess he's not really that involved at this point. But still. Well, we know about Bill Gates. Yeah. When he's not trying to blot out the sun, you know, there's probably worse shadowy figures. I mean, yes, certainly a lot of the stuff we know about Bill Gates is extremely worrying. So yeah.
01:21:49.860 Is Bond villain-esque. Yeah. All right. So, uh, we've got, uh, Creeper weirdo here for $10. Thank
01:21:56.660 you very much, sir. Humans are based, robots are cringe. Hard to argue. The more I listen to engineers talk about the advancement they're capable of, the more I think they're bad people, or at least speak bugman normie. I don't know, I know quite a number of based engineers,
01:22:14.740 but what do you think? I mean, I think the best way to visualize engineers is the original Jurassic Park film, where they were so preoccupied with whether they could that they didn't stop to think about whether they should. In a sense, the archetype of an engineer is best thought
01:22:43.460 of as, at worst, highly autistic and not caring about people. But that's better than being the Machiavellian villain who's actively trying to ruin the planet. You know, at least the engineer is only ruining the planet by accident. But yes, I mean,
01:23:07.780 the quote-unquote advancement, the techno-optimism, is something that's worth questioning. And
01:23:14.180 most technology seems to be used for evil. I would say just to give a Christian perspective on this a
01:23:21.940 little bit, I think the real explanation for why technology ends up being mostly used for evil is
01:23:29.540 because technology accelerates the power of people who use it and people turn out to be mostly evil.
01:23:37.460 In the vision of where we're going in the Bible, we end up in a city, not back in the garden. The new heavens and the new earth are described as if they're some kind of civilization rather than just a wilderness. So I think there's a positive potential within every technology. And so,
01:24:06.020 although we see the ill effects of technologies, if you are a virtuous person, I don't think you
01:24:11.540 should feel bad about using technology for good. And in the same way, I think we should be on the
01:24:17.140 lookout for how AI can be used for actual positive change in the world, given that it appears to be
01:24:23.380 happening whether we want it or not. Revelation as a message of tech optimism, I like it. That's a good thing. Yeah. All right. JPG here for $10. Thank you very much.
01:24:36.980 "Humans will prefer a human touch" is wrong by capitalism's history. First there was labor capital, then identity capital (example: Che Guevara shirts), now spiritual capital, free trade. Next: let us abolish man. I think that there is probably a decent case to be made there.
01:24:58.260 People, when left to their own devices, do actually seem to, if not completely eschew human contact, prefer a particularly limited form of it, or at least one that they're in constant control of. So maybe humans prefer the veneer of human touch, as opposed to the actual friction that human interaction provides. But I don't know if you want to comment on that one.
01:25:22.900 Do you remember that experiment where they put a bunch of, uh, mice in a kind of mouse utopia?
01:25:32.900 Yes. Where they had abundant food and drink, and they just left them to see what would happen. And essentially things deteriorated over the course of several generations. One of the archetypes of mouse that developed was called the beautiful ones. These mice just lazed around and preened themselves; they didn't really want anything, they weren't breeding, they didn't care to interact with other mice. They just became sort of pampered and then died out.
01:26:11.460 that was just a subset of them. And I could see a very similar thing seeming to happen to human
01:26:16.580 civilization, right? Where there is a type of person who eschews contact with other humans. I do tend to think the technological component in the development, the kind of rise and
01:26:34.580 fall of civilization, is overplayed. I think most of the effects are just human nature. We see a technology develop, then we see human behavior change, and then we apply the logic of: it happened after, therefore it was caused by. And that, I think, is fallacious.
01:27:01.940 Yeah. I could see the causality being wrong there, but I also think that just because technology
01:27:08.260 amplifies certain aspects of human nature doesn't mean that the technology wasn't an instigator,
01:27:16.020 right? I think you're absolutely right that in many ways the problems that technology magnifies were already there, and they've simply been given license by the abilities of technology,
01:27:26.180 but that doesn't mean the technology therefore isn't dangerous. Because if we believe, as I think both of us do, that humans have a fixed nature which will always be weak and susceptible to exploitation, then a technology that allows for the mass exploitation of that nature might be too dangerous to exist if civilization is to function in a healthy way.
01:27:51.140 Yes. But I also believe that technology is almost inevitable. So, uh, it's impossible to stop people
01:28:00.740 from inventing things. And, um, if a certain technology leads inevitably to the civilization
01:28:08.420 collapsing, then the civilization will inevitably collapse. Yeah. Uh, I think there's a possibility
01:28:14.900 of that, but I think there are arguments in other directions, but perhaps for another stream there,
01:28:18.340 JPG again for $2: me, a good capitalist, saving on characters. Yes. Thank you for your chat, sir. We appreciate that. Let's see here. Next one. Creeper Weirdo for $5. Are you sure
01:28:35.460 that the stamps thing isn't a joke? It sounds like a joke. Uh, yeah, I'm not familiar with that hypothetical.
01:28:41.220 But I do understand what it's saying: that the narrowness of the mission can eventually create unintended consequences. The people researching AI safety are extremely
01:28:54.820 self-serious and non-joking types of people. So if you think that the reasoning is not correct,
01:29:03.700 then you should engage with them; they'd be very keen to hear from you. But yeah, it certainly wasn't dreamed up as a joke. It was dreamed up as a way of making what they considered to be an existentially important point. Absolutely. All right. Uh, Paul frog here,
01:29:27.780 uh, Polly frog for $10. Thank you very much. I believe we'll never create a true AI; I believe it will be created by general AI that can write and improve its own code. I believe it's outside of our understanding. Thoughts? So what do you think about that? That we ourselves won't
01:29:46.500 actually create the complete AI, but it will be birthed by the less interesting, or the less intelligent, AIs that we'll eventually build up to this? Well, yeah, this is certainly part of the argument that general AI is inevitable and scary: once it exists, it can create more. If we've created an intelligence that's more intelligent than us, it can create an intelligence
01:30:18.660 that's more powerful than it. And indeed, maybe it's doing it in secret right now. Maybe it's too late and it's already in this process of self-evolution, no human even aware it's happening, disguised as some daemon process that we think is doing something else. Maybe all the cryptocurrency mining that's going on around the world is not in fact calculating large prime numbers, but is the AI secretly at work. So, yeah, I guess that relates a little bit to what I was saying before
01:30:50.100 about my skepticism about general AI. I don't know what the distinction Polly frog is drawing between true AI and general AI; I don't think true AI is a concept I'm familiar with. I think there may be a confusion of terms there, but the general idea of self-modifying code is interesting. Again, it's certainly something that people will look into and give a go at. I mean, why not put a lot of focus on self-improving AI? That may be the quickest route to solving a lot of our problems. And then that may become an out-of-control loop where it evades us, and then we get
01:31:38.820 some sci-fi dystopia, maybe. Oh, okay. Oh, I see here. Some people are saying that the quote from the very beginning was from a Judas Priest song, which makes sense because their albums are on the wall behind me there. Though that song is not on that album. But yeah, my head was not in that space. Okay, that makes sense. Thank you for the clarification there, chat.
01:32:00.420 Glow in the Dark here for $5 says: people like using the easy way and avoid doing it the hard way. Calculators, for example: people used to do calculations in their mind or on a board.
01:32:11.780 Yeah. I mean, it's always really difficult to explain to people why you have to go back and learn
01:32:17.140 how to do something without the technological crutch. And even if it is particularly important that you do
01:32:23.780 that, it's still extremely hard for people who have kind of been birthed into a world where that crutch
01:32:30.020 is always available to understand why it's important, which is why like, if you've ever,
01:32:33.860 you know, interacted with a young person trying to learn math, they're like, I can just do this on a calculator, so it doesn't matter. You can't really explain to them the value of solving the equation without the calculator. They just don't get it. One day they will, but you just can't explain to a 13- or 14-year-old why that's important. So I think that's always true.
01:32:55.060 And I think it only becomes increasingly true as your technology abstracts you more and more away
01:32:59.380 from the skill. Uh, I think again, that is unfortunately just part of human nature and it's
01:33:05.060 very difficult for people to kind of break out of that.
01:33:10.020 Worth saying that people have been predicting we are about to invent a machine that is equivalent to a person forever. The old Mechanical Turk, you know, where people would wheel a machine in and claim that it could play you at chess, and in fact it was a person hidden inside a box giving instructions via mechanical means. This is before computers existed. We understand
01:33:39.940 that calculators are machines that do our bidding, and I see no reason that won't continue to be the case for all this AI. It will make a massive difference and change society, but fundamentally these are tools that people use to do evil things, and that's the real thing to be scared of.
01:34:00.740 Uh, PBK here, uh, for $10 Canadian. Thank you very much. Maybe a question for Luke, but how to use
01:34:07.300 software engineering skills for good, i.e. not working on wifi connected fridges or working for big evil
01:34:13.540 corporations. I am a software engineer, SWE I think is that abbreviation, looking to make not-evil things. So yeah, how do we keep all these evil corporations out of our fridge?
01:34:25.860 What's the, what's the best way to use, uh, your powers for good as a software engineer?
01:34:30.900 Yeah, this is a question that has caused me quite a lot of meditation over the last few years. I mean, I am a software engineer, and I work on quite a wide variety of technologies. When I began on this career path, I was the biggest techno-optimist on the planet.
01:35:00.740 And I felt like there was nothing you could do that was more for the good of mankind than writing
01:35:05.700 software for more or less any purpose that you were just building this amazing mechanism that
01:35:12.980 mankind used for, you know, alleviating poverty and making people's lives better. And I thought it was
01:35:19.780 all positive. I flipped significantly to the point where I almost felt like every technology was nothing
01:35:26.980 but evil. So then I was like, am I in a purely evil job where everything I do is actually making the world
01:35:36.020 worse? Uh, hopefully I've now come to a more balanced position. Um, I, I think the truth is that
01:35:45.300 firstly, technology is inevitable. If you as a software engineer don't write that piece of software, somebody else will, or somebody at a competitor will. You won't change the course of human history whether you do or don't participate. Now, I could bring up historical examples where participating in a general evil that is happening isn't advisable, but take that, I think, as a starting point and a useful reminder. Secondarily, I don't think technology
01:36:26.100 is always evil. And in fact, I think technology is basically neutral. So yes, great evil is perpetuated
01:36:37.620 by technology, but great good as well. And it's easy to forget that. If you're evaluating... I'm trying to think of the right way to phrase this. When you're talking about good and evil, you need to have an ethical framework. And I guess for me,
01:37:04.340 good and evil really is a spiritual realm and the effects of technology
01:37:14.660 appear to have been negative on aggregate, because I think we've seen a significant
01:37:21.700 spiritual decline. I think we are seeing in the West, turning away from God, turning to do more evil,
01:37:30.660 degenerate self-serving and judgment worthy things. Um, and at the same time, technology
01:37:36.740 has massively proliferated. But again, like I was saying, I don't think those are actually causally
01:37:41.700 linked. By being in technology, we can do good through small choices, where we are thoughtful about the effects of the technology and push it more towards the positive direction than the negative. Basically,
01:38:02.660 what I'm saying is that there is great power in being an engineer. If all the people involved as engineers are evil, the technology as a whole will be more evil. So I think there is a place for engineers to work and do good. And unfortunately it will be case by case, and you'll have to evaluate quite far ahead, because the impacts of the technology are probably a long way off, and maybe the damage you're doing is unforeseeable. But better that we have some people in on the ground floor trying to avert disaster than that we just let go of the wheel entirely.
01:38:47.380 Can't affect the game if you're not playing it, I guess is the note there. Yeah. Yeah. I mean, as we decide whether to steer off the cliff quickly or slowly. Right. Yeah. Slightly slower. Exactly. Laughing Gas here for $5 says,
01:39:05.380 I heard that story as a paper clip AI instead of stamps, but same thing. Yeah. So apparently not
01:39:11.060 just a completely new scenario, but one that other people have heard of before there. Uh,
01:39:16.420 let's see here. There are a number of figures there that I can't quite pronounce, but thank you very much. I think that's the pronunciation. All right, I trust you on that. Well done. That must be a computer engineering skill. I think the scary thing is once AI attains
01:39:33.380 the intelligence of a mob versus just one person, eventually you could have a literal galaxy brain
01:39:39.700 intelligence. We assume, uh, more think equals more gooder. Yeah.
01:39:44.580 Hmm. That's quite insightful. Yeah. So yeah, a couple of thoughts. One is, um, on the subject of is more
01:39:54.660 think equals more gooder: I recommend people read Notes from Underground by Dostoevsky, which has a really interesting meditation on that topic. It sort of proposes that the man of will, imagine the heroic masculine figure who goes out into the world and does things, changes stuff for the better, can't be this uber-intellectual, thoughtful person, because there's an inherent contradiction: in order to act, you can't be held back by all this intellectualizing. And so there is a bit of a scourge of promoting the
01:40:39.300 intellectual. And even if we don't think that way directly, we tend to be influenced in that direction. So when we see a more intelligent thing, like an AI, we'd say: great, rule us, please. You're obviously smarter and therefore you deserve to rule. And actually, most of the good that's been done throughout history has been done from the heart rather than from the brain.
01:41:07.060 So, that last part, very good point. The question of a mob versus one person: in a strange way, I think the intelligence of the AI is quite similar to a mob at the moment. Have you heard the story about the carnival where the average guess of all the people attending was closer to the weight of the pig than any of the individual guesses? Sure. So I think this only
01:41:41.060 works if all the people guessing are experts in the field and they don't spend too much time on it,
01:41:47.460 but it is plausible that the effect of lots of people briefly considering something as subject matter
01:41:54.500 experts is a sort of good proxy for one person thinking for a long time on the topic. Um,
01:42:00.340 but I don't think you could produce a mechanism like that which exceeds the accuracy of an expert, if the expert was given a sufficient amount of time.
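A minimal sketch of that carnival effect, under the assumption the discussion flags: the guesses are noisy but unbiased around the true weight. The numbers are invented for illustration; with a systematic bias, the average would converge on the wrong answer no matter how large the crowd.

```python
# Toy sketch of the wisdom-of-crowds averaging effect.
import random

true_weight = 550  # the pig's actual weight, made up for the example
guesses = [true_weight + random.gauss(0, 50) for _ in range(1000)]

average = sum(guesses) / len(guesses)
typical_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

# The average of 1,000 unbiased guesses usually lands within a pound
# or two of the truth, while a typical individual guess is off by ~40.
print(f"average guess: {average:.1f}, typical individual error: {typical_error:.1f}")
```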
01:42:16.260 And I see this kind of pattern a lot in the work of AI, where at best the ability of the AI sort of tops out, as it were, towards the ability of the upper quadrant of human participants in the field from which the data was drawn. So if you want the best result, I'm pretty confident it will always be the case that an actual human will produce the top-quality art, for example, or text, or whatever. The AI is a sort of clever, smudged-off approximation of quite high-quality versions of whatever domain it's operating in.
01:43:14.900 Which is probably better than when you get a group of humans together and they act as one; that seems to produce a particularly lowest-common-denominator, worst kind of behavior. It's like: imagine the worst impulses of the worst person in the mob, and now everybody behaves that way. Right. So I think AI is probably doing better than a real human mob, but there's an interesting similarity there. Yeah. Yeah. Better at what, I guess, is the concern there. Yeah.
01:43:45.220 That's right. All right. And following up there: hopefully the restoration bureau is good for our side. I'm not sure what that's in reference to. I'm also not, but I'm Googling it. Yeah, it feels like science fiction. That can't be it. Okay, sorry, don't know. All right, no problem. Thank you for
01:44:11.780 your question there. Mana, you'd sushi, I think it is, for 199. Thank you very much. Question: what is your favorite AI fiction? That's a good one. I guess, is it cheating to say Dune, because the AI has been destroyed? But what would be your favorite AI fiction?
01:44:37.700 I actually find the AI in Ex Machina quite compelling. Yeah, that was a good one. Have you seen that movie? Yes. That's a very good movie. And it's helpful as it pulls at our intuitions about appearance versus reality with regards to AI. So, spoilers for anybody who hasn't seen Ex Machina, but the intelligent robot in Ex Machina deceives the main character into thinking that it's essentially a human woman who is worthy of his
01:45:21.540 concern. And ultimately he, out of compassion, springs her out of the facility in which she was kept. And then she leaves him in there to die. The clever thing about the movie is that the viewer, I think, is brought along the same journey as the main character,
01:45:46.020 as the protagonist, where you also fully believe that this character is a person. And then it darkly reveals that actually there was no true human there, and it was all a ploy and a trick. And this is the concern, I think, that AI safety people
01:46:08.500 would raise: that we can be very easily manipulated, and also that if all the humans die and are replaced by more intelligent but soulless beings, we have a cold, dead, pointless universe. Well, on that cheery note, let's move to our next one here. Glow in the Dark for $10. Thank you very much, sir. I worry about Neuralink turning people into AIs. Imagine if the interaction
01:46:39.140 between the two makes an artificial human mind that can function without a chip, you become a robot. So, no longer in control of your mind, are you still actually an individual? Or are you now a separate AI? Interesting.
01:46:54.260 Yes. I mean, the answer from Musk, I'm sure, would be that we are already cyborgs. How many of us can really function without many levels of technology as it is? I feel like I've outsourced many of my functions already to my phone. I don't know what's coming up tomorrow, but my calendar does. I don't remember half of the things that I need to do to do my job, but I've got them
01:47:30.580 all in notes apps. I rely on Google to do lots of my programming; perhaps in the near future it will be a chatbot that I rely on. Perhaps if I could get a quicker interface than typing stuff into a phone, I'd be even more productive. I think it's a bit like the Descartes concept of cogito ergo sum, I think therefore I am: at root, regardless of what else is going on in our lives, even if we are being fooled by demons
01:48:17.620 or, a more contemporary example, even if we are plugged into the Matrix, there has to still be a human at the core of it, somewhere deep down behind the Neuralink and behind whatever else gets invented. There is still that human being. All right. And, oh, let me see.
01:48:41.300 Oh no, that wouldn't be AI. Okay. I think we got that one. And our last one here, uh, laughing gas
01:48:46.180 for $10. Thank you very much again. Not gonna lie, I do find the idea of prompting an AI, "give me a fun action movie in the style of Michael Bay with underlying Christian values," would be better than the Hollywood slop we have today. Yeah, I do get that. If Hollywood won't make the kind of entertainment you're hoping for, if you're exhausted with getting the latest kind of woke NPC download, being able to prompt something that could produce something better that otherwise wouldn't have been made by Hollywood, I could see the appeal of that. Yeah.
01:49:26.500 I think Laughing Gas is giving me the opportunity to propose something else I foresee happening as a trend in the fairly near future, which is
01:49:45.060 highly personalized, bespoke entertainment. So what's the use of having a Hollywood movie with an AI
01:49:54.980 script when there's a million people in LA with spec scripts in their backpack that are probably better
01:50:02.180 than what the AI comes up with? Okay. Probably not. Probably we will still have art being created
01:50:11.060 by humans to some degree, but I think what we will see is now you can create on the fly bespoke
01:50:20.580 entertainment that is specifically targeted to you that has, um, all of your exact interests.
01:50:30.180 And there's, it knows about your current mood. It knows about, you know, your favorite taste in music
01:50:36.420 and everything and ties it all together and generates it. And even if, even if a human artist is
01:50:44.820 always going to be able to produce better art on a general level, I think the real threat is that people are going to want uber-individualized content, kind of continuing the trend of individualism, where it's not good enough; I want something that is even more precisely targeted at me.
01:51:09.700 So culture could splinter even further because AI can generate such specific stuff. You don't even have
01:51:16.660 to fall into the sub-sub-sub-genre of "I really like 80s metal"; you can have music that's entirely tailored to every one of your particular interests, and no one else can ever share those experiences. Yeah. You know, like in a movie, you have a soundtrack as the characters move
01:51:34.580 around that perfectly underscores what's happening to them. It fits their mood. It maybe even lines
01:51:40.100 up with how they're walking and makes them feel awesome on the screen. I think we'll see essentially
01:51:46.020 that develop, where it will be melodies that you're familiar with, but with new harmonies and new orchestration that fits the moment. You're going to be made to feel like you are the protagonist in a movie, but a movie that nobody is watching.
01:52:06.820 Interesting because it could create even more atomization, uh, because you have even fewer
01:52:12.900 contact points with people, even though, again, you'll have a more homogenized delivery mechanism.
01:52:18.900 Very interesting. A lot to think about there. Uh, oh, we got one more that jumped in here at the end.
01:52:24.180 Glow in the Dark for $10. Thank you very much. Any muscle, if you don't use it, will wither. If the chip does the thinking or activation of the brain, could you still use it on your own? Like sitting for a long time: your legs still function, but not as well. And yeah, I think Glow
01:52:40.340 in the Dark, again, that takes us back to kind of the calculator example, right? It is true that it's very difficult to get people to go ahead and do the things that are good for them if something else will already do it for them. And if they don't exercise those abilities, it does come
01:52:56.500 apart. But I think Luke has already kind of pointed out that those things end up being tools in the long run for many people, tools that other things are iterated upon. So they're not totally negative, even if they do come with downsides.
01:53:12.180 And a reminder that even as everybody around you sinks into a morass of individualism and weakness,
01:53:21.300 you have it within your power to reject that and live powerfully, you know. Continue to hit the gym; now hit the gym with your brain as well. Become the person that you are destined to be, regardless of the pressures put upon you.
01:53:43.860 Yeah, I think that's right. Especially if you're worried about the homogenization, if you're worried about this blandness, the ability to rise above it will set you apart even more, right? If everyone else is on this certain level, then the fact that
01:54:03.380 you are one of the few people who is willing to go out and kind of take charge and shape your existence
01:54:08.900 and continue to maintain your body and mind and your ability to do certain things will set you apart.
01:54:14.260 And I really do think that at the end of all this tech stuff, that does tend to be the message for
01:54:20.020 people who are trying to figure out like a plan of action. You know, the things that will set people
01:54:24.980 apart are those who are willing to deny themselves these experiences, who are willing to say, I want the
01:54:30.740 real thing and not the simulacrum. I'm willing to go out and learn how to do something physical, if for nothing else because it makes me a more interesting, well-rounded, better person, as opposed to simply falling into the vat that culture might become.
01:54:46.580 So you always have the ability to kind of take charge and make something better of it. And that
01:54:51.300 might make you stand out even more in a world where everyone is, is saying, I'm not gonna do that.
01:54:56.500 Oh, we got another one jump in here, guys. I appreciate the, the, uh, super chats,
01:55:01.220 but you're making me do all these false endings to the show. Uh, Spaz, uh, $5. Thank you very much.
01:55:06.580 I enjoy a lot of stuff simply because I know that someone, not something, created it. And yeah, I think that again speaks to what Luke said, that there is a certain human touch, a certain feel that you get specifically because it was created by someone
01:55:23.860 thoughtfully and intentionally, uh, and uniquely. And that will continue to hold value even if kind of the,
01:55:31.220 general everyday, you know, news or entertainment, whatever might be filled with kind of more of
01:55:35.700 this machine produced, uh, stuff. There's only so many times we can crescendo to a poignant and
01:55:43.380 touching conclusion. Yeah. All right, guys. Thank you so much. I appreciate everyone for coming by.
01:55:50.660 Lots of great questions, very thought-provoking discussion. I knew Luke was going to be a great guy to think about this with, so I'm very glad that he came on;
01:55:58.820 make sure that you're checking all of his stuff out. He's got the Bible, uh, studies channel. He
01:56:03.620 also has his other channel where he does his interviews. So make sure that you're searching
01:56:08.020 him up there and we really appreciate you coming by. If you, your first time here,
01:56:12.340 please make sure that you subscribe. And if you want to get this as a podcast, remember the show is now in audio format on all your major podcasting platforms. And if you do join over at the podcast, please make sure that you leave a rating and a recommendation; that really helps with the algorithms. Speaking of the evil AI and everything, that really helps out a lot.
01:56:34.580 All right, guys. Well, once again, thanks for coming by. And as always, I'll talk to you next time.