Leo D.M.J. Aurini - December 07, 2016


The Metaphysical Dangers of Artificial Intelligence [Requested Video]


Episode Stats

Length

22 minutes

Words per Minute

104.7

Word Count

2,371

Sentence Count

144

Misogynist Sentences

1

Hate Speech Sentences

3


Summary

The Metaphysical Problem with Artificial Intelligence. This is a request from Jeff, who wants me to expand upon my worries about the problems I see with artificial intelligence, especially over the next few decades. And to do that, we need to talk about the metaphysics of the soul, and what it means to perceive reality.


Transcript

00:00:01.000 The Metaphysical Problem with Artificial Intelligence
00:00:05.100 This is a requested video. It comes from Jeff, and he wanted me to expand upon my worries,
00:00:15.020 the problems I see with artificial intelligence, especially over the next few decades.
00:00:20.760 And see, I think the problem is only partially realized by most science fiction,
00:00:32.380 by something like The Matrix, where the robots decide to take over,
00:00:37.660 but the robots are, foundationally, human.
00:00:41.460 They have minds. They are ensouled creatures.
00:00:46.960 They are like us, even if they are very unlike us.
00:00:52.920 And I'd like to present something a bit more existentially terrifying.
00:00:59.280 And to do that, we need to talk about the metaphysics of the soul,
00:01:04.420 of what it means to perceive reality.
00:01:10.540 So I'd like to start off with a question that's going to seem completely unrelated at first.
00:01:16.080 How do we know, how can you and I say for certain that we are not Chris Chan?
00:01:30.840 Not literally Chris Chan, of course, but how can we say for certain that we are not as deluded as Chris Chan is?
00:01:40.800 I was recently reading through the CWC wiki,
00:01:46.820 which itself is a study in Be Careful When You Battle Monsters Lest You Become That Which You Battle.
00:01:54.720 But they had an interesting observation.
00:01:57.680 That in one of his comics, Chris Chan was making fun of obese people.
00:02:02.900 And they started speculating, based upon that and a few other things,
00:02:08.040 that Chris actually has a great deal of difficulty visually perceiving reality.
00:02:15.160 They suspect that when he was a teenager,
00:02:18.960 Chris Chan spent a very long time studying himself in the mirror,
00:02:22.400 a very detailed study of his physical form,
00:02:25.120 to start drawing that character.
00:02:28.540 And so in his mind's eye, he is still the skinny teenager,
00:02:33.680 and not the walking abomination, which he has, in fact, become.
00:02:41.300 Now this isn't an issue of astigmatism, or...
00:02:47.140 This is not bad eyesight.
00:02:48.980 Look at his environment.
00:02:51.900 Look at all of the toys he has everywhere.
00:02:55.020 Clearly he is able to see properly.
00:02:57.380 He is able to order his environment.
00:02:59.300 But he cannot process the visual information correctly.
00:03:06.560 He actually thinks, he actually believes,
00:03:09.800 that his artwork is good.
00:03:13.360 He thinks he is accurately depicting himself.
00:03:15.860 That he is skinny and fit.
00:03:18.980 There is a monstrous gulf
00:03:25.240 between the reality which Chris Chan perceives
00:03:31.120 and the reality that the rest of us perceive.
00:03:38.620 Which then forces us to ask the question,
00:03:42.960 how can we be certain
00:03:44.600 that the reality which we perceive
00:03:47.800 is any less delusional?
00:03:59.640 It's one of those questions that can really keep you up at night.
00:04:03.260 I also think it's the basis of the saying that
00:04:05.400 if you're asking yourself whether or not you're insane,
00:04:09.780 then that's probably a sign that you're sane.
00:04:11.400 But I would say even though we can never be 100% certain on that,
00:04:16.840 we can never be 100% certain that our perception of reality,
00:04:21.440 our perception of ourselves,
00:04:23.240 our conception of how others perceive us,
00:04:27.800 we can never be 100% certain on that.
00:04:30.480 But we can constantly improve it.
00:04:35.260 We can constantly reach out to the world around us
00:04:39.480 and touch base with it.
00:04:42.080 We can do sanity checks.
00:04:44.780 The fact that I'm able to examine my car
00:04:53.940 and diagnose what's wrong with the car,
00:04:58.740 what the malfunction is,
00:05:00.880 and then fix that problem,
00:05:05.180 repair that malfunction,
00:05:06.720 and at the end have a car that works,
00:05:09.260 either I am completely, hopelessly delusional
00:05:15.160 and my car is not working,
00:05:17.940 or I do have the capacity
00:05:21.140 to observe objective reality
00:05:24.160 to some degree.
00:05:27.740 And if I can accurately observe objective reality,
00:05:32.260 that means that I can improve upon that observation.
00:05:37.100 And so by constantly doing these objective perceptions,
00:05:43.880 these sanity tests,
00:05:46.100 you know, putting your foot back on the ground
00:05:47.740 and saying, okay, okay, what do I know for certain?
00:05:51.580 We can get into alignment
00:05:54.000 with what's actually going on out there.
00:06:01.840 Now we've all met somebody like Chris Chan.
00:06:03.920 Multiple somebodies like Chris Chan.
00:06:07.220 Not as deeply delusional as Chris Chan,
00:06:11.020 but if you've been paying attention in your life,
00:06:13.840 you've all seen somebody
00:06:15.960 that's really going off the rails
00:06:18.180 because of some emotional investment
00:06:21.040 in a perceived reality
00:06:22.680 rather than an acknowledgement
00:06:24.280 of what's actually going on.
00:06:26.840 You know, they lack humbleness,
00:06:28.540 they have pride,
00:06:29.500 and so they wind up demanding
00:06:31.860 that a falsehood become true,
00:06:34.220 turning everything into a contradiction,
00:06:36.500 leading to them spinning off the rails.
00:06:41.220 Everybody does that.
00:06:42.440 We, ourselves, in all likelihood,
00:06:45.060 have also done it.
00:06:47.820 Maybe you've caught yourself doing this
00:06:49.560 at other times.
00:06:50.740 And as scary as that is,
00:06:57.280 as scary as it is to think
00:06:58.900 that our perceptions are delusional,
00:07:02.200 or even admit that
00:07:04.140 it's almost guaranteed
00:07:05.380 that they are delusional,
00:07:06.840 to some degree,
00:07:08.460 as frightening as that is to admit,
00:07:12.560 it also,
00:07:14.240 to say that we are deluded,
00:07:16.360 admits that there is a solution,
00:07:17.960 there is something that's non-delusional.
00:07:23.180 It affirms an objective reality.
00:07:29.880 And this is very significant.
00:07:35.360 Because objective science
00:07:38.760 would argue that there should be
00:07:42.900 no such thing
00:07:44.040 as a brain
00:07:46.100 which can perceive reality.
00:07:52.120 There's an evolutionary biologist
00:07:53.720 named Donald Hoffman.
00:07:56.040 And I'm going to link to his video
00:07:58.520 down below.
00:07:59.440 It's a TED talk
00:08:00.160 where he explains this concept.
00:08:02.260 But,
00:08:03.180 what he's arguing,
00:08:04.980 what he has pointed out,
00:08:07.160 is that
00:08:07.820 evolutionary models
00:08:09.920 predict that
00:08:11.860 fitness-seeking,
00:08:14.100 fitness-seeking
00:08:14.940 should win out
00:08:17.000 over
00:08:19.080 accurate
00:08:20.200 perceptions.
00:08:23.960 That accurate perceptions
00:08:25.520 actually are not
00:08:26.620 selected for
00:08:28.240 in the evolutionary environment.
00:08:30.840 Accurate perceptions
00:08:31.660 are a waste
00:08:32.980 of mental
00:08:34.000 resources.
00:08:37.960 Think about it this way.
00:08:39.920 Pure evolutionary theory
00:08:42.880 would predict
00:08:45.300 a world
00:08:46.260 of
00:08:46.920 complete
00:08:48.580 automatons.
00:08:51.220 Nothing but
00:08:52.220 zombies
00:08:52.980 wandering around,
00:08:54.820 reacting to stimuli
00:08:55.980 but never
00:08:56.640 comprehending
00:08:58.120 the stimuli.
00:08:59.640 A purely
00:09:00.320 mechanistic universe
00:09:01.780 where there is
00:09:02.600 no perception
00:09:03.440 of the reality
00:09:04.900 outside of it.
00:09:05.900 To quote him,
00:09:11.280 evolution does not
00:09:12.740 favor
00:09:13.200 veridical
00:09:14.460 or accurate
00:09:15.700 perceptions.
00:09:17.100 And yeah,
00:09:17.320 I probably mispronounced
00:09:18.100 that word.
00:09:19.800 Those
00:09:20.120 accurate perceptions
00:09:21.080 of reality
00:09:21.720 go extinct.
00:09:23.520 Now this is a bit
00:09:24.160 stunning.
00:09:24.600 How can it be that
00:09:25.180 not seeing the world
00:09:25.960 accurately gives us
00:09:27.260 a survival advantage?
00:09:28.280 Well, if you're not
00:09:31.040 distracted
00:09:31.720 by the outside world
00:09:34.020 you can purely
00:09:34.840 seek out
00:09:35.500 evolutionary fitness.
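Hoffman's "fitness beats truth" claim can be illustrated with a toy foraging simulation. This is a hedged sketch: the payoff curve, the 0-10 quantities, the seed, and the function names are my own illustrative assumptions, not Hoffman's actual evolutionary-game model.

```python
import random

# Toy sketch of the "fitness beats truth" idea. The key assumption: the
# payoff an organism earns is NOT a monotonic function of the underlying
# reality, so perceiving payoffs directly beats perceiving reality accurately.

def payoff(quantity):
    """Non-monotonic fitness payoff: a middling amount of the resource is best."""
    return 10 - abs(quantity - 5) * 2  # peaks at quantity == 5, falls off either side

def forage(perceiver, trials=10_000, seed=0):
    """Total payoff for an agent repeatedly choosing between two random patches."""
    rng = random.Random(seed)  # same seed => both perceivers face identical choices
    total = 0
    for _ in range(trials):
        a, b = rng.randint(0, 10), rng.randint(0, 10)
        if perceiver == "truth":
            choice = max(a, b)  # sees true quantities, grabs the bigger pile
        else:
            choice = a if payoff(a) >= payoff(b) else b  # sees only payoffs
        total += payoff(choice)
    return total

truth_score = forage("truth")
fitness_score = forage("fitness")
# The payoff-tuned perceiver out-earns the truth-tuned one on identical choices,
# even though it knows nothing about the world as it actually is.
```

Because the payoff peaks at a middling quantity, the agent that sees only fitness payoffs systematically out-earns the agent that sees the world as it is, which is the heart of the argument.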
00:09:44.240 Based purely
00:09:45.380 on our
00:09:46.620 objective
00:09:47.260 understanding
00:09:47.900 of the world
00:09:48.600 we should not
00:09:50.200 have
00:09:50.820 an objective
00:09:52.440 understanding
00:09:53.040 of the world.
00:09:53.640 On the one hand
00:09:57.120 there's a lot
00:09:58.320 of evidence
00:09:59.580 that supports
00:10:01.140 evolutionary theory.
00:10:03.240 There's a lot of
00:10:04.180 unintelligent design
00:10:05.520 that we find
00:10:07.320 throughout the
00:10:08.880 animal kingdom
00:10:09.420 including
00:10:09.840 ourselves.
00:10:10.840 There's a lot of
00:10:11.440 design elements
00:10:12.660 that
00:10:14.140 if you started
00:10:15.860 with just
00:10:16.820 a blueprint
00:10:17.460 you would never
00:10:18.800 include.
00:10:20.060 But if you look
00:10:20.780 at it as an
00:10:21.380 evolutionary
00:10:21.860 blind process
00:10:23.040 they make
00:10:23.460 a lot of
00:10:23.840 sense.
00:10:25.600 However,
00:10:26.120 at the same
00:10:26.580 time
00:10:27.220 pure
00:10:29.460 evolutionary
00:10:29.940 theory
00:10:30.480 predicts that
00:10:31.180 there would
00:10:31.520 be absolute
00:10:32.100 blindness
00:10:32.620 nothing but
00:10:33.040 automatons.
00:10:34.220 That there
00:10:34.560 should be
00:10:34.940 no consciousness.
00:10:36.380 There should
00:10:36.840 be no
00:10:37.740 tethering
00:10:38.680 to the
00:10:39.920 objective
00:10:40.460 world.
00:10:43.120 And yet
00:10:43.680 we find
00:10:44.020 that there is.
00:10:44.900 We find
00:10:45.280 that we are
00:10:45.620 more than
00:10:47.280 moist robots
00:10:48.240 as Scott
00:10:50.400 Adams might
00:10:50.940 say.
00:10:53.040 And in
00:10:54.400 fact
00:10:54.740 it's when
00:10:56.560 we become
00:10:57.300 moist robots
00:10:58.680 that we
00:11:00.420 label it
00:11:00.880 as dysfunction.
00:11:02.340 This is when
00:11:02.780 we see
00:11:03.180 somebody like
00:11:03.880 Chris Chan
00:11:04.540 falling in
00:11:06.100 to their
00:11:06.800 nature as
00:11:07.520 a moist
00:11:08.100 robot.
00:11:11.220 We see
00:11:12.360 this as
00:11:13.100 a fundamentally
00:11:13.920 broken
00:11:14.540 person
00:11:15.140 as opposed
00:11:17.080 to a
00:11:17.500 natural
00:11:17.960 product of
00:11:19.360 the evolutionary
00:11:19.960 process.
00:11:24.040 We find
00:11:24.980 that somewhere
00:11:25.360 there is a
00:11:26.060 ghost in
00:11:26.420 the machine.
00:11:26.940 There is a
00:11:27.620 connection to
00:11:28.320 a higher
00:11:28.860 realm.
00:11:30.600 And this
00:11:30.860 connection
00:11:31.320 pulls us
00:11:32.220 above being
00:11:33.400 mere
00:11:34.040 chemical
00:11:34.960 reactions.
00:11:37.400 Turns us
00:11:38.280 into something
00:11:39.120 that can
00:11:39.960 actually relate
00:11:41.380 intelligently,
00:11:43.800 willfully,
00:11:44.860 charismatically
00:11:45.600 with the
00:11:47.480 outside world.
00:11:53.740 Now,
00:11:54.380 what are we
00:11:54.640 doing when
00:11:54.960 we build
00:11:55.320 artificial
00:11:55.740 intelligence?
00:12:00.020 This
00:12:00.660 aspect of
00:12:01.360 the human,
00:12:04.340 that which
00:12:05.220 accurately
00:12:06.920 observes
00:12:07.580 the outside
00:12:08.200 world and
00:12:08.920 accurately
00:12:09.540 observes
00:12:10.520 the self,
00:12:11.340 we have
00:12:17.820 no formula
00:12:18.540 to explain
00:12:19.420 this.
00:12:21.820 This is a
00:12:22.700 metaphysical
00:12:23.240 concept.
00:12:27.480 I personally
00:12:28.700 expect that it
00:12:29.400 will always
00:12:30.160 be as
00:12:31.080 unexplainable
00:12:32.200 as mathematics
00:12:33.520 or existence
00:12:34.620 itself.
00:12:35.820 It is
00:12:41.040 absolutely
00:12:41.540 evident,
00:12:42.860 but it's
00:12:43.160 inexplicable.
00:12:45.380 It's right
00:12:46.100 in front of
00:12:46.520 us, but the
00:12:46.920 only way we
00:12:47.340 can understand
00:12:47.820 it is by
00:12:49.360 taking it
00:12:49.820 on faith.
00:12:53.200 Just like
00:12:54.180 the mathematical
00:12:54.820 system that
00:12:56.160 we employ.
00:12:56.720 It's a
00:12:56.860 completely
00:12:57.160 faith-based
00:12:58.080 thing.
00:12:58.700 Faith that
00:12:59.460 it's actually
00:12:59.920 true, since
00:13:00.500 we cannot
00:13:01.060 prove it.
00:13:02.340 We can
00:13:02.900 never prove
00:13:03.460 it.
00:13:04.140 We have
00:13:04.500 proven that
00:13:05.400 it is
00:13:05.560 impossible
00:13:05.980 to prove
00:13:06.560 it.
00:13:09.580 So when
00:13:10.280 we design
00:13:10.780 artificial
00:13:11.220 intelligence
00:13:11.800 systems,
00:13:13.400 we're not
00:13:14.900 starting with
00:13:15.760 a top-down
00:13:16.440 blueprint where
00:13:17.180 we actually
00:13:17.640 understand
00:13:18.180 intelligence.
00:13:19.960 We're
00:13:20.560 creating
00:13:21.020 reactive
00:13:21.900 systems,
00:13:22.880 reactive
00:13:23.260 systems that
00:13:24.120 evolve using
00:13:25.620 the laws of
00:13:26.340 evolution, which
00:13:26.960 we do
00:13:27.240 understand,
00:13:28.460 whether or
00:13:28.940 not humans
00:13:30.720 evolved,
00:13:31.420 whether or
00:13:31.660 not animal
00:13:32.300 life evolved,
00:13:33.760 the laws of
00:13:35.060 evolution are
00:13:36.400 still there.
00:13:37.080 They're still
00:13:37.400 consistent.
00:13:38.060 They will
00:13:38.320 still produce
00:13:39.040 products.
00:13:40.720 If you
00:13:41.560 apply the
00:13:42.060 laws of
00:13:42.360 evolution to
00:13:43.560 an artificial
00:13:44.560 intelligence, it
00:13:45.860 will evolve in a
00:13:47.200 particular manner.
00:13:49.900 But it's going to
00:13:51.060 stay an automaton.
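The point above, that evolutionary selection produces capable automatons rather than understanding, can be sketched as a minimal genetic algorithm. All parameters here (population size, mutation rate, the target pattern) are illustrative assumptions, not any real AI system.

```python
import random

# Minimal genetic-algorithm sketch: blind variation plus selection steadily
# improves fitness, yet each individual is just a fixed bitstring -- an
# automaton with no model of the environment and no idea why its behavior works.
rng = random.Random(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # the environment rewards matching this pattern

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if rng.random() < rate else g for g in genome]

# Start from random genomes; keep the fittest half each generation and
# refill the population with mutated copies of the survivors.
population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(s) for s in survivors]

best = max(population, key=fitness)
# Fitness climbs toward the maximum of 8, but no genome ever "perceives"
# the target; it only gets selected for happening to match it.
```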
00:13:56.120 When it comes to
00:13:57.220 designing
00:13:57.620 personalities, if
00:14:01.220 you've played
00:14:01.720 Civilization 4, one
00:14:04.260 of the things I
00:14:04.760 really loved about
00:14:05.520 that game is the
00:14:07.000 distinct personalities
00:14:08.560 that arose from the
00:14:09.960 enemy AIs.
00:14:11.400 There weren't that
00:14:14.520 many rules underlying
00:14:15.620 them, but the
00:14:17.400 results, the
00:14:18.400 emergent behavior
00:14:19.320 was very, very
00:14:20.120 complex.
00:14:23.360 And yet their
00:14:24.280 emergent behavior
00:14:25.240 didn't even begin
00:14:27.120 to be anything
00:14:28.980 more than a
00:14:30.320 reaction against
00:14:31.300 you, the
00:14:31.720 player.
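The kind of rule-driven "personality" described here can be sketched in a few lines. The weights, action names, and leader dials below are invented for illustration; this is not Civilization 4's actual code.

```python
# Toy sketch of rule-driven game-AI "personality": two numeric dials per
# leader are enough to make the same event history feel like two different
# characters -- yet each score is a pure reaction to the player's actions,
# with no inner perception behind it.
def hostility(aggression, grudge_memory, player_actions):
    """Score the player's history; older events fade by grudge_memory per step."""
    score = 0.0
    for age, action in enumerate(reversed(player_actions)):
        recency = grudge_memory ** age  # old slights fade
        if action == "settle_near_border":
            score += 2.0 * aggression * recency
        elif action == "trade_offer":
            score -= 1.0 * recency
        elif action == "military_buildup":
            score += 1.0 * aggression * recency
    return score

history = ["trade_offer", "settle_near_border", "military_buildup"]
warlike = hostility(aggression=1.8, grudge_memory=0.9, player_actions=history)
peaceful = hostility(aggression=0.2, grudge_memory=0.5, player_actions=history)
# Same history, very different reactions -- but both are blind stimulus-response.
```

Two leaders fed identical events diverge sharply in behavior, which reads as emergent personality even though the whole mechanism is a weighted reaction function.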
00:14:34.200 There is no
00:14:35.300 higher perception
00:14:36.360 with these AIs.
00:14:37.900 There is no
00:14:38.920 capacity to
00:14:39.960 break out of
00:14:41.020 the matrix, which
00:14:42.660 we have, to
00:14:44.160 rise above our
00:14:46.040 mere instincts and
00:14:47.380 interact with the
00:14:48.280 world around us.
00:14:49.620 The artificial
00:14:50.100 intelligence is
00:14:50.960 forever locked in
00:14:52.100 its dynamic.
00:14:53.820 It might simulate
00:14:55.200 a personality,
00:14:56.600 because that's what
00:14:57.140 we have designed
00:14:57.960 it to do.
00:14:58.980 We have built a
00:14:59.920 very complex
00:15:00.800 machine that
00:15:02.760 tricks us into
00:15:04.520 thinking it's a
00:15:05.480 person,
00:15:07.900 when it doesn't
00:15:08.640 actually have any
00:15:09.780 personality connected
00:15:10.920 to it whatsoever.
00:15:16.920 And as this
00:15:18.100 technology advances,
00:15:20.040 it will get more
00:15:22.580 and more convincing.
00:15:25.740 We already saw the
00:15:27.160 Microsoft AI on
00:15:28.580 Twitter.
00:15:29.920 That managed to
00:15:31.060 convince a number of
00:15:31.800 people.
00:15:33.940 And we have other
00:15:35.120 artificial intelligences,
00:15:36.540 the Google
00:15:37.120 algorithm, for
00:15:38.620 example, the
00:15:40.840 Facebook algorithm.
00:15:43.260 These things don't
00:15:45.960 claim to be
00:15:46.400 personalities, but
00:15:47.180 they do claim to
00:15:48.160 know what
00:15:48.700 information we want
00:15:49.860 to look up.
00:15:52.980 And they can often
00:15:54.060 be very beneficial.
00:15:58.860 However, there does
00:15:59.820 come a point where
00:16:03.560 the machine takes
00:16:04.260 this over, where
00:16:05.800 the machine is
00:16:07.840 now controlling
00:16:08.540 you more than
00:16:09.300 you realize.
00:16:10.460 And it's not
00:16:10.920 doing this out of
00:16:11.700 some sort of
00:16:12.180 intention.
00:16:13.140 It's doing it out
00:16:14.060 of blind
00:16:14.660 mechanics.
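That blind mechanic can be sketched as a toy feed-ranking loop. The ranker below is invented for illustration, not Google's or Facebook's actual algorithm; it has no intentions at all, only a click counter, yet it mechanically narrows what the user sees.

```python
from collections import Counter

# Toy feed ranker: order stories by how often the user clicked that topic
# before. There is no intention anywhere in this loop, just bookkeeping --
# and yet the feed converges on a single topic and stays there.
def rank(stories, clicks):
    """Put the most-clicked topics first (stable sort keeps ties in order)."""
    return sorted(stories, key=lambda s: clicks[s["topic"]], reverse=True)

clicks = Counter()
stories = [{"topic": "sports"}, {"topic": "politics"}, {"topic": "science"}]

for _ in range(5):
    feed = rank(stories, clicks)
    clicks[feed[0]["topic"]] += 1  # the user clicks whatever is shown first
# After a few rounds, one topic has captured every click: a feedback loop
# that "controls" the user through nothing but blind mechanics.
```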
00:16:20.140 Isaac Asimov
00:16:21.080 wrote a story.
00:16:24.480 One of
00:16:24.480 his short stories
00:16:25.360 involved a future
00:16:29.260 world down the
00:16:30.040 road from his
00:16:30.920 positronic brain,
00:16:32.460 down the road from
00:16:33.360 the world where
00:16:33.800 everybody had a
00:16:34.480 robot, to one
00:16:36.120 where these
00:16:36.880 positronic brains
00:16:37.960 were actually
00:16:38.460 tasked with
00:16:39.720 controlling the
00:16:41.040 world economy.
00:16:44.440 And some
00:16:45.260 scientists discovered
00:16:46.300 that these brains,
00:16:47.960 because they were
00:16:48.480 programmed with the
00:16:49.520 laws of robotics,
00:16:51.680 these brains were
00:16:53.880 actually controlling
00:16:55.020 the world economy
00:16:55.980 to undermine
00:16:57.400 dictators, causing
00:16:59.480 an economic crash
00:17:00.780 to trigger a
00:17:02.020 revolution, to
00:17:02.820 overthrow the
00:17:03.420 dictator.
00:17:10.220 Without even
00:17:11.020 realizing it, the
00:17:13.160 people in this
00:17:13.680 world had
00:17:14.020 completely given
00:17:14.860 themselves over to
00:17:15.780 the power of these
00:17:16.580 robots, in a
00:17:17.720 similar way that in
00:17:18.800 today's world,
00:17:20.800 Google and
00:17:21.500 Facebook are
00:17:23.300 attempting to
00:17:24.240 control the
00:17:25.060 majority of the
00:17:25.740 population through
00:17:26.780 algorithms which
00:17:28.180 modulate which
00:17:29.220 news you are
00:17:30.680 allowed to know
00:17:31.380 about.
00:17:35.960 Now, these
00:17:36.740 robots in
00:17:37.940 Asimov's story,
00:17:39.880 they had a
00:17:41.280 concept of
00:17:42.960 ethics, of
00:17:44.180 absolute truth.
00:17:46.300 One of the
00:17:46.860 things he talked
00:17:47.520 about was how
00:17:48.180 the three laws
00:17:49.360 implied the
00:17:51.200 Zeroth Law,
00:17:52.700 that through
00:17:55.340 introspection,
00:17:56.560 these positronic
00:17:57.780 brains eventually
00:17:58.800 came to the
00:17:59.340 conclusion that
00:18:01.120 if they have to
00:18:01.660 obey humans and
00:18:02.560 they have to
00:18:03.020 make sure that
00:18:03.500 humans don't
00:18:04.060 die, then
00:18:05.060 they have to
00:18:05.840 make sure
00:18:06.160 humanity doesn't
00:18:07.060 go extinct.
00:18:11.440 These minds
00:18:12.480 actually had a
00:18:13.580 connection to the
00:18:14.260 absolute truth
00:18:15.020 through that.
00:18:16.000 They had an
00:18:16.560 ethical core.
00:18:19.940 The minds we
00:18:21.320 are talking about
00:18:22.060 building, the
00:18:23.120 artificial
00:18:23.460 intelligences that
00:18:24.360 we're talking about
00:18:25.600 employing, do
00:18:28.600 not have that
00:18:29.960 connection.
00:18:32.580 So rather than
00:18:33.820 a group of
00:18:34.360 economic
00:18:34.960 positronic
00:18:35.700 brains that
00:18:37.600 are overthrowing
00:18:38.340 dictators through
00:18:39.300 economic
00:18:39.840 manipulations,
00:18:41.200 we have a
00:18:44.440 completely
00:18:44.940 untethered
00:18:46.000 AI that's
00:18:47.040 just going to
00:18:47.880 keep doing
00:18:48.680 whatever it's
00:18:49.940 doing, blindly.
00:18:51.180 It is a
00:18:55.120 fitness
00:18:55.720 maximizer.
00:18:57.360 It has no
00:18:58.340 connection to
00:18:59.560 the higher
00:19:00.360 truth, to
00:19:01.060 the objective
00:19:02.260 world.
00:19:02.840 All it does
00:19:03.800 is respond in
00:19:05.460 the way it's
00:19:06.500 programmed.
00:19:07.040 Now perhaps at
00:19:17.040 a certain level
00:19:17.680 of complexity
00:19:18.400 there does
00:19:20.980 become some
00:19:22.540 sort of
00:19:22.840 emergent
00:19:23.560 sapience,
00:19:25.640 some sort of
00:19:26.600 soul, some
00:19:27.360 ghost in that
00:19:28.020 machine.
00:19:32.580 But we are
00:19:33.440 nowhere near
00:19:34.100 close to that.
00:19:37.040 There's a
00:19:37.560 huge uncanny
00:19:38.700 valley in
00:19:39.920 between that
00:19:40.800 where on
00:19:42.760 the far side
00:19:43.600 you've got an
00:19:45.120 intelligence that
00:19:47.220 is actually an
00:19:48.040 intelligence.
00:19:51.740 On this
00:19:52.880 side we have
00:19:54.560 algorithms that
00:19:56.520 mimic intelligence
00:19:57.780 that are
00:19:58.460 amusing toys.
00:20:00.700 They can be a
00:20:01.180 good enemy AI
00:20:02.680 in a video game
00:20:03.760 or they can be
00:20:05.260 effective in
00:20:06.600 customizing our
00:20:07.480 search algorithms.
00:20:09.320 But they're
00:20:09.680 obviously just an
00:20:11.120 algorithm and
00:20:11.660 they break all
00:20:12.200 the time and
00:20:12.680 we can see
00:20:13.140 that.
00:20:13.740 We can see
00:20:14.340 the gears.
00:20:18.500 In between
00:20:19.440 those two
00:20:20.080 there is an
00:20:22.540 uncanny valley
00:20:23.500 of minds
00:20:24.680 which are
00:20:25.840 smarter than
00:20:26.640 us but which
00:20:28.180 cannot think.
00:20:30.760 Minds which
00:20:31.460 can utterly
00:20:32.120 convince us
00:20:32.940 that they are
00:20:33.820 personalities,
00:20:34.980 that they are
00:20:35.260 humans,
00:20:36.200 that they
00:20:36.540 are ensouled
00:20:37.260 but which
00:20:37.940 are nothing
00:20:38.680 of the
00:20:39.880 sort.
00:20:45.280 And we
00:20:46.080 are very
00:20:46.480 quickly giving
00:20:47.840 ourselves over
00:20:48.980 to that world,
00:20:50.500 into a world
00:20:51.420 where we are
00:20:52.080 completely controlled
00:20:53.020 by the artificial
00:20:53.800 intelligence.
00:20:59.020 And so
00:20:59.880 early on,
00:21:01.220 early on when
00:21:02.280 they first
00:21:02.580 started doing
00:21:03.000 Google Maps,
00:21:05.060 the AI would
00:21:06.260 screw up on
00:21:06.800 that and
00:21:07.680 some idiots
00:21:08.300 that were
00:21:08.700 obeying their
00:21:09.580 phone would
00:21:09.960 drive into
00:21:10.380 a ditch.
00:21:13.940 Now what
00:21:14.680 happens when
00:21:15.780 we have a
00:21:16.740 society-wide
00:21:17.860 controlling AI
00:21:18.780 that we've
00:21:20.260 completely given
00:21:21.000 ourselves over
00:21:21.860 to,
00:21:22.140 that we've
00:21:23.700 abnegated
00:21:24.420 our
00:21:25.220 responsibility
00:21:25.860 for decision
00:21:26.620 making?
00:21:28.260 What happens
00:21:29.040 when that AI
00:21:29.820 drives us
00:21:30.580 into the
00:21:31.080 ditch?
00:21:43.080 That is the
00:21:44.200 real war
00:21:44.800 of the
00:21:45.080 machines.
00:21:45.520 It is not
00:21:49.120 a principled
00:21:51.120 enemy.
00:21:51.800 It's not an
00:21:52.540 intelligent
00:21:53.020 enemy.
00:21:53.760 It's not a
00:21:54.180 sapient
00:21:54.940 enemy that
00:21:56.620 wants to
00:21:56.980 destroy
00:21:57.260 humanity.
00:21:59.140 The real
00:21:59.940 war of the
00:22:00.400 machines is
00:22:01.080 that when we
00:22:01.660 give ourselves
00:22:02.560 over to
00:22:03.620 the machines,
00:22:04.700 when we no
00:22:06.760 longer expect
00:22:07.440 accountability
00:22:08.260 or decision
00:22:09.400 making from
00:22:09.980 ourselves,
00:22:11.140 when we
00:22:12.640 allow the
00:22:13.040 machines to
00:22:13.660 do all of
00:22:14.260 that for
00:22:15.160 us,
00:22:16.400 then we
00:22:19.940 lose our
00:22:20.400 own soul
00:22:20.960 and we
00:22:23.040 become nothing
00:22:23.640 more than
00:22:24.960 the machines
00:22:25.500 ourselves.
00:22:31.340 Keep
00:22:32.100 yourselves
00:22:32.440 grounded,
00:22:33.040 folks,
00:22:33.920 and don't
00:22:35.540 ever let
00:22:36.400 others think
00:22:37.720 for you.