Making Sense - Sam Harris - March 07, 2023


#312 — The Trouble with AI


Episode Stats

Length

1 hour and 26 minutes

Words per Minute

177.6

Word Count

15,445



Summary

In this episode of the Making Sense podcast, I first address the paradoxical way we value expertise and scientific authority: the truth of a claim doesn't depend on the credentials of the person making it, and yet we are generally right to be guided by experts and to resist weaponized misinformation, reflexive contrarianism, and the "do your own research" ethic, on everything from vaccine safety to the war in Ukraine. When our institutions fail us, the answer is to make them trustworthy again, not to tear them down. I then speak with Stuart Russell and Gary Marcus about recent developments in AI, ChatGPT in particular, as well as the limitations of deep learning, the prospect of a misinformation apocalypse, the problem of instantiating human values in AI, the control problem, and the long-term risks of artificial general intelligence. To access full episodes of Making Sense, you'll need to subscribe at samharris.org. We don't run ads on the podcast, so it's made possible entirely through the support of our listeners.


Transcript

00:00:00.000 welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.500 this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.900 of this conversation in order to access full episodes of the making sense podcast you'll
00:00:21.800 need to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.020 podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.500 therefore it's made possible entirely through the support of our subscribers so if you enjoy
00:00:36.540 what we're doing here please consider becoming one okay before i jump in today i want to take a moment
00:00:49.480 to address some confusion that keeps coming up i was on another podcast yesterday and spoke about
00:00:58.540 this briefly but i thought i might be a little more systematic here it relates to the paradoxical way
00:01:06.880 that we value expertise in really all fields and scientific authority in particular seems to me
00:01:16.140 there's just a lot of confusion about how this goes expertise and authority are unstable intrinsically
00:01:25.100 so because the truth of any claim doesn't depend on the credentials of the person making that claim
00:01:32.940 so a nobel laureate can be wrong and a total ignoramus can be right even if only by accident
00:01:40.560 so the truth really is orthogonal to the reputational differences among people and yet generally speaking
00:01:49.200 we are right to be guided by experts and we're right to be very skeptical of novices who claim to
00:01:56.240 have overturned expert opinion of course we're also right to be alert to the possibility of fraud
00:02:02.400 among so-called experts there are touted experts who are not who they seem to be and we're right to
00:02:09.800 notice that bad incentives can corrupt the thinking of even the best experts so these can seem like
00:02:15.420 contradictions but they're simply different moments in time right the career of reason has to pass
00:02:23.480 through all these points again and again and again we respect authority and we also disavow its relevance
00:02:31.080 by turns we're guided by it until the moment we cease to be guided by it or until the moment when one
00:02:38.600 authority supplants another or even a whole paradigm gets overturned but all of this gets very confusing
00:02:46.780 when experts begin to fail us and when the institutions in which they function like universities and
00:02:54.840 scientific journals and public health organizations get contaminated by political ideologies that don't track
00:03:01.660 the truth now i've done many podcasts where i've talked about this problem from various angles and i'm sure i'll do
00:03:08.380 many more because it's not going away but much of our society has a very childish view of how to respond
00:03:17.020 to this problem many many people apparently believe that just having more unfettered dialogue on social media
00:03:25.580 and on podcasts and in newsletters is the answer but it's not i'm not taking a position against free speech
00:03:34.060 here i'm all for free speech i'm taking a position against weaponized misinformation and a contrarian
00:03:42.220 attitude that nullifies the distinction between real knowledge which can be quite hard won and ignorance or
00:03:51.500 mere speculation and i'm advocating a personal ethic of not pretending to know things one doesn't know
00:04:00.140 my team recently posted a few memes on instagram these were things i said i think on other people's
00:04:05.980 podcasts and these posts got a fair amount of crazed pushback apparently many people thought i was
00:04:12.380 posting these memes myself as though i had just left twitter only to become addicted to another social media
00:04:17.980 platform but in any case my team posted these quotes and my corner of instagram promptly became
00:04:24.220 as much of a cesspool as twitter and then people even took these instagram memes and posted them back
00:04:29.660 on twitter so they could vilify me in that context needless to say all of this convinces me again that my
00:04:35.900 life is much better off of social media but there is some real confusion at the bottom of the response
00:04:42.620 which i wanted to clarify so one of the offending instagram quotes read during the pandemic we
00:04:49.660 witnessed the birth of a new religion of contrarianism and conspiracy thinking the first sacrament of which
00:04:55.900 is quote do your own research the problem is that very few people are qualified to do this research
00:05:03.340 and the result is a society driven by strongly held unfounded opinions on everything from vaccine safety
00:05:10.620 to the war in ukraine and many people took offense to that as though it was a statement of mere elitism
00:05:19.900 but anyone who has followed this podcast knows that i include myself in that specific criticism
00:05:27.580 i'm also unqualified to do the quote research that so many millions of people imagine they're doing i
00:05:34.540 i wasn't saying that i know everything about vaccine safety or the war in ukraine i'm saying that we
00:05:40.940 need experts in those areas to tell us what is real or likely to be real and what's misinformation and this
00:05:48.460 is why i've declined to have certain debates on this podcast that many people have been urging me to have
00:05:53.500 and even alleging that it's a sign of hypocrisy or cowardice on my part that i won't have these debates
00:05:58.940 there are public health emergencies and geopolitical emergencies that simply require trust in institutions
00:06:07.420 they require that we acknowledge the difference between informed expertise and mere speculation
00:06:14.620 or amateurish sleuthing and when our institutions and experts fail us that's not a moment to tear
00:06:22.140 everything down that's the moment where we need to do the necessary work of making them trustworthy again
00:06:27.580 and i admit in many cases it's not clear how to do that at least not quickly i think detecting and
00:06:34.460 nullifying bad incentives is a major part of the solution but what isn't a part of the solution
at all is asking someone like candace owens or tucker carlson or even elon musk or joe rogan or bret
00:06:52.060 weinstein or me what we think about the safety of mrna vaccines or what we think about the nuclear risk
00:07:00.540 posed by the war in ukraine our information ecosystem is so polluted and our trust in institutions so
00:07:10.140 degraded again in many cases for good reason that we have people who are obviously unqualified to have
00:07:17.660 strong opinions about ongoing emergencies dictating what millions of people believe about those
00:07:23.740 emergencies and therefore dictating whether we as a society can cooperate to solve them most people
00:07:30.860 shouldn't be doing their own research and i'm not saying we should blindly trust the first experts we meet
00:07:37.340 if you're facing a difficult medical decision get a second opinion get a third opinion but most people
00:07:44.540 shouldn't be jumping on pubmed and reading abstracts from medical journals again depending on the topic
00:07:50.700 this applies to me too so the truth is if i get cancer i might do a little research but i'm not going
00:07:58.860 to pretend to be an oncologist the rational thing for me to do even with my background in science is to find
00:08:06.860 the best oncologists i can find and ask them what they think of course it's true that any specific
00:08:14.220 expert can be wrong or biased and that's why you get second and third opinions and it's also why we
00:08:21.100 should be generally guided by scientific consensus wherever a consensus exists and this remains the
00:08:27.340 best practice even when we know that there's an infinite number of things we don't know so while i
00:08:33.340 recognize the last few years has created a lot of uncertainty and anxiety and given a lot of motivation to
00:08:39.900 contrarianism and the world of podcasts and newsletters and twitter threads has exploded as an alternative to
00:08:49.100 institutional sources of information the truth is we can't do without a culture of real expertise
00:08:55.820 and we absolutely need the institutions that produce it and communicate it and i say that as someone who lives and
00:09:02.860 works and thrives entirely outside of these institutions so i'm not defending my own nest i'm simply noticing that
00:09:11.900 substack and spotify and youtube and twitter are not substitutes for universities and scientific journals
00:09:20.940 and governmental organizations that we can trust and we have to stop acting like they might be
00:09:27.100 now that i got that off my chest now for today's podcast today i'm speaking with stuart russell and
00:09:35.020 gary marcus stuart is a professor of computer science and a chair of engineering at the university of
00:09:42.060 california berkeley he is a fellow of the american association for artificial intelligence the association
00:09:48.060 for computing machinery and the american association for the advancement of science and he is the author with
00:09:54.700 peter norvig of the definitive textbook on ai artificial intelligence a modern approach and he is also the
00:10:01.820 author of the very accessible book on this topic human compatible artificial intelligence and the problem
00:10:07.980 of control gary marcus is also a leading voice on the topic of artificial intelligence he is a scientist
00:10:15.100 best-selling author and entrepreneur he was founder and ceo of geometric intelligence a machine learning
00:10:22.140 company that was acquired by uber in 2016 and he's also the author of the recent book rebooting ai along
00:10:30.140 with his co-author ernest davis and he also has a forthcoming podcast titled humans versus machines
00:10:37.500 and today we talk about recent developments in ai chat gpt in particular as well as the long-term risks of
00:10:45.740 producing artificial general intelligence we discuss the limitations of deep learning the surprising
00:10:51.820 power of narrow ai the ongoing indiscretions of chat gpt a possible misinformation apocalypse the
00:11:01.980 problem of instantiating human values in ai the business model of the internet the metaverse digital
00:11:09.660 provenance using ai to control ai the control problem emergent goals locking down core values programming
00:11:20.620 uncertainty about human values into agi the prospects of slowing or stopping ai progress and other
00:11:28.380 topics anyway i found it a very interesting and useful conversation on a topic whose importance is
00:11:35.180 growing by the hour and now i bring you stuart russell and gary marcus
00:11:40.540 i am here with stuart russell and gary marcus stuart gary thanks for joining me thanks for having us
00:11:53.260 so um i will have properly introduced both of you in the the intro but perhaps um you can just briefly
00:12:02.060 introduce yourselves as well gary let's start with you you're you're new to the podcast how do you describe
00:12:07.820 what it is you do and what you the kinds of problems you focused on uh i'm gary marcus and i've been
00:12:13.420 trying to figure out how we can get to a safe ai future i may be not looking as far out as stuart is
00:12:19.180 but i'm very interested in the immediate future whether we can trust the ai that we have how we
00:12:24.300 might make it better so that we can trust it i've been an entrepreneur i've been an academic i've been
00:12:29.820 coding since i was eight years old so throughout my life i've been interested in ai and also human cognition
00:12:35.260 and what human cognition might tell us about ai and how we might make ai better yeah i'll add you
00:12:40.620 you did your phd under our mutual friend steven pinker and um you have a uh a wonderful book
00:12:47.500 rebooting ai building artificial intelligence we can trust and i'm told you have a coming podcast
00:12:55.740 later this spring titled humans versus machines which i'm eagerly awaiting so um i'm pretty excited
00:13:02.620 about that it's going to be fun nice and you have a voice for radio so you're you're yeah i know i
00:13:08.300 know that joke well i'll i'll take it in a good spirit uh no that's not a joke a face for radio is
00:13:14.060 the joke a voice a voice for radio is uh is high praise that's right thank you yeah stuart who are
00:13:20.060 you what are you doing out there uh so i i teach at berkeley i've been doing ai for about 47 years
00:13:29.660 and i spent most of my career just trying to make ai systems better and better working in pretty much
00:13:37.420 every branch of the field and in the last 10 years or so i've been asking myself what happens if i or
00:13:46.380 if we as a field succeed in what we've been trying to do which is to create ai systems that are at least
00:13:54.700 as general in their intelligence as human beings and i came to the conclusion that uh if we did
00:14:01.580 succeed it might not be the best thing in the history of the human race in fact it might be the
00:14:06.220 worst and so i'm trying to fix that if i can and i will also add you have also written a wonderful
00:14:12.860 book uh human compatible artificial intelligence and the problem of control which is quite accessible
00:14:18.700 and then you have written a uh an inaccessible book or co-written one literally the textbook on ai
00:14:24.780 and uh you've been on the podcast a few times before so you you you each occupied different
00:14:30.700 points on a continuum of concern about general ai and and the perhaps distant problem of super
00:14:38.380 intelligence and stuart i've always seen you on the the sober side of the worried end and i've spoken
00:14:44.940 to many other worried people on the podcast and at various events people like nick bostrom max
tegmark eliezer yudkowsky toby ord i spoke to many other people in private and i've always counted myself
00:14:59.100 among the worried and have been quite influenced by you and uh your book gary i've always seen you on the
00:15:06.540 the sober side of the the not worried and and i've also spoken to people who are not worried like
00:15:12.940 steve pinker david deutsch rodney brooks and others i'm not sure if either of you have moved in
00:15:19.820 the intervening years at all maybe maybe we can we can just start there we're going to we'll start
00:15:24.940 with narrow ai and chat gpt and the the explosion of interest on that topic but i do want us to get
00:15:32.620 to concerns about where all this might be headed but before we jump into the the narrow end of the
00:15:39.180 problem have you moved at all in your sense of of the risks here there are a lot of things to worry
00:15:47.100 about i think i actually have moved just within the last month a little bit so we'll probably disagree
00:15:54.540 about the estimates of the long-term risk but something that's really struck me in the last month
00:16:01.740 is there's a reminder of how much we're at the mercy of the big tech companies so my personal
opinion is that we're not very close to artificial general intelligence not sure stuart would really
00:16:11.260 disagree but he can you know jump in later on that and i continue to think we're not very close
00:16:15.580 to artificial general intelligence but with whatever it is that we have now this kind of
00:16:20.940 approximative intelligence that we have now this mimicry that we have now the the lessons of the
00:16:26.220 last month or two are we don't really know how to control even that it's not full agi that can
00:16:32.300 self-improve itself and you know it's not sentient ai or anything like that but we saw that microsoft
00:16:39.100 had clues internally that the system was problematic that it gaslighted gaslighted its customers and
00:16:44.540 things like that and then they rolled it out anyway and then initially the press hyped it made it sound
00:16:50.940 amazing and then it came out that it wasn't really so amazing but it also came out that if microsoft
00:16:56.460 wants to test something on 100 million people they can go ahead and do that even without a clear
00:17:01.260 understanding of the consequences so my opinion is we don't really have artificial general intelligence
00:17:06.460 now but this was kind of a dress rehearsal and it was a really shaky dress rehearsal and that in
00:17:11.500 itself made me a little bit worried and suppose we really did have agi and we had no real regulation
00:17:17.420 in place about how to test it you know my view is we should treat it as something like drug trials
00:17:22.540 you want to know about costs and benefits and have a slow release but we don't have anything like
00:17:26.620 regulation around that and so that actually pushed me a little bit closer to maybe the worry side of
the spectrum i'm not as worried maybe as stuart is about the long-term complete annihilation of the
human race that i think you know stuart has raised some legitimate concerns about i'm less worried about
00:17:43.260 that because i don't see agi as having the motivation to do that but i am worried about whether we have
00:17:49.740 any control over the things that we're doing whether the economic incentives are going to push us in the right place
00:17:54.780 so i think there's lots of things to be worried about maybe we'll have a nice discussion about
00:17:58.780 which those should be and how you prioritize them but there are definitely things to worry about
00:18:03.740 yeah well i want to return to that question of of motivation which has always struck me as a red
herring so and we'll talk about that when we get to agi but uh stuart have you been pushed around at
00:18:16.380 all by recent events or anything else so actually there are there are two recent events one of them is is
00:18:23.180 chat gpt but another one which is much less widely disseminated but there was an article in the
00:18:29.580 financial times last week was finding out that the the superhuman go programs that i think pretty much
00:18:38.060 everyone you know had abdicated any notion of of human superiority and go completely you know and that
00:18:45.980 was 2017 and in the five years since then the machines have you know gone off into the stratosphere their
00:18:52.300 ratings are 1400 points higher than the human world champion and 1400 points in go or in chess is like the
00:19:01.740 difference between you know a professional and you know a five-year-old who's played for a few months
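As a rough aside on what a 1400-point gap means, assuming the ratings are on a standard Elo-style scale (the formula below is the usual Elo expected-score model, not something from the conversation):

# Expected score of the lower-rated player under the standard Elo model.
def expected_score(rating_gap):
    return 1.0 / (1.0 + 10.0 ** (rating_gap / 400.0))

# A 1400-point gap leaves the weaker side an expected score of roughly
# 0.0003, i.e. about one point per three thousand games.
print(expected_score(1400))  # ~0.000316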
00:19:07.820 so what's amazing is that we found out that actually an average you know good average human player can
00:19:15.260 actually beat these superhuman go programs be all of them be all of them giving them a nine stone
00:19:22.460 handicap which is the kind of handicap that you give to a small child who's learning the game isn't the
00:19:27.740 caveat there though that that we needed a computer to show us that exploit well actually it's the story
00:19:34.140 is a little bit more complicated we we had an intuition that the go programs because they are circuits
00:19:40.860 right they the circuit is a very bad representation for a recursively defined function so what does that
00:19:49.180 mean so in go the the main thing that matters is groups of stones so a group of stones are stones that
00:19:57.020 are connected to each other by vertical and horizontal connections on the grid and so that by definition is
00:20:04.060 a recursive concept because i'm connected to another stone if there's an adjacent stone to me and that
00:20:11.900 stone is connected to the other stone and we can write that i can say it in english you know i just did
00:20:18.620 instead of one small sentence i can write it in a program in a couple of lines of python i can write it in
00:20:25.180 in formal logic in a couple of lines but to try to write it as a circuit is in in some real sense impossible
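Here is a minimal Python sketch of that recursive definition (a bit longer than the couple of lines Stuart mentions, and the board encoding is illustrative rather than anything from the conversation):

# Recursive definition of a group: a stone's group is the stone itself plus
# the groups of any same-colored stones orthogonally adjacent to it.
# 'board' maps (x, y) grid points to "b" or "w".
def group(board, pos, seen=None):
    seen = set() if seen is None else seen
    seen.add(pos)
    x, y = pos
    for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nbr not in seen and board.get(nbr) == board.get(pos):
            group(board, nbr, seen)
    return seen

board = {(0, 0): "b", (0, 1): "b", (1, 1): "b", (5, 5): "b"}
print(group(board, (0, 0)))  # {(0, 0), (0, 1), (1, 1)} -- the stone at (5, 5) is a separate group

The same few lines handle a group of any size or shape on a board of any size; a fixed-depth circuit can only approximate that up to some bounded case, which is the point being made.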
00:20:32.860 i can only do a finite approximation and so we had this idea that actually the programs didn't really
00:20:38.940 understand what what a group of stones is and they didn't understand in particular whether a group of
00:20:44.860 stones is going to live or going to die and we concocted by hand some positions in which we thought
00:20:51.900 that the you know just deciding whether the program needed to rescue its group or whether it could capture
00:20:59.660 the opponent's group that it would make a mistake because it didn't understand group and that turned
00:21:04.620 out to be right so that was can i just jump in for one second sure it actually relates to the thing
00:21:09.980 that's on the cover of perceptrons which is one of the most famous books in the history of artificial
general intelligence that was an argument by minsky and papert that two-layer perceptrons which are the
00:21:21.900 historical ancestors of the deep learning systems we have now couldn't understand some very basic
concepts and in a way what stuart in his lab did is a riff on that old idea people hate that book in
00:21:35.020 the machine learning field they say that it prematurely dismissed multi-layer networks and
00:21:39.820 there's an argument there but it's more complicated than people usually tell but in any case i see this
00:21:44.620 result as a descendant of that showing that even if you get all these pattern recognition systems to
00:21:49.900 work that they don't necessarily have a deep conceptual understanding of something as simple as a group
00:21:55.020 in go and i i think it's a profound connection to the history of ai and kind of disturbing that here
00:22:01.260 we are you know 50 some years later and we're still struggling with the same problems yeah i think it's
00:22:06.700 it's the same point that minsky was making which is expressive power matters and simple perceptrons
00:22:12.540 have incredibly limited expressive power but you know even larger the you know deep networks and so on
00:22:19.420 in in in their native mode they have very limited expressive power you could actually take a recurrent
00:22:26.220 neural net and use that to implement a turing machine and then use that to implement a python
00:22:31.900 interpreter and then the system could learn all of its knowledge in python but there's no evidence that
00:22:37.500 anything like that is going on in the go program so the evidence seems to suggest that actually
00:22:43.180 they're not very good at recognizing what a group is and liveness and and death except in the cases you
00:22:51.660 know so they've learned sort of multiple fragmentary partial finite approximations to the notion of a
00:22:58.060 group and the notion of liveness and we just found that we could fool it where you know we're
00:23:05.660 constructing groups that are somewhat more complex than the than the kinds that typically show up and then
00:23:11.180 as uh as uh as gary said you know uh sorry as sam as you said there there is a program that we used
00:23:18.540 to explore whether we could actually find this occurring in a real game because these were contrived
00:23:25.100 positions that we had by hand we couldn't we couldn't force the uh the game to go in that direction
00:23:31.100 and indeed when we started running this program with sort of an adversarial program it's just
supposed to find ways of beating one particular go program called katago indeed it found ways of
00:23:44.940 generating groups kind of like a circular sandwich so you start with a little group of your pieces in
00:23:50.140 the middle and then the program the computer program surrounds your pieces to prevent them from spreading
00:23:56.700 and then you surround that surrounding so you make a kind of circular sandwich and it simply doesn't
00:24:02.060 realize that its pieces are going to die because it doesn't understand what you know what is the
00:24:07.340 structure of the groups and it has many opportunities to rescue them and it pays no attention and then
00:24:13.340 you capture 60 pieces and it's lost the game and this was something that we saw the you know our
00:24:19.580 adversarial program doing but then a human can look at that and say oh okay i can make that happen in
a game and so one of our team members is a good go player and he played this against katago which is the
00:24:31.180 best go program in the world and beat it easily and beat it with a nine stone handicap but also
00:24:37.020 turns out that all the other go programs which were trained by completely different teams using
00:24:41.660 different methods and different network structures and all the rest they all have the same problem they
00:24:46.700 all fail to recognize this circular sandwich and lose all their pieces so it seems to be not just an
00:24:53.340 accident it's not sort of a peculiar hack that we found for one particular program it seems to be
00:25:00.700 a qualitative failure of these networks to generalize properly and in that sense it's
00:25:07.420 somewhat similar to to adversarial images where we found that these systems that are supposedly
00:25:13.180 superhuman at recognizing objects are extremely vulnerable to tiny making tiny tweaks in images
00:25:20.460 that are you know those tweaks are totally invisible to a human but the system changes its mind and says
00:25:26.300 oh that's not a school bus it's an ostrich right and it's again a weakness in the way the circuits have
00:25:33.740 learned to represent the concepts they haven't really learned the visual concept of a school bus or an
00:25:39.740 ostrich because they're obviously for a human not confusable and this notion of expressive power is is
00:25:48.140 absolutely central to computer science we use it all over the place when we talk about compilers and we talk
00:25:54.220 about you know the design of hardware if you use an inexpressive representation then and you try to
00:26:01.980 represent a given concept you end up with an enormous and ridiculously over complicated
00:26:08.780 representation of that concept and that representation you know let's say it's the rules of go
00:26:15.180 in in an expressive language like python that's a page in an inexpressive language like circuits it
00:26:21.180 might be a million pages so to learn that million page representation of the rules of go requires billions
00:26:28.620 of experiences and the idea that oh well we'll just get more data and we'll just build a bigger circuit
00:26:34.940 and then we'll be able to you know learn the rules properly that just does not scale the universe doesn't
00:26:41.100 have enough data in it and we can't you know there's not enough material in the universe to build a computer
00:26:47.100 big enough to to achieve general intelligence using these inexpressive representations
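One rough way to get a feel for the scale being described (the figures below are standard back-of-envelope estimates, not numbers from the conversation):

# Naive enumeration of Go board colorings vs. a commonly cited estimate of
# the number of atoms in the observable universe.
board_colorings = 3 ** (19 * 19)   # each of the 361 points is empty, black, or white
atoms_in_universe = 10 ** 80       # rough standard estimate

print(len(str(board_colorings)))                 # 173 digits, i.e. on the order of 10**172
print(board_colorings > atoms_in_universe ** 2)  # True -- and then some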
00:26:54.460 so i'm with gary right i don't i don't think we're that close to agi and i've never said agi was imminent
00:27:01.180 you know generally i don't answer the question when do i think it's coming but i i am on the record because
someone uh violated the off the record rules of the meeting someone someone plied you with scotch
00:27:12.780 no they they literally just broke you know i was at a chatham house rules meeting and i literally
00:27:20.380 prefaced my sentence with off the record uh and 20 minutes later it appears on the daily telegraph
00:27:27.180 website so uh anyway so i was you know the daily telegraph you can look it up what i actually said
00:27:33.020 was i think it's quite likely to happen in the lifetime of my children right which you you could think
00:27:38.940 of as another way of like sometime in this century before we get into that can i jump into um sort of
wrap up stuart's point because i agree with him it was a profound result from his lab there's some
people arguing about particular go programs and so forth but i wrote an article about uh stuart's result
00:27:58.140 called david beats goliath um it was on my substack and i'll just read a paragraph and maybe we can get
back to why it's a worry so kellin pelrine i guess is the name of the player who actually beat the go
00:28:09.980 program and i said his victory is a profound reminder that no matter how good deep learning
00:28:14.940 data-driven ai looks when it is trained on an immense amount of data we can never be sure that
00:28:20.380 systems of this sort really can extend what they know to novel circumstances we see the same problem
00:28:25.580 of course with the many challenges that have stymied the driverless car industry and the batshit crazy
00:28:30.220 errors we've been seeing with the chat bots in the last week and so that piece also increased
00:28:35.660 my worry level it's a reminder that these things are almost like aliens we think we understand like
00:28:41.340 oh that thing knows how to play go but there are these little weaknesses there some of which you know
00:28:47.100 turn into adversarial attacks and some of which turn into bad driving and some of which turn into
00:28:53.420 mistakes on chat bots i think we should actually separate out genuine artificial general intelligence which
00:28:59.340 maybe comes in our lifetimes and maybe doesn't from what we have now which is this data-driven thing
that as stuart would put it is like a big circuit we don't really understand what those circuits do
00:29:10.460 and they can have these weaknesses and so you know you talk about alignment or something like that if you
00:29:14.860 don't really understand what the system does and what weird circumstances it might break down in
00:29:20.460 you can't really be that confident around alignment for that system yeah i totally this is an area you know
00:29:27.660 that my research center is now putting a lot of effort in is we have if we're going to control
00:29:33.260 these systems at all we've got to understand how they work we've got to build them according to much
00:29:39.580 more i guess traditional engineering principles where the system is made up of pieces and we know how
00:29:45.260 the pieces work we know how they fit together and we can prove that the whole thing does what it's supposed
00:29:50.940 to do and there's plenty of technological elements available from the history of ai that i think can
00:29:59.180 move us forward in ways where we understand what the system is doing but i think the same thing is
00:30:05.740 happening in gpt in terms of failure to generalize right so it's got millions of examples of arithmetic
00:30:13.500 you know 28 plus 42 is what 70 right and yet despite having millions of examples it's completely failed
00:30:23.100 to generalize so if it if you give it you know a three or four digit addition problem that it hasn't
00:30:29.100 seen before uh and particularly ones that involve carrying it fails right so i think it can actually
just to be accurate i think it can do three and four digit addition to some extent it completely
00:30:40.780 fails on multiplication at three or four digits if we're talking about minerva which is i think
00:30:44.780 the state of the art to some to some extent yeah but it i think it it works when you don't need to
00:30:51.100 carry because it's i think it's has it has figured out that you know eight plus one is nine because
00:30:57.580 it's got you know a few million examples of that but it had you know when when it involves carrying
00:31:02.460 or you get to more digits outside uh the training set it hasn't extrapolated correctly it hasn't learned
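A toy way to see that interpolation-versus-extrapolation failure, using a deliberately crude pattern-matching learner (this assumes numpy and scikit-learn are available and is only an analogy for the behavior being described, not a claim about how these language models are built):

# A nearest-neighbor learner "memorizes points in the cloud": it looks fine on
# inputs near its training data and fails badly outside that range.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X_train = rng.integers(0, 100, size=(20000, 2))  # two-digit operands only
y_train = X_train[:, 0] * X_train[:, 1]          # the underlying rule: multiplication

model = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)

print(model.predict([[23, 41]]))      # in range: roughly 943, the true product
print(model.predict([[4321, 8765]]))  # out of range: wildly wrong -- the true answer
                                      # is about 37.9 million, but the model can only
                                      # echo products of numbers it has already seen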
00:31:09.180 the same with chess it's got lots and lots of grandmaster chess games in its database but it
00:31:14.220 thinks of thinks of the game as a as a sequence of you know of notation like in in a4 d6 knight takes c3
00:31:24.940 b3 b5 right that's what a chess game looks like when you write it out as notation it has no idea that
00:31:31.820 that's referring to a chess board with pieces on it it has no idea that they're trying to checkmate each
00:31:38.780 other and you start playing chess with it it'll just make an illegal move because it doesn't even
00:31:45.180 understand what what is going on at all and the the weird thing is that almost certainly the same
00:31:51.580 thing is going on with all the other language generation that it's doing it has not figured out
00:31:57.580 that the language is about a world and the world has things in it and there are things that are true
00:32:02.380 about the world there are things that are false about the world and and you know if i if i give
00:32:09.180 uh my wallet to gary then gary has my wallet and if i he gives it back to me then i have it and he
00:32:13.980 doesn't have it it hasn't figured out any of that stuff i i completely agree i think that people tend
to anthropomorphize and i'd actually needle stuart a little bit and say he used words like think and
00:32:25.100 figure it out these systems never think and figure out they're just finding close approximations
00:32:29.980 to the text that they've seen and it's very hard for someone who's not tutored in ai to really get
00:32:35.820 that to to look at it see this very well formed output and realize that it's actually more like an
illusion than something that really understands things so stuart is absolutely right you know
00:32:46.300 it can talk about me having a wallet or whatever but it doesn't know that there's a me out there
00:32:50.300 that there's a wallet out there it's hard for people to grasp that but that's the reality and so when
00:32:56.540 it gets the math problem right people are like it's got some math and then it gets one wrong
00:33:00.460 they're like oh i guess it made a mistake but really it never got the math it's just it finds
00:33:04.540 some bit of text that's close enough some of the time that it happens to have the right answer and
00:33:08.780 sometimes not well i want to return to that point but i think i need to back up for a second and
00:33:13.980 define a couple of terms just so that we don't lose people i realize i'm assuming a fair amount of
00:33:19.500 familiarity with this topic from people who've heard previous podcasts on it but might not be fair so
00:33:24.540 quickly we we have introduced a few terms here uh we've talked about narrow ai general ai or agi or
00:33:33.420 artificial general intelligence and super intelligence and those are those are interrelated
concepts stuart do you just want to break those apart and and suggest uh what we mean by them sure so
00:33:46.060 narrow ai is the easiest to understand because that typically refers to ai systems that are developed for
00:33:53.100 one specific task for one specific task for example playing go or translating french into english or
00:33:59.980 whatever it might be and agi or artificial general intelligence or sometimes called human
00:34:07.180 level artificial intelligence or general purpose artificial intelligence would mean ai systems that can
00:34:13.660 quickly learn to be competent in pretty much any kind of task to which maybe to which the human
intellect is relevant and probably a lot more besides and then artificial super intelligence or asi would
00:34:29.500 mean systems that are far superior to humans in in all these aspects and i think this is something worth
00:34:35.900 mentioning briefly about narrow ai a lot of commentators talk as if working on narrow ai doesn't present any kind
00:34:46.060 kind of risk or problem because all you get out of narrow ai is a system for that particular task and you know you could make a hundred
00:34:54.060 narrow ai systems and they would all be little apps on your laptop and that none of them would present any risk
00:35:00.060 because all they do is that particular task i think that's a complete misunderstanding of how progress happens in ai so let me give you an
00:35:08.060 example deep learning for example deep learning which is the the basis for the you know the last decade of
00:35:14.060 of exploding ai capabilities emerged from a very very narrow ai application which is recognizing handwritten
00:35:24.060 digits on checks at bell labs in the 1990s and you know you can't really find a more narrow application than that but
00:35:34.060 whenever a good ai researcher works on a narrow task and it turns out that the task is not solvable by existing
00:35:41.500 methods they're likely to push on methods right to come up with more general more capable methods and those
methods will turn out to apply to lots of other tasks as well so it was yann lecun who who was working in the
00:35:59.420 group that worked on these handwritten digits and he didn't write a little program that sort of follows
00:36:04.780 the s around and says okay i found one bend okay let me see if i can find another bend okay good i've
00:36:10.540 got a left bend and a right bend so it must be an s right that would be a very hacky very non-general very
00:36:17.820 brittle way of of doing handwritten recognition what he did was he he just developed a technique for
00:36:25.580 training deep networks that had various kinds of invariances about images for example that
00:36:32.780 an s is an s no matter where it appears in the image you can build that into the structure of the
00:36:38.220 networks and then that produces a very powerful image recognition capability that applies to lots of
00:36:44.460 other things turned out to apply to speech and in a slightly different form is underlying what's going on in
00:36:52.780 chat gpt so don't be fooled into thinking that as long as people are working on narrow ai everything's
00:37:00.060 going to be fine yeah if i could just jump in also on the um point of general intelligence and what that
00:37:06.140 might look like chat is interesting because it's not as narrow in some ways as most traditional narrow ai
00:37:13.180 and yet it's not really general ai either it doesn't it doesn't perfectly fit into the categories and
00:37:18.540 let me explain what i mean by that so a typical narrow ai is i will fold proteins or i will play
00:37:23.820 chess or something like that really does only one thing well and anybody who's played with chat gpt
00:37:29.100 realizes it does many things maybe not super well it's almost like a jack of all trades and a master
00:37:35.020 of none so you can talk to it about chess and it will play okay chess for a little while and then
as stuart points out probably eventually break the rules because it doesn't really understand them
00:37:45.260 or you can talk to it about word problems in math and it will do some of them correctly and get some
00:37:50.300 of them wrong almost anything you want to do not just one thing like say chess it can do to some
00:37:56.140 extent but it never really has a good representation of any of those and so it's never really reliable
00:38:01.420 at any of them as far as i know there's nothing that chat gpt is fully reliable at even though it has
00:38:06.940 something that looks little like generality and obviously when we talk about artificial general
00:38:11.660 intelligence we're expecting something that's trustworthy and reliable that could actually
00:38:16.220 play chess you know let's say as well as humans are better than them or something like that they
00:38:20.460 could actually you know do word problems as well as humans are better than that and so forth and so
00:38:26.140 it gives like an illusion of generality but it's so superficial because of the way it works
00:38:31.580 in terms of approximating bits of text that it doesn't really deliver on the promise of being what
00:38:37.100 we really think of as an artificial general intelligence yes um okay so let's talk more about
00:38:45.020 the problems with narrow ai here and and we should also add that most narrow ai although chat gpt is
00:38:53.340 perhaps an exception here is already insofar as we dignify it as ai and implement it it's already
00:39:01.020 superhuman right i mean your calculator is superhuman for arithmetic and there are many
00:39:07.020 other forms of narrow ai that are just that perform better than people do and one thing that's been
00:39:14.060 surprising of late as stewart just pointed out is that superhuman ai of certain sorts like our our
best go-playing programs have been revealed to be highly imperfect such that they're less than human in
00:39:30.060 specific instances and these instances are surprising and can't necessarily be foreseen in advance and
00:39:35.740 therefore it raises this question of as we implement narrow ai because it is superhuman
00:39:42.780 it seems that we might always be surprised by its failure modes because it lacks you know common sense
00:39:48.700 it lacks a more general view of what the problem is that it's solving in the first place and so that
00:39:55.260 that obviously poses some risk for us if i could jump in for one second i think the cut right there
00:40:01.420 actually has to do with the mechanism so a calculator really is superhuman we're not going to find an
00:40:07.020 achilles heel where there's some regime of numbers that it can't do right you know within what it can
represent and so the same thing with deep blue i'd be curious if stuart disagrees but i think deep
00:40:18.140 blue is going to be able to beat any human in chess and it's not clear that we're actually going to find
00:40:22.140 an achilles heel but when we when we talk about deep learning driven systems they're very heavy on the big
00:40:27.340 data or using these particular techniques they often have a pretty superficial representation
stuart's analogy there was a python program that's concise we know that it's captured something
00:40:39.500 correctly versus this very complicated circuit that's really built by data and when we have these
00:40:44.700 very complicated circuits built by data sometimes they do have achilles heel so some narrow ai i think
00:40:50.780 we can be confident of so gps systems that navigate turn by turn there's some problems like the
00:40:56.140 map could be out of date there could be a broken bridge but basically we can trust the algorithm
00:41:00.860 there whereas these go things we don't really know how they work we kind of do and it turns out
00:41:06.620 sometimes they do have these achilles heels that are in there and those achilles heels can mean
00:41:12.380 different things in different contexts so in one context it means well we we can beat it at go
00:41:16.300 and it's a little bit surprising in other contexts it means that we're using it to drive a car and
00:41:20.940 there's a jet there and that's not in the training set and it doesn't really understand that you don't run
00:41:25.580 into large objects and doesn't know what to do with the jet and it actually runs into the jet
00:41:29.420 so that the weaknesses can manifest themselves in a lot of different ways and some of what i think
stuart and i are both worried about is that the dominant paradigm of deep learning often has these
00:41:40.060 kind of gaps in it sometimes i use the term pointillistic that they have they're like collections
00:41:45.020 of many points in some cloud and if you come close enough to the points in the cloud they usually
00:41:50.380 do what you expect but if you move outside of it sometimes people call it distribution shift
00:41:55.100 to a different point then they're kind of unpredictable so in the example of math that
stuart and i both like you know it'll get a bunch of math problems that are kind of near the points in
00:42:04.060 the cloud where it's got experience in and then you move to four digit multiplication and the cloud is
00:42:08.780 sparser and now you ask a point that's not next to a point that it knows about it doesn't really work
00:42:13.820 anymore so this this illusion oh it learned multiplication well no it didn't it just learned
00:42:18.940 to jump around these points in this cloud and that adds a enormous level of unpredictability
00:42:24.620 that makes it hard for humans to reason about what these systems are going to do and surely there
are safety consequences that that arise from that and something else stuart said that i really
00:42:34.540 appreciated is in the old days in classical ai we had engineering techniques around these you
00:42:40.140 you built modules you knew what the modules did there were problems then too i'm not saying it was
00:42:44.060 all perfect but the dominant engineering paradigm right now is just get more data if it doesn't work
00:42:49.820 and that's still not giving you transparency into what's going on and it can be hard to debug and so
00:42:55.260 like okay now you built this go system and you discover it can't build can't beat humans doing
00:43:00.460 this thing what do you do well now you have to collect some data pertaining to that but is it
00:43:04.220 going to be general you kind of have no way to know maybe there'll be another attack tomorrow
00:43:08.060 that's what we're seeing the driverless car industry is like their adversaries may be of
00:43:12.860 a different sort they're not deliberate but you find some error and then people try to collect more
00:43:17.820 data but there's no systematic science there like you can't tell me are we a year away or 10 years
00:43:23.980 away or 100 years away from driverless cars by kind of plotting out what happens because most of
00:43:29.900 what matters are these outlier cases we don't have metrics around them we don't have techniques for
00:43:34.220 solving them and so it's this very empirical we'll try stuff out and hope for this best methodology
and i think stuart was reacting to that before and i certainly worry about that a lot that we don't
00:43:44.540 have a a sound methodology where we know hey we're getting closer here and we know that we're not going
00:43:49.980 to asymptote before we get to where we want to be okay so it sounds like you both have doubts as to
00:43:56.140 whether or not the current path of of reliance on deep learning and similar techniques to scale
00:44:04.380 is not going to deliver us to the the promised land of agi whether aligned with our interests or not
00:44:12.220 it's just it we need more to actually be able to converge on something like general intelligence because
00:44:19.260 these networks as powerful as they seem to be in certain cases they're exhibiting obvious failures of
00:44:25.740 abstraction and they're not learning the way humans learn and we're discovering these failures
00:44:32.140 perhaps to the comfort of people who are terrified of the agi singularity being reached again i want
00:44:38.540 to keep focusing on the problems and potential problems with narrow ai so there's two issues here
00:44:44.060 there's narrow ai that fails that doesn't do what it purports to do and then there's just narrow ai
00:44:49.100 that is applied in ways that prove pernicious but intentionally or not you know bad actors or
00:44:56.620 good actors you know reaching unintended consequences let's focus on chat gpt for another
00:45:04.300 moment or so and or or things like chat gpt yeah i mean many people have pointed out that this seems to
00:45:10.620 be a you know potentially a thermonuclear bomb of misinformation right and we already have such an
00:45:17.580 enormous misinformation problem just letting the apes concoct it now we have we have created a
00:45:23.900 technology that that makes the cost of producing nonsense uh a nonsense that passes for knowledge
00:45:31.660 almost go to zero what are your concerns about where this is all headed you know where narrow ai of this
00:45:39.100 sort is headed in both of its failure modes its failure to do what it's attempting to do that is it's
00:45:45.260 making inadvertent errors or it's just it's failure to be applied you know ethically and wisely and
00:45:52.220 uh we however effective it is we plunge into the part of the map that is just um you know bursting
00:45:59.340 with uh with unintended consequences yeah i i find all of this terrifying it's maybe worth speaking for
00:46:05.580 a second just to separate out two different problems you kind of hinted at it so one problem is that
00:46:10.220 these systems hallucinate even if you give them clean data they don't keep track of things like
00:46:15.420 the relations between subjects and predicates or entities and their properties and so they can just
00:46:20.060 make stuff up so an example of this is a system can say that elon musk died in a car crash in 2018
00:46:27.100 that's a real error from a system called galactica and that's contradicted by the data in the training
00:46:32.140 set it's contradicted by things you could look up in the world and so that's a problem where these
00:46:36.620 systems hallucinate then there's a second problem which is the bad actors can induce them to make as
00:46:42.860 many copies or variants really of any specific misinformation that they might want so if you
00:46:48.700 want a q anon perspective on the january 6th events well you can just have the system make that and
you can have it make a hundred versions of it or if you want to make up propaganda about covid
00:46:58.940 vaccines you can make up a hundred versions each mentioning studies in lancet and jama with data and
00:47:06.940 so forth all of the data made up the study is not real and so for a bad actor it's kind of a dream come
00:47:12.700 true so there's two different problems there on the first problem i think the worst consequence is that
00:47:18.220 these chat style search engines are going to make up medical advice people are going to take that medical
00:47:22.620 advice and they're going to get hurt on the second one i think what's going to get hurt is democracy
00:47:27.180 because the result is going to be there's so much misinformation nobody's going to trust anything
00:47:32.300 and if people don't trust that there's some common ground i don't think democracy works and so i think
00:47:37.500 there's a real danger to our social fabric there so both of these issues really matter and it comes
00:47:43.740 down to in the end that if you have systems that approximate the world but have no real representation of
00:47:48.700 the world at all they can't validate what they're saying so they can be abused they can make mistakes
00:47:54.860 it's not a great basis i think for ai it's certainly not what i had hoped for
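For a very small illustration of what it would mean to have a representation you can validate claims against (everything here, including the toy fact store and the check_claim helper, is hypothetical and only meant to make the contrast concrete):

# A system with even a tiny explicit world model can reject claims that
# contradict it; a pure text-continuation system has nothing to check against.
FACTS = {
    ("Elon Musk", "died_in"): None,   # no recorded death
    ("Elon Musk", "born_in"): 1971,
}

def check_claim(subject, relation, value):
    # Returns True/False when the fact store covers the claim, else "unknown".
    if (subject, relation) not in FACTS:
        return "unknown"
    return FACTS[(subject, relation)] == value

print(check_claim("Elon Musk", "died_in", 2018))    # False -- contradicts the store
print(check_claim("Elon Musk", "born_in", 1971))    # True
print(check_claim("Elon Musk", "ceo_of", "Tesla"))  # "unknown" -- not covered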
stuart so i i think i have a number of points but i just wanted to sort of go back to something you
00:48:05.500 were saying earlier about the fact that the current paradigm may not lead to the promised land
00:48:11.740 when i i think that's true i think some of the properties of chat gpt have made me less confident
00:48:19.260 about that claim because it it you know it it's an empirical claim as i said this you know sufficiently
00:48:26.620 large circuits with sufficient recurrent connections can implement turing machines and can
00:48:34.060 learn these higher level more expressive representations and and build interpreters for
00:48:38.540 them they can emulate them they don't really learn them well no they can actually they can do that right
00:48:43.260 i mean think about your laptop your laptop is a circuit but it's a circuit that supports these
00:48:48.860 higher level abstractions your brain is a circuit but it's a that's right it's a it's a question of
00:48:53.740 representation versus learning right a circuit that supports that so it can learn those internal
00:48:59.420 structures which support representations that are more expressive and can then learn in those more
00:49:05.180 expressive representations so theoretically it's possible that this can happen well but what we always see
00:49:10.940 in reality is your example before about the four digit arithmetic like the systems don't in fact
00:49:16.220 converge on the sufficiently expressive representations they just always converge on these things that are
00:49:21.420 more like masses of conjunctions of different cases and they leave stuff out so i think i'm not
00:49:26.940 saying no learning system could do that but these learning systems don't well we don't know that right we
00:49:33.340 see some failures but we also see some remarkably capable behaviors that are quite hard to explain
00:49:41.660 as just sort of stitching together bits of text from the training set i mean i think we're going to
00:49:46.780 disagree there it's up to sam how far he wants us to go down that rabbit hole well actually so let's just
spell out the point that's being made i also don't want to lose stuart's reaction to
00:49:57.260 yes general concerns about narrow ai but i think this is an interesting point intellectually so yes
00:50:05.980 there's some failure to use symbols or to recognize symbols or to generalize and it's easy to say things
00:50:15.260 like you know here's a system that is playing go better than than any person but it doesn't know what
00:50:22.220 go is or it doesn't know there's anything beyond this grid it doesn't recognize the groups of pieces
00:50:28.620 etc but on some level the same can be said about the subsystems of the human mind right i mean like
00:50:36.780 you know yes we use symbols but the level at which symbol use is instantiated in us in our brains is not
00:50:45.420 itself symbolic right i mean there is a there is a reduction to some piecemeal architecture i mean
00:50:50.540 you know there's just atoms in here right at the bottom is just atoms it's true right there's
00:50:55.980 nothing magical about having a meat-based computer in the case of your laptop if you want to talk
00:51:01.820 about something like i don't know the folder structure in which you store your files it actually
00:51:06.380 grounds out and computer scientists can walk you through the steps we could do it here if you really
00:51:10.460 wanted to of how you get from a set of bits to a hierarchical directory structure and that
00:51:16.620 hierarchical directory structure can then be computed over so you can for example move a
00:51:21.980 subfolder to inside of another subfolder and we all know the algorithms for how to do that but the
00:51:27.980 point is that the computer has essentially a model of something and it manipulates that model so it's a
00:51:34.140 model of where these files are or representation might be a better word in that case humans have models of
00:51:39.740 the world so i have a model of the two people that i'm talking to and their their backgrounds and the
00:51:46.140 their beliefs and desires to some extent it's going to be imperfect but i have such a model and what i
00:51:51.980 would argue is that a system like chat gpt doesn't really have that and in any case if even if you could
00:51:57.580 convince me that it does which would be a long uphill battle we certainly don't have access to it so that
00:52:02.540 we can use it in reliable ways in downstream computation the output of it is a string whereas in the case of my
00:52:09.580 laptop we have you know very rich representations i'll ignore some stuff about virtual memory that
00:52:14.540 make it a little bit complicated and we can go dig in and we know like which part of the representation
00:52:20.700 stands for a file and what stands for a folder and and how to manipulate those and so forth we don't
00:52:26.140 have that in these systems what we have is a whole bunch of parameters a whole bunch of text
00:52:31.180 and we kind of hope for the best yeah so i'm not disagreeing that we don't understand how it works
00:52:36.940 but by the same token given that we don't understand how it works it's hard to rule out the
00:52:43.260 possibility that it is developing internal representational structures which may be of a
00:52:49.500 type that we wouldn't even recognize if we saw them they're very different and we see we have a lot
00:52:54.860 of evidence that bears on this for example all of the studies of arithmetic or guy van den broeck's work
00:53:01.180 on reasoning where if you control things the reasoning doesn't work properly in any domain where we can look
00:53:06.700 or math problems or anything like that we always see spotty performance we always see hallucinations
00:53:12.780 they always point to there not being a deep rich underlying representation of any phenomena
00:53:18.220 that we're talking about so to my mind yes you can say there are representations there but they're
00:53:22.780 not like world models they're not world models that can be reliably interrogated and acted on
00:53:28.700 and we just see that over and over again okay i think we're gonna just agree to disagree on that
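
To ground Gary's laptop analogy above, here is a minimal illustrative sketch (names and structure invented for this example, not taken from the conversation) of the kind of explicit, inspectable representation he is contrasting with an opaque mass of parameters: a directory tree whose parts we can identify and manipulate with known algorithms.

```python
# A toy sketch of an explicit, manipulable representation: we know exactly
# which piece stands for a file and which for a folder, and we can compute
# over that structure reliably, unlike an opaque bag of learned weights.
from dataclasses import dataclass, field

@dataclass
class Folder:
    name: str
    files: list = field(default_factory=list)
    subfolders: dict = field(default_factory=dict)   # name -> Folder

def move_subfolder(root: Folder, child_name: str, dest_name: str) -> None:
    """Move root/child_name so it becomes root/dest_name/child_name."""
    child = root.subfolders.pop(child_name)           # we know exactly where it lives
    root.subfolders[dest_name].subfolders[child_name] = child

home = Folder("home", subfolders={
    "photos": Folder("photos", files=["cat.jpg"]),
    "archive": Folder("archive"),
})
move_subfolder(home, "photos", "archive")
assert "photos" in home.subfolders["archive"].subfolders  # reliably interrogable
```

The contrast Gary draws is that with a large language model there is no comparably identifiable structure we can point to and compute over in this way; there is a large set of learned parameters and the text it emits.
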
00:53:34.060 but the the point i wanted to make was that you know if if gary and i are right and we're really
00:53:41.020 concerned about the existential risk from agi we should just keep our mouths shut right we should
00:53:47.980 let the world continue along this line of bigger and bigger deep circuits well yeah i think that's the
00:53:54.140 really interesting that's the really interesting question i wanted your take on stuart and it goes back
00:53:59.180 to the word sam used about promised land and the question is is agi actually the promised land we
00:54:04.940 want to get to so i've kind of made the argument that we're living in a land of very unreliable ai and
00:54:10.940 said there's a bunch of consequences for that like we have chat search it gives bad medical advice somebody
00:54:15.820 dies and so i have generally made the argument but i'm really interested in stuart's take on this
00:54:21.180 that we should get to more reliable ai where it's transparent it's interpretable it kind of does the
00:54:26.220 things that we expect so if we ask it to do four digit arithmetic it's going to do that
00:54:31.340 which is kind of the classical computer programming paradigm where you know you have subroutines or
00:54:36.780 functions and they do what you want to do and so i'm kind of pushed towards let's make the ai more
00:54:42.220 reliable and there is some sense in which that is more trustworthy right you know that's going to do
00:54:47.820 this computation but there's also a sense in which maybe things go off the rail at that point that i
00:54:52.380 think stuart is is interested and so stuart might make the argument let's not even get to agi
00:54:58.140 i'm like hey we're in this lousy point with this unreliable ai surely it must be better if we get to
00:55:03.580 reliable ai but stuart i think sees somewhere along the way where we get to a transition where
00:55:08.940 yes it reliably does its computations but also it poses a new set of risks is that right stuart
00:55:14.860 and do you want to spell that out i mean the you know if if we believe that building bigger and
00:55:20.860 bigger circuits isn't going to work and instead we push resources into let's say methods based
00:55:27.260 on probabilistic programming which is a symbolic kind of representation language that includes
00:55:33.340 probability theory so it can handle uncertainty it can do learning it can do all these things
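
As a rough, hand-rolled illustration of what Stuart means by a symbolic representation language that includes probability theory, the sketch below writes a tiny model down explicitly and handles uncertainty by exact inference; it is a toy with assumed numbers, not a real probabilistic programming system and not Stuart's own example.

```python
# A toy illustration of the probabilistic-programming idea: the model is
# written as explicit symbolic structure plus probabilities, and uncertainty
# is handled by inference over that structure rather than by an opaque circuit.

def joint(disease: bool, positive: bool) -> float:
    """Prior over disease, likelihood of a positive test given disease (assumed numbers)."""
    p_disease = 0.01 if disease else 0.99
    p_test = (0.95 if positive else 0.05) if disease else (0.05 if positive else 0.95)
    return p_disease * p_test

def posterior_disease_given_positive() -> float:
    """Exact inference by enumerating the tiny model."""
    num = joint(True, True)
    den = joint(True, True) + joint(False, True)
    return num / den

print(posterior_disease_given_positive())  # ~0.16: uncertainty handled explicitly
```
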
00:55:38.700 but there are still a number of restrictions on on our ability to use probabilistic programming to
00:55:44.060 achieve agi but suppose we say okay fine well we're going to put a ton of resources
00:55:50.220 into this much more engineering based semantically rigorous component composition kind of technological
00:55:58.460 approach and if we succeed right we still face this problem that now you build a system that's
00:56:03.980 actually more powerful than the human race uh how do you how do you have power over it and so i think
00:56:11.420 the reason to just keep quiet would be give us more time to solve the control problem before we make the
00:56:18.300 final push towards agi against that and if i'm being intellectually honest here i don't know the
00:56:26.220 right answer there so i think we can for the rest of our conversation take probabilistic programming as
00:56:31.020 kind of standing for the kinds of things that might produce more reliable systems like i'm talking about
00:56:36.060 there are other possibilities there but it's fine for present purposes the question is if we could get to
00:56:41.420 a land of probabilistic programming that at least is transparent it generally does the things we expect it to do
00:56:48.300 is that better or worse than the current regime and stuart is making the argument that we don't know how
00:56:54.780 to control that either i mean i'm not sure we know how to control what we've got now but that's an
00:56:58.620 interesting question yeah so so let me give you an example let me let me give you a simple example
00:57:03.260 of systems that are doing exactly what they were designed to do and having disastrous consequences
00:57:09.260 um and those the the recommender system algorithm so in social media uh let's take youtube for example
00:57:17.740 when you watch a video on youtube it it loads up another video for you to watch next how does it choose
00:57:23.340 that well that's the learning algorithm and it's watched the behavior of millions and millions of youtube
00:57:29.900 users and which videos they they watch when they're suggested and which videos they ignore or watch a different
00:57:37.020 video or even check out of youtube altogether and those learning algorithms are designed to optimize
00:57:44.540 engagement right how much time you spend on the platform how many videos you watch how many ads you
00:57:49.900 click on and so on and they're very good at that so it's not that they have unpredictable failures like
00:57:55.820 they sort of get it wrong all the time and they don't really have to be perfect anyway right they just
00:58:01.020 have to be you know considerably better than than just loading up a random video and the problem is
00:58:07.260 that they're very good at doing that but that goal of engagement is not aligned with the interests of
00:58:15.340 the users and the way the algorithms have found to to maximize engagement is not just to pick the right
00:58:22.780 next video but actually to pick a whole sequence of videos that will turn you into a more predictable
00:58:29.900 victim and so they're literally brainwashing people so that once they're brainwashed the system is going
00:58:38.140 to be more successful at keeping them on the platform i completely agree they're like drug dealers and so so
00:58:47.340 this is the problem right that if we made that system much better maybe using probabilistic programming if
00:58:53.980 that system understood that people exist and they have political opinions if the system
00:58:58.860 understood the content of the video then they will be much much more effective at this brainwashing
00:59:04.860 task that they've been set by the social media companies and that would be disastrous right it
00:59:10.940 wouldn't be a promised land it would be a disaster so stuart i agree with that example in its entirety
00:59:16.860 and i think the question is what lessons we draw from it so i think that has happened in the real world
00:59:21.820 it doesn't matter that they're not optimal at it they're pretty good and they've done a lot of harm
00:59:26.460 and those algorithms we do actually largely understand so i i accept that example it seems to
00:59:32.700 me like if you have agi it can certainly be used to to good purposes or bad purposes that's a great
00:59:39.980 example where it's to the you know the good of the owner of some technology and the bad of society
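
A minimal sketch, with invented videos and numbers, of the objective mismatch Stuart describes: a recommender trained only to maximize engagement will favor whatever keeps people watching, whether or not it serves their interests.

```python
# A toy sketch (illustrative numbers only) of an engagement-only objective:
# nothing in the objective mentions the user's interests or wellbeing.
import random

# hypothetical catalogue: (video, expected minutes watched, good_for_user)
CATALOGUE = [
    ("calm documentary", 4.0, True),
    ("useful tutorial", 6.0, True),
    ("outrage clip", 11.0, False),
    ("conspiracy rabbit hole", 14.0, False),
]

def recommend(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice that maximizes expected engagement only."""
    if random.random() < epsilon:                  # occasional exploration
        return random.choice(CATALOGUE)[0]
    return max(CATALOGUE, key=lambda v: v[1])[0]   # exploit: most minutes watched

# the "conspiracy rabbit hole" wins almost every time, because holding
# attention is the only thing being optimized
print(recommend())
```
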
00:59:45.340 i could envision an approach to that and i'm curious what you think about it and it doesn't really
00:59:50.860 matter whether they're where they have you know decent ai or great ai in the sense of being able
00:59:55.500 to do what it's told to do um there's already a problem now you could imagine systems that could
01:00:00.700 compute the consequences for society sort of an asimov's laws approach maybe taken to an extreme
01:00:07.260 they would compute the consequences for society and say hey i'm just not recommending that you do this
01:00:12.700 and the strong version just wouldn't do it and a weak version would say hey here's why you shouldn't
01:00:16.460 do it this is going to be the long-term consequence for democracy that's not going to be good for your
01:00:20.940 society we have an axiom here that democracy is good so you know one possibility is to say if we're
01:00:27.260 going to build agi it must be equipped with the ability to compute consequences and represent certain
01:00:32.860 values and reason over them what's your take on that stuart well that assumes that it's possible for
01:00:38.780 us to to write down in some sense the um the utility function of the human race we can see
01:00:45.980 the initial efforts there in how they've tried to put guard rails on chat gpt where you ask it to
01:00:52.700 utter a racial slur and it won't do it even if the fate of humanity hangs in the balance right so that
01:00:59.660 like if in so far yeah i mean that that's not really true that's a particular example in a particular
01:01:05.900 context you can still get whatever the point being right we we've not been very successful you know
01:01:12.620 we've been trying to write tax law for six thousand years we still haven't succeeded in
01:01:17.260 writing tax law that doesn't have loopholes right but so i mean i always worry about a slippery slope
01:01:23.500 argument at this point so it is true for example that we're not going to get uniform consensus on
01:01:28.060 values that we've never made a tax code work but i don't think we want anarchy either and i think the
01:01:33.900 state that we have now is either you have systems with no values at all that are really reckless or you
01:01:40.220 have the kind of guard rails based on reinforcement learning that are very sloppy and don't really
01:01:45.900 do what you want to do or in my view we look behind door number three which is uncomfortable in itself
01:01:51.260 but which would do the best we can to have some kind of consensus values and try to work according to
01:01:57.020 those consensus values well i i think there's a door number four and i don't think door number three works
01:02:02.860 because really there are sort of infinitely many ways to write the wrong objective
01:02:10.940 and only you can say that about society and you know no but we're not we're not doing great but
01:02:17.180 we're you know better than anarchy i mean it's it's it's the churchill line about democracy is you
01:02:22.060 know the best of some lousy options yeah but that's that's because individual humans tend to be of
01:02:27.340 approximately equal capability and if one individual human starts doing uh really bad things then the
01:02:34.060 other ones sort of you know tend to react and squish them out it doesn't always work we've certainly had
01:02:40.700 you know near near total disasters even with humans of average ability but once we're talking about
01:02:48.700 ai systems that are far more powerful than the entire human race combined then the human race is in the
01:02:54.780 position that um you know as as samuel butler put it in 1872 that the beasts of the field are in with
01:03:02.140 respect to humans that we would be entirely at their mercy can we separate two things stuart before
01:03:08.460 we go to door number four yes which are intelligence and power so i think for example our last president
01:03:14.860 was not particularly intelligent he wasn't stupid but he wasn't the brightest in the world but he had a
01:03:19.580 lot of power and that's what made him dangerous was the power not the sheer intellect and so sometimes
01:03:24.460 i feel like in these conversations people confound super intelligence with what a system is actually
01:03:30.780 enabled to do with what it has access to do and so forth i at least i think it's important to to
01:03:36.460 separate those two out so i worry about even dumb ai like we have right now having a lot of power like
01:03:43.020 there's a startup that wants to attach all the world software to large language models and there's
01:03:47.340 a new robot company that i'm guessing is powering their humanoid robots with large language models that
01:03:52.460 terrifies me maybe not on the existential you know threat to humanity level but the level of there's
01:03:58.460 going to be a lot of accidents because those systems don't have good models i so so so gary i completely
01:04:04.220 agree right i mean i wrote an op-ed called we need an fda for algorithms about six years ago i think no i
01:04:11.420 need to read it so sorry stuart i i just i think we should should hold the conversation on agi for a moment
01:04:18.460 yet but i would just point out that that separation of concepts intelligence and power might only run
01:04:24.540 in one direction which is to say that yes for narrow ai you can have it become powerful or not depending
01:04:29.980 on how it's hooked up to the rest of the world but for true agi that is superhuman one could wonder
01:04:36.300 whether or not intelligence of that sort can be constrained i mean then you're in relationship to this
01:04:41.900 thing that you know are you are you sufficiently smart to keep this thing that is much smarter than
01:04:46.620 you from doing whatever it intends to do one could wonder i've never seen an argument that compels me
01:04:52.540 to think that that's not possible i mean like go programs have gotten much smarter but they haven't
01:04:57.580 taken more power over the world no but yeah but they're not general that's that's honestly gary
01:05:01.900 that's a ridiculous example right so yeah wait wait wait stuart before we plunge in i want to get
01:05:08.300 there i want to get there i just don't want to i want to extract whatever lessons we can over this
01:05:13.820 recent development in narrow ai and then then i promise you we're going to be able to fight about
01:05:18.700 agi in in in a mere matter of minutes but uh all right stuart so we have got a few files open here i
01:05:26.940 just want to acknowledge them one is you you suggested that if in fact you think that this path of throwing
01:05:32.700 more and more resources into deep learning is going to be a dead end with respect to agi
01:05:37.980 and you're worried about agi maybe it's ethical to simply keep your mouth shut or or even cheerlead
01:05:44.860 for the promise of deep learning so as to stymie the whole field for another generation while we
01:05:49.420 figure out the control problem did i read you correctly there when you i think there's that's
01:05:54.140 a possible argument that has occurred to me and and and people have put it to me as well okay it's a
01:05:59.580 difficult question i think it's hard to rule out the possibility that the the present direction will
01:06:08.860 eventually pan out but it would pan out in a much worse way because it would lead to systems
01:06:16.620 that were extremely powerful but whose operation we completely didn't understand right where we have
01:06:22.220 no way of of specifying objectives to them or or even finding out what objectives they're actually
01:06:28.460 trying to pursue because we can't look inside we don't even understand the principles of operation
01:06:33.820 but so okay so let's table that for a moment we're going to we're going to talk about agi but on this
01:06:39.900 issue of narrow ai getting more and more powerful especially i'm especially concerned and i know gary is
01:06:47.660 about the information space and again i because i i just view what you know ordinary engagement with
01:06:54.700 social media is doing to us as you know more or less entirely malignant and the algorithms you know
01:07:03.020 as simple as they are and as diabolically effective as they are have already proven sufficient to test
01:07:11.660 the the very fabric of society and and the the long-term prospects of democracy but there are other
01:07:17.980 there are other elements here so for instance the algorithm is effective and employed in the context of
01:07:24.620 what i would consider the the perverse incentives of the business model of the internet the fact
01:07:29.420 that everything is based on ads which gives the logic of you know endlessly gaming people's attention
01:07:35.580 you know if we solve that problem right if we decided okay this is it's the ad-based model that's
01:07:41.340 pernicious here would the problem of the very narrow problem you pointed to well you know with the
01:07:47.500 with youtube algorithms say would that go away or do you are you still just as worried in some new
01:07:54.620 by some new rationale about that problem my view and then stuart can jump in with his my view is that
01:08:02.140 the problems around information space and social media are not going away anytime soon that we need
01:08:09.340 to build new technologies to detect misinformation we need to build new regulations to make a cost for
01:08:15.900 producing it in a wholesale way and then there's this whole other question which is like right now maybe
01:08:21.580 stuart and i could actually agree that we have this sort of mediocre ai can't fully be counted on has
01:08:28.860 a whole set of problems that goes with it and then really the question is there's a different set of
01:08:33.580 problems if you get to an ai that could for example say for itself you know i don't really want to be
01:08:40.380 part of your algorithm because your algorithm is going to have these problems like that opens a whole new
01:08:46.220 can of worms i think stuart is terrified about them and i'm not so sure as to not be worried about
01:08:51.740 them you know i'm a little bit less concerned than stuart but i can't in all intellectual honesty say
01:08:58.140 that no problems lie there i mean maybe there are problems that lie there so long before we get an
01:09:03.580 algorithm that can reflect in that way just imagine a fusion of what we almost have with chat gpt with
01:09:12.540 deep fake video technology right so you can just get endless content that is a persuasive simulacrum
01:09:21.980 of you know real figures saying crazy things yeah i mean this is minutes away not years away i have an
01:09:27.900 editor of the major paper this is not an ordinary editor a special role but a paper everybody knows
01:09:33.900 he read a debate that i got in on twitter two days ago where i said i'm worried about these things and
01:09:39.340 somebody said ah it's not a problem and he showed me how in like four minutes he could make a fake
01:09:44.860 story about like antifa protesters causing the january 6th thing using his company's templates and an
01:09:51.580 image from midjourney and it looked completely authentic like this can be done at scale right now
01:09:57.340 there are dissemination questions but we know that for example you know russia has used armies of of
01:10:03.980 troll farms and lots of you know iphones and fake accounts and stuff like that so this is like an
01:10:09.100 imminent problem it will affect it's it's a past problem right it's an ongoing problem it is here
01:10:17.100 it's already happened many times in fact if you if you go on google news and look at the fact check
01:10:22.380 section just in the last day there have been faked videos of president biden saying that all 20 to 22
01:10:29.820 year olds in the united states will be drafted to fight in the war oh my god this is just here now
01:10:36.300 and it's a question of scope and spread and regulation this doesn't require really further
01:10:41.900 advances in ai what we have now is already sufficient to cause this problem and is and it's going i think
01:10:48.860 i think sam's point is it's going to explode as the capabilities of these tools and their availability
01:10:56.780 increase absolutely i i would completely agree i i think you know i i don't want to give the
01:11:01.980 impression that i only care about extinction risk and none of this other stuff matters i spend a ton
01:11:07.420 of time actually working on lethal autonomous weapons which again already exist despite the russian
01:11:14.220 ambassador's claim that this is all science fiction and won't even be an issue for another 25 years
01:11:20.300 yeah it's just nonsense as he was saying that uh you know there was a turkish company that was
01:11:25.900 getting ready to announce a drone capable of fully autonomous hits on human targets so so i think the
01:11:33.900 solution here and i and i i have a subgroup within my center at berkeley that is specifically working
01:11:40.860 on this headed by jonathan stray the solution is very complicated it's an institutional solution it
01:11:48.380 probably involves third you know setting up some sort of third party infrastructure much as you know in
01:11:54.780 real estate there's a whole bunch of third parties like title insurance land registry notaries who
01:12:01.740 exist to make sure that there's enough truth in the real estate world that it functions as a market
01:12:07.900 same reason we have accountants and auditing in the stock market so there's enough truth that it functions
01:12:14.860 as a market we just haven't figured out how to deal with this avalanche of disinformation and deepfakes
01:12:22.060 but it's going to require similar kinds of institutional solutions and our politicians have
01:12:28.060 to get their hands around this and make progress because otherwise i i seriously worry about democracies
01:12:35.260 all over the world the only thing i can add to what stuart said is all of that with the word yesterday
01:12:40.460 like we don't have a lot of time to sort this out if we wait till after the 2024 election
01:12:46.140 that might be too late we really need to move on this i have to think the business model of the
01:12:51.180 internet has something to do with this because if if there was no money to be made by gaming people's
01:12:56.940 attention with misinformation that's not to say it would it would never happen but the incentive would
01:13:03.580 evaporate right and i mean there's a reason why this doesn't happen on netflix right there's a reason
01:13:08.940 why we're not having a conversation about how netflix is destroying democracy in the way it it
01:13:15.660 serves up each new video to you and it's because there's no incentive i mean i guess there's they've
01:13:21.580 been threatening to move to ads in certain markets or maybe they have done so this could go away
01:13:26.220 eventually but you know heretofore there's been no incentive for netflix to try to figure out i mean
01:13:31.340 they're trying to keep you on the platform because they want you not to churn they want you to end
01:13:36.220 every day feeling like netflix is an integral part of your life but it's there there is no incentive
01:13:42.380 they want you to binge watch for 38 hours straight exactly yeah so entirely innocent in this no yeah so
01:13:48.540 it's but but it's not having the effect of giving them a rationale to serve you up insane confections
01:13:56.940 of pseudoscience and overt lies so that someone else can drag your attention for moments or
01:14:06.060 hours because it's their business model because they've sold them the right to do that on
01:14:11.820 their platform it's not entirely an internet phenomenon in the sense that like fox news also
01:14:18.060 has a kind of engagement model that it does center around in my view maybe i get sued for this but
01:14:24.700 center around misinformation so for example you know we know that the executives there were not all on
01:14:30.300 board for the big lie about the election but they thought that you know maybe it was good for ratings or
01:14:35.260 something like that yeah i mean you could look at the weekly world news right that that was a
01:14:39.500 that's right and go back to yellow journalism in the ordinary print outlet which every week would tell
01:14:45.260 you that you know the creatures of hell have been photographed emerging from cracks in the streets
01:14:50.780 of los angeles and you name it right right so what happened historically the last time we were
01:14:57.020 this bad off was the 1890s with yellow journalism hearst and all of that and that's when people started
01:15:03.500 doing fact checking more and we might need to revert to that to to solve this we might need to have a
01:15:09.420 lot more fact checking a lot more curation rather than just random stuff that shows up on your feed and
01:15:14.940 is is not in any way fact checked that might be the only answer here at some level probably taking it
01:15:20.540 not so not saying okay facebook has to fact check all the stuff or google has to fact check all the
01:15:25.660 stuff but facebook has to make available filters where i can say okay i don't want stuff in my news feed
01:15:32.860 that hasn't passed some uh some basic standard of of accountability and accuracy and it could be
01:15:40.060 voluntary right there's a business model so come coming back to this business model question i think
01:15:46.140 that the tech companies are understanding that you know the the digital banner ad has become uh pretty
01:15:54.540 ineffective and advertisers are also starting to understand this and i think when you look at the
01:15:59.820 metaverse and say well what on earth is the business model here right why are they spending billions
01:16:04.940 and billions of dollars and i went to a conference in south korea where the business model was basically
01:16:10.940 revealed by the previous speaker who was an ai researcher who is very proud of being able to use
01:16:18.140 chat gpt like technology along with the fact that you're in the metaverse so you have these avatars
01:16:24.380 to create fake friends so these are people who are their avatars who appear to be avatars of
01:16:31.740 real humans who spend weeks and weeks becoming your friend learning about your family telling you about
01:16:36.860 their family blah blah blah and then occasionally will drop into the conversation that they just got
01:16:41.900 a new bmw or they really love their rolex watch blah blah blah right so the digital banner ad is replaced
01:16:50.060 by the chat gpt driven fake human in the metaverse and goes from 30 milliseconds of trying to convince
01:16:56.860 you to buy something to six weeks right that's the business model of the metaverse and this would
01:17:03.180 be you know this would be far more effective far more insidious and destructive although when you
01:17:10.140 think about it it's what happened it's what people do to one another anyway i mean there's like product
01:17:14.940 placement in relationships there's a little bit of that but they're really expensive right i mean you
01:17:19.980 know an influencer on on youtube you know you have to pay them tens of thousands of dollars you know to
01:17:26.460 get 10 20 seconds of product placement out of them but these are these are quasi humans you know they
01:17:33.900 cost pennies to run and they can take up hours and hours and hours of somebody's time and interestingly the
01:17:41.020 the european union the ai act has a strict ban on the impersonation of human beings so you you always
01:17:49.740 have a right to know if you're interacting with a real person or with a machine and i think this is
01:17:56.140 something that will be extremely important it sounds like yeah okay it's not it's not a big risk right now
01:18:02.940 but i think it's going to become an absolute linchpin of you know human freedom in the coming
01:18:11.100 decades i tend to think it's going to be a story of ai to the rescue here where the only way we can
01:18:16.540 detect deep fakes and and other um you know sources of misinformation in the future will be to have
01:18:24.780 sufficiently robust ai that can go to war against the other ais that are creating all the misinformation
01:18:31.980 i think it's a useful tool but i think what we need actually is provenance i think video for
01:18:38.140 example that's generated by a video camera is watermarked and time stamped and location coded and
01:18:45.340 so if a video is produced that doesn't have that and it doesn't match up cryptographically with you
01:18:51.740 know with the real camera and so on then it's just filtered out so it's much more that it doesn't even
01:18:59.100 appear unless it's verifiably real it's not that you get you let everything appear and then you try
01:19:06.060 to sort of take down the stuff that's fake it's much more of a sort of positive permission to appear
01:19:12.060 based on authenticated provenance i think that's the right way to go i i think we should definitely
01:19:18.140 do that for video i think that for text we're not going to be able to do it people cut and paste
01:19:22.940 things from all over the place we're not really going to be able to track them it's going to be too
01:19:26.300 easy to beat the watermark schemes we should still try but i think we're also going to need to look
01:19:31.580 at content and do the equivalent of fact checking and i think that ai is important because the scale
01:19:37.020 is going to go up and we're not going to have enough humans to do it we're probably going to
01:19:40.220 need humans in the loop i don't think we can do it fully by machine but i think that it's going to be
01:19:45.100 important to develop new technologies to try to evaluate the content and try to validate it in
01:19:50.380 something like the way that a traditional fact checker might do yeah i mean also i think the
01:19:56.620 the text is probably more validation of sources so at least until recently there are there are trusted
01:20:05.100 sources of news and we trust them because you know if a journalist was to to generate a bunch of fake
01:20:11.580 news they would be found out and they would be fired and i think we could probably get agreement on
01:20:19.420 certain standards of operation of that type and then if if the platforms provide the right filters
01:20:27.100 then i can simply say i'm not interested in news sources that don't subscribe to these standardized
01:20:34.140 principles of operation i'm less optimistic about that particular approach because we've had it for
01:20:41.500 several years and most people just don't seem to care anymore in the same way that most people don't
01:20:45.580 care about privacy anymore most people just don't care that much about sources um i would like to
01:20:51.740 see educational campaigns to teach people ai literacy and web literacy and so forth and you know
01:20:57.500 hopefully we make some progress on that i think labeling particular things as being false or i think
01:21:03.100 the most interesting ones are misleading has some value in it so a typical example of something that's
01:21:08.300 misleading is if robert kennedy says that somebody took a covid vaccine and then they got a seizure
01:21:13.740 the facts might be true but there's an invited inference that taking covid vaccines is bad for
01:21:18.780 you and there's lots of data that show on average it's good for you and so i think we also need to
01:21:23.820 go to the specific cases in part because lots of people say some things that are true and some that
01:21:28.780 are false i think we're going to need to do some addressing of specific content and educating people
01:21:35.740 through through labels around them about how to reason about these things
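
Before the conversation turns to AGI, here is a minimal sketch of the authenticated-provenance idea Stuart describes above, under simplifying assumptions: a shared per-camera secret and HMAC tags stand in for real public-key signatures and standardized provenance metadata, and all names are hypothetical. Footage is only allowed to appear if its tag verifies against the claimed timestamp and location.

```python
# A toy sketch of "positive permission to appear based on authenticated
# provenance": the camera binds the footage to a timestamp and location with
# a cryptographic tag, and the platform filters out anything that fails to
# verify. A deployed system would use public-key signing, not a shared secret.
import hashlib, hmac

CAMERA_KEY = b"hypothetical-per-camera-secret"

def sign_capture(video_bytes: bytes, timestamp: str, location: str) -> str:
    payload = hashlib.sha256(video_bytes).hexdigest() + "|" + timestamp + "|" + location
    return hmac.new(CAMERA_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_capture(video_bytes: bytes, timestamp: str, location: str, tag: str) -> bool:
    expected = sign_capture(video_bytes, timestamp, location)
    return hmac.compare_digest(expected, tag)

clip = b"...raw footage..."
tag = sign_capture(clip, "2023-03-07T12:00:00Z", "37.87N,122.26W")
print(verify_capture(clip, "2023-03-07T12:00:00Z", "37.87N,122.26W", tag))        # True: may appear
print(verify_capture(b"deepfake", "2023-03-07T12:00:00Z", "37.87N,122.26W", tag)) # False: filtered out
```
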
01:21:39.500 okay gentlemen agi alignment and the control problem let's jump in so gary early on you said
01:21:47.260 something skeptical about this being a real problem because you didn't necessarily see that agi could
01:21:54.620 ever form the motivation to be hostile to humanity and this echoes something that many people have said
01:22:00.780 i certainly steve pinker has said similar things and i think that is you know i'll put words into
01:22:07.740 stuart's mouth and then let him complete the sentence i think that really is a a red herring at this
01:22:13.340 point or a straw man version of the concern uh it's not a matter of our robot overlords spontaneously
01:22:22.540 becoming evil it's just it's a story of what mismatches in competence and in power can produce in the
01:22:32.700 absence of perfect alignment in the presence of that ever-increasing competence i mean now we're
01:22:38.060 talking about a situation where presumably the machines are building the next generation of
01:22:44.060 even better machines the question is if they're not perfectly aligned with our interests which is
01:22:50.460 to say if human well-being isn't their paramount concern even as they they outstrip us in every
01:22:55.900 conceivable or every relevant cognitive domain they can begin to treat us spontaneously based on
01:23:04.780 you know goals that we can no longer you know even contemplate the way we treat every other
01:23:11.580 animal on earth that can't contemplate the goals we have formed right so it's not that we have to be
01:23:16.300 hostile to the the creatures of the field or the ants you know that are walking across our driveways
01:23:22.380 but it's just that the moment we just we get into our heads to do something that ants and and farm
01:23:28.300 animals can't even dimly glimpse we suddenly start behaving in ways that are you know totally inscrutable
01:23:35.580 to them but also totally destructive of their lives and just by analogy it seems like we may create the
01:23:43.420 very entities that would be capable of doing that to us maybe i didn't give you much of the sentence to
01:23:48.860 finish stuart but we'll weigh in and then let's give it to gary yeah i mean there are a number of
01:23:55.100 variations on this argument so so steve pinker says you know there's no reason to create the alpha
01:24:01.420 male ai if we just build ai along more feminine lines it'll have no incentive to take over the world
01:24:08.700 yann lecun says well there's nothing to worry about we just don't have to build in instincts like
01:24:14.620 self-preservation and i made a little a little grid world you know mdp which is a markov decision
01:24:21.820 process so it's a it's just a little grid where the ai system has to go and fetch the milk from
01:24:27.660 you know a few blocks away and uh on one corner of the grid there's a you know there's a bad person
01:24:34.460 who wants to steal the milk and so what does the ai system learn to do it learns to avoid the bad
01:24:42.220 person and go go go to the other corner of the grid to go fetch the milk so that there's no chance
01:24:47.500 of being intercepted and we didn't put self-preservation in at all the only goal the system
01:24:53.420 has is to fetch the milk and uh self-preservation follows as a sub goal right because if you're
01:25:01.100 intercepted and killed on the way then you can't fetch the milk so this is an argument that a five-year-old could understand
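
A minimal sketch of Stuart's grid-world point, with an invented layout and payoffs rather than his actual MDP: the only stated objective is fetching the milk, yet a planner that maximizes expected return routes around the would-be thief, so self-preservation emerges as an instrumental subgoal that nobody programmed in.

```python
# A toy version of the milk-fetching example: the only objective is the milk,
# but avoiding interception (i.e. staying "alive") falls out of the planning.
# The routes, rewards, and probabilities are invented for illustration.

# two candidate routes to the shop: (steps, probability of being intercepted)
ROUTES = {
    "short, past the thief's corner": (4, 0.9),
    "long, around the other corner": (8, 0.0),
}

MILK_REWARD = 10.0
STEP_COST = 0.1

def expected_return(steps: int, p_intercepted: float) -> float:
    # if intercepted, the agent is stopped and never delivers the milk
    return (1.0 - p_intercepted) * MILK_REWARD - steps * STEP_COST

best = max(ROUTES, key=lambda r: expected_return(*ROUTES[r]))
print(best)  # the long, safe route wins, even though survival was never a stated goal
```
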
01:25:06.700 right the real question in my mind is why are extremely
01:25:13.340 brilliant people like yann lecun and steven pinker not able or pretending not to understand right or
01:25:19.100 pretending either i think there's some motivated cognition going on i think there's a self-defense
01:25:24.620 mechanism that kicks in when your your whole being is is feels like it's under attack because you
01:25:32.860 in the case of yann you know devoted his life to ai uh in the case of steven his whole thesis
01:25:39.660 these days is that progress and technology has been good for us and so he doesn't like any talk
01:25:47.020 that progress and technology could perhaps end up being bad for us and so you go into this defense
01:25:53.340 mode where you come up with any sort of argument and i've seen this with ai researchers they immediately
01:25:59.580 go to oh well there's no no need to worry we can always just switch it off right as if a super
01:26:05.340 intelligent ai would never have thought of that possibility you know it's kind of like saying oh
01:26:10.700 yeah we you know we can easily beat deep blue and all these other chess programs we just play the right
01:26:16.140 moves what's the problem it's it's a form of thinking that that makes you know makes me worry
01:26:22.300 even more about you know our long-term prospects because it's one it's one thing to have technological
01:26:27.820 solutions if you'd like to continue listening to this conversation you'll need to subscribe at
01:26:36.540 samharris.org once you do you'll get access to all full-length episodes of the making sense podcast
01:26:42.140 along with other subscriber-only content including bonus episodes and amas and the conversations i've
01:26:48.140 been having on the waking up app the making sense podcast is ad free and relies entirely on listener
01:26:53.820 support and you can subscribe now at samharris.org