01:21:06.780 Are you recording this podcast while robbing a bank?
01:21:11.120 Look, it's for the greater good, right? The money can go further. That's a... you know, don't do that.
01:21:16.560 Just in case my followers take me too literally, let's go on record: we're against bank robbery.
01:21:22.320 Yeah, at least currently. I'm with that. I won't say anything about value lock-in on that point, but...
01:21:27.500 Well, let's talk about this issue of value lock-in, and what you call moments of plasticity and
01:21:34.940 trajectory changes. How do you think about... I mean, you might want to define some of those terms, but
01:21:39.860 what is the issue and the opportunity here? Sure. So at the moment there's a wide diversity of moral
01:21:49.860 views in the world, and rapid moral change. The gay rights movement began building momentum
01:21:57.840 in the 60s and 70s, and then you get the legalization of gay marriage a few decades later, a
01:22:03.000 clear example of positive moral change. We're very used to that; we think it's just
01:22:08.060 part of reality. We wouldn't expect that this might change in the future, such that
01:22:14.200 moral change comes to an end. The idea of value lock-in takes seriously the possibility that
01:22:20.040 moral progress will actually stall. Value lock-in is the idea that some particular
01:22:25.840 ideology or moral view, or some narrow set of ideologies or moral views,
01:22:32.500 could become globally dominant and persist for an extremely long time. And I'm worried about that.
01:22:40.040 I think that's a way the long-term future could go that would mean an enormous loss of value;
01:22:49.940 it could make the future a dystopia rather than a positive one. It's easiest to see this
01:22:58.180 by starting off with maybe the most extreme case, and then thinking about ways things might not
01:23:05.400 be quite that extreme but close. For the most extreme case, just imagine a different history where it
01:23:12.280 wasn't the US and UK and liberal countries that won World War II, and instead it was the Stalinist
01:23:20.660 USSR, or let's say the Nazis, who won. Now, they really aimed, ultimately, at global domination; they wanted to
01:23:28.440 create a thousand-year empire. If that had happened, given their very narrow and morally abominable
01:23:36.360 ideology, if they had succeeded, would that ideology have been able to persist forever, or at least for an
01:23:43.840 extremely long time? I think actually the answer is yes. Ideologies in general, and social systems, can
01:23:52.040 certainly persist for centuries or even thousands of years; the major world religions are thousands of years old.
01:23:57.000 But technology, I think, will over time give us greater and greater power to control the values of the
01:24:04.860 population that lives in the world, and the values of the future as well. At the point in time when we
01:24:13.060 develop the ability to stop aging, such that people's lifespans could last much, much longer,
01:24:19.520 well, then a dictatorship would no longer be confined
01:24:26.420 to the lifespan of a single individual, seventy years or so, but instead could last thousands of years, or however long that
01:24:34.640 ruler lasts. Even more extreme, I think, is the point in time at which the ruling beings are digital rather than
01:24:43.060 biological. And again, this feels like sci-fi, but many people I
01:24:49.140 respect greatly think that moment is coming within a few decades. But let's just say it's plausible
01:24:55.980 within this century, or within centuries. Well, consider what the environment is like for a ruler who is a digital being. They
01:25:07.360 don't die; they are in principle immortal. Software can replicate itself, and any given piece of machinery will wear out, but the AI systems
01:25:18.340 themselves could persist forever. So one of the main reasons we get moral change over time, namely that people die and
01:25:25.540 are replaced by people who have slightly different views, would be undermined in this kind of scenario. Other causes of
01:25:32.520 change would be undermined too, or taken away. Consider the fact that we have competition between different
01:25:37.620 moral ideas because we have this diversity of moral ideas. Well, if the Nazis had won World War II and
01:25:45.180 established a world government (again, I'm not claiming this was likely, but imagine the
01:25:50.180 counterfactual history), then we would lose that competition of ideas; there wouldn't be that pressure
01:25:56.240 for moral ideas to change. And then, finally, there's a potentially changing
01:26:01.300 environment: upsets happen, civilizations fall apart, and so on. But I think technology in the future
01:26:07.880 could allow for greater and greater stability. Again, I think AI is particularly worrying here. Imagine
01:26:14.400 again the leaders of the Nazis controlling the world. Okay, well, one way in which
01:26:21.340 dictatorships end is insurrection, whether by the army or by the people. If your army is automated,
01:26:28.260 if you have robots policing the streets, then that dynamic would end, and similarly with your police force.
01:26:34.640 And again, all of this seems like sci-fi, but I think we should be thinking about it and taking it seriously,
01:26:40.620 because it really could come sooner than we think, and the technologies we have today would have looked like wild sci-fi
01:26:47.320 just a century ago. Yeah. Well, maybe we'll close out with a conversation about AI,
01:26:52.280 because I share your concern here, and I share your interest in elucidating the bad intuitions that make
01:27:01.740 this a non-concern for so many other people. So let's get there in a second, but remind me: what is
01:27:09.640 the lock-in paradox that you talk about in the book? Sure. So the lock-in paradox is like the
01:27:16.240 liberal paradox of tolerance. In the book I push the view that we don't want to lock in any
01:27:21.980 particular values right now, because probably all of the values we're used to are abominable.
01:27:29.060 I think we're like Plato and Aristotle arguing about the good life, not having realized that the fact that
01:27:36.280 we all own slaves is, you know, maybe morally problematic. So I think there's an enormous amount
01:27:43.420 of moral progress yet to come, and we want to guarantee that progress. So even if you're
01:27:49.720 motivated by the thought that early-21st-century Western liberal values are the best thing
01:27:55.180 and we should just ensure they persist forever, I'm like: no, we need progress.
01:27:59.800 The lock-in paradox, however, is that if we want to create a world where we do get a sustained period
01:28:07.520 of reflection and moral progress, that does mean we'll need some restraints. We might well need
01:28:14.420 to lock in certain ideas: a commitment to free debate, a commitment to tolerance of opposing
01:28:23.640 moral views, restrictions on ways of gaining power that aren't via making good arguments and winning people
01:28:33.280 over to your side. It's much the same way that the U.S. Constitution, at least
01:28:39.520 aspirationally, tries to bring about a liberal society where many different worldviews
01:28:44.200 can coexist and where you can make moral progress; but in order to have that and not fall into
01:28:49.520 dictatorship, you needed these restraints on any individual's power. And so I think we
01:28:56.400 may well need something like that for the whole world, where we stop any particular ideology,
01:29:02.740 or just any set of people, from gaining so much power that they can simply control the world,
01:29:09.220 so that we can have continued reflection, insight, empathy, and moral progress. Yeah, it's tempting to
01:29:16.420 try to formulate the minimum algorithmic statement of this: something that allows for the
01:29:25.300 kind of incrementalism you've just sketched out, and the error correction,
01:29:31.620 exactly, that I think is necessary for any progress, including ethical progress, but which locks in the
01:29:40.820 methodology by which one would safeguard the principle of error correction, and a sensitivity to
01:29:49.460 the open-endedness of the exploration in ethical space. Right? I mean,
01:29:57.060 whether you define it as consequentialist or not is, I think, probably less
01:30:02.980 important. Consequentialism, as we know, has its wrinkles. But I think there's an almost
01:30:09.140 axiomatic claim about the primacy of safeguarding the well-being of conscious creatures at the bottom of
01:30:17.220 any sane moral enterprise right now. It's not in how you think about well-being, how you
01:30:24.340 aspire to measure it or quantify it, how you imagine aggregating these quantities; there are many
01:30:30.500 devils in all those details. But there's the basic claim that you have to, at bottom, care about the
01:30:37.220 happiness and suffering of conscious creatures, and marry that to a truly open-ended,
01:30:44.980 perpetually refined and refinable conception of what well-being is, all things
01:30:51.700 considered. And that's where all the discussion remains to be done. I think those
01:30:57.380 are the bootstraps, in some form, that we have to pull ourselves up by. Yeah. So it is a very notable fact
01:31:05.540 that basically all plausible moral views on the table see well-being as one
01:31:14.660 part of the good. Perhaps they think other things are good too: art, flourishing, the natural environment, and
01:31:20.100 so on. But at least happiness broadly construed, a flourishing life, the avoidance of suffering,
01:31:27.780 those are good things. And I would just... well, not to be too pedantic here, but in my brain
01:31:35.380 all of that collapses into just a fuller understanding of what well-being is. I don't think any
01:31:42.500 reasonable definition of well-being would exclude the things you just mentioned, and the cash value of
01:31:48.820 the things you just mentioned has to be, at some level, how they impact the actual or potential
01:31:55.940 experiences of conscious creatures. This is an argument I've made somewhere: if you're going to tell
01:32:01.380 me you've got something locked in a box that is really important, that it's really valuable, that it's
01:32:07.860 important that we consider its fate in everything we do, but this thing in the box cannot and will not
01:32:16.100 affect the experience of any conscious creature, now or in the future, well, then I just think
01:32:23.780 that's a contradiction. What we mean by value is something that could be actually or
01:32:29.380 potentially valued by someone, somewhere. Great. So I think we differ a little bit on this.
01:32:35.460 I mean, I also think my best guess is that value is just a property of conscious experiences,
01:32:44.820 and that art and knowledge and the natural environment are all good things insofar as they impact
01:32:50.100 the well-being of conscious experiences. But that's not something I would want to bake in; I'm not so
01:32:55.300 certain of it. There are other views on which the satisfaction of carefully considered preferences
01:33:00.420 is of positive value even if that doesn't go via a change in any conscious
01:33:07.860 experience, and other people do have that view. So, again, that's a phrase I'm actually not
01:33:13.700 familiar with. When I hear you say "satisfaction" of anything, including preferences, aren't we talking
01:33:21.220 about conscious experience, actual or potential? So on this view, no. Suppose that Aristotle
01:33:28.820 wanted to be famous, and he wasn't very famous during his time. I'm actually not sure, but let's
01:33:35.060 say that he was certainly much less famous than he is now, right? Does the fact that we are
01:33:39.300 talking about Aristotle now, increasing his fame, does that increase his well-being? And it's obviously
01:33:46.100 not impacting... I'm prepared to answer that question. I'm sure you're prepared to say no, and honestly
01:33:51.300 my best-guess view completely agrees with you: anything we say about Aristotle now,
01:33:57.220 we could say he smells and we hate him, doesn't change his well-being at all. However,
01:34:03.060 other people disagree, and I think other smart people disagree, and so I certainly wouldn't want to
01:34:08.340 say, look, we've figured that out. I would want to say, look, now we've got plenty of
01:34:14.100 time to debate this. I think I'm going to win the debate; maybe the other person thinks I'm not
01:34:18.340 going to win the debate; but we've got all the time in the world to really try and figure this out,
01:34:23.780 and I would want to be open to the possibility that I'm badly wrong, or I guess that we are both kind of badly
01:34:29.940 wrong on this. Well, that's interesting. I would love to debate that, so maybe as a sidebar you and I
01:34:36.420 can figure out who and when and where. Absolutely, always happy to talk about it. Yeah. Okay, well,
01:34:42.900 so let's talk about AI, because it is in some sense the apotheosis of many of these concerns.
01:34:50.660 And I agree that should anything like a malevolent AGI be built, or be put in the service of malevolent
01:35:00.340 people, the prospect of value lock-in, and of Orwellian totalitarianism, just goes way, way
01:35:08.180 up. Yeah. And that's worth worrying about whatever you think about the importance of future generations.
01:35:13.700 But there are several stumbling blocks on the path to taking these concerns about AI
01:35:20.260 seriously. The first, and this really is the first knee-jerk reaction of people in the field who
01:35:27.460 don't take it seriously, is the claim that we're nowhere near doing this: that our language
01:35:34.980 models are still pretty dumb, even though they can produce some interesting text at this point; that there's
01:35:42.820 certainly no concern there could be anything like an intelligence explosion in any of
01:35:48.900 the machine learning algorithms we're currently running; that AlphaZero, for all of its amazing
01:35:54.900 work, is still not going to get away from us, and it's hard to envision what could allow it to. I
01:36:02.660 know that there are people on our side of this debate who have given a kind of
01:36:07.700 probability distribution over the future, giving it, say, a 10 percent chance of happening
01:36:13.220 in 20 years and a 50 percent chance of happening within 50 years, etc. But in my mind there's just no
01:36:20.580 need to even pretend to know when this is going to happen. The only thing you need to acknowledge is that
01:36:28.340 not much needs to be true to make it guaranteed to happen.
01:36:36.100 There are really two assumptions one needs to make. The first assumption is that we will continue to make
01:36:41.540 progress in building intelligent machines, at whatever rate, unless something terrible happens. I mean,
01:36:48.420 this brings us back to the collapse of civilization: the only thing that would cause us to stop making
01:36:54.260 progress in building intelligent machines, given how valuable intelligence is, is something truly
01:37:02.100 awful that renders some generation of future humans simply unable to improve
01:37:09.380 hardware and software. So there's that: barring catastrophe, we're going to continue to
01:37:14.980 make progress. And then the only other assumption you need is that intelligence is substrate-independent,
01:37:23.300 right? That it doesn't require the wetware of a biological brain; it can be instantiated in
01:37:30.340 silico. And we already know that's true, given the piecemeal AI we've already built. We already know
01:37:38.180 that you can build a calculator that's better at arithmetic than any human brain, and you can do that in
01:37:43.860 silicon. And there's just no reason to think that the input-output properties of a complex information-
01:37:51.300 processing system are magically transformed the moment you make that system out of meat. And so, again,
01:38:00.260 given any rate of progress, and given the assumption of substrate independence, eventually, and the time
01:38:09.140 horizon can be left totally unspecified, five years or 500 years, eventually we will be in the
01:38:16.100 presence of intelligent machines that are far more intelligent and competent and powerful than we are.
01:38:24.340 And the only other point of confusion I've detected here, again and again, which
01:38:30.260 can be easily left to one side, is the question of consciousness: will these machines be sentient?
01:38:37.220 And that really is a red herring, from my point of view. I mean, it's not a red herring ethically;
01:38:42.820 it's incredibly important ethically, because if they are conscious, well, then we
01:38:48.420 can have a conversation about whether they're suddenly more important, ethically,
01:38:52.980 than we are, or at least our equals, and whether we're creating machines that can suffer,
01:38:57.940 etc. I mean, if we're creating simulated worlds that are essentially hell realms for sentient
01:39:04.340 algorithms, that would be a terrible thing to do. So yes, it's all very interesting to consider. But
01:39:09.300 assume that we can't currently know, and we may never know; even in the presence of
01:39:15.300 machines that say they're conscious and that pass the Turing test with flying colors, we may still be
01:39:21.380 uncertain as to whether or not they're conscious. Whether or not they're conscious is a
01:39:25.940 question that is totally orthogonal to the question of what it will be like to be in the presence of
01:39:33.220 machines that are that intelligent, right? I think you can dissociate,
01:39:38.980 or very likely you can dissociate, consciousness and intelligence. Yes, it may be the case that
01:39:44.180 the lights of consciousness magically come on once you get a sufficient degree of complexity
01:39:50.260 and intelligence on board; that may just be a fact of our world. And we certainly have reason to
01:39:56.340 expect that consciousness arrives far sooner than that, when you look around at animals whom we deem
01:40:01.140 conscious and who are far less intelligent than we are. So I do think they're separable questions.
01:40:06.820 But the question of intelligence is really straightforward. We're just talking about,
01:40:11.380 conscious or not, machines that can engage in goal-oriented behavior and
01:40:18.420 learning in ways that are increasingly powerful, and that ultimately achieve a power that far exceeds our own.
01:40:27.940 Given these two measly assumptions, again, any progress, and
01:40:35.860 substrate independence for intelligence, it seems virtually inevitable that we, or our children, or our grandchildren, or some generation of human beings,
01:40:46.820 stand a very good chance of finding themselves in
01:40:50.180 relationship to intelligences that are far more powerful than they are. And that's an amazing
01:40:57.620 situation to contemplate. I'll just remind people of Stuart Russell's analogy here, which,
01:41:04.180 at least in my experience, somehow changes people's intuitions. On his account, it's like we're in the following
01:41:12.020 situation: we've just gotten a message from some distant point in the galaxy which reads, "People of Earth, we will
01:41:20.020 arrive on your planet in 50 or 75 or 100 years. Get ready." Right? You don't know when this is going to happen,
01:41:28.580 but you know it's going to happen. And that statement of looming relationship, delivered in that form, would be quite
01:41:37.700 a bit more arresting than the prospect of continued progress in AI. And I would argue that it
01:41:45.620 shouldn't be: they're essentially the same case. Yeah, I just think that
01:41:51.620 framing is excellent. Another framing: if you're a chimpanzee and you see this other species,
01:41:59.220 Homo sapiens, that is perhaps less powerful than you to begin with but seems to be rapidly
01:42:05.140 increasing in its power, you'd at least be paying attention to that. And, yeah, you remind me a little of
01:42:11.300 the second book of the Three-Body Problem trilogy, where the premise, I hope it's okay to say, is that aliens will come,
01:42:18.660 I think in a thousand years' time, and they have far greater power than civilization on Earth
01:42:25.380 does, but the entire world just unites to take that threat seriously and work against it. So,
01:42:32.900 essentially, I completely agree with you that the precise timelines are just not very important
01:42:39.300 compared to just noting that if, well, not if, but basically when it happens that we have AI systems
01:42:46.980 that can do everything a human being can do, except better, that will plausibly be one of the most important
01:42:52.980 moments in all of history. We don't know exactly when it'll
01:43:00.100 come, but uncertainty counts both ways: it could be hundreds of years away, and it could be
01:43:06.820 soon, for all we know. But certainly, relative to the tiny amount of time we're spending thinking about it,
01:43:11.700 well, we should be spending more. But I'll actually just comment on the timelines as well.
01:43:17.780 There's a recent survey, it just came out last week, in fact, of as many leading machine learning
01:43:25.860 researchers as possible. I'm not sure of the sample size, but if it's like the last survey it'll be in the
01:43:31.540 hundreds. And they asked: by what point in time do you expect a 50/50 chance
01:43:39.620 of our having developed human-level machine intelligence, that is, AI systems that are as good as or better than
01:43:45.380 human beings at essentially all tasks that humans can do? And they say 37 years. So that's in my lifetime;
01:43:53.300 I will be entering my retirement at the 50/50 point for advanced AI. When I talk to people who are
01:44:00.740 really trying to dig into this, they often have shorter timelines than that. They often think it's
01:44:06.820 50/50 within more like 20 years, with substantial probability on shorter timelines still. I tend to be on the more
01:44:14.020 skeptical end of that range, but within any of this range, concern about AI is not merely, oh, we
01:44:23.060 should be thinking about something that happens in 200 years because maybe that will have an impact in 2,000
01:44:27.140 years. It's: no, we should be thinking about something that has a chance of catastrophe within our lifetimes.
01:44:34.580 On the same survey, those machine learning researchers, that's the people who are
01:44:39.940 building these things, who you'd think would be incentivized to say that what they're doing is completely
01:44:43.540 safe, were asked for the probability of an extremely bad outcome, as bad as human extinction or worse,
01:44:49.700 and the typical probability they gave was five percent. So put those numbers together
01:44:57.940 and you have a 2.5 percent chance of dying or being disempowered by AI, according to
01:45:05.620 machine learning researchers not cherry-picked for being particularly concerned about this:
01:45:09.540 a 2.5 percent chance of dying or being disempowered by artificial intelligence in your lifetime.
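That 2.5 percent is just the two survey figures multiplied: a roughly 50 percent chance of human-level AI within a lifetime, times a roughly 5 percent chance of catastrophe conditional on it arriving. A minimal sketch in Python (the variable names are illustrative; only the two percentages come from the survey as quoted):

```python
# Combining the two survey figures quoted above (illustrative sketch).
p_hlmi_in_lifetime = 0.50        # ~50/50 chance of human-level machine intelligence
p_catastrophe_given_hlmi = 0.05  # median ~5% chance of an extremely bad outcome, given HLMI

p_catastrophe = p_hlmi_in_lifetime * p_catastrophe_given_hlmi
print(f"Implied lifetime risk: {p_catastrophe:.1%}")  # prints 2.5%
```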
01:45:16.020 That's more than your chance of dying in a car crash, and we think a lot about dying in car crashes and
01:45:20.740 take actions to avoid it; there's a huge industry and a huge regulatory system devoted to making sure people do
01:45:26.580 not die in car crashes. What's the amount of attention on risks from AI? Well, it's growing, thankfully; it's
01:45:32.740 growing, and people are taking it really seriously, and I expect that to continue into the future. But it's
01:45:37.940 still very small. It still means that smart, morally motivated people can make a really transformative
01:45:45.700 difference in the field by getting involved: working on the technical side of things, to ensure that AI systems
01:45:51.860 are safe and honest, and also working on the governance side of things, to ensure that we as a society
01:46:00.420 are developing this in a responsible way. Perhaps that means digging in: AI is not a monolith; there are
01:46:06.340 different sorts of AI technology, some of which are more risky than others, and we can ensure that the
01:46:11.060 safety-enhancing parts of AI come earlier and the more dangerous parts come later. This really is something
01:46:16.980 that people can be contributing to now, and I think they are. It's hard; I'm not claiming
01:46:21.460 it's clear-cut what to do, and there are questions about how much progress we can make, but we can
01:46:26.180 at least try. Hmm. Yeah, and I think the crucial piece there, again, this is courtesy of Stuart Russell, who is
01:46:34.580 my vote for one of the wisest people to listen to on this topic, is to make the safety
01:46:42.180 conversation integral to the conversation about developing the technology in the first place. I mean,
01:46:49.540 he asks us to imagine how perverse it would be if engineers who were designing
01:46:58.740 bridges thought about bridge safety only as an afterthought, and under great duress,
01:47:06.020 right? So there's the question of building bridges, with all of the resources put toward
01:47:10.660 that end, but then only as an afterthought do engineers have conversations about what
01:47:16.820 they'd call the "not falling down problem" with respect to bridges. No, it's completely insane
01:47:22.500 to conceive of the project of building bridges as being separable from the project of building
01:47:27.860 bridges that don't fall down. Likewise, it should be completely insane to think about the prospect of
01:47:33.220 building AGI without grappling with the problem of alignment and related matters. And, in defense
01:47:42.980 of more near-term concern here, many people have mined the history and found these great
01:47:51.780 anecdotes that reveal just how bad we are at predicting technological progress. I forget the details,
01:47:59.700 but I think it was the New York Times, or some such esteemed U.S. paper, that confidently
01:48:05.060 published an editorial, I think something like two months before the Wright brothers
01:48:12.820 conquered flight, predicting that human beings would never fly, or that it would take
01:48:18.580 millions of years, because it took evolution millions of years to engineer the bird's wing,
01:48:24.340 so it would likewise take us millions of years to fly. And then, two months later, we've got
01:48:29.140 people flying. And I think there's one anecdote in your book that I hadn't heard, about the
01:48:35.460 famous physiologist J.B.S. Haldane in 1927. I was going to mention this one. Yeah, go ahead, go ahead.
01:48:43.700 What was that reference? Oh yeah. Well, Haldane was one of the great scientists of his time, an evolutionary
01:48:49.620 biologist, and he wrote some remarkable essays on the future, predicting future events and how
01:48:56.580 large a civilization could get. And he considered the question: will there ever be a return trip to the
01:49:03.060 moon? And he said, oh yeah, probably in eight million years. And that was... Did you say 1927? Yeah, yeah.
01:49:11.780 So that shows you something. People like to make fun of people in the past making bold
01:49:20.100 predictions about technology, such as the early computer pioneers, I think it was at the
01:49:26.340 Dartmouth conference, saying, oh yeah, we'll build human-level artificial intelligence in about six
01:49:30.740 months or so. And people mock that and say, oh, that's so absurd. But what they forget is that
01:49:35.700 there are just as many people on the other side making absurd claims about how long technology will
01:49:41.060 take. In fact, things are hard to predict: technology can come much later than you might
01:49:48.500 expect, and it can also come much sooner. And in the face of an uncertain future we have to be preparing for
01:49:54.340 the near-term scenarios as well as the long-term scenarios. Yeah. And the thing we have to get
01:49:59.380 straight in the meantime is that the incentives are currently not appropriate for wisdom and
01:50:08.500 circumspection here. We have something like an arms-race condition, both with respect to
01:50:15.460 private efforts and, one must imagine, rival governments. Right? And we haven't
01:50:21.780 fused our moral horizons with the Chinese and the Israelis and anyone else who might be doing this
01:50:28.020 work, and gotten everyone to agree on just how careful we need to be in the final yards of this adventure,
01:50:36.180 as we approach the end zone. And it does seem patently obvious that an arms race is not optimal for
01:50:45.060 taking safety concerns seriously. It's not what we want. And I mean the key issue is just how fast
01:50:53.860 things go. So suppose the development of artificial general intelligence is a hundred years
01:51:01.540 away, or two hundred years away. Then it seems more likely to me that safety around AI will look like
01:51:08.580 safety around other technologies. You mentioned flight: in the early days of flight a lot of planes crashed and
01:51:16.340 a lot of people died. Over time we learned, primarily by trial and error, what systems make flight safer,
01:51:25.540 and now very few people die in plane crashes compared to other causes of death, especially in more developed
01:51:30.900 countries. But we could learn by trial and error because a single plane crash would be a tragedy
01:51:36.500 for the people involved and for their families, but it wouldn't destroy the whole world.
01:51:41.220 And if AI development goes slowly, then I would expect we will learn by that same
01:51:48.340 kind of process of trial and error and gradual improvement. But there are arguments for thinking
01:51:52.900 that maybe it will go fast. In fact, in leading economic models, if you assume there is a point at which
01:51:59.780 you get full substitutability between human labor and artificial labor, in particular in
01:52:06.340 the process of developing more advanced AI systems, then you get very fast growth indeed. And if things are
01:52:14.660 moving that quickly, well, this slow, incremental process of trial and error that
01:52:22.340 we as a society normally use to mitigate the risks of new technology, that process is substantially
01:52:30.740 disempowered, and I think that increases the risks greatly.
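The intuition behind that claim can be made concrete with a toy simulation (a sketch only, not the specific economic models being referenced; the parameter values are arbitrary): with human labor fixed, reinvested output runs into diminishing returns, but once capital, think compute, can substitute for labor, output grows in proportion to the capital stock and growth compounds without slowing.

```python
# Toy growth model (illustrative only): Cobb-Douglas production
# Y = A * K^alpha * L^(1 - alpha), with a fraction s of output reinvested.
# When AI lets capital stand in for labor (L = K), this reduces to Y = A * K,
# the textbook condition for sustained, compounding growth.

alpha, s, A, periods = 0.3, 0.3, 1.0, 100

def final_output(ai_substitutes_for_labor: bool) -> float:
    K = 1.0            # capital stock (think: compute / AI systems)
    human_labor = 1.0  # fixed supply of human workers
    Y = 0.0
    for _ in range(periods):
        L = K if ai_substitutes_for_labor else human_labor
        Y = A * K**alpha * L**(1 - alpha)
        K += s * Y     # reinvest a fraction s of output into more capital
    return Y

print(final_output(False))  # labor-limited: diminishing returns, growth slows
print(final_output(True))   # fully substitutable: ~30% growth every period
```

In the substitutable case the toy economy behaves like the "AK" family of endogenous growth models, where the growth rate never falls as capital accumulates; that is the mechanical reason AI that can itself do the work of developing AI gets singled out as the dangerous step.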
01:52:36.340 And again, I really think that prospect of very rapid growth, of very rapid
01:52:42.020 progress, should at least be on the table. Again, in this survey that just came out, if you ask
01:52:47.300 leading machine learning researchers, though I'll caveat that I don't think they're really the people
01:52:51.780 one should think of as the experts here, but they're still relevant people to consider, they think it's about
01:52:56.740 even, about a 50/50 chance, that there will be something akin to what is known
01:53:03.140 as the intelligence explosion. That doesn't necessarily mean the classic scenario
01:53:08.260 where in a matter of days we move from no advanced AI at all to AI that's more powerful than all the rest of the world
01:53:13.620 combined; perhaps it occurs over the course of years, or even a small number of decades. But
01:53:19.060 nonetheless, if things are moving a lot faster than we're used to, I think that increases the risks greatly.
01:53:24.420 Well, it seems like things are already moving much faster than we're used to. It's hard
01:53:29.860 to see where we get off this ride, because the pace of change is noticeably different. You know,
01:53:37.220 I'm not a full Kurzweilian here, but it does seem like his claim about accelerating technological
01:53:45.140 change is fairly palpable now, in most sectors. I guess there are sectors
01:53:54.180 where virtually nothing is happening or improving, that is also true, but when you're talking
01:53:59.540 about information technology and what it does to politics and culture, the recent years
01:54:08.420 have been kind of a blizzard of change, and it's hard to know where the brake is to pull.
01:54:16.340 Yeah, and people should just take a look at the recent developments in machine learning. They're
01:54:23.940 not very famous yet, but they will be soon, and it's truly remarkable what's happening. I mean,
01:54:29.940 obviously it's still far away from human-level artificial intelligence, I'm not claiming otherwise,
01:54:34.340 but when forecasts have been done, again by people working in the area, of what sort of progress we should
01:54:41.620 expect, things are moving faster even than the forecasts of the people who thought things were
01:54:46.900 going to move fast. In particular, take the recent language models, which
01:54:54.180 get trained on enormous corpuses of human text. One got trained to
01:55:01.300 do math proofs, essentially, at about easy college level. And the innovation behind
01:55:09.220 this model was very, very small indeed: they basically just cleaned up the data so that it could actually
01:55:14.100 read formulas online, which had been a problem beforehand. And it just blew everyone's predictions
01:55:20.580 out of the water. So now you have this model where you give it a mathematical
01:55:25.860 problem at early-college level, prove that so-and-so, or what's the answer to this question,
01:55:31.700 and it will produce the solution or proof with about 50 percent accuracy. And that was something people
01:55:38.340 weren't expecting for years. And if you look at other language models as well... okay, I'll give an
01:55:44.340 anecdote. One of my responsibilities at the university is marking undergraduate exams, and so I asked
01:55:52.020 GPT-3, not even the most advanced language model nowadays, but one you can
01:55:58.820 get public access to... I was curious: how good is GPT-3, this language model, compared to Oxford undergraduates
01:56:04.820 who have studied philosophy for three years? And so I got it to answer these questions,
01:56:10.180 some of the questions that the students most commonly answered. That's great. And did it pass?
01:56:14.980 It was not very good. It was not terribly good; it was in the bottom 10 percent of
01:56:20.340 students, though not the bottom two percent of students. But I think if I'd given the essays to
01:56:25.780 a different examiner, I don't think that examiner would have thought, oh, there's something really
01:56:30.180 odd here, this is not a human writing it. They would have just marked it and thought:
01:56:34.980 not a very good student; a student that's good in some ways; has good structure;
01:56:39.460 really knows the essay form, but sometimes gets a little bit derailed by
01:56:44.660 arguments that aren't actually relevant, or something. That's what they would have thought, but they would not
01:56:48.980 have thought, oh, this is clearly nonsense, this is an AI, or something. And that's, you know,
01:56:54.820 pretty striking. We did not have anything close to that just a few years ago. And so, I don't know,
01:57:00.980 I think it's fine to be an enormous AI skeptic and think that AGI is
01:57:06.580 very far away; maybe that view is correct; like I said, there's an enormous amount of uncertainty;
01:57:12.180 but at least take seriously that this is a fast-moving area of technology.
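As it happens, this kind of informal test is easy to replicate. A minimal sketch of querying GPT-3 with the pre-1.0 openai Python client that was current at the time (the model name, prompt, and parameters here are illustrative, not the ones used in the anecdote):

```python
import openai

openai.api_key = "sk-..."  # your API key here

# Hypothetical exam-style prompt, for illustration only.
exam_question = "Is knowledge justified true belief? Discuss."

response = openai.Completion.create(
    model="text-davinci-002",  # a publicly accessible GPT-3 model of that era
    prompt=exam_question,
    max_tokens=800,            # enough room for a short essay
    temperature=0.7,
)
print(response.choices[0].text)
```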
01:57:16.420 Yeah, yeah. And obviously there are many interesting and consequential questions that arise
01:57:22.100 well short of AGI. I mean, just on this front: what will be the experience of
01:57:30.820 all of us, collectively, being in the presence of AI that passes the Turing test? It does not have
01:57:37.460 to be comprehensive AGI; it doesn't have to be empowered to do things out in the world. But just when
01:57:44.420 Siri or Alexa become indistinguishable from people at the level of their conversation, and have
01:57:53.620 access to the totality of human knowledge, when does Siri on your phone become
01:58:00.500 less like a glitchy assistant and more like an oracle? Yeah, that could be
01:58:07.300 fairly near-term, even with blind spots and things that would disqualify her as
01:58:14.580 proper AI. I mean, I think the Turing test will fall, for most intents and purposes. For some
01:58:19.940 people the Turing test has already fallen; we just have this kind of laughable case of
01:58:24.420 the... yeah, the Google engineer who thought that their current language model was
01:58:30.340 sentient. It was absurd. Yeah, there was no reason to believe that. But I would be very surprised
01:58:35.220 if it took more than a decade for the Turing test to fall for something like a chatbot. Yeah, I mean,
01:58:42.660 I'm actually curious about why there haven't been attempts at the Turing test using language models;
01:58:49.220 my thought is that they actually would have a pretty good chance of passing it. So I'm unsure on that. But
01:58:53.940 one thing you mentioned is this idea that maybe, rather than Siri, you have this incredibly
01:58:59.700 advanced personal assistant, kind of like an oracle. I just want to emphasize that this is one of
01:59:04.580 the examples of what I mean when I talk about accelerating the safer parts of AI and
01:59:13.700 maybe slowing down, or not investing much in, the more dangerous parts. There's this idea of
01:59:21.620 oracle AI: something that is not an agent trying to pursue goals in the world; instead
01:59:27.700 it's just an input-output function. You put in text, it outputs text, and it is like the most helpful,
01:59:34.740 extraordinarily smart and knowledgeable person that you've ever met. That's the technology we want to
01:59:40.100 have earlier rather than later, because that can help us with the scariest issues around alignment, that can help us
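The architectural distinction being drawn, an oracle as a pure text-to-text function versus an agent that acts in the world, can be sketched schematically. Everything below (the stub model and the stub environment) is hypothetical, just to show the shape of the two designs:

```python
# Schematic contrast between an "oracle" AI and an agentic AI; illustrative
# only. The model and environment below are stubs, not real systems.

def model_generate(prompt: str) -> str:
    """Stand-in for any text-to-text model (hypothetical)."""
    return f"(model output for: {prompt})"

def oracle(question: str) -> str:
    """Pure input-output: text in, text out. No goals, no actions, no state."""
    return model_generate(question)

class World:
    """A stub environment, just to show the shape of the agent loop."""
    def __init__(self) -> None:
        self.steps = 0

    def goal_satisfied(self) -> bool:
        return self.steps >= 3

    def observe(self) -> str:
        return f"observation at step {self.steps}"

    def execute(self, action: str) -> None:
        self.steps += 1

def agent(goal: str, world: World) -> None:
    """An agent closes a loop through the world: observe, decide, act, repeat."""
    while not world.goal_satisfied():
        observation = world.observe()
        action = model_generate(f"goal: {goal}; {observation}")
        world.execute(action)

print(oracle("What is value lock-in?"))  # nothing in the world changes
agent("tidy the room", World())          # the world itself gets acted on
```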