Making Sense - Sam Harris - May 15, 2023


#319 — The Digital Multiverse


Episode Stats

Length

36 minutes

Words per Minute

165.6

Word Count

6,031

Sentence Count

5

Hate Speech Sentences

1


Summary

David Auerbach is a writer, technologist, and former software engineer. After graduating from Yale University, he worked as a software engineer at Microsoft and Google. His writing has appeared in The Times Literary Supplement, MIT Technology Review, The Nation, n+1, Tablet, and elsewhere, and he wrote a technology column for Slate for several years. He teaches the History of Computation at the New Center for Research and Practice, and his most recent book is Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities. In this episode, we talk about the growth and problems of online networks, the trade-offs between liberty and cooperation, the apparent impossibility of getting rid of misinformation, bottom-up versus top-down influences, recent developments in AI, deepfakes, the instability of skepticism when faced with so much misinformation, the future of social media, the weaknesses of large language models, breaking up digital bubbles, online identity and privacy, and other topics. We discuss how these systems have become too fast, too big, and too decentralized to be controlled in any fine-grained way, and how we might learn to understand and live with them.
Make sure to subscribe to the Making Sense podcast to keep up with new episodes. To access full episodes, you'll need to become a subscriber at samharris.org, where you'll find the private RSS feed and other subscriber-only content.
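The feedback dynamic discussed in the episode — each user interaction nudging the weights that shape the next user's feed, so small early leads compound without any top-down coordination — can be sketched as a toy rich-get-richer simulation. The item names, weight scheme, and parameters below are illustrative assumptions, not anything from the episode:

```python
import random

def simulate_feedback(weights, steps, interactions_per_step, seed=0):
    """Toy model of viral feedback: users click items in proportion to
    current ranking weights, and every click immediately becomes new
    input to those weights before the next user arrives."""
    rng = random.Random(seed)
    w = dict(weights)          # don't mutate the caller's dict
    items = list(w)
    for _ in range(steps):
        for _ in range(interactions_per_step):
            # attention skews toward whatever is already ranked highest
            clicked = rng.choices(items, weights=[w[i] for i in items])[0]
            w[clicked] += 1    # the interaction itself reshapes the algorithm
    return w

# three near-identical starting points; one tiny head start
start = {"meme_a": 11, "meme_b": 10, "meme_c": 10}
end = simulate_feedback(start, steps=50, interactions_per_step=20)
leader = max(end, key=end.get)
print(f"winner: {leader}, final weights: {end}")
```

Run repeatedly with different seeds and the distribution typically ends up far more lopsided than the starting weights suggest — which winner emerges depends on early random clicks, not on any editor's choice, echoing the point that no one at the top is steering the outcome.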


Transcript

00:00:00.000 welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.520 this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.900 of this conversation in order to access full episodes of the making sense podcast you'll
00:00:21.800 need to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.020 podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.500 therefore it's made possible entirely through the support of our subscribers so if you enjoy
00:00:36.540 what we're doing here please consider becoming one
00:00:38.940 today i'm speaking with david auerbach david is a writer and technologist and software engineer
00:00:54.160 he previously worked at google and microsoft after graduating from yale university his writing
00:01:02.340 has appeared in the times literary supplement mit technology review the nation n plus one tablet
00:01:09.640 and elsewhere he teaches the history of computation at the new center for research and practice
00:01:15.420 and his most recent book is meganets how digital forces beyond our control commandeer our daily
00:01:21.680 lives and inner realities and that is the topic of today's conversation we talk about the growth
00:01:28.020 and the problems of online networks the trade-offs between liberty and cooperation the apparent
00:01:35.080 impossibility of getting rid of misinformation bottom-up versus top-down influences recent developments
00:01:42.840 in ai deep fakes the instability of skepticism when faced with so much misinformation the future of
00:01:51.400 social media the weaknesses of large language models breaking up digital bubbles online identity and
00:01:58.560 privacy and other topics and now i bring you david auerbach
00:02:03.980 i am here with david auerbach david thanks for joining me thanks for having me sam so you've written a
00:02:17.060 an all too timely book that book is meganets how digital forces beyond our control commandeer our
00:02:24.320 daily lives and inner realities and i devoured this book this week and it really speaks to our current
00:02:34.440 circumstance in a comprehensive way so i just i want to track through your the case you make for
00:02:41.060 kind of diagnosing our problem and offering some possible solutions but before we jump in perhaps
00:02:48.120 you can just summarize your background because you have a an interesting intellectual history that
00:02:53.240 straddles tech and and the humanities in a in a nice way so to tell tell people what you've been up to
00:03:00.200 these many years i've sort of been all over the place i i mean from a young age i was i really loved
00:03:05.780 computers but also but also literature so i tried to sort of keep a foot in both but uh the direction
00:03:12.840 of the times sort of pointed me towards software engineering and so i did end up working as a
00:03:17.300 software engineer at microsoft uh in the around the turn of the century and then google sort of in their
00:03:22.840 meteoric rise days and i spent a little over 10 years doing software engineering just before deciding
00:03:30.080 that it was time to i don't know step out and search for another perspective because i'd been
00:03:36.640 looking into literature and philosophy at that time during that time and i wanted to see if i could do
00:03:43.600 something that would conjoin those two sides of myself and so i set out on uh writing and uh and
00:03:52.580 bringing what i hope is a unique vantage to my opinions on technology but also society more
00:04:01.480 generally and how technology is affecting it uh i you know i wrote a tech column for for slate for
00:04:07.560 some years and i was a policy wonk in dc which it's a great experience to have i think that
00:04:14.040 one of the in our hyper specialized world it actually is really good to have hands-on experience in
00:04:20.100 wildly different domains and there's nothing like attending graduate classes at the same time as
00:04:25.760 working at google to make you understand what uh unquestioned assumptions each culture has yeah
00:04:33.180 yeah that's interesting well so let's jump in let's jump in i guess starting with the title of your book
00:04:40.960 what is a meganet and how do they commandeer our lives on a daily basis and actually i'll add a
00:04:49.480 third question of that is why a new word because i think that every neologism needs a justification
00:04:55.140 yeah so the official definition is that a meganet is a persisting evolving and opaque data network that
00:05:03.040 uh really does determine how we see the world and it consists both of the algorithm and ai driven
00:05:09.940 servers that connect up online life as well as the hundreds of millions of people that are always
00:05:17.860 connected to it both components are needed because the computers act as conduits for these people to
00:05:27.400 for people and the algorithms to engage in sort of a feedback loop of accelerating content production
00:05:34.160 distribution and so it leads to these three properties that i identify which are um velocity
00:05:41.300 volume and virality in other words the size the speed and the feedback that it generates that it
00:05:48.880 keeps compounding on itself and what i what i say in the book meganets is that these systems have
00:05:56.800 gotten too fast and too big to be controlled in any sort of fine-grained way that if we ask a ceo
00:06:03.600 or a corporation to keep track of every bit of content that's published and squash out the stuff that we
00:06:10.240 don't like for whatever edition whatever definition of we don't like you want that's a non-starter at
00:06:16.660 this point it's just too fast it also make leads to inevitable viral blow-ups and crises that happen
00:06:24.780 when a certain meme or whatever takes off and by the time you're trying to stamp it out the horse is
00:06:33.320 already bolted from the barn and to that last question of why a new word my experience is a
00:06:39.940 software engineer was that we really underestimated the human component we saw the systems getting
00:06:45.660 bigger but i really feel no one foresaw just how much assigning a little bit of control to every single
00:06:55.180 user so that they were influencing the weights and the algorithms that their data was going into the
00:07:00.500 system and having a little nudge on the servers and the algorithms that influence collectively
00:07:07.080 was actually a major major force that couldn't be shaped through algorithmic technological or top-down
00:07:15.020 means and so i coined the term to reflect that it requires both the human and the machine component
00:07:23.180 and that we ignore either of them at our peril because it's the combination of the two that's led us to
00:07:28.760 where we are that machines by themselves could not do could not create the world that we exist in
00:07:34.320 today it's because we're hooked up to them constantly in this feedback loop of reacting
00:07:39.140 and shaping and spreading and forwarding that you are seeing these out-of-control behaviors take place
00:07:47.780 that make these systems feel much more organic and ecological like the weather more than you know
00:07:54.240 what we think of traditionally as technological networks well what specific systems are we
00:08:00.320 talking about i think many people listening to you so far will think that what you're talking about
00:08:05.580 must be limited to social networks you know social media companies like i guess including things like
00:08:12.400 youtube what are some examples of mega nets those are the ones where i think we feel it and we observe
00:08:18.220 it most directly because you know that's what we interact with on a daily basis but these systems
00:08:24.080 are actually present at many levels in life you know there there are things that are somewhat adjacent
00:08:31.620 to social networks like online gaming which has been said to be the core of what's going to become
00:08:37.060 the metaverse if the metaverse is still a thing but the gamification of uh of reality and online and
00:08:44.500 offline life is proceeding apace so i think that we should look at that but also things like
00:08:49.700 cryptocurrency networks where things get out of control very very quickly in some cases by design
00:08:55.900 but also for reasons that may not immediately be clear even to the people who are using cryptocurrency
00:09:02.640 networks beyond that we also see governments getting into this business in the west at least the
00:09:10.540 integration of government services and identification systems has been a bit uh slow to happen but in
00:09:17.440 india citizen identity has already been centralized around a single identifier called aadhaar and if you
00:09:24.980 look at how it is connecting up the various systems and forms of identity you know it's not as though
00:09:30.220 in india you don't have a separate driver's license number and a separate social security number
00:09:34.320 everything is tied around the aadhaar number and that also produces these sorts of feedback effects
00:09:40.340 because more and more systems get pulled in around that identifier and start reacting to one another
00:09:46.060 and ai is an interesting case because ai certainly qualifies as a meganet or at least a component of
00:09:52.540 meganet and one of the things i argue is that a lot of what we see in ai that disturbs us so much
00:09:59.000 is less ai technology per se and more a consequence of these meganet systems that we've already set up and
00:10:06.260 that we can see some of the things that trouble us about ai already happening in the more out of
00:10:11.820 control but less ai influence systems like you know recommendation engines or cryptocurrency networks
00:10:18.200 for example so uh those are some of the things but i think you could also you could extend it you could
00:10:23.780 extend it to more i i think that in the economic realm is probably where we're going to feel the
00:10:28.740 strongest we see it the most in the socio sociological arena online but this sort of phenomenon happens
00:10:38.080 in my opinion whenever you get enough people hooked up to a network in such a way that you get these feedback
00:10:43.820 effects and that is in no way restricted to social networks i mean if you want an instance that combines them
00:10:51.020 look at the the gamestop stock or the stonk as it was called where a bunch of redditors managed to
00:10:58.580 send gamestop soaring in the absence of any change of its fundamentals and all the institutional investors
00:11:06.680 and the sec were very annoyed by this but they couldn't find that it was actually illegal because
00:11:11.560 it wasn't there wasn't any actual collusion going on what was happening is that it was blowing up like a
00:11:16.600 meme and that's the sort of thing where i say it's not necessarily going to stay on social networks
00:11:22.280 because it can it can spread to the rest of our uh to the rest of our world i think people will have
00:11:28.480 an intuitive sense of what you mean by virality or velocity but can you spell out what you mean by
00:11:35.740 feedback in this case so yeah virality is my v word for feedback and by feedback i simply mean that
00:11:43.380 without without you know before you have had time to look at the result of a system the system has
00:11:49.860 already incorporated the last iteration of its state into its next state in other words you know
00:11:57.880 it's like you're never walking into the same river twice you know to quote the old heraclitus it's the
00:12:03.380 never you're never walking into the same algorithm or data stream twice we tend to think of algorithms
00:12:07.840 as fixed things that we can you know we can tweak or twist a gear on but actually our interactions
00:12:14.200 constantly shape those algorithms and change their weights you're not if you do a search on twitter or
00:12:19.620 facebook or google you're not guaranteed to get the same thing a minute later than you got a minute ago
00:12:24.240 you might but the very act of you searching has already become a new piece of input into how into the
00:12:31.300 weights of those algorithms and that's what i mean by feedback that you have these effects that are
00:12:37.080 cause that that cause certain that cause viral portions to amplify and get out of control before
00:12:44.480 anyone has had a chance it's not as though someone is commandeering this from the top down some people
00:12:49.520 try to commandeer it but i actually think it's much harder to do than people think and that uh
00:12:55.100 conspiratorial thinking is kind of a comfort because you think okay well all this chaos and misery i'm
00:13:01.000 experiencing wherever is because of uh is because uh facebook or microsoft wants me to be miserable
00:13:08.660 but in actuality you know having been on the inside i don't think any of us were thrilled or expected
00:13:15.600 that our algorithms would come to be so dependent on the actual interactions of users okay so i think
00:13:22.320 we're probably going to focus on the social media component because i think that is and we'll talk
00:13:29.500 about ai as well but there's what you describe as you kind of lay out the nature of the problem and and
00:13:36.780 offer some remedies there's really a a landscape of trade-offs and many people are becoming more and more
00:13:44.860 sensitive to some of these trade-offs and they're in some cases picking one extreme more or less uh
00:13:52.040 you know to the exclusion of every other consideration so that i think in the information and misinformation
00:13:58.420 space many of us now perceive that there's some trade-off between basic sanity and liberty right
00:14:07.880 the freedom to just say anything at any scale with any velocity with any consequences
00:14:13.780 is intention with our ability to know what's real in any given moment and to cooperate effectively and
00:14:24.420 to maintain the normal healthy bonds of a an intact society right to have a workable politics
00:14:31.200 it seems to require that we deal with misinformation and disinformation in some way and yet the so-called
00:14:39.360 free speech absolutists tend to view any attempt to deal with you know the basic problem of a shattered
00:14:47.720 epistemology as an orwellian uh overreach and and abridgment of our civil liberties and what was
00:14:56.460 interesting is many of the people who are most adamant that any attempt to deal with misinformation
00:15:02.080 and disinformation is just code for an infringement of free speech and in the u.s context you know the
00:15:10.000 infringement of the the first amendment many of these same people are having have a very different
00:15:16.300 view of the the right to assembly which is also enshrined in the first amendment so these are some of
00:15:24.040 the same people i won't name them here but they will hear themselves referred to are have been very
00:15:29.420 focused on in particular like the the dysfunction in a city like san francisco with you know the all the
00:15:36.900 homelessness and the the mental illness being played out on the sidewalks and you know the open-air drug
00:15:42.420 markets uh they've been very concerned that we admit that it is an unacceptable negative externality
00:15:49.800 to have people defecating on our sidewalks and we can't you can't tolerate this awful status quo
00:15:56.760 under the aegis of well this is just freedom of assembly you know the everyone has a right to
00:16:01.840 congregate on the sidewalk you're going to abridge that right what are you stalin but these same people
00:16:07.040 will not address the quite similar concerns about a digital sewer that we're now all living in and
00:16:15.660 having to swim through and the the digital anarchy that results when we can't have a conversation that
00:16:22.080 converges on basic facts about anything whether it's a pandemic or whether an election was run
00:16:28.140 appropriately etc so i mean let's start with this trade-off or perceived trade-off between
00:16:35.820 understanding our world and being able to speak to one another about consequential issues and the freedom
00:16:43.040 to say anything at any scale it's interesting because i think a lot of it is affected by
00:16:50.740 the issue of volume that we live in this world now of informational abundance and that's very
00:16:58.320 different we used to live in a world of informational scarcity where there was actually selection
00:17:04.840 pressure and there had to ultimately be only a couple of views that won the day i don't think that's
00:17:12.240 really true anymore i think that and i think you see this that those efforts to you know stamp out
00:17:18.460 misinformation that that some people have tremendous problems with they aren't all that effective that
00:17:24.820 you see these you see these factions persist no matter what you do to them and people complain
00:17:31.000 bitterly but the weird thing is is that you know they don't they don't seem to have been stamped out all
00:17:35.720 that much except in extremely virulent and perhaps blatantly illegal cases for all facebook gets
00:17:43.920 criticized for censoring stuff or not censoring stuff i can find pretty vile stuff at very easily on it so
00:17:53.120 i think that what we're actually seeing right now is not even much in the way of censorship so much as
00:17:59.140 just hiding it from view and that the rapprochement in the like in the tension you describe is going
00:18:06.200 to come from people just pretending that stuff doesn't exist the bad stuff doesn't exist which honestly is
00:18:12.280 the traditional way we've all we've always done it uh that our problem seems to be less with um
00:18:19.100 with with these points of view existing than of us being reminded of them and having them
00:18:25.320 shoved in our face but well one point you make at various points in the book is that the companies i mean
00:18:33.160 say facebook or youtube or twitter as examples have much less control than their users imagine right so
00:18:42.040 that mark zuckerberg can't actually stamp out misinformation even if he wanted to and even if
00:18:48.880 that was even if he could accurately target misinformation as misinformation and not commit
00:18:55.220 his own errors of propagating misinformation in the process even an omniscient zuckerberg can't
00:19:01.500 actually affect the censorship change he might want to affect there are only coarse grain mechanisms
00:19:09.280 available you know in the run-up to the 2020 election they did ban all political advertising
00:19:15.860 that can be done but to ban only misinformation well that's a relative well it's a a you have to get
00:19:24.860 people to agree on what misinformation is that's tricky enough already b you need to somehow algorithmically
00:19:32.520 determine whether something is misinformation or not and that's what i'm saying you're never going to do to
00:19:38.360 enough of a degree that you're going to stamp it out because yeah you know it becomes like fighting
00:19:43.520 censorship china has effectively been trying to do this for for decades and with only mixed success
00:19:50.780 they really do have an army of government censors online and still stuff gets through non-stop so
00:19:57.100 and you know we don't even want i i hope a lot of people would would at least agree we don't want to
00:20:02.580 get to china's level even while i think we can also say that pure anarchy and pure hyper libertarianism
00:20:12.120 creates an environment that almost nobody wants to exist in yeah well i think for me and i you know i
00:20:19.220 could be mistaken about this but the distinction that has seemed relevant up until this point is has
00:20:25.840 been encapsulated by somebody's phrase i forget who coined this but to say that freedom of speech
00:20:32.860 is not freedom of reach right which is to say that there's a distinction between the political freedom
00:20:39.340 to say whatever you want whenever you want which is enshrined in the first amendment with some specific
00:20:45.000 exceptions and which you know i'm totally happy to defend i don't think people should be thrown in
00:20:50.960 prison for saying things we don't like and even in most cases saying things that are untrue
00:20:56.480 but being able to freely speak and write and publish in ordinary channels is not the same thing as being
00:21:05.820 free to have your speech algorithmically boosted because we have built a machine that preferentially
00:21:14.960 amplifies misinformation and disinformation and outrage and you know i mean this comes back to the
00:21:21.900 original sin or what many people consider to be the original sin of the internet which is the the ad
00:21:26.900 based attention gaming business model yeah if we if you break that link between the freedom to
00:21:33.260 say anything and the machinery of amplification that has been the the bright line that many of us have
00:21:41.740 been trying to focus on but it is do you agree with that or is it more complicated than that
00:21:46.800 i think it is more complicated than i mean i think you're totally right but i also think that the
00:21:52.360 machinery of amplification is changing in ways that we've only begun to grasp that you know after the
00:21:59.500 20th 20th century of top-down general you know broadcast media where the overall shape of the narrative
00:22:10.700 even if there were disagreements within it was set by a small number of elite players who are now
00:22:17.460 seeing that that's no longer the case and you can actually have a bottom-up generation of a narrative
00:22:21.860 because you've seen narratives that well while they may benefit one political party or another
00:22:26.780 are definitely not created by that political party because they carry certain liabilities with
00:22:32.100 them i don't know if i should name them or not but i think you can know what i'm talking about here
00:22:36.400 and and it's because of those feedback loops that you no longer need some sort of shepherd
00:22:40.980 or demagogue to start generating to start generating an entire narrative landscape that then reinforces
00:22:49.540 itself because you've got you know you've got these meganets that are bringing people like-minded
00:22:54.260 people together and just uh having them say yeah you're right and what about this and building up
00:23:00.280 a corpus or a lore or whatever independent of of you know what we think of as traditional social
00:23:09.040 societal elder influences and so you know what is amplification you know having been associated
00:23:18.360 having had served my time in i guess vaguely traditional old media new media elite circles uh their power
00:23:27.240 is dissipating they definitely have less power than they used to and i i think that no matter where
00:23:34.200 you are in the spectrum you tend to think that other people have more power to amplify than you do
00:23:40.220 because i think everybody is seeing their power decrease or no one feels that they have enough
00:23:44.940 so that if you say see the new york times dissing your point of view you take that as a societal
00:23:54.980 disapproval even though the new york times is really no longer the paper of record in the way that it
00:23:59.420 was 50 years ago so i think there's a there's difficulty even assessing what amplification is and
00:24:07.780 who's getting amplified that we don't generate you know we couldn't generate harry belafonte just died he
00:24:13.740 was uh before michael jackson he was everybody owned calypso his calypso record i don't think we have
00:24:21.060 the the mass media machinery to generate that sort of unity anymore because there's no selection
00:24:27.100 pressure it's not that one one particular product or narrative has to win everybody can win yeah it's
00:24:34.740 an interesting point because it's it pre-internet if you were going to start something like q anon or
00:24:42.680 some other cult of conspiracy you know it's much had to have been much harder to do it's not that it
00:24:48.420 was impossible but i mean you just would have to be you'd have to physically congregate with people
00:24:53.060 in order to you'd have to have you know a q anon conference and then you'd be meeting people
00:24:58.140 in the flesh and seeing all of the the the visual and and palpable evidence of their
00:25:04.020 crack pottedness presumably right and so right so you said it i yeah so yeah it was q anon that i was
00:25:09.680 talking about and exactly it's like the q anon has certainly brought some benefit to the republican
00:25:14.920 party but do i in any way think that that the traditional republican elite decided that they
00:25:21.220 wanted q anon to be a thing no i think they would have shaped it very differently had they had the
00:25:26.020 option because it carries with it some severe liabilities that they have to deal with well so
00:25:32.240 maybe we should talk well let's bring in the ai piece before we talk about remedies here you know
00:25:38.120 like almost everyone or probably everyone who wrote a book that talked about ai and published it
00:25:44.820 anytime earlier than than last week i would imagine some of what you say about large language models and
00:25:53.280 deep learning might feel a little dated is there anything you would you would want to modify now
00:25:58.980 given the the um what's happened with gpt4 i mean you mentioned gpt3 in the book so you're you're
00:26:05.560 sort of up to the minute there but i think you were very skeptical of the the ability of these large
00:26:12.220 language models to process speech uh effectively and i mean are they going to be more powerful than
00:26:21.260 than you expected or or what are your thoughts about ai at the moment honestly i stand by what i said
00:26:28.000 a hundred percent i think that they have the same the same failings they are the equivalent of the old
00:26:34.020 horse clever hans that uh was very good at being cued by people and responding in convincing ways
00:26:42.160 but couldn't do actually do math my opinion is that these large language models are incredible
00:26:48.000 and they are incredible at producing content which i do say in the book what they are bad at
00:26:53.020 is actually behaving with genuine understanding because they don't have it so i actually think that
00:27:01.040 i've been i think it holds up pretty well i will defend it at length actually yeah um and and some
00:27:06.980 of the weirdness that we're seeing also the fact that these ais are clearly behaving in ways that
00:27:12.240 weren't intended by their creators when the that new york times author got freaked out by uh the the
00:27:17.740 microsoft sydney chatbot that wanted to release nuclear codes yeah yeah well if you look back at it
00:27:24.240 you can see that he was cuing it yeah i had someone say oh i'd feel so much better if it wrote about
00:27:29.020 world peace and i said i can get it to talk about world peace talking about sunshine lollipops and
00:27:32.760 rainbows and why was it so uncanny it's because it's been seeded it's been trained on all the
00:27:38.520 collective conscious and unconscious writings of humanity so that when we said oh what would a
00:27:45.560 horrible a what would you do if you were horrible ai it parroted back the exact most common nightmares
00:27:52.420 that humans have right and then as we write a bunch of stories about it that feeds back into the
00:27:57.440 next iteration of these chatbots and it feeds them back to us so no i think that the the llms are
00:28:02.980 pretty much about where i expect them to be and i do not see them getting past that to a point of
00:28:09.040 achieving what could be called reasoning or true cognition anytime soon even though they will be able
00:28:17.100 to do other things that are very amazing and very world-changing well do you think the the net result
00:28:25.240 of these um language models will be pernicious or benign or beneficial in the near term i mean like how
00:28:36.840 would are you optimistic or pessimistic about the the near term effects i mean let's leave agi and
00:28:43.440 singularities and other uh concerns aside just give me the your sense of the next six to twelve months
00:28:51.540 with respect to the kinds of problems we've been talking about with meganets what what will ai do
00:28:56.540 to help or hurt the situation i actually think that in the next year or so things will not change that
00:29:04.920 much because it's going to take some time to start deploying these ais in in increasing numbers of
00:29:12.500 contexts so in the very short term i think it'll continue to be a novelty and people will tear their
00:29:18.280 hair out but it's going to take a couple more years before you start seeing it deployed to um
00:29:22.680 to generate content to help people generate content you know to work in collaboration with humans which
00:29:28.200 is i think where you will see a big difference that if you have an ai assisting a human the human
00:29:34.620 provides the actual reasoning and the ai provides the frosting as it were and
00:29:41.160 moreover you're going to see increasing cases where even if these
00:29:46.600 things don't actually think people will believe that they think that's where you're
00:29:52.820 going to see the biggest changes on the human side and again that gets back to my theme that
00:29:58.220 the human aspect of this is just as important as the machine aspect of it that in some ways creating
00:30:03.240 a machine that convinces people that it's thinking is if not as much of an achievement certainly as
00:30:10.080 big a deal as if you created a machine that actually does think and that goes back to
00:30:15.960 eliza which in the 60s was tricking people into thinking that it was an actual therapist that cared
00:30:21.440 about their feelings well this is the supercharged version of it because it's much better than eliza but
00:30:26.380 it's not new for these turing tests to supposedly be passed especially if you really want
00:30:33.180 them to be passed there was that company i think that was marketing virtual girlfriends and
00:30:37.240 boyfriends as chatbots and people were really upset when they shut off the romantic language
00:30:43.860 i don't know did you write about this i forget the company's name i did hear about this yeah i
00:30:48.780 forgot the company yeah that may take another 12 months but you're going to start seeing
00:30:53.800 this the human desire for company for pets it's like tamagotchis that's going to
00:30:59.320 manifest itself and the more that we can embody them in one way or another the better it will be so
00:31:05.080 even though you won't be able to have a conversation with them that feels convincingly
00:31:11.580 human at least not if you're looking at it skeptically you can still have something
00:31:17.600 that behaves on the order of say a pet and if it's human enough maybe you can feel
00:31:23.320 romantically towards it but what you'll also see is massive downward
00:31:28.800 pressure on content creation you're already seeing content farm material being
00:31:34.880 generated but it's going to get much easier to generate astroturf or whatever in huge amounts
00:31:40.980 and at the point where you can start generating news articles based on press releases
00:31:45.960 what's already been downward pressure on content generation will get even stronger and that'll spread
00:31:51.880 to video as ai generation of video and sound gets better as well aren't you expecting the
00:31:58.900 spamification of everything where at a certain point most of the content on the internet will be
00:32:04.620 ai generated whether it's text or video or audio and then we'll have this persistent problem of
00:32:10.900 not knowing what is in fact real you won't know whether an image is real
00:32:16.740 you won't know whether a video is real you'll be reading news articles that
00:32:20.940 you're pretty sure were written entirely by ai absolutely i mean to some extent that's already
00:32:27.020 true on twitter for a lot of tweets you can't quite tell and you're just going to see
00:32:31.880 that phenomenon grow and grow in 10 years time it's not going to be
00:32:38.100 easy to determine on sight whether a video is real or manufactured and that gets back to what i think
00:32:44.620 you said about judging reality and i think that what's going to happen is people are going to have
00:32:49.180 different versions of reality because with such an abundance of information out there you can find
00:32:55.000 stuff to support your version of reality if you want a reality in which q anon is true it's going to
00:33:00.480 become easier and easier to just shore it up so what do you imagine the effect will be i imagine
00:33:06.880 many of us will just declare something like epistemological bankruptcy with respect to the internet and
00:33:13.860 want to read old books more of the time how do you imagine we deal with an absolute
00:33:22.980 tsunami of fake and half fake or otherwise unreliable information well a lot of people
00:33:34.020 believe that there were wmds in iraq for quite a while so people can hold on to their beliefs quite
00:33:43.440 tenaciously especially if they're in a community of people who agree on them if it's just you in
00:33:49.700 isolation i can totally see declaring intellectual bankruptcy but skepticism is hard to
00:33:55.080 maintain it takes a lot of effort and i say this as someone who's predisposed towards it
00:34:00.560 there's comfort in being around people who think the way that you do honestly i
00:34:08.100 probably saw a lot of this in academia because it's a shrinking environment
00:34:13.680 academics are very much in competition with each other and so the sort of enforcement of a
00:34:19.040 certain purity and hothouse removal from the world has gotten larger and larger but
00:34:26.740 as long as there's an incentive for people to keep believing what they're believing they'll do it
00:34:31.780 and as long as you're getting social approval for believing those things i figure you probably will stay
00:34:36.660 online what i do think will happen is that you'll see what i call narrative bunkers it's
00:34:42.800 beyond filter bubbles because it's not just that you only see certain content it's that you are actually in a community of
00:34:48.760 people who are reinforcing certain assumptions about the world you can have disagreements
00:34:53.720 about it but the assumptions are the same in the same way that if you watch say
00:34:58.940 fox news for a week even if you disagree with everything you see you will start to take their
00:35:04.520 narrative frame into account and that's what's going to happen you're going to see this
00:35:08.860 divergence and factionalization of narrative frames and increasingly you won't even be able to understand
00:35:15.020 what people in other narrative frames are saying i feel like this already happens to some extent
00:35:20.580 that you see people in sort of the bay area tech scene compared to people in say the
00:35:25.820 new york media scene or people who complain about san francisco becoming a living hellhole on earth
00:35:32.820 take your pick all these people are working with assumptions about things they've never seen
00:35:38.820 and perhaps this was always the case to a point but it's only growing stronger i was in seattle a few weeks
00:35:43.680 ago and i was talking to a couple people about the seattle protests do you remember when
00:35:48.760 they formed that autonomous zone and just from reading reports online
00:35:55.380 according to some people it was a dystopian wasteland according to others if you'd like to
00:36:01.320 continue listening to this conversation you'll need to subscribe at samharris.org once you do you'll
00:36:06.780 get access to all full-length episodes of the making sense podcast along with other subscriber-only
00:36:11.440 content including bonus episodes and amas and the conversations i've been having on the waking up app
00:36:17.320 the making sense podcast is ad-free and relies entirely on listener support and you can subscribe now at samharris.org