#317 — What Do We Know About Our Minds?
Episode Stats
Length
1 hour and 8 minutes
Words per Minute
170
Summary
In this episode of the Making Sense podcast, Sam Harris opens with his current thinking on AI and AI risk: why GPT-4 and the speed of its public deployment have deepened his concerns about artificial general intelligence and the alignment problem, and why he expects a coming flood of AI-generated hoaxes, lies, and misinformation. He then speaks with psychologist Paul Bloom about Bloom's new book, Psych: The Story of the Human Mind. They discuss fiction as a window onto psychology, consciousness versus intelligence, the tension between misinformation and free speech, the difference between bullshitting and lying, leaving Twitter, behavioral genetics, happiness, and what psychology can and cannot tell us about living a good life. Full episodes of the Making Sense podcast are available to subscribers at samharris.org; if you enjoy what we're doing, please consider becoming one.
Transcript
00:00:00.000
welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.520
this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.900
of this conversation in order to access full episodes of the making sense podcast you'll
00:00:21.800
need to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.020
podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.500
therefore it's made possible entirely through the support of our subscribers so if you enjoy
00:00:36.540
what we're doing here please consider becoming one
00:00:38.840
well recent developments in ai have been interesting i am sure i will do many more podcasts on this
00:00:55.100
topic but for the moment some people have asked whether gpt4 and its rapid adoption have changed
00:01:04.600
my views at all about ai and ai risk as some of you know i did a ted talk on the topic of artificial
00:01:13.240
general intelligence in 2016 and that's available on youtube and elsewhere presumably and nothing has
00:01:22.460
really changed about my concern for agi and alignment artificial general intelligence and
00:01:29.260
the problem of creating it such that it is aligned with our interests it's probably a worse problem
00:01:37.080
now than i thought it was because the main change here is the suddenness with which ai has improved
00:01:45.300
and the way in which we have blown past all of the landmarks that ai safety people have carefully erected
00:01:53.020
that has alarmed me and many other people because in all my conversations with people like nick bostrom
00:01:59.760
and max tegmark and eliezer yudkowsky and stuart russell it was more or less an explicit expectation
00:02:08.760
that as we cross the final yards into the end zone of human level intelligence even under conditions
00:02:15.480
of an arms race which are not at all ideal for solving the alignment problem but even in that case there would
00:02:22.460
be a degree of caution that would sober everyone up and so for instance the most powerful ai models
00:02:29.980
wouldn't be connected to the internet or so it was thought and they obviously wouldn't have apis
00:02:37.000
they wouldn't be put into the hands of millions of people at the outset but with gpt4 we've blown past
00:02:46.160
all of that and so now it's pretty clear that we're developing our most powerful ai more or less in the
00:02:53.400
wild without fully understanding the implications so in my view this does nothing to suggest that we're
00:02:59.620
better placed to solve the alignment problem and that problem seems to me to be as big as ever
00:03:04.420
and it has also magnified the near-term risk of things going haywire due to unintended consequences
00:03:09.960
and potential malicious uses of narrow ai and with gpt4 it's almost like we've done our first
00:03:17.740
above ground nuclear test and we've seen the flash of very impressive ai and now many of us are just
00:03:27.240
waiting for the blast wave of hoaxes and lies to knock everything over now i hope i'm wrong about
00:03:35.040
this but i'm half expecting the internet to be eventually inundated by fake information by lies
00:03:44.120
and half-truths to a degree that could render it totally unusable i mean just imagine not being able
00:03:51.560
to trust the authenticity of most photos and videos and audio and text i mean imagine what the internet
00:04:00.840
becomes when ai generated fan fiction crowds out everything else then imagine the cultic entanglement
00:04:10.600
with all this misinformation on the part of billions of people globally it seems like it could be
00:04:17.220
ivermectin and adrenochrome and dogecoin and catfishing scams and ransomware and who knows what else
00:04:25.120
for as far as the eye can see and even the best case scenario could still look totally uncanny i mean
00:04:34.180
let's say we solve the misinformation problem though how we're going to do that is anybody's guess but
00:04:39.800
even if we did what will people want when all valid information can be produced by machine
00:04:47.160
all art and science and philosophy when even the smartest and most creative people can be taken out
00:04:55.160
of the loop what will we want then and for some things i think we just want results i don't care where
00:05:02.840
the cure for cancer comes from i just want it right so there's no future in artisanal oncology just give
00:05:11.000
us the winning algorithm but what about non-fiction writing if you just want the answer to a specific
00:05:17.320
question i think ai will be fine if you ask chat gpt to tell you the causes of world war ii it does a
00:05:25.560
pretty good job but this will never substitute for reading churchill provided you care to know how the
00:05:32.840
world looked to churchill himself and not to some credible simulacrum of churchill so i don't think
00:05:40.280
anyone knows how all of this is going to transform our relationship to information but what i'm experiencing
00:05:47.000
personally now is a greater desire to make contact with the real world to see my friends in person to
00:05:54.440
travel to be out in nature to just take a walk and it may sound self-serving to say this but podcasts
00:06:02.680
and audiobooks are becoming more and more important for this i still spend a tremendous amount of time
00:06:08.600
in front of a screen and reading physical books but i now spend almost as much time listening to audio
00:06:16.920
because the difference between being stuck at my desk and taking a three-hour walk or a hike
00:06:22.680
and being able to do that and still call it work is just such an amazing have your cake and eat it too
00:06:31.960
experience and while all of this is still being enabled by a smartphone the effect on my life is
00:06:38.040
quite different from being married to one's phone for other reasons listening to audio really is different
00:06:45.240
than endlessly checking email or slack or twitter or something else that is fragmenting your attention
00:06:53.320
anyway it's pretty clear we're witnessing an ai arms race and gold rush and that things are about to get
00:07:00.120
very interesting and it seems quite reasonable to worry that the landscape of incentives is such that we
00:07:07.320
might wind up someplace truly undesirable in fact someplace that actually no one wants to be and we might
00:07:15.560
arrive there despite everyone wanting to avoid such an outcome so there's a lot to figure out and i am
00:07:22.680
sure i will do a few more podcasts on this topic before i'm replaced by a bot that does a far better job of it
00:07:29.320
and now for today's podcast today i'm speaking with paul bloom paul is a professor of psychology at the
00:07:37.400
university of toronto and also a professor emeritus of psychology at yale his research explores the
00:07:44.040
psychology of morality identity and pleasure and he is the recipient of many awards and honors including
00:07:52.360
most recently the million dollar klaus j jacobs research prize he's written for many scientific journals
00:07:59.320
such as nature and science for the new york times the new yorker the atlantic monthly and elsewhere
00:08:06.920
he is the author of eight books including against empathy just babies how pleasure works the sweet spot
00:08:15.400
and his new book is psych the story of the human mind which we discuss in this conversation
00:08:22.600
we cover many topics here including fiction as a window onto psychology recent developments in ai
00:08:29.480
the tension between misinformation and free speech the difference between bullshitting and lying
00:08:35.880
truth versus belonging reliance on scientific authority the limits of reductionism consciousness
00:08:43.320
versus intelligence freud behaviorism the unconscious mind confabulation the limitations of debate
00:08:52.840
language koko the gorilla mental health happiness behavioral genetics
00:09:00.920
birth order effects living a good life the remembering and experiencing selves and other topics
00:09:09.080
anyway it's always great to talk to paul and now i bring you paul bloom
00:09:13.400
i am here with paul bloom paul thanks for joining me again great to talk to you again sam i've lost count
00:09:25.560
but i i am confident that you are my uh my returning champion and most frequent guest so uh congratulations if
00:09:33.720
you if you need yet another honor to add to your to the trophy to keep on the mantel yeah it's a funny
00:09:38.600
thing to put in your cv yeah i'd like to see that please put it in your cv i would like to see the reactions
00:09:44.200
to that yeah some some dean's gonna be scratching his head but i i do i do take it as an honor i i like
00:09:50.760
talking with you well people love hearing from you so um this is not uh not altruism directed in in
00:09:56.840
your direction this is pure wise selfishness on my part so but you have a new book which um is the uh
00:10:04.120
the nominal occasion for this conversation and that book is psych the story of the human mind which we'll
00:10:10.760
talk about this is really your um you have produced a essentially a psych 101 course in super accessible
00:10:18.920
non-boring format for uh the general public so that's great and uh people enjoy it that that's exactly
00:10:25.800
that's that's a nice way of putting it i i aspire to do exactly that which is present the whole story
00:10:31.160
of psychology but uh you know i i hate reading textbooks i couldn't bear to write one and i try
00:10:36.440
to put it in in a way that people could enjoy it and and and also textbooks have a sort of uh neutrality
00:10:43.160
and objectivity and you know i i aspire towards that i try to tell the story in kind of a straightforward
00:10:48.680
way but i also often give myself the luxury to to weigh in on different debates you can't do that in
00:10:54.200
the textbook no this is not at all textbook like but it does cover the full sweep of what we know
00:11:01.400
or what we think we know or what we are embarrassed not yet to know about the human mind yeah so yeah
00:11:07.960
and there's a lot we don't know i know i know there's some other topics we might want to touch
00:11:12.200
before we jump into the book but um how do you feel about the state of our understanding of the human
00:11:19.560
mind at this point i i guess you and i have spoken about this before i think with specifically with
00:11:25.160
respect to parenthood and how surprised we were to realize even you being a developmental psychologist
00:11:33.240
how little science informed our day-to-day experience of parenting how do you feel about
00:11:39.880
the relevance of of science to living a good life altogether at this point guardedly positive
00:11:47.400
i i wouldn't have written a book if i didn't feel like psychology had interesting things to tell us
00:11:53.080
about questions that matter a lot like uh you know how to give how to live a life of happiness how to um
00:12:00.280
how much can we trust our memories how does language work even questions which have become quite quite
00:12:05.720
urgent these days with with the the dawn of ai and whatever revolution we're now going through i think
00:12:12.680
psychology has a lot to say about it on the other hand i try to be honest in the book we've we've
00:12:18.760
a lot of our findings are not as robust as we thought they were and i still believe and i don't
00:12:24.040
know who said it maybe chomsky said this which is that you could learn a lot more
00:12:30.520
from a good novel or a good tv series or a good movie than from a psychology textbook if somebody was going
00:12:35.720
to say what's a marriage like what's it like to raise teenagers what's it what's it like to grow
00:12:41.560
old i wouldn't point him to a psychology textbook i'd point him to some good novels yeah that's
00:12:47.480
interesting i i never i mean i used to be a big reader of fiction and then at some point things
00:12:54.360
flipped and now i'm i gotta think i'm 20 to 1 non-fiction to fiction or or probably worse than that
00:13:01.560
it could be 50 to 1 but in recent years i have kind of arrived at that epiphany myself it's just
00:13:08.280
there's so much to learn about human life through fiction and you don't you it seems strange to say
00:13:15.160
that because it is fiction but yeah what you're seeing is are you know the best attempts of some
00:13:21.560
of the smartest and most creative people to capture the substance of human experience and it's you know
00:13:29.560
some of the most compelling attempts at that are by definition what we have singled out as the most
00:13:36.040
valuable forms of of literature and i guess we could add you know film and television here as well
00:13:42.600
but it seems strange to say it but it is in some cases our most accurate window on to at minimum the
00:13:49.560
lives of others yeah and i think a a good writer a good filmmaker has insights into the lives of others
00:13:57.080
often from their own experience and and there's something about it which is often more powerful
00:14:04.600
and more transcendent than what you get through psychological research you know you know you
00:14:10.360
you see a movie like like tar and you you hear about you know you learn about artistic enterprise and
00:14:16.520
about cancellation about good and evil uh the banshees movie lovely meditation on friendship and you know
00:14:23.560
i don't know whether things will ever be different whether whether it'll be a point where i'll say
00:14:26.760
no no check out check out the research it'll tell you more there's certainly things the research could
00:14:30.920
tell you that the novelist never could and so maybe it's a matter of staying in our lane well what
00:14:36.520
do you this is going to be a disconcertingly large question but what do we know about the human mind
00:14:44.120
at this point the year is 2023 if you had to distill what we know or what we think we know at this point
00:14:51.880
to somebody who really knew nothing about the last 150 years of mind science what do you think we know
00:15:01.480
we don't have a theory of the human mind and i don't think we ever will not because of our inadequacies
00:15:08.440
but because the mind is many things and so in some way if you ask what do we know about a human body
00:15:13.800
i have a feeling that an anatomist or a physiologist would say well you know let me tell you about the heart let
00:15:18.360
me tell you about the spleen let me tell you about the ankle bones and so we know a fair amount about
00:15:24.280
the different components of the mind we know we know some surprising things about memory surprising
00:15:29.720
about personality language motivation sex and so on so maybe stalling for time
00:15:37.800
to try to answer your question we know the mind is the brain we are we we don't exactly
00:15:42.840
know how the brain gives rise to consciousness but we know how the brain gives rise to intelligence
00:15:47.640
it's not so dissimilar to to any other intelligent machine that we now possess we know that a lot of
00:15:53.640
our mental life is the product of natural selection we know a lot of it is the product of cultural
00:15:58.760
evolution we know and in here you know i'll give a shout out to freud we know a lot of the most
00:16:04.200
interesting stuff isn't accessible to our consciousness we know we're often conflicted
00:16:09.560
beings we know about emotion and i think we know though a lot of my colleagues would
00:16:16.120
disagree with me that we could be extraordinarily rational creatures with a capacity for reason and
00:16:22.280
imagination creativity that far exceeds anything else on the planet but we can also be um be fooled we can
00:16:28.920
fool ourselves so a lot of things like that we've set out a nice menu of topics we can hit so i think
00:16:35.960
we should take those piece by piece but um before we do it's honestly it's honestly a problem for a
00:16:42.280
book like this you know i'm looking forward to talking with you about this but previously we've
00:16:46.200
talked about very focused topics of my other books like empathy or topics of mutual interest like
00:16:51.400
like the morality of ai and this is a sprawling book so we'll just take it wherever it goes yeah
00:16:56.440
before we jump into those topics let's talk about recent developments in ai and uh any other thing
00:17:04.680
that has caught your attention of late uh you know before we turned on the mics we were talking about
00:17:10.920
my deleting my twitter account which is not disconnected to what i find most interesting and troubling about
00:17:18.680
what's happening with ai at the moment and i did notice in your book the one thing that it was
00:17:25.320
clearly dated uh you know and it was dated as of you know two months ago embarrassingly
00:17:32.040
yes but i mean really you could not have you would have had to have been nostradamus to have foreseen
00:17:37.400
how quickly that particular i think paragraph was going to age but you know ai has moved on quite a
00:17:43.880
bit since you published your book and um how has it struck you yeah let's say ai specifically for the moment
00:17:51.800
so just to fess up i have i have a few paragraphs in my book where i i sort of
00:17:56.120
dismiss statistical attempts to to model the human mind and say oh these could never work
00:18:01.800
and uh i think i think recent events got me a bit flat-footed on this i i i'm kind of like to be honest
00:18:08.120
about when i got things when i get things wrong and when things surprise me and and ai uh what has happened
00:18:15.160
with gpt4 and bing has been a huge shock to me if you had if we you know if one of our
00:18:21.320
conversations a couple of years ago you know you asked me what's going to happen when will we have
00:18:25.560
a system capable of having a perfectly normal conversation saying intelligent things i'd say
00:18:30.680
i don't know 20 years 50 years maybe never and yet here we are and so i'm i'm kind of stunned by
00:18:37.800
it like a lot of people and i've heard i've heard you you devote a few podcasts talking to people like
00:18:42.360
like my friend gary marcus like a lot of people i'm i'm worried about it i don't know where i stand
00:18:47.720
for people who want to sort of halt research for a period but but i think it's an idea worth taking
00:18:52.920
seriously i'm i'm not really necessarily endorsing the idea that that it will kill us all but and you
00:19:00.120
made the argument a while ago if the odds are like five percent or ten percent that's worth taking rather
00:19:05.400
seriously and and as a psychologist i wonder how much the success of models like gpt4 tells us
00:19:14.520
about how our minds work yeah yeah i mean it might on that last point it might not tell us much at all
00:19:20.840
or it certainly need not in order to constitute its own form of intelligence that disrupts our lives or benefits
00:19:28.760
us immensely depending yeah i mean i think that's a deep point yeah i i really think i my answer to
00:19:34.840
the question is that humans do not learn do not achieve our intelligence in anything like
00:19:41.400
the way these large language models do it bears no resemblance to the development of a child and yet
00:19:48.200
they have an intelligence of some sort and so maybe there's more i mean actually i do think that
00:19:53.560
it suggests there's more than one way to become smart yeah i mean there's a few red herrings here
00:19:59.160
i think we should dispense with one is the confusion about the importance of consciousness here and and
00:20:06.840
and any connection uh necessary or otherwise between consciousness and intelligence i we simply don't
00:20:13.320
know you know how and when consciousness emerges and whether it comes along for the ride at a
00:20:19.960
certain level of complexity and a certain level of intelligence or not but there's simply no
00:20:25.720
question that we have built intelligent machines and we're continuing to build them and they are
00:20:32.440
intelligent you know i.e competent whether or not there is ever anything that it's like to be those
00:20:39.320
machines i think it's an important question in its own right but it's quite separable from whether
00:20:45.560
intelligence itself is substrate independent and whether it's you know whether it can be aligned
00:20:50.840
or unaligned with human interests and and whether we might be building systems that we may one day
00:20:58.840
lose control of right it's just that consciousness is a completely separate question there and it has
00:21:05.240
ethical importance because if we're building machines that are conscious then we're building machines that can
00:21:10.280
suffer uh or be made happy and you know that's a an important thing to have done or to avoid doing
00:21:17.160
but the more interesting case here for me is that i i think we're in danger of just losing
00:21:23.160
sight of whether the question of consciousness is even interesting anymore because we'll be in the
00:21:29.000
presence of machines that are passing the turing test perfectly they're virtually doing that
00:21:34.600
they're doing that now in a text-based way and at a certain point they're going to seem conscious and
00:21:40.120
we're going to treat them as though they were conscious whether or not we ever know the ground
00:21:44.920
truth there i agree with every word of that the question of of what it is to become intelligent
00:21:51.960
is kind of a bread and butter scientific question you know computers can do intelligent things brains
00:21:56.440
can do intelligent things we have some conception of how how we could build a machine that could play
00:22:01.320
chess or carry on a conversation and how our brains do that too the question of consciousness as you
00:22:07.000
put it is it is entirely independent but also it's going to be important because you know there's um
00:22:13.960
there was a guy at google blake lemoine i think who um who was was working with a chatbot and and
00:22:20.280
became convinced that it was sentient and you know google i think put him on leave or something because
00:22:25.000
he was protesting that it was now held as a slave it should have its own rights its own legal protection
00:22:30.200
and he came in for a lot of mockery which a lot of it i think was was unfair but the question he
00:22:36.200
struggled with is something which is going to happen more and more and more and more we're going to build
00:22:39.720
these machines it's going to be increasingly complicated and say we're
00:22:44.920
in a situation where each of us owns one that regularly interacts with us has wonderful conversations
00:22:50.200
with us seemingly has empathy and compassion for us gives us good advice we talk to it all the time
00:22:56.600
it will be inescapable to see it as conscious and so people will ask is this correct and
00:23:04.440
it's of moral importance if it's conscious you you know it comes under you know it comes under the
00:23:09.240
scope of what you've called a moral landscape you can't do bad things to it you shouldn't
00:23:13.480
and but we have no idea how to find out you know and and that's that's going to be a deep problem
00:23:19.000
and that's a problem which is going to bite us on the ass pretty soon the only thing that has changed
00:23:23.080
for me since the emergence of chat gpt and its uh cousins is uh that i've grown more concerned
00:23:32.760
about the near-term chaos and harms of you know ai that that falls short of of agi artificial general
00:23:42.440
intelligence i just i just think these tools are so powerful and so disorienting in and of themselves
00:23:50.120
that um i mean i just want to think about turning this technology loose to produce misinformation
00:23:58.840
uh which many people you know will i i mean unless the ai becomes a perfect remedy for that sort of
00:24:06.280
thing it just seems like our information landscape is going to get so gummed up with what is essentially
00:24:13.960
persuasive spam yeah that i just don't know how we talk or think about anything in public
00:24:21.800
ironically what seems a step in the direction of democratizing the search for knowledge
00:24:27.240
i think will quickly pendulum swing into even greater gatekeeping because you know only trusted
00:24:35.640
sources of information it's like one example here is that you know if you when you think about the
00:24:40.680
you know deep fakes and deep fakes of video and audio and and just images photos becoming
00:24:48.040
so persuasive that you just you simply can't tell whether this is a real video of you know putin
00:24:54.760
declaring uh you know that he's launched all his missiles or not like only an ai could do it and
00:25:00.520
maybe an ai can't do it i i just think at a certain point we're going to declare epistemological
00:25:06.440
bankruptcy and say something like okay well if an image hasn't come from getty images that's
00:25:12.520
we can't trust that it's an actual image of anything right and and you know there'll be a
00:25:17.320
hundred versions of that sort of thing what we're just what we're seeing is a greater siloing of
00:25:23.480
information and a greater role for gatekeepers i mean it obviously could play out differently but
00:25:28.440
that's sort of what i'm expecting here because what digital information is going to be taken at face value
00:25:33.560
when again i'm thinking like this is not years away this is weeks or months away yeah i mean the
00:25:40.440
gatekeepers themselves may be ais we might be envisioning the beginning of an arms race where
00:25:45.720
where people are using them to distribute you know false news and misinformation and other people are
00:25:50.360
using them to filter it out you could imagine and i think that the science fiction writer neal stephenson
00:25:55.720
had a scenario like this in which we'll all have a personal system that that uses
00:26:01.240
our own preferences to filter things out and try to to separate the fakes from the originals but i it
00:26:08.760
might reach a point that no matter how smart you are how smart an ai is it can't tell a fake
00:26:13.640
from the original and then you go back to where does it come from where's the imprint and i could just
00:26:19.080
see the world's going to change in that regard and i want to ask you do you use gpt four or three
00:26:26.600
or bing in your everyday life not yet you know insofar as i have played around with it i've been
00:26:32.600
underwhelmed by what has come back to me i'm overwhelmed by the prospects for manufacturing
00:26:40.680
semi-persuasive disinformation uh and also just getting confused it's like you you ask it a question
00:26:46.600
and it will confidently give you an answer and then when you see that some of its answers are
00:26:52.280
in fact hallucinations it's it's um disconcerting to think about ever relying on it you know in a
00:26:59.720
fully confident way i mean i gotta think it's only going to get better with respect to its error rate
00:27:05.240
but it just seems that we're very close to someone being able to ask um you know chat gpt4 or let's say
00:27:14.840
five uh you know write me a medical journal article in the style of jama about um you know how dangerous
00:27:26.120
mrna vaccine technology is and uh give me you know exactly 110 references and the better that gets you
00:27:35.400
know it's just like you could produce you know fake journal articles by the ream and just populate the
00:27:42.200
world with them i just don't know the sheer scale of it right i mean the fact that we might find
00:27:48.760
ourselves in a world where most of what is online is fake i just think that's possible and i'm not
00:27:56.920
sure what we're going to do about it and you're right that somewhat paradoxically this could force
00:28:02.360
a move back to more respect more weight more value given to sort of trusted traditional authorities
00:28:08.360
where you know if you if if you hear about a video you see a video you might then have to go to the
00:28:14.680
new york times website to see if it's confirmed or not confirmed you go back to people or whatever
00:28:19.800
whoever you trust but and in some way this is a very old problem the problem of forging signatures
00:28:25.800
and legal documents and so on but social media magnifies it a thousand times over so i actually don't
00:28:32.440
know if this is a change of topic or the same topic but you did you did leave twitter and and i've heard
00:28:37.400
you talk about why it seemed like your reasons for leaving twitter were a little bit independent
00:28:42.440
of what we're talking about now yeah well that the misinformation piece was important but it was
00:28:48.360
really misinformation as applied to me i mean i became you know the trending topic of the day and it
00:28:55.000
was a distortion of what i actually said and you know in certain cases what i meant to say because in this
00:29:02.520
case it wasn't i wasn't speaking especially clearly i mean the reason why i left was i i just
00:29:08.360
noticed that i had reached a kind of tipping point where twitter was obviously making my life worse
00:29:15.880
right and it was just it was just unambiguous whatever story i could tell myself about the benefits
00:29:21.240
of you know the good parts or just the necessity of staying engaged with it as a source of information
00:29:27.960
you know kind of taking the pulse of the world moment to moment as i imagined i was doing checking
00:29:33.400
twitter compulsively it just it was making me a worse person in particular it was making me see the
00:29:42.520
worst of other people in a way that i was i became convinced was a distortion of the way the world
00:29:48.600
is i mean the people people are not as bad as they were appearing to me to be on an hourly basis you
00:29:55.400
know day after day week after week month after month and really i mean i was on for 12 years and
00:30:00.920
yeah it was just getting worse and worse but it did reach a tipping point when i you know trumpistan
00:30:07.160
went berserk in response to you know something i'd said on another podcast and a couple of things were
00:30:13.880
interesting about that one is that you know while in you know red-pilled twitter there had been a just a
00:30:20.680
complete you know run on the bank of my reputation right i mean like this was you know i was completely
00:30:27.160
defenestrated in my world and and really in any place i care about nothing had happened right and so
00:30:37.320
it was strange to see i mean there's this phrase twitter isn't real life which i think is can be
00:30:42.520
misleading because i think you know twitter can get people elected president and lots of things can
00:30:47.720
happen and and you know if you weren't on twitter you didn't know they were happening for quite some
00:30:51.960
time but there is a sense in which at least for my life it's not real life you know or it became
00:31:00.280
unreal and you know having gotten off of it i'm just i'm amazed at the difference in my life and it's
00:31:08.520
not just the obvious difference of i'm not hearing from 10 000 psychopaths on a daily basis or people who
00:31:13.880
are effectively behaving like psychopaths it's just my sense of what the world is has changed now it
00:31:21.160
could be that there's a bit of a delusion creeping in in that you know i'm i'm not in touch with certain
00:31:27.000
forms of information moment to moment but i don't know i just it's like a uh it's almost a pre-internet
00:31:34.600
existence i mean i spent a ton of time online and in front of my computer as it is so it's not pre-internet
00:31:40.680
but something has been stripped out of my life that was a a digital phantom or a golem or you know
00:31:49.480
something awful which um you know i just it's staggering to me how big it's like i can honestly
00:31:56.200
say that getting off twitter is one of the most important things i've done in the last decade right
00:32:01.480
so it's just it's an obscenity to me that i'm even in a position to say that right like that i that i
00:32:06.520
manage to get so confused about what i should be doing with my attention that i could effect such
00:32:14.760
a comprehensive change in my life by simply deleting my twitter account that's just it's staggering to
00:32:20.200
me so it seems there could be two things going on regarding your interactions with people and
00:32:24.760
probably both are true one is going off twitter you simply spend less time dealing with strangers
00:32:30.440
often malevolent or psychopathic strangers the second is something which which i've discovered
00:32:36.280
which is sometimes you you see somebody online and maybe they have an extreme political view maybe
00:32:41.640
they're very you know into donald trump or maybe they're just extremely woke or extremely anti-woke
00:32:46.840
and then you meet them in person and they're invariably more nuanced complicated kinder more
00:32:53.480
interesting less caricatured i'm sure there's exceptions i'm sure there's people who who are just just
00:32:59.160
as bad in real life or maybe worse but um but there's something about the dynamic of social
00:33:04.360
media that that really does at times bring out the worst in us i i gotta say i was a bit i was a bit
00:33:11.320
tempted to follow your lead but i but there's two things one thing is i don't have your status your
00:33:15.880
celebrity status i don't have that that particular problem of being you know dredged over by by by
00:33:21.800
crazy people and the second thing is that i waste a lot of time on twitter but i do find it's often
00:33:28.520
extremely informative as to what's going on in my world yeah yeah well that's that's what kept me
00:33:34.360
hooked for all those years because it you know i was following hundreds of smart creative people who
00:33:40.040
are constantly surfacing interesting articles or exactly paintings or i mean just it was it was my
00:33:47.000
news feed you know so do you do you have i won't put you on on the spot and ask yourself now i know
00:33:53.240
nothing yeah but no i'm actually more asking is there some character which has four
00:33:58.840
followers a little egg shape that's you just following the same people and no no no i'm really
00:34:04.360
i'm really off i mean i i occasionally have had to check it for you know to do you know research for
00:34:10.360
a podcast or just to get in touch with a specific piece of information but no like i go for weeks without
00:34:17.240
looking at twitter the website and it's not that i haven't lost anything because i again i was i was
00:34:25.400
seeing articles and other things discovered for me by by smart people that i'm i'm surely not
00:34:30.760
discovering myself now but it really does center on the point you just made which is just the distorted
00:34:38.600
sense of other people i knew i was getting yeah but couldn't fully correct for because in some cases
00:34:44.360
it's not only that these aren't just strangers who i know if i met them over dinner they'd be better
00:34:49.400
than they seem to me on twitter these are people who i actually know and have had dinner with but
00:34:53.960
yeah i could see what they were doing on twitter and it was changing my opinion of them right it's
00:35:00.600
just you know these are now awful human beings who i used to like over dinner but i can't believe
00:35:06.440
they're behaving this way right so it just it i felt like i had been enrolled in a psychological
00:35:11.800
experiment that was that had gone awry you know probably five years ago at least and it just took
00:35:18.840
me a very long time to find reason enough to just bolt for the door and that's um yeah but i when you
00:35:27.000
add the the ai component to it and the misinformation component to it i i'm very worried about our collective
00:35:36.520
ability to have a fact-based discussion about anything i mean even that even the topics i've
00:35:42.120
just raised i mean like my claiming confidently that we have a misinformation problem is the other
00:35:47.720
side of a debate which you know smart people are having now which i think we just can't possibly bring to
00:35:55.560
a satisfactory resolution i mean the other side is we've got people talking about you know media and and
00:36:03.640
and social media censorship and every you know reference to misinformation or disinformation
00:36:10.920
is a covert way of you know the deep state and the the odious establishment trying to suppress
00:36:19.880
the populist democratic epistemology that is struggling to be born right where like usually we just
00:36:27.880
we're trying to to force the overton window into a certain shape uh and position and uh uh make it
00:36:36.760
impossible for people to talk about or think about topics that fall outside of it so we can't even
00:36:43.160
agree about whether misinformation is a thing at this point yeah yeah so yeah i mean i was gonna say
00:36:50.280
just just to go back a little bit in response to you going off but sam you miss so much and the
00:36:56.360
the truth is to some extent you do miss some things you miss some discoveries you miss some some
00:37:00.440
very funny things very clever things but you you also miss stuff that you probably shouldn't be
00:37:05.880
attending to in the first place not because it's necessarily mistaken but because it's the outrage of
00:37:12.280
the day yes it's people getting furious because something happened in this school in nebraska or
00:37:18.360
somebody said this and they're getting you know and in a few days that will be gone we'll move to the next
00:37:23.480
an amount of mental energy and i'm speaking personally here that that that gets caught i get
00:37:28.040
caught up in for issues which i actually have no expertise and no intrinsic interest in but but you
00:37:34.200
know we're wired for gossip and hearing oh my god this person said this and now the world's coming
00:37:39.400
to an end and everybody it just just captivates us and and it's it's it's appealing i think to our
00:37:44.920
worst selves yeah it also gives you the sense that you're you're supposed to form an opinion
00:37:50.120
about everything right especially that's right when you have a big platform you know when you have
00:37:55.400
hundreds of thousands or millions of people following you you know something will happen
00:38:00.040
and you'll and you'll feel like okay this is an invitation to comment and it's interesting not to
00:38:05.960
have that space for that kind of micro commentary in my life anymore like i now i have a podcast where i
00:38:12.600
can decide you know whether i want to talk about something but that's a very different decision than
00:38:18.440
whether to you know retweet something or comment on it and the time course of the decision is
00:38:24.760
different you know lots of ephemeral things just fall away before you you have even decided whether
00:38:31.080
or not they were worthy of your attention or you know worthy to surface in in your commentary about
00:38:36.520
anything and yeah i mean i was just you know i'm missing a lot on twitter no doubt but what i what i was
00:38:42.680
missing when i was on twitter were things like books right like it was becoming harder to read books
00:38:48.920
you know and so uh yeah it's kind of the pace of of one's response to the information one is taking in
00:38:57.560
and it's uh i don't know i mean it's it's definitely a net good it's not that it comes with
00:39:03.080
zero cost but i recognize that people have very different experiences on twitter or any other social
00:39:09.640
media site where they happen to be and you know some people who are just putting out you know
00:39:13.720
happy memes are getting nothing but love back and it's they have no idea what i'm talking about but i
00:39:19.320
just um yeah i'm i'm worried that we we have built tools that we can't we don't know how to control and
00:39:25.320
they may in fact not be controllable by us and they're controlling us right they're making certain
00:39:32.600
types of conversation impossible they're making it difficult or or impossible to solve coordination
00:39:40.040
problems that we really have to solve in order to get anything important done in the world and um i
00:39:46.920
just think we they have created a um what seems like just unbridgeable divides in our politics this
00:39:55.800
could have always been the case right and it could there might be analogies to the to the invention of
00:39:59.640
the printing press that yeah where it made the same kind of indelible changes in how we did things or
00:40:06.040
failed to do things but i don't know i just think the the way in which the outrage machine has has no
00:40:13.080
off button and the pace of our engagement with the story of the day the outrage of the day and the way
00:40:21.720
in which that gets memory holed because it's supplanted by a new outrage of the next day and the way
00:40:28.600
that the cycle time of those changes completely obscures long-standing problems that we just do
00:40:36.200
not have the bandwidth to think about you know it really just seems like we have built information
00:40:43.000
tools that we just can't use effectively so i know a lot of people i i i see what you're saying i agree
00:40:49.320
with a lot of it i know a lot of people who are deeply concerned about exactly what you're talking about
00:40:53.640
particularly now with with ai adding something else to the mix and and i share that concern but
00:40:59.400
all of the solutions that get proposed often make me a bit queasy jon haidt suggests that the social
00:41:05.640
media basically don't have a it doesn't have a like or retweet button you modify the structure so that
00:41:12.120
that you don't get a sort of amplification and piling on gary marcus thinks the government should get
00:41:17.160
involved in in sort of controlling the runaway flow of uh misinformation robert wright you know
00:41:24.200
suggests doesn't think it should be mandated but suggests that that we should redesign social media
00:41:28.760
to to pretty much force people to eat their vegetables and you know get exposed to alternative
00:41:34.520
views and i don't know where do you stand on all of that yeah i honestly i don't have um any kind of
00:41:41.240
remedy worked out in my head i mean personally i have just simply defected and that makes the most
00:41:49.000
sense i mean i'm trying to find a way of interacting with information and producing it that seems um like
00:41:57.960
it it has real integrity and it's getting harder to do and i and i just see how siloed everyone has
00:42:06.680
become in their preferred echo chamber and it's um you know i well i don't feel that that has happened
00:42:13.800
to me in any kind of comprehensive way i can just i certainly see people perceiving me on any given
00:42:20.760
topic to have been stuck in in some kind of bubble and take you know covid uh as a clear case right it's
00:42:30.440
like there are the people who think that covid the disease is uh no big deal and you know or even a
00:42:37.320
hoax and those same people tend to think that the vaccines for covid are just the crime of the century
00:42:42.920
and going to kill millions and then you just flip those two toggles uh for the other half of our society
00:42:48.600
and it's uh is there a conversation to have between those two camps on some medium that could possibly
00:42:57.480
converge on a shared set of truth claims to which everyone would you know in the fullness of time
00:43:05.000
give assent there's a half a dozen other topics that come to mind that are equally polarizing in the
00:43:10.840
current environment i'm just not sure convergence is remotely possible yeah and it to the extent this
00:43:19.880
gets better i don't really see a natural market solution there's a parallel between somebody saying oh my
00:43:26.680
god restaurants fancy restaurants fast food places serve you know food that's extremely bad for us
00:43:32.600
it's salty it's fatty it's it's high calorie it's and so so why don't we just you know create these
00:43:38.760
restaurants that serve much healthier food with you know vegetables well you could do that but no one's
00:43:43.720
going to go to them yeah and similarly you know if you could you could create a new social media site
00:43:49.560
that does things better that discourages exaggeration and caricature that that brings together people
00:43:55.160
with real expertise but twitter is so much more fun yeah well i do think the there are some changes
00:44:02.040
that i've uh banged on about a lot on previous podcasts which i think would make a huge difference
00:44:08.360
i just don't i don't know that it it makes enough of a difference at this point but i i just i do think
00:44:13.720
the business model to which the internet got anchored is largely at fault right so they just the the fact
00:44:19.880
that we have to gain people's attention algorithmically so as to you know maximize ad revenue right that
00:44:27.000
that's the business model for so many of these sites yeah that's a problem and i i do think that if
00:44:33.320
people just subscribed to twitter and uh there were no ads and there was no anonymity and there was you know
00:44:41.800
very clear terms of service it could be much better than it is but again it does you know you know
00:44:49.080
suffer the analysis you just gave it which is the more you solve the problems i'm worried about
00:44:55.160
in some measure the more boring it might get right there there will be an eat your vegetables component
00:45:04.280
to it but what we have now is just the privileging of misinformation and outrage by the the algorithms
00:45:12.280
and it's yeah there's another dimension of this which has worried me in a different way which is so
00:45:17.880
many of the algorithms are becoming i don't know what the word is bespoke they're becoming geared for for us
00:45:24.200
and for me my example is um i i wake up in the middle of the night you know have a bad habit of
00:45:29.640
checking my email and then i sometimes find myself on youtube and more than once an hour has gone by
00:45:36.440
where i've just it was lost time because the youtube algorithm knows what i like and i like
00:45:41.880
key and peele sketches i like certain movie trailers i like this and that and and i just lose time and and
00:45:48.920
this is not a unique experience to me i you know that i forget his name the guy who ran netflix said that
00:45:54.760
our our enemy isn't other streaming services it's sleep you know i feel that that the real the world
00:46:01.400
that's outside of our screens and involves the outside and other people is at a serious disadvantage
00:46:06.760
relative to the algorithm driven feed that you get from twitter or youtube or a million other sources
00:46:13.080
and i i you know you could choose your dystopia some people now i think are thinking of a sort of a
00:46:18.120
skynet matrix dystopia of ai there's another dystopia where we're all just kind of blobs with our
00:46:23.880
vr things perched in front of our faces just whittling away our lives yeah well it's it's definitely
00:46:31.320
worth uh stepping back and taking stock because i mean just again personally i i as i said i i'm i'm
00:46:40.440
i'm embarrassed at at how long it took me to recognize what twitter had become in my life and it's
00:46:48.120
really you know i i was i'm by no means the worst you know casualty of the platform that i can think
00:46:54.840
of i mean there are people who have much more of a twitter problem than i ever had but it's um i mean
00:47:01.400
it's insane to say it but like something like a hundred percent of the truly bad things that have
00:47:09.720
happened in my life in the last 10 years have come from twitter really if i said 90 i i i'm sure i'm
00:47:17.640
underestimating it it's completely crazy just what a what a malign influence it has been on my life
00:47:25.240
and it took me years to just get fed up with it because of and to some degree it's what you um just
00:47:33.720
noticed with respect to the youtube algorithm it's just it's the steady drip of titillating isn't quite
00:47:40.440
the right word but it's reinforcing information of some kind right and yeah yeah and the fact that you
00:47:47.240
you know on twitter it can feel like an entirely wholesome way of satisfying your desire to be in
00:47:55.240
touch with the world and and have your curiosity slaked uh i mean for the longest time it seems like
00:48:01.640
it's that but yeah it's quite a change it's um well i'm wondering what you uh are most concerned
00:48:11.160
about at this moment and then we're going to take a turn to your your book but like what what are you
00:48:15.080
actually uh thinking about you know whether it's uh you know professional capacity or or a personal one
00:48:22.360
what's worrying you these days what's uh you know top of mind as far as um changes in our society that
00:48:29.000
you're uh that you're finding um bewildering or disconcerting yeah i i don't know where to begin
00:48:36.360
and some of it might be you know we're not getting any younger there's a common lament of the old is
00:48:41.960
oh my god things have gone to hell back back in the good old days you know and i think there could be
00:48:47.800
i think maybe to balance the complaining we've been doing i mean ai done right it could be a could
00:48:56.040
be a godsend could transform the world in in such wonderful ways and the same with social media so
00:49:03.080
you know we have really i think done done a fair job of pointing out the bad side but but it's rescued so
00:49:09.000
many people from loneliness people have found communities people have found love and putting
00:49:14.920
aside the misinformation problem and addiction problem we're social beings and some people are not situated
00:49:21.080
that they can get their social satisfaction out with actual people in the real world so they they
00:49:26.440
do it online and i think there's a satisfaction to be had for that too i mean to some extent this
00:49:31.960
speaks to both the positives and negatives of what we're talking about and it goes back to your comment
00:49:36.040
of all the bad things happening to you happening over over twitter which is we are extremely social
00:49:41.560
animals and our reputation is extremely important to us what people think of us i think only psychopaths
00:49:49.160
say i don't care what people think about me and mean it i mean basically having people say terrible
00:49:55.160
things about you lying about you is horrible is horrible and in some way it's far more horrible than
00:50:04.040
bodily pains or bodily damage i mean you ask people i don't know would you rather the whole world think of
00:50:10.360
you as a child molester or would you rather lose an arm i think people would vote for losing an arm
00:50:16.440
yeah and and you know so so and similarly that you know people people the reputational boons and
00:50:24.440
and connecting with people and so on has this euphoric feeling for many people and it can be unhealthy or
00:50:30.600
and and and addictive but i think when done properly could be a real plus of these algorithms it's
00:50:37.880
interesting i this could be the way it strikes many people or this could just be my own personal
00:50:43.480
idiosyncrasy but the the worst thing about you know reputational maintenance and you know
00:50:49.800
caring about what other people think the the thing that that really is my kryptonite is
00:50:56.200
the misrepresentation of what my views actually are like i i maybe everyone cares about this to the degree
00:51:03.800
i do but i i don't quite see it so it's not just people saying awful things about you is
00:51:10.040
you know it's just like that the truth is if someone accurately characterizes who i am or what
00:51:16.840
i've done or what you know what i think and they hate me for it yeah that's fine right and so like
00:51:23.640
this you know so let's say you know i'm an atheist right and so someone hates me because i'm an atheist
00:51:27.400
right so a fundamentalist christian will say awful things about me because of my atheism okay great there's
00:51:32.840
no problem with that and you know i i there's some outrageous views i i might have and if someone's
00:51:40.200
accurately characterizing them and they think they totally you know holding that view totally discredits
00:51:45.560
me as a person okay there's again no problem with that but it's just the lying about you know what i
00:51:53.480
think that just gets under my skin in a way that is um fairly uh life-deranging and it's and that's
00:52:01.560
why i when i see this larger problem of misinformation at scale where you just can't figure out what is
00:52:09.960
true in this blizzard of purported facts it's uh yeah it really worries me that things can go completely
00:52:20.120
off the rails it's not unrelated to your tremendous dislike of trump which of course is shared by many
00:52:26.680
people but i think there's a a certain feature of your dislike of trump that that connects to your
00:52:32.600
anger about the lies and the misinformation which is trump is you know notoriously famously
00:52:38.440
undeniably a bullshitter he's not he's he's not a liar he doesn't care enough to lie he has an
00:52:45.480
utter disinterest in in the truth yeah he'll just say whatever's whatever works for him and if it's
00:52:52.200
true it's true if it's false it's false he doesn't he doesn't care and there's something and and it seems
00:52:57.960
to it seems like like he he started a trend that that he a lot of people both for him and against him
00:53:05.000
have a sort of ethos that well it could be true it's the sort of thing one would say you know and and
00:53:11.240
and you know epistemological crisis is is is a fancy term but it's genuinely frightening when when
00:53:19.160
people just stop caring about the truth because you can't you can't reason properly you can't do
00:53:23.960
politics properly you can't do science properly you can't do society properly and i think that's
00:53:29.400
one of the major problems with the world we live in now yeah that's
00:53:34.280
a distinction that you're referencing courtesy of harry frankfurt the philosopher he
00:53:39.880
wrote this very short book really just an essay but it's a wonderful little book titled on bullshit
00:53:47.000
and we've discussed him before on the podcast but to remind people because i think it really is an
00:53:53.480
important distinction he makes the point in the book that the difference between a liar and a
00:53:57.960
bullshitter is that a liar has to be tracking what the truth is in order to insert his lie in a calculated
00:54:05.800
way in the space provided and he's observing all of the norms of reasoning that his
00:54:13.880
audience is relying upon because again he's trying to lie in a way that is undetected and undetectable
00:54:21.640
by logical human beings so he's not gratuitously contradicting himself he's trying to
00:54:29.560
conserve the data as much as he can he is tracking truth and expectations of consistency and every
00:54:36.920
other epistemological norm in order to do his nefarious work whereas the bullshitter is just
00:54:44.440
talking and just creating a mood and isn't spending any time trying to track what is true or
00:54:53.160
even trying to avoid contradicting what he said five minutes ago it's
00:55:00.360
complete epistemological anarchy there are no standards no
00:55:05.400
authorities no hierarchies no ground truth to be aware of it's just a blizzard
00:55:12.280
of opinion that's right and what trump did to a degree that i would not have thought
00:55:19.560
possible was expose that something like half of our society simply doesn't care about torrents of
00:55:27.800
bullshit on the most important topics and the most trivial being spread every hour of the
00:55:33.720
day across the landscape with no concern for truth in sight one way to put it is that liars respect the
00:55:43.640
truth liars might respect the truth more than somebody who is reflexively honest and never thinks
00:55:49.080
about it a liar works hard to orchestrate their statements so that they appear true to people
00:55:55.640
and really works at it says i've got to fool people whereas a bullshitter just bullshits
00:56:03.640
i have a part in my book maybe it's where i've departed it's the part of the book where i
00:56:09.240
think i disagree with most of my colleagues it's about rationality and here
00:56:15.000
i'm not going to defend bullshit but i'm going to defend people who participate in it at some level
00:56:20.360
sometimes people argue that those who believe or purport to believe conspiracy theories and wild
00:56:28.280
views and misinformation are somehow being irrational but unfortunately it's not as simple as that
00:56:36.280
what rationality is i think is using logic and probability and empirical facts to achieve your
00:56:43.880
goals now if your goal is to get things right then you should be working to find the truth and
00:56:50.200
appealing to science and working on your logic but often for many people the goal is to get along
00:56:55.800
and if you're in a community where everybody believes take an
00:57:00.760
old example that barack obama was born in kenya is not an american citizen and has no legal right to be
00:57:06.120
president and that's what everybody there believes well there's not much truth to it so if you care
00:57:11.160
about truth you're not going to believe it but you probably want to get along with the people around you
00:57:15.480
and so you're sort of in this dilemma where the world as a whole would be better if everybody tried
00:57:21.000
to get things right but as individuals in society following the common
00:57:27.560
practices and believing what other people believe is actually fairly rational yeah it's changing
00:57:33.880
the nature of the game we're equivocating on what rational means in these
00:57:39.320
two contexts but i would agree that it's like a hierarchy of needs problem you
00:57:46.600
need not to be exiled from your community or burned as a witch more than you need to
00:57:52.440
that's right to have intellectual integrity at least in this moment but for me that's a
00:57:59.640
statement of a kind of social pathology right that's a community that is not as good as it might be it's
00:58:07.960
certainly not as in touch with norms of error correction that would keep it in touch with reality in an
00:58:15.160
ongoing way and what you're describing has much more in common with religion than it has in
00:58:23.960
common with science or any other truly rational enterprise assenting to
00:58:29.560
certain pseudo truths represents a kind of loyalty test and any invidious comparison we're going to
00:58:35.960
make between religion and politics on the one hand and science on the other is going to swing on these kinds
00:58:42.600
of distinctions the difference between wishful thinking and a host of other cognitive biases
00:58:48.600
and being dispassionately guided by evidence and argument that's interesting i
00:58:56.280
appreciate the distinction but i think of it more as a continuum so religion is one extreme where
00:59:02.280
unless you publicly assent to the claims made of the one true god they may kick you
00:59:09.640
out of town or burn you at the stake politics is close to religion in that regard where
00:59:16.600
if you're a member of a political party and you're campaigning you
00:59:20.600
should believe certain things and you'll be punished if you don't but i think even something like
00:59:25.800
science in a pure sense has norms of rejecting authority and norms of skepticism
00:59:34.040
throughout but day to day if somebody is too skeptical about claims they're going to get kicked out of the club
00:59:41.160
yeah well that's something that i have struggled to make sense of in public for
00:59:49.800
audiences that seem deeply confused about what the norms are here and it's hard this is
00:59:55.080
really the sense in which science is not a science it's an art i don't think there's a way
01:00:01.800
we can make an algorithm of this process where we
01:00:08.120
value authority and then discount its relevance to any specific claim that's
01:00:17.560
right as you say we overturn it routinely in science whenever you make a breakthrough you're
01:00:23.320
very often proving some prior consensus wrong and we know that a nobel laureate in any discipline
01:00:32.120
can be wrong and doesn't need to be taken seriously if he or she is wrong and a lowly
01:00:40.200
graduate student can be right and the rightness or wrongness of any claim has absolutely nothing to do
01:00:46.600
with the cvs or the reputations of the people making those claims and yet as a time-saving device
01:00:54.120
we routinely rely on authority and consensus because probabilistically what 97 percent of chemists
01:01:02.520
believe about the structure of a given substance is our best bet at understanding what that substance
01:01:09.800
is by the lights of chemistry and that remains true until a flaw in the consensus is discovered by some
01:01:18.520
lone genius who then overturns it so it's a specialization problem and a time management problem
01:01:24.760
we just can't help but rely on authorities because most of the time it makes perfect sense to do that
01:01:31.640
that's exactly right a lot of cognitive neuroscientists could do excellent work
01:01:36.120
but don't fully understand some of the statistics they're using their collaborator may
01:01:40.920
understand it better and they may not fully understand the physics of the fmri machine they use and that's fine
01:01:48.520
and the graduate student who says i refuse to work on the study until i understand all of
01:01:54.360
this and can justify it for myself will have a short career you've got to defer yeah i mean
01:02:00.520
there's just no way to be a true polymath at this point although ironically ai promises to make
01:02:10.360
that increasingly possible if in fact we can outsource our thinking confidently to our robot overlords
01:02:18.440
because then in the presence of chatgpt 25
01:02:26.680
if any graduate student at any point can say all right explain this to me and explain it
01:02:32.520
again and get it down to 100 words when you think of how quickly you would be able to drill
01:02:38.760
down to first principles on anything that interests you and outsource the burden of
01:02:46.280
having to remember all of that stuff to the ai it's possible that we could have
01:02:54.440
more of a comprehensive ownership of the full set of facts that impinge upon any question
01:03:03.800
but still there'll be some boundary there where you are just accepting that in this
01:03:11.080
case the ai is integrating all of the authorities in a way that
01:03:19.080
actually works and that brings us back to the limitations of
01:03:24.440
current ai a little while ago i wrote an article where i wanted to get some good quotes from psychologists
01:03:30.760
actually from scholars in general who believed that the replication crisis showed psychology to be
01:03:36.200
deeply flawed so i asked gpt-3 and it came out with two amazing quotes exactly what i
01:03:44.520
wanted one from gerd gigerenzer and one from nassim taleb and they sounded
01:03:49.480
exactly in the style of those people and of course neither of them existed it just
01:03:54.840
plucked them out of thin air and these sorts of hallucinations are a problem
01:04:01.240
man i felt rather betrayed well i felt lied to get ready it's going to get worse yeah
01:04:07.560
that's right okay so when i asked you what we know about the human mind you gave me several
01:04:15.000
facets of the answer to this sweeping question one was evolution and its implications another was
01:04:24.440
the brain as the evolved organ that is producing everything we
01:04:30.280
know as the mind in our case another was the insight often credited to freud though there have
01:04:40.200
been many variants of it that much of what goes on in us and as us that is mental is not actually
01:04:51.720
conscious so there's this divide this boundary line between consciousness and what
01:04:58.840
following freud we have learned to call the unconscious and that could be misleading in
01:05:03.640
a variety of ways one of my favorite wittgenstein quotes is how he is said to have responded to this
01:05:09.800
notion of freud's and he says this is i think fairly close to verbatim imagine the difficulties we would
01:05:18.360
experience if we had a language that constrained us to say that when you see nobody in the room
01:05:25.960
you say mr nobody was in the room so it's the reification of absence that's
01:05:32.920
the reification of nothing being there in this case we could be concerned that there's a
01:05:38.040
reification of the parts of ourselves that we don't experience as though they were a storehouse of
01:05:44.040
potentially conscious mental states and then there's this related issue of
01:05:50.920
reductionism and emergence so the mind and any part of it we would
01:05:58.040
want to discuss take an emotion or an act of cognition is an emergent phenomenon which
01:06:05.400
when understood at the level of its microphysical constituents seems to some minds
01:06:14.600
to promise a smooth reduction to more basic facts which are the real facts but in other
01:06:20.600
cases that seems like a fool's errand and even in the presence of perfect ai and
01:06:27.880
infinite computing resources we're never going to be talking about
01:06:32.040
human scale experience purely in terms of neurotransmitters and synaptic connections
01:06:43.400
that's it let me stop on that because that's a deep point i think
01:06:48.920
my first main chapter is on the brain and i say the mind is the brain i
01:06:53.560
talk about the history of that and how that works as best we understand it
01:06:58.120
but i didn't spend the rest of the chapter saying what a lot of people then think that
01:07:05.720
wow so the real science is neuroscience and in the end we're not going to talk in terms of beliefs and
01:07:10.680
desires and emotions at all it's all going to be if you'd like to continue listening to
01:07:18.360
this conversation you'll need to subscribe at samharris.org once you do you'll get access to all
01:07:23.880
full-length episodes of the making sense podcast along with other subscriber only content including
01:07:29.160
bonus episodes and amas and the conversations i've been having on the waking up app the making
01:07:34.600
sense podcast is ad free and relies entirely on listener support and you can subscribe now at samharris.org