Making Sense - Sam Harris - March 24, 2026


#466 — What Is Technology Doing to Us?


Episode Stats

Length

18 minutes

Words per Minute

182.97

Word Count

3,453

Sentence Count

175

Hate Speech Sentences

1


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode, I speak with Nicholas Christakis, the Director of the Human Nature Lab at Yale and a sociologist, about how technology and human relations have changed over the past decade. We talk about the impact technology has had on us, the role of technology in our lives, and the role technology has played in shaping us.

Transcript

Transcript generated with Whisper (turbo).
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're hearing
00:00:11.980 this, you're not currently on our subscriber feed, and you'll only be hearing the first part
00:00:16.320 of this conversation. In order to access full episodes of the Making Sense Podcast,
00:00:20.860 you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore it's
00:00:26.400 made possible entirely through the support of our subscribers. So if you enjoy what we're doing
00:00:30.400 here, please consider becoming one. I'm here with Nicholas Christakis. Nicholas, thanks for joining
00:00:39.160 me again. Sam, it's so good to see you again. Yeah, great to see you. Yeah, we don't see each
00:00:43.740 other in person enough or even on the internet enough, but I always love talking to you. So
00:00:48.900 let's just jump right into it. I'll remind people you are the director of the Human Nature Lab at
00:00:53.180 Yale. You are both an MD and a sociologist and have studied many interesting topics related to,
00:01:00.240 I guess, how human beings and now technology affect one another. And we have too much to
00:01:08.900 talk about. I think I want to start with the question of, I guess I just want your post-mortem
00:01:13.920 on the present. This last decade, what has technology, specifically information technology,
00:01:19.940 done to us? Yeah, so I think we are going to see the other side of our present dilemma. I think
00:01:26.780 it is going to take half a generation to really be on the other side of it because I think we've
00:01:30.640 dug ourselves into quite a hole. I share the opinion, I suspect with you and certainly with
00:01:35.560 people like John Haidt and others, that the kind of technology we've invented, or the turns that
00:01:43.040 our communication technology has taken in the last 10 years,
00:01:46.480 have so far been quite harmful to us,
00:01:49.660 whatever other benefits they've had.
00:01:51.040 I think they've contributed to this polarization.
00:01:53.360 They've contributed to anomie.
00:01:55.160 They've contributed to some of the mental health crises we've had.
00:01:58.620 I think they've also led to a surveillance state,
00:02:01.000 not just abroad, but shockingly in our own country
00:02:03.260 where these technologies are being used
00:02:05.120 in ways that I would regard as quasi-totalitarian
00:02:08.700 or at least pose the threat of that.
00:02:11.000 I had a friend long ago.
00:02:12.620 I still have him.
00:02:13.320 He's still a friend of mine.
00:02:14.380 And years ago, he told me he didn't use credit cards
00:02:16.380 And, you know, he refused to get a cell phone and he wanted, you know, he was trying to be off the grid because he didn't want to be surveilled. And I thought he was like a Luddite nut. Yet now, you know, I worry that my every move is being tracked by someone.
00:02:30.720 So if, to the extent that you are arguing, and I think you are, that some of what ails
00:02:35.840 us at present is due to some of these communication technologies and the ways they've been grafted
00:02:41.520 onto very fundamental human desires and exploit those desires, to the extent that we grow
00:02:48.780 as a society to cope with those threats, I think we will look back at this period as
00:02:53.880 just that: one in which we yielded to, were adversely affected by, and ultimately, let's say, overcame some of these threats.
00:03:03.600 Not dissimilar, you and I remember, when you couldn't swim in the Boston Harbor, the Charles was polluted, the air was polluted, and we sort of cleaned everything up in some sense.
00:03:15.240 So maybe we'll clean everything up in that way, but it'll take some time.
00:03:18.140 So what is your personal engagement with social media these days?
00:03:22.620 How do you use it if you use it?
00:03:24.380 Well, I got very disgusted with Twitter and I didn't abandon my account because I didn't
00:03:30.640 want anyone to squat on it.
00:03:32.520 And I found that the reason I went to Twitter was that I used it as a source
00:03:38.240 of information.
00:03:38.920 Like it was like access to experts in a way that was, you know, really, really helpful
00:03:44.140 to me.
00:03:44.640 And I found that a lot of the knowledge I was acquiring came that way. I curated a list of people with diverse expertise and beliefs and followed them, and I really enjoyed it.
00:03:54.660 And then I felt like I had to, it wasn't just appropriate for me to take from the commons, I had to give to the commons.
00:04:00.660 So I tried to generate content that would reflect my expertise or my ideas and be useful to others.
00:04:06.940 But in the last few years, I found it to be just incredibly toxic.
00:04:11.600 And the feed became, even when I just tried to follow only my own people, became full
00:04:17.660 of garbage, a lot of trolling, a lot of mostly far-right conspiracy theories, also some left
00:04:24.080 craziness, of course, too.
00:04:25.760 I just couldn't use it anymore.
00:04:27.080 So I basically now, I stopped using Twitter, and I moved to Blue Sky a couple of years
00:04:31.740 ago, where I get mostly, I mean, the politics are another issue, but in terms of the science,
00:04:37.520 you know, I follow about 600 accounts, mostly scientists, and I get good scientific content,
00:04:43.400 and I have, you know, reasonable interactions. I have a tenth of the followers I used to have,
00:04:48.180 and that's fine. Facebook, I don't really use. LinkedIn, I don't really use. I just started a
00:04:53.840 YouTube channel on trying to advance the public understanding of science called For the Love of
00:04:59.200 Science, but I don't really know how to use YouTube. So we're just doing videos, you know,
00:05:03.000 once a week. So I'm really just basically Blue Sky for science. That's all I'm doing nowadays.
00:05:07.540 Well, I want to get back to the reputation of science and to your efforts on YouTube at the
00:05:13.580 moment. So let's just take, again, social media and what it's doing to us and the toxicity and
00:05:20.840 conspiracism and trolling that you are familiar with and that everyone listening to this will be
00:05:26.420 familiar with. Do you have any sense of what the remedy is? I mean, you know, my personal remedy
00:05:31.860 was to just delete my Twitter account and to now, you know, only in extremis, look at a Twitter feed
00:05:39.680 just because there's some breaking news that is best captured, you know, there. But even that,
00:05:44.900 Sam, do you remember, do you remember that guy who was an expert on military tires? Do you remember
00:05:50.060 that at all? No, no. It was, I think it was during, I can't remember if it was when the Ukraine war
00:05:54.740 started, I think. And there was some guy who was an expert in the maintenance of military vehicles.
00:06:00.940 And he sent a long thread out about how the trucks hadn't been moved around properly,
00:06:06.540 the tires hadn't been rotated, how all the tires were exploding. I had no idea there was such a
00:06:10.720 person. And I read his whole thread and I was like, oh my God, it's so interesting. All of that
00:06:15.060 content, that expertise, as far as I can tell, is gone from Twitter.
00:06:19.880 Has it been vitiated by AI slop, or how has it gone?
00:06:23.500 Well, first of all, whatever the algorithm is, I don't get that content.
00:06:27.860 The AI slop is a serious problem.
00:06:30.840 And one of my family members teases me, I'm known to be particularly gullible.
00:06:35.800 Right.
00:06:36.460 And actually, my re-narration of this is that I'm not stupid and naive and gullible.
00:06:41.400 I'm trusting.
00:06:42.460 Yeah.
00:06:44.980 You're a good person, in other words.
00:06:46.860 Exactly.
00:06:47.540 Exactly.
00:06:48.040 That's my story and I'm sticking with it.
00:06:49.680 But the thing is, somehow these algorithms figured out that I like to look at, like, baby elephants.
00:06:55.440 And initially, I got, like, real, I think, you know, like, BBC photos of, like, baby elephants.
00:07:01.140 And then I think the algorithm started feeding me slop.
00:07:03.940 Like, you know, a hippopotamus or a crocodile attacks a baby elephant.
00:07:08.280 Yeah, yeah.
00:07:10.060 Saved by a rhinoceros, yeah.
00:07:12.040 Yeah, exactly.
00:07:12.680 The mommy elephant comes and stomps around on the ground.
00:07:16.580 The whole thing is totally, it's all fiction.
00:07:18.960 And initially I was, like, really taken in by this stuff.
00:07:22.140 So there's a ton of AI slop that's a problem.
00:07:25.580 There is, I mean, it's just, it's useless, honestly, to me at least.
00:07:30.420 So I mean, I don't, I have nothing particularly good to say about the environment on Twitter
00:07:34.080 right now.
00:07:35.040 And it's a multiplicative, you know, profusion of problems from my perspective.
00:07:40.500 Plus, I wasn't so happy that, as I understand it, all of our personal
00:07:46.040 DMing and stuff on Twitter basically belongs to X and could be used to train AI algorithms and so
00:07:53.700 on. So none of that is appealing to me. Well, I think as we're speaking, there's a
00:07:59.220 lawsuit, I think the first of its kind, against social media companies in California. You mentioned
00:08:05.860 John Haidt. He's been obviously instrumental in bringing awareness to this issue, especially the
00:08:11.180 harm done to teenagers by social media. What is the path forward? Do you think it's a
00:08:16.040 a successful series of lawsuits, a revocation of Section 230, just a virtuous cycle of social
00:08:24.280 contagion where we all begin to change our minds at once and influence the norms around using
00:08:29.520 social media? Or is it just that AI slop itself will provide some cure because every video you
00:08:36.700 see, your first question from here until the end of the world is, you know, is this even real? And we
00:08:42.420 will begin to no longer care what's being presented in these non-gate-kept channels.
00:08:49.060 So I have a few things to say about that.
00:08:50.920 First of all, it's known, as you and everyone listening know, that anonymity contributes
00:08:55.460 to a lot of the problems.
00:08:57.900 And, you know, this is why torturers used to wear masks, you
00:09:03.040 know, and people would be disinhibited when they went to masked balls, for example,
00:09:07.700 you know, these fancy masked balls, you imagine, from hundreds of years ago, you know, that
00:09:11.780 the aristocracy had. You know, it's disinhibiting to hide your face. And this is also why people in mobs
00:09:16.920 behave awfully. They have a kind of practical anonymity, which is why you get riots. It's a sort of
00:09:22.480 well understood process. So I think that humans, of course, behave worse when they're anonymous or
00:09:22.480 pseudonymous. And now I have a hard time here, because my problem is that I think that in any setting where
00:09:27.220 you can't be anonymous, behavior is going to be better. On the other hand, I don't necessarily
00:09:37.040 want to abolish anonymity either, because I think that's a tool for totalitarianism. So
00:09:41.780 I think there will be social media companies where people who use them
00:09:49.920 are afforded the opportunity to be non-anonymous, and which then privilege
00:09:55.060 non-anonymous accounts, which I think will help. So I think tools to afford people the option and
00:10:01.860 also to exploit non-anonymity will help. So like the old blue check mark on Twitter was a good
00:10:08.060 idea. Yeah. Another thing, you said 230, like I struggle with this as well because on the one
00:10:13.660 hand, I do think that 230 was crucial actually for the emergence of the internet. I do think
00:10:19.420 that there is an argument to be made that these social media companies are just carriers and
00:10:23.460 shouldn't be responsible for their content. On the other hand, I also think washing their hands
00:10:29.020 of the content entirely doesn't make much sense either.
00:10:31.540 It allows them to sort of wink, wink,
00:10:33.560 and just ignore horrible abuses
00:10:36.120 taking place on their platform.
00:10:37.780 So I actually don't have an answer
00:10:40.480 to that struggle either.
00:10:42.100 But what I do think is going to happen,
00:10:43.500 just as you said, is I think people,
00:10:46.280 and maybe this will be accelerated by AI and AI slop,
00:10:49.060 I think people will learn.
00:10:51.400 And I think, ironically,
00:10:52.540 we may have a kind of return
00:10:53.940 to a privileging of reputable sources.
00:10:56.880 Like, you know, we've migrated so far away from, you know, the evening news with Dan Rather kind of, you know, thing to everyone is an expert and, you know, there's all this kind of good stuff, but also crap online.
00:11:11.100 I think we may, ironically, people may be willing to pay a bit more for reliability.
00:11:16.340 You may not believe it unless you read it in The Economist, you know, then you'll believe it.
00:11:21.020 You're not going to believe whatever you see otherwise online.
00:11:22.700 So it may reprivilege, you know, sort of credibly real voices.
00:11:29.200 Yeah, yeah.
00:11:30.720 I know you've done some research of late on AI and how it changes not just human behavior with respect to technology or information sources, but behavior toward one another, right?
00:11:44.300 It alters the mechanics of human cooperation on some level.
00:11:48.200 Well, you know, take that strand if you want, but I mean, just generally speaking, what are your thoughts about AI and where all of this is headed for us?
00:11:56.660 So I want to tell a brief toy story or toy model or toy example of the question you just put.
00:12:02.640 But before I tell that, I want to go on a slight digression.
00:12:05.960 Yeah.
00:12:06.380 And because I struggle a lot, as I suspect you do, with, you know, what is happening with these incredibly powerful tools that are being so rapidly developed in our society.
00:12:16.140 And there's this scene in the movie Fiddler on the Roof where the protagonist, who's a milkman in the town of Anatevka, you know, around the time of the Russian Revolution, just before actually, is a very poor man, goes to the town center, and there's a big argument that's going on there.
00:12:32.020 And someone makes a point, and Reb Tevye, he's the character, says, you're right.
00:12:36.440 And someone makes the opposite point, and he says, you're right too.
00:12:40.480 And then someone says, Reb Tevye, they can't both be right.
00:12:43.340 And he says, you're also right.
00:12:44.600 And this is how I feel when I listen to debates by experts on AI.
00:12:49.260 I listen to some computer scientists and some tech billionaires who talk about the amazing promise of AI and how there will be some bumps, but mostly it's going to be this extraordinary future and that to oppose it is to be a Luddite.
00:13:02.680 And I think you're right.
00:13:03.780 And then I listen to other incredibly expert computer scientists and tech billionaires who say the exact opposite, who say, you know, I think I was at an event with Sam Altman a couple of years ago or a year ago, actually.
00:13:14.140 and he said that he thought
00:13:15.400 there was like a 2%
00:13:16.700 human extinction risk
00:13:18.040 from AI.
00:13:19.420 Yeah, I think,
00:13:19.880 actually, I think it's higher
00:13:20.900 coming from him.
00:13:22.540 I think his estimate was higher,
00:13:23.860 but maybe he's recalibrated it
00:13:26.300 in the interest of public relations,
00:13:27.960 but I think he was more like
00:13:29.260 20% at one point.
00:13:30.700 Yeah, but I mean,
00:13:31.100 that's crazy to just say
00:13:32.420 nonchalantly.
00:13:33.740 Yeah, yeah.
00:13:34.520 No, 2% is terrifying,
00:13:36.060 but 20% is psychotic.
00:13:38.500 So you listen to those guys
00:13:39.500 and you're like,
00:13:40.600 well, they're also right.
00:13:41.540 Well, they can't both be right.
00:13:43.040 And, you know,
00:13:43.360 that's also true. So I have sort of stopped trying to form my own opinion, because I'm not so expert in
00:13:50.160 this area, but I am expert in another area, which is related to this, which is this issue of how AI
00:13:55.800 is going to change human behavior. And here, just to preface one set of ideas, the kind of toy model
00:14:03.000 that I like to throw out there to sort of help people fix ideas is imagine the manufacturer of
00:14:08.580 an Alexa digital assistant. The manufacturer of a digital assistant is very concerned with
00:14:14.140 the human-machine interaction. You would never buy an Alexa if, every time you spoke to it,
00:14:18.920 you had to say, excuse me, Alexa, I'm very sorry to interrupt you. If you don't mind,
00:14:25.020 would you please tell me the weather tomorrow? Right? That would be an absurd level of politeness.
00:14:29.240 You would never buy a machine like that. You expect to be able to say, Alexa, weather,
00:14:34.300 and it obediently responds.
00:14:36.520 And that's fine until you bring the machine into your home
00:14:39.920 and your children in speaking to that machine
00:14:42.360 learn to be rude.
00:14:44.220 And then they go to the playground
00:14:45.380 and they are rude to other children.
00:14:47.820 So what we've been studying in my lab
00:14:49.820 is human-human interactions in the presence of machines.
00:14:53.740 And specifically what we've been focusing on
00:14:56.580 is little perturbations in the AI systems,
00:14:59.780 in the machine systems
00:15:01.200 that modify how the humans interact with each other.
00:15:04.900 And in fact, what we're working on
00:15:07.080 is not so much super smart AI to replace human cognition,
00:15:11.280 but dumb AI to supplement human interaction.
00:15:14.560 And because the humans are smart,
00:15:16.440 you can think of the AI as a kind of catalyst,
00:15:18.800 like platinum in an organic chemistry reaction
00:15:21.180 that just facilitates the interaction of humans
00:15:24.160 and helps optimize them.
00:15:25.940 And we've done a broad set of experiments
00:15:28.020 that have shown this is possible,
00:15:29.360 that you can improve human collective and individual performance through the thoughtful
00:15:35.000 injection of AI agents into social systems.
00:15:40.100 Have you done any research or is there any research on the first point you made, though,
00:15:52.500 that kind of coarse and instrumental use of AI has bled through into human relations
00:15:52.500 and so kids are actually less socially appropriate if they've been barking orders at their
00:15:58.720 bots all day? We haven't looked at that specifically. That's just an example. I think
00:16:04.000 that work has been done, and I think that work comports with my hypothetical example.
00:16:08.960 Well, what would you imagine in the case of humanoid robots? This is something that,
00:16:12.380 honestly, I haven't spent that much time visualizing, but whenever I have spoken about it,
00:16:18.340 I think we can stipulate that we will eventually get out of the uncanny valley and have robots that
00:17:24.760 look, you know, if not perfectly human, you know, in some sense, better than human,
00:16:30.620 right? They'll be perfect humanoids in some sense. You know, when we want our AI shaped like that,
00:16:37.060 we'll make it shaped like that. I've spoken to Paul Bloom about this some years ago in response
00:16:43.000 to the series Westworld. We looked at that and we thought one piece of philosophy that was
00:16:49.320 accomplished by that series is that it revealed that a place like Westworld probably couldn't
00:16:54.860 exist because you'd really have to be a psychopath to go on vacation and rape, you know, perfect
00:17:01.100 facsimiles of, you know, human women and girls and then come home and, you know, tell your friends
00:17:06.720 what a good time you had, you know, raping and killing robots that were indistinguishable from
00:17:10.320 humans. And so unless, you know, maybe you could set up a theme park that would act like a bug
00:17:15.840 light for psychopaths in that way, but I mean, just normal people would not want to have a
00:17:20.660 perfectly, seemingly veridical experience of being a moral monster. And you would imagine
00:17:27.940 some real contamination, both of how they felt about themselves and how other people saw them
00:17:32.040 if we did that. So just imagine we get to the place where we have, now we're talking to
00:17:38.680 humanoid robots and making demands upon them. I would imagine that our social graces will come
00:17:45.560 creeping back in? I mean, honestly, even just in typing instructions into an LLM, I find myself
00:17:51.360 being inappropriately polite, right? I mean, I'll use the word please, and I think that probably
00:17:56.060 costs Sam Altman some number of dollars every time I do it. How's that going to change us?
00:18:02.720 Well, believe it or not, first of all, I'm not 100% sure, I don't know the answer,
00:18:06.820 but I can speculate along with you. Believe it or not, this also is an old topic, and it actually
00:18:12.380 came up prior to the, well, certainly prior to the modern instantiation of Westworld after the old
00:18:18.040 movie. There's a book, I know it's over 20 years old now, called something like Love and Sex with
00:18:22.300 Robots. People were speculating about what it would mean in some futuristic world in which we
00:18:28.220 have the capacity to have intimate relations with machines. And there were two schools of thought on
00:18:33.020 this. If you'd like to continue listening to this conversation, you'll need to subscribe
00:18:37.900 at samharris.org.
00:18:39.880 Once you do,
00:18:40.820 you'll get access
00:18:41.440 to all full-length episodes
00:18:42.840 of the Making Sense podcast.
00:18:44.800 The Making Sense podcast
00:18:45.780 is ad-free
00:18:46.620 and relies entirely
00:18:48.000 on listener support.
00:18:49.500 And you can subscribe now
00:18:50.760 at samharris.org.