Making Sense - Sam Harris - January 16, 2026


#453 — AI and the New Face of Antisemitism


Episode Stats

Length

21 minutes

Words per Minute

158.5

Word Count

3,429

Sentence Count

209

Hate Speech Sentences

10


Summary

Judea Pearl is a computer scientist, essayist, and writer. He's also the author of The Book of Why, and co-author of the new book, Coexistence and Other Fighting Words. In this episode, he tells us about his early life in the town of Bnei Brak, how he got into computer science, and how he became one of the fathers of AI.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're
00:00:11.740 hearing this, you're not currently on our subscriber feed, and will only be hearing
00:00:15.720 the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.060 Podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore
00:00:26.240 it's made possible entirely through the support of our subscribers. So if you enjoy what we're
00:00:30.200 doing here, please consider becoming one.
00:00:36.660 Well, I'm here with Judea Pearl. Judea, thanks for coming into the studio. Great to see you.
00:00:41.320 It's the second time, isn't it?
00:00:42.780 Yeah, I came to you last time.
00:00:44.480 Yeah, I was in your office.
00:00:46.540 UCLA, right?
00:00:47.100 I actually didn't look to see when that was, but that's a few years ago, certainly. That
00:00:51.680 might be.
00:00:52.380 That was for your book, The Book of Why.
00:00:54.360 The Book of Why.
00:00:55.000 Which kind of wraps up for a popular audience all of your work on causality and the logic of
00:01:01.600 that, which we'll touch on briefly, because I have to ask you about AI, given that you're
00:01:05.860 one of the fathers of the field, but that's not really our agenda today, but we'll start
00:01:10.760 near there. But I want to talk to you about your new book. You have a new book, Coexistence
00:01:17.560 and Other Fighting Words, which I'm sorry to say I have not yet read, but that will give
00:01:23.300 you the ability to say anything to a naive audience on this topic. I'm sure it covers
00:01:29.540 much of the ground I want to cover with you, because I'm, like you, I think, very concerned
00:01:34.400 about cultural issues and the way that we've seen a rise of anti-Semitism on both the left
00:01:40.720 and the right. And we're now seeing the condition of Israel as a near pariah state on the world
00:01:48.260 stage. Briefly, let's start with your background. Where were you born and what did your parents
00:01:53.460 do?
00:01:53.660 Well, I was born in a little town called Bnei Brak, which is seven and a half miles north
00:02:00.100 of Tel Aviv. And it was established in 1924 by my grandfather, Chaim Pearl, with 25 other
00:02:09.900 Hasidic families who came from Poland and decided that it's time to go back to where they belong.
00:02:17.560 So when did they move to Israel, your parents?
00:02:19.720 In 1924. My father, my father's family came in 1924.
00:02:23.920 And when were you born?
00:02:24.960 In 1936.
00:02:26.680 Okay. So, and what did your parents do?
00:02:29.860 Well, my father was the secretary of the Bnei Brak municipality. But that was only later,
00:02:37.260 only later. When he came, he became a farmer. You come to Israel in 1924, you buy a piece of
00:02:43.220 land and you schlep water from miles away and you grow radishes. That's what he did.
00:02:49.400 Yeah. Yeah. That had to be hard work. It's probably still hard work, but farming
00:02:56.580 had to be the first order of business.
00:02:58.140 First order. Yeah. The idea was to establish a biblical town with religious orientation and
00:03:05.660 make it into an agricultural success.
00:03:10.360 Yes. Do you know about, much about your parents' state of mind when they left Europe in the 20s?
00:03:17.340 I mean, what was that?
00:03:18.180 Yes, I know.
00:03:18.640 Did they see, were they witnessing Weimar and it's...
00:03:22.820 No, no, no. My father didn't see.
00:03:25.160 No?
00:03:25.360 No. That was 1924. And, well, the legend says, at least the family lore says that my grandfather
00:03:35.100 came home one day, he was accosted by a Polish peasant and called a dirty Jew, and he came
00:03:42.400 home bloody and he said to his wife and four children, start packing, we are going to where
00:03:48.280 we belong. Okay. Wow.
00:03:49.880 It's family lore, but it has some truth in it, yeah. And what were your principal intellectual
00:03:56.060 influences as a kid? I mean, how did you find your path to computer science as a young person?
00:04:03.860 First, I had a very, very good education in high school. It's a...
00:04:09.340 In Tel Aviv or...
00:04:10.840 I went to a high school in Tel Aviv, yes. I grew up in Bnei Brak, but the municipality of
00:04:17.020 Tel Aviv gave a quota to its periphery, to its suburbs. And Bnei Brak was one of its
00:04:23.800 suburbs. So from our town, they chose four people. I was chosen among them. It was a
00:04:29.680 privilege at the time to go to Tel Aviv high school. And we had a beautiful education.
00:04:35.780 You know why? Because my high school teachers were professors in Heidelberg and Berlin that
00:04:42.380 they were pushed out by Hitler. And when they came to Israel, they couldn't find an academic
00:04:47.380 job. So they taught high school. And we were just privileged and lucky to be part of this
00:04:53.480 unique educational experiment.
00:04:56.200 Yeah. Yeah. And your first language is Hebrew?
00:04:58.860 My first language is Hebrew. All the studies were in Hebrew.
00:05:02.160 So, but the people who had just come from Heidelberg, your professors were speaking Hebrew
00:05:08.040 at that point or what? Hebrew. Huh. Interesting.
00:05:10.540 They had to struggle. Some of them still had a Yekkish accent.
00:05:15.740 Uh-huh. Yeah. Yeah. Okay. So as I said, we spoke about your Book of Why last time, where
00:05:21.540 you talk about the importance of causal reasoning. What's your current view of AI? What has surprised
00:05:28.960 you in recent years? How close to causal reasoning are we getting in the current
00:05:35.860 crop of LLMs? And I'm just wondering how you view progress at this point?
00:05:41.360 In causal reasoning or in toward the...
00:05:44.160 I guess toward AGI in general, but, you know.
00:05:48.280 If that is a goal, I don't think we are much closer. We have been deflected by the effect of LLMs.
00:05:56.100 You have low-hanging fruit and everybody is excited, which is fine. I mean, they're doing
00:06:02.080 a tremendously impressive job, but I don't think they take us toward AGI.
00:06:09.320 So you think the framework, the LLM deep learning framework, is a dead end with respect to AGI?
00:06:15.660 No, it's a step.
00:06:17.040 But does it require a fundamental breakthrough of the sort that we haven't had?
00:06:21.080 Yes. Absolutely, yes.
00:06:22.380 So it's not just more data and more compute?
00:06:24.880 No, no, no, no. More data and scale-up, all of it, I don't think it's going to lead us
00:06:32.260 over the hump that we need to cross.
00:06:34.900 Can you articulate the reason why, you know, in terms that a layperson can understand? I mean,
00:06:40.860 if someone asked you, why is this insurmountable by virtue of just throwing more data and compute
00:06:46.800 at it?
00:06:47.380 There are certain limitations, mathematical limitations, that are not crossable by scaling
00:06:54.140 up. I show it clearly mathematically in my book. And what LLMs do right now is they summarize
00:07:03.640 world models, authored by people like you and me, available on the web, and they do some sort
00:07:11.420 of mysterious summary of it, rather than discovering those world models directly from the data.
00:07:20.480 To give you an example, if you have data coming from hospitals about the effect of treatments,
00:07:27.680 you don't feed it directly into the LLMs today. The input is an interpretation of that data, authored
00:07:36.840 by doctors, physicians, and people who already have a world model of the body, the disease, and what it does.
00:07:43.520 But couldn't we just put the data itself in as well?
00:07:46.520 Here you have a limitation. Here's a limitation defined by the ladder of causation. There is
00:07:52.860 something that you cannot do if you don't have a certain input. For instance, you cannot get
00:07:58.680 causation from correlation. That is well-established, okay? No one would deny
00:08:05.040 that, okay? And you cannot get interpretation from intervention. Interpretation
00:08:12.220 means looking backward and doing introspection.
00:08:14.720 You say you can't get interpretation from interventions?
00:08:17.720 You cannot.
00:08:18.720 But intervention is, just to remind me, but it's...
00:08:21.980 Intervention is what will happen if I do.
00:08:23.900 Right. So it's a kind of an experiment or a thought experiment.
00:08:26.940 Experiment. Experiment, correct.
00:08:28.480 And also, doesn't it imply a kind of counterfactual condition where you're saying, you know, what
00:08:33.040 would have happened if we didn't intervene?
00:08:34.880 No. No?
00:08:35.920 Here you have a barrier.
00:08:36.880 Ah.
00:08:37.520 You have to have additional information to cross from the intervention level to the interpretation
00:08:43.200 level.
00:08:43.600 And you'd put counterfactuals on the side of interpretation.
00:08:46.560 Yes, correct. Because you go, you say, look, I've seen that David killed Goliath, and
00:08:51.840 what would have happened had the wind been different, okay?
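
[To make the rungs of this ladder concrete, here is a small illustrative sketch; it is not something discussed in the episode, and the variable names, numbers, and scenario are invented for illustration. It simulates a toy structural causal model in which a hidden common cause makes a "treatment" and an "outcome" strongly correlated even though the treatment has no causal effect, so the observational answer (rung one) differs sharply from the interventional answer (rung two), and the counterfactual question (rung three) needs the structural model itself.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (all names and numbers are illustrative):
#   hidden confounder Z -> treatment X, Z -> outcome Y,
#   and X has NO causal effect on Y.
z = rng.normal(size=n)                  # hidden common cause
x = (z + rng.normal(size=n)) > 0        # "treatment", driven by Z
y = 2.0 * z + rng.normal(size=n)        # "outcome", driven by Z only

# Rung 1 (association): the observational comparison is badly confounded.
observational_gap = y[x].mean() - y[~x].mean()

# Rung 2 (intervention): simulate do(X=1) vs do(X=0); Z is left alone,
# and since Y does not depend on X, the true causal effect is ~0.
y_do1 = 2.0 * z + rng.normal(size=n)    # Y under do(X=1)
y_do0 = 2.0 * z + rng.normal(size=n)    # Y under do(X=0)
interventional_gap = y_do1.mean() - y_do0.mean()

print(f"observational gap : {observational_gap:+.2f}")   # roughly +2.3, pure confounding
print(f"interventional gap: {interventional_gap:+.2f}")  # roughly  0.0, the true effect

# Rung 3 (counterfactual): "what would THIS unit's outcome have been had the
# treatment been different?" requires the unit's own noise terms from the
# structural model -- information no dataset of (X, Y) pairs by itself contains.
```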
00:08:55.520 So who among the other patriarchs in the field fundamentally disagrees with you? I mean,
00:09:04.480 do people like Geoffrey Hinton or others who have had...
00:09:04.480 I don't think they disagree. They don't address it.
00:09:07.840 I haven't... Well, Geoff Hinton came up with the statement that we are facing a deadlock, okay?
00:09:14.400 Oh, I hadn't heard that, yeah.
00:09:15.600 Yes, yes. He mentioned there that this is not the way to get AGI, but he didn't elaborate on
00:09:23.040 the causal component.
00:09:24.720 Mm-hmm.
00:09:25.600 So I can't recall if we spoke about this last time, but where are you on concerns around
00:09:31.600 alignment and an intelligence explosion? I mean, I know it sounds like you're not worried that
00:09:36.400 LLMs will produce such a thing, but in principle, are you worried, do you take I. J. Good's and others'
00:09:43.280 early fears seriously that once we build AGI on whatever, on the basis of whatever platform,
00:09:49.520 we're in the presence of something that can become recursively self-improving and get away from us?
00:09:54.400 Absolutely, yes. I don't see any computational impediments to that horrifying dream. And
00:10:03.200 of course, we're already seeing dangers of LLMs when they fall into the hands of bad actors. But that's
00:10:13.760 not what we're worried about. We're worried about a truly AGI system that will take over and may be a
00:10:20.560 danger to humanity, yes. I definitely foresee that as possible. I can see how it can acquire free will
00:10:27.360 and consciousness and the desire to play around with people. That is quite feasible. It doesn't mean
00:10:34.560 that I'm going to stop working on, or trying to understand, AI and its capabilities, simply because
00:10:42.800 I want to understand myself. Yeah, yeah. Are you worried that the field is operating under a kind
00:10:50.480 of a system of incentives, essentially an arms race that is going to select for reckless behavior? I mean,
00:10:57.040 just that we, if there is this potential failure mode of building something that destroys us,
00:11:02.400 it seems at least from the, um, the statements of the people who are doing this work, you know,
00:11:07.360 the people who are running the major companies, you know, that the probability of encountering
00:11:12.000 such existential risk is in their minds at least pretty high. I mean, we're not hearing people
00:11:16.640 like Sam Altman say, oh yeah, I think the chances are, you know, one in a million that we're going to
00:11:21.840 destroy the future with this technology. They're putting the chances at like 20% and yet they're still
00:11:27.200 going as fast as possible. Doesn't an arms race seem like the worst condition to do this
00:11:32.480 carefully? There are many other people that are worried about it, like Stuart Russell and others,
00:11:38.560 and the problem is that we don't know how to control it. And whoever says 20% or 5% is just
00:11:45.520 talking. We cannot put a number on that because we don't have the theoretical or technical
00:11:52.480 instruments to predict whether or not we can control it. We do not know what's going to happen,
00:11:58.560 what's going to develop.
00:11:59.840 Right. But what I find alarming about those utterances is that, I mean, if you just imagine
00:12:03.840 if the, you know, the physicists who gave us the bomb, you know, the Manhattan Project,
00:12:10.240 if one asked about their initial concern that it might ignite the atmosphere and destroy all of
00:12:16.320 life on planet Earth, if they had been the ones saying, yeah, maybe it's 20%, maybe it's 15%,
00:12:22.720 and yet they were still moving forward with the work, that would have been alarming. But of course,
00:12:26.240 that's not what they were saying. They were, they did some calculation and they put the chances
00:12:30.240 to be, you know, infinitesimal, though not zero. It just seems bizarre culturally that we have the
00:12:36.480 people doing the work who are not expressing, you know, fallaciously or not. I'll grant you that
00:12:42.160 all of this is made up and it's hard to come up with a rational estimate, but for the people
00:12:46.800 doing the work, plowing, you know, trillions of dollars into the build-out of AI to be giving
00:12:52.400 numbers like 20% seems culturally strange.
00:12:56.080 I don't know what they mean by 20%. Look, what I am fairly sure of, all I'm saying, is there's no
00:13:05.680 theoretical impediment to creating such a species, a dominating species.
00:13:10.240 That is true. And at the same time, I'm working toward that indirectly, not toward that in order
00:13:18.000 to create it, but to understand the capabilities of intelligence in general, because I want to
00:13:25.920 understand ourselves because I'm curious.
00:13:28.960 Do you have any thoughts about how a system would have to be built so as to be perpetually aligned
00:13:37.040 with our interests? I mean, so if you're taking intelligence seriously, right? So we're talking
00:13:42.400 about building an autonomous intelligent system that exceeds our own intelligence and in the limit,
00:13:48.320 improves itself, one would imagine. Do you have any notions about what a guarantee of
00:13:54.240 alignment could look like before we hit play on that?
00:13:57.280 No, I don't think we can imagine an effective alignment or an effective architecture that will
00:14:06.160 reassure us of alignment with our survival.
00:14:09.680 I think Stuart Russell, it's been a couple of years since I've spoken with him, but I recall
00:14:14.720 his notion. Again, this is, I'm sure this is a kind of a hand-waving notion from the computer
00:14:19.600 science point of view, but to have as its utility function the aim of better and better approximating
00:14:26.080 what we want, to be perpetually uncertain that it has achieved our goals insofar as we can continue to
00:14:33.360 articulate them in this open-ended conversation that is the evolution of human culture. Does that
00:14:38.880 seem like a frame that...?
00:14:40.400 It's a nice frame, but I don't see any impediment for the new species to overcome and bypass those
00:14:48.480 guidelines and play.
00:14:51.760 What, so people have an intuition that if we built it, there's no possibility of it
00:14:59.440 forming its own goals that we didn't anticipate, the instrumental goals. I mean,
00:15:04.560 this is like, I mean, there are people fairly close to the field who will say this. I'm not sure. I mean,
00:15:09.760 maybe even someone like Yann LeCun would say this, but what would you say to that? I mean,
00:15:13.920 you just very breezily articulated certainty that, or something like certainty that an independently
00:15:20.720 intelligent system can play, that it can change its mind, it can discover new goals and cognitive
00:15:27.760 horizons, just as we seem to be able to do. Why is there a difference of intuition on this front?
00:15:33.360 I mean, like, your account seems obvious to me.
00:15:36.240 I don't know why I have a different intuition than LeCun.
00:15:41.680 I just, look, once you want a system that will explore, explore its environment, that's
00:15:52.080 required for any intelligent system. We want it to play like a baby in a crib and find out
00:15:58.400 why this toy makes noise and this doesn't, okay? So it has to play around in order to
00:16:03.120 get control over the environment, to understand the environment, okay? So once you have the idea
00:16:07.920 of playing, what will prevent it from playing with us as an instrument for its understanding,
00:16:15.440 yeah? For us to become an instrument, to become part of its environment.
00:16:21.200 All right. So this is kind of a reckless pivot from the topic of AI, but it's, I think there's a bridge
00:16:27.040 here. I mean, I guess we could put this sort of in the frame of the cultural conditions that
00:16:32.320 that allow us to reason effectively or fail to reason effectively. And this is on, you know,
00:16:37.760 morally loaded topics like, you know, war and, you know, asymmetric violence, anti-Semitism,
00:16:44.560 Islamism, again, Israel status among nations. You know, unfortunately you are unusually well-placed
00:16:50.320 to have an opinion on these topics, given your history and what happened to your son back in 2002.
00:16:56.720 I don't want to, you know, awaken painful memories, but I just feel like I would need to,
00:17:02.320 I'm happy to talk about this topic in any way you want, but I just need to acknowledge that your son,
00:17:07.600 Danny, was one of the most prominent people killed by Al-Qaeda when the war on terror, so-called,
00:17:14.320 became, you know, salient to most people in America, certainly for the first time after 9/11.
00:17:21.280 So you've spent, you know, now a quarter of a century witnessing, you know, as I have, but from
00:17:26.720 a kind of far deeper place, the kind of consistent misunderstanding around jihadism and Islamism that
00:17:36.720 has happened, especially on the left in our society. To my eye, we have a kind of an anti-colonial
00:17:44.160 oppressor-oppressed narrative that has captured the moral intuitions of the left such that it's very
00:17:49.440 difficult to talk about some of the ideas within Islam that reliably beget the kind of violence
00:17:54.800 we've seen. And, you know, groups like the Muslim Brotherhood have managed to play havoc
00:17:59.840 with this moral confusion. They've found legions of useful idiots, even on college campuses like
00:18:05.440 your own. I mean, I don't know if you noticed this, but the other day, the UAE announced that
00:18:09.920 it would no longer pay for its students to study at UK universities for fear that they will
00:18:16.480 be radicalized by the Muslim Brotherhood on UK campuses. So, I mean, that's how far the rot has
00:18:20.880 spread. We can take this from any side. We can talk about 20-plus years ago, how you
00:18:26.080 came to this, or your experience after. I want to talk about your experience after October 7th.
00:18:31.760 Just, you know, please start wherever you want to start, but.
00:18:33.920 Well, my son's tragedy pushed me into public life and into my interest in the social
00:18:45.680 problem and the cultural problem, the way you are describing. Yeah. We started the foundation
00:18:52.320 after his death with the belief that it's a matter of communication, of dialogue
00:18:59.440 between East and West, Jews and Muslims. And I got pushed into that very heavily.
00:19:08.240 And, together with a Pakistani scholar, we started the Daniel Pearl Dialogue
00:19:16.640 between Muslims and Jews. And we went from town to town and we had the meetings and the discussions,
00:19:24.080 audience discussions. I even took a trip, which I describe in the book, a trip to Doha in 2005 as
00:19:32.720 part of a conference to bridge the East-West relationship and to understand what prevents
00:19:40.640 the Muslim world, or the Arab world, the Muslim world, yeah, from modernizing and becoming enlightened as we
00:19:48.880 are. And that was the first time that I found the barriers, which I didn't believe
00:19:56.640 existed. And this was the barrier of Israel. We came there with the idea that they would like
00:20:04.480 American help in getting modernized and progressive. And my conclusion coming out was that they had a
00:20:15.280 different idea in mind. And we are talking about moderate Muslim scholars from all over the Muslim
00:20:23.760 world gathering in Doha for this conference, the purpose of which was what can Americans do to speed up
00:20:32.480 the process of progress and democratization of the Muslim world. And their idea was,
00:20:42.640 if you want us to modernize, we'll give you that favor. We are going to do you the favor of modernizing
00:20:50.560 ourselves on one condition. We want Israel's head on a tray, on a silver platter. That is the condition.
00:21:00.240 We cannot make any progress unless you chop off the head of Israel.
00:21:04.960 Yeah. Well, and you were at this time, you were living in Los Angeles, right? You were not living
00:21:10.320 in Israel in 2005. No, no, I was in Los Angeles, of course. When did you come, when did you come to
00:21:15.680 L.A.? I came to L.A. in 1966. If you'd like to continue listening to this conversation, you'll need to
00:21:23.600 subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making
00:21:29.520 Sense Podcast. The Making Sense Podcast is ad-free and relies entirely on listener support. And you can
00:21:36.160 subscribe now at SamHarris.org.