The Saad Truth with Dr. Saad - June 22, 2024


30th Anniversary of My PhD Dissertation Defence - A Summary & Retrospective (The Saad Truth with Dr. Saad_688)


Episode Stats

Length

51 minutes

Words per Minute

148.4

Word Count

7,574

Sentence Count

490

Misogynist Sentences

5

Hate Speech Sentences

8


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode, Dr. Saad commemorates the 30th anniversary of his PhD defense. He talks about his dissertation, why he chose evolutionary psychology, and how it can be applied to a wide range of problems.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 So 30 years ago today, as a matter of fact, not just 30 years ago today, probably to the hour.
00:00:11.480 So I defended my doctoral dissertation at Cornell, and the defense was on January 21st, 1994, so it's exactly 30 years ago.
00:00:23.260 I remember my wife yesterday asked me, do you remember what day it fell on, like what day of the week?
00:00:27.960 And I said to her, oh, I think it was on a Tuesday.
00:00:32.200 And then she checked, and it turned out to be on a Tuesday.
00:00:36.980 And it was in the afternoon, I think it was probably a bit earlier, I think it was maybe from 2 to 4, if I'm not mistaken.
00:00:44.140 So probably by now, I had finished, or just about finished.
00:00:48.280 And I remember as the committee asked me to step outside the room so that they can deliberate, and then they opened the door and asked me to come back in and so on.
00:00:59.940 My doctoral supervisor was the first one who approached me and said, put out his hand and said, congratulations, doctor.
00:01:10.680 So that was the first time that I switched from mister to doctor in obtaining the PhD.
00:01:18.000 The title of the PhD was The Adaptive Use of Stopping Policies in Sequential Consumer Choice.
00:01:27.060 I mean, saying consumer is really too restrictive, because the doctoral dissertation was framed in consumer choice.
00:01:40.600 But of course, it could be applied to any choice.
00:01:43.060 I later used it for mate choice.
00:01:45.720 You can use it for political choice.
00:01:49.540 And let me set the ground to what that dissertation was all about.
00:01:54.460 The dissertation is in the area of psychology of decision-making, or behavioral decision theory.
00:02:03.900 It's really a combination of cognitive psychology and experimental psychology.
00:02:11.700 And specifically, what I was trying to do in the dissertation is study what are called stopping decisions.
00:02:19.880 And now let me explain what I mean by stopping decisions, and then I will...
00:02:25.680 For those of you who didn't notice, in the alert to today's X spaces, I took a screenshot from four...
00:02:36.920 So please have those handy if you're able to.
00:02:40.020 So right next to the alert for today's X spaces, I've got four screenshots of slides that I have used in my lectures.
00:02:51.040 Maybe somebody could put their thumbs up just to make sure that you see where it is.
00:02:55.820 I mean, it's right there in the...
00:02:57.580 Can I get a thumbs up from somebody?
00:02:59.580 Can I have somebody who's there?
00:03:02.360 Anybody?
00:03:03.260 I guess there's a bit of a delay.
00:03:04.720 Anyways, there are four screenshots, as I said, that are really helpful if you want to follow the details of what the doctoral dissertation is about.
00:03:16.900 And I mean, I'm taking the time today to mark the moment.
00:03:20.460 Of course, number one, because it is an important moment.
00:03:25.440 It's a momentous moment.
00:03:26.360 It's the 30-year anniversary of my defense.
00:03:30.040 By the way, I'm 59, so when I defended my dissertation, I was 29, and now it's been 30 years since I've defended it.
00:03:38.900 So now it means that I've lived longer having defended my PhD than before I had defended my PhD.
00:03:47.700 I am a sentimental and romantic person, so I keep thinking about all these numbers issues.
00:03:53.220 So, okay, a few people have put their thumbs up so you have those four screens, screenshots, which will be really important.
00:04:02.740 One of the things, by the way, that I'm really looking forward to, I hope that as Elon Musk develops the video capabilities of X Spaces,
00:04:13.640 I'll be able to have not just audio lectures, but audio-visual lectures on X.
00:04:19.920 Imagine I could just turn it on and, you know, a thousand people could be sitting and actually listening to a lecture that I've put together.
00:04:29.160 But for now, since we don't have that ability, I thought giving you that visual aid would help.
00:04:33.900 Okay, so what is the general problem first, and then I'll get into the bit of the weeds.
00:04:39.640 And I'm saying this not because it was, you know, my PhD, but really because objectively, it truly, fully explains how people make decisions across any decision process.
00:04:56.420 One of the things that, if any of you have followed my work in general, know about me, is that I'm a synthetic thinker.
00:05:03.960 In other words, I like to study problems and use frameworks that could be applied in many, many contexts.
00:05:11.400 So evolutionary psychology is something that I've dedicated my academic career to because I can use evolutionary psychology to study politics or economics or mate choice or consumer behavior, you know, or psychiatric conditions in medicine.
00:05:27.200 And these are all fields that I have published in, so one of the reasons why that particular dissertation topic is the one that I ended up working on is because I very quickly realized that it is applicable to any decision that we make from the most consequential to the most inconsequential ones.
00:05:49.100 Okay, so let's discuss the fundamental problem.
00:06:19.100 Contrary to what classical economists tell us, see, classical economists think that in order for you to maximize your utility, you need to look at all available and relevant information.
00:06:32.320 Otherwise, if you only look at a subsample of the information, then you can't be assured that the choice that you've made is the optimal one from a utility maximization perspective.
00:06:41.880 Well, of course, classical economists operate in la-la land.
00:06:47.040 And so, yes, if you had all the time in the world and if you were a very committed calculational machine, then you would sit there and spend 400 years looking at all of the information before you bought that house or purchased that car.
00:07:03.300 But we don't do that, rather, what we do is we look at a certain amount of information and at what point we say, I've seen enough, I'm ready to buy that house.
00:07:13.540 I've seen enough, I'm ready to buy that Mazda.
00:07:15.380 So, what is the mechanism that you use cognitively in your brain so that you could say, I'm now stopping and committing to a choice?
00:07:26.100 I'm stopping what?
00:07:26.880 I'm stopping the information search process.
00:07:30.400 Okay.
00:07:30.760 Now, the reason why the model that I worked on in my doctoral dissertation is called a sequential binary model is because you are basically down to two final choices.
00:07:45.340 So, this model kicks in once you've narrowed the field of possible choices down to two.
00:07:51.140 So, think, for example, about the presidential elections in the United States.
00:07:55.000 There is a primary election process where many candidates might present themselves within their own party.
00:08:03.620 Then they whittle it down to one person.
00:08:06.360 The other party does the same.
00:08:07.980 And then in the final process, right, in the general election, it's just two people.
00:08:14.660 But it could also be for anything else, right?
00:08:18.440 So, actually, consumer psychologists have studied the process whereby you start with all possible alternatives.
00:08:25.180 Let's say you're choosing between all beers.
00:08:27.120 Well, there are the beers that you are aware of and the beers that you are unaware of.
00:08:32.520 Well, the ones that you are unaware of, you'll never look at, yes?
00:08:35.400 Now, the ones that you are aware of, you can break those down into the inert set, the evoked set.
00:08:42.900 Yeah, right.
00:08:43.600 So, like, there's a bunch of them that you know they exist, but you're never going to try them because for whatever reasons, right?
00:08:50.680 So, let's say you're choosing between business schools.
00:08:53.700 Well, I'm only going to look at schools that are in the top 10.
00:08:59.500 Those are my preferred ones.
00:09:00.980 And then I might have some backups in the top 30, but I'll never look at any business schools that are below top 30, because it's not worth it for me to go to that business school then, because it's not prestigious enough.
00:09:14.020 And so, there is a mechanism by which humans, in general, consumers in particular, will whittle down their choices down to the final two, and that's when my model kicks in.
00:09:24.480 So, that's why it's called a sequential binary choice model.
00:09:26.880 It is sequential because what you're doing, so now if you look at the first figure or screenshot that I sent you, it's called the criterion-dependent choice model.
00:09:39.260 Do you see it?
00:09:39.720 Can I get a thumbs up from somebody just so that I can make sure that you're following me because it's going to make the conversation a lot easier if I can assume that everybody is looking at those curves.
00:09:51.580 Can I get a thumbs up from anybody, the criterion-dependent choice model?
00:09:55.040 So, you see these horizontal curves.
00:09:57.400 Okay.
00:09:58.180 So, what is that saying?
00:09:59.400 You see, there are two...
00:10:00.760 Thank you, guys, who put up your thumbs.
00:10:02.980 I appreciate that.
00:10:03.640 Okay.
00:10:04.280 So, you've got alternative A and you've got alternative B.
00:10:07.620 On the x-axis, you've got how many attributes am I going to acquire or look at before I stop and make a choice?
00:10:17.880 Either choose alternative A or choose alternative B.
00:10:21.300 Now, what do I mean by attributes?
00:10:23.060 Let's suppose I was choosing between cars to purchase.
00:10:27.220 So, alternative A is Mazda.
00:10:29.480 Alternative B is Toyota.
00:10:32.300 Okay.
00:10:32.860 Well, attributes are the things that define what a car is.
00:10:37.840 So, price of the car is an attribute.
00:10:40.500 The gas efficiency of the car is another attribute.
00:10:44.120 The safety record of the car is another attribute.
00:10:46.980 How green the car is is another attribute and so on.
00:10:51.940 So, there may be 50 attributes that one can look at across those two final alternatives.
00:10:58.820 But, of course, none of us are going to sample all 50 attributes.
00:11:04.280 We're going to sample enough attributes until something clicks in our brain that says, stop acquiring more information.
00:11:12.920 You've now seen enough.
00:11:14.580 Buy the Mazda.
00:11:15.320 That's exactly what we do.
00:11:17.600 Okay.
00:11:17.800 And so, what this model does, this cognitive psychological model, what it does is it exactly explains step-by-step the cognitive processes that your brain goes through in making that stopping decision.
00:11:33.720 So, if you see on that curve, there are two horizontal lines that I called K.
00:11:41.560 K just means this is the level of differentiation that I must reach in order that I either choose A or B.
00:11:50.840 So, if A ever becomes so far ahead that it crosses that K curve, I stop and choose A, or, symmetrically, if I get to B's threshold first, I stop and choose B.
00:12:04.960 So, those are called stopping thresholds because basically what it's saying is,
00:12:09.640 I need to achieve that level of differentiation between Mazda and Toyota before I'm sufficiently convinced that I'm ready to choose whatever, whichever car wins first.
00:12:24.420 You follow?
00:12:25.620 So, what you're going to do is you're going to acquire one piece of attribute at a time across the two alternatives.
00:12:33.240 So, let's say my most important attribute is price.
00:12:36.000 So, I will look at price of the Mazda and of the Toyota.
00:12:40.840 Depending on which one scores higher, there's going to be a tracking curve that either goes up towards the A stopping threshold or the B stopping threshold.
00:12:51.640 And then it becomes a race.
00:12:54.060 Whichever stopping threshold I hit first, I will stop and choose.
00:12:58.840 So, let me go to the next slide.
00:13:00.860 So, now we're going to the next slide called Criterion Dependent Choice Model where I actually give an example, right?
00:13:09.040 So, if you see at the bottom of that curve, at the bottom of that figure, there is an actual example.
00:13:16.760 There is D1, D2, D3, D4 and so on.
00:13:21.140 D simply means dimension or dimension is just another word for attribute.
00:13:25.620 So, in this case, I'm going to sample the attributes in decreasing order of importance.
00:13:32.380 Meaning, all other things equal, I will look at my most important attribute first.
00:13:37.460 Then my second most important attribute next.
00:13:39.900 Then my third most important.
00:13:41.320 So, the numbers next to the D1s and D2s.
00:13:44.140 So, for example, see D1 has a weight of 0.3.
00:13:47.680 That means that's my most important attribute.
00:13:49.740 D2 is my second most important.
00:13:51.900 Therefore, it has a weight of 0.25 and so on.
00:13:55.400 So, I start and now these attributes are scored on a scale of 1 to 10.
00:14:01.820 1 is the worst possible score for that attribute.
00:14:04.920 10 is the best possible score.
00:14:06.820 So, I begin with my most important.
00:14:08.700 Now, and if you notice in that specific example, I have set the threshold at 2.5.
00:14:15.060 So, if ever I reach a cumulative differentiation of 2.5, stop and choose that alternative.
00:14:23.120 So, let's do it.
00:14:23.800 Let's go iteratively through the whole thing.
00:14:25.660 And again, it's going to be a lot easier for you to follow if you actually look at the curves that I have,
00:14:33.360 the screenshots that I've set up that I've given you.
00:14:35.560 Okay.
00:14:35.680 So, we're going to start with dimension 1.
00:14:40.440 Alternative A scores a 4.
00:14:42.560 Alternative B scores an 8.
00:14:44.680 So, the difference between 8 and 4 is 4.
00:14:48.080 And you multiply, right?
00:14:49.380 This is called a weighted additive heuristic.
00:14:53.620 So, what I'm doing is 8 minus 4 is 4.
00:14:56.620 And then I multiply that by the importance weight of that attribute.
00:15:00.800 So, 8 minus 4 is 4 times 0.3 is 1.2.
00:15:06.620 Meaning that after sampling the first piece of information on both alternative A and B,
00:15:13.620 B is 1.2 ahead.
00:15:17.080 So, if you see on the curve, the first arrow goes down towards B at 1.2.
00:15:26.560 Can I get a thumbs up from somebody that you know what the hell I'm talking about?
00:15:30.140 Because I can't see your faces.
00:15:31.940 So, I don't know if people are following.
00:15:33.680 Give me a thumbs up.
00:15:35.020 Give me a quick thumbs up.
00:15:36.580 Somebody, before I go on.
00:15:37.920 Where is the thumbs up?
00:15:40.640 Come on, guys.
00:15:42.800 Anybody?
00:15:43.440 I guess there's a delay between when I say, okay, Zagros gave me a thumbs up.
00:15:47.580 It looks like, okay, we're doing well.
00:15:50.500 Okay.
00:15:51.340 We got another thumbs up.
00:15:52.420 We got a whole bunch of thumbs up.
00:15:53.480 Okay, we're going well.
00:15:54.220 Okay.
00:15:54.740 So, now look.
00:15:55.760 After one piece of information, alternative B is ahead by 1.2.
00:16:02.520 But why don't I stop and choose B?
00:16:04.660 Because I've decided, I as a decision maker, that for me to stop, I need to reach 2.5, which
00:16:12.620 is not a set rule, right?
00:16:14.260 You may be a more anal decision maker.
00:16:17.780 You just need to be more convinced before you make a choice.
00:16:21.500 So, you may set those thresholds, those stopping thresholds at 3.5, which basically means that
00:16:27.420 on average, you're going to have to sample more information before you make a choice than
00:16:32.020 I would.
00:16:32.440 So, that actually also captures personality differences across decision makers.
00:16:38.260 You can't imagine how powerful this model is, okay?
00:16:42.720 So, after one piece of information, B is ahead at 1.2.
00:16:47.460 Is 1.2 greater than 2.5, which is my stopping threshold?
00:16:50.980 No.
00:16:51.640 That means I have to continue sampling more information.
00:16:55.240 So, now I get to the second piece of information, my second most important, which is D2.
00:16:59.640 Now, alternative A scores a 9.
00:17:02.680 It does very well on this.
00:17:04.420 Alternative B does very poorly.
00:17:06.260 It scores a 1.
00:17:07.380 So, 9 minus 1 is 8.
00:17:10.200 8 times 0.25 is 2.
00:17:13.380 So, meaning on that piece of attribute information, A is ahead by 2.
00:17:18.880 But remember, B was ahead by 1.2.
00:17:22.820 So, 1.2 in one direction plus 2 in the other direction.
00:17:28.220 So, the net differentiation after two pieces of information have been sampled is that now A is ahead at 0.8.
00:17:38.780 You get it?
00:17:39.260 So, after one piece, B was ahead by 1.2.
00:17:43.600 After two pieces, the cumulative differentiation is that now A is ahead at 0.8.
00:17:49.860 So, after two pieces of information, A is ahead.
00:17:54.260 But it hasn't hit the stopping threshold of 2.5.
00:17:59.260 It's only ahead by 0.8, which tells my brain, this is not enough of a differentiation between Mazda and Toyota for me to stop and choose.
00:18:08.220 So, I'm going to go on.
00:18:09.520 Now, I get to the third piece of information.
00:18:12.820 The third piece is D3, which, as you can see, has a weight of 0.2.
00:18:20.100 8 minus 3 is 5.
00:18:22.980 5 times 0.2 is 1 in favor of A.
00:18:28.460 Now, A was already ahead at 0.8.
00:18:31.580 Plus, you add another 1.
00:18:33.900 After three pieces of information, A is ahead at 1.8.
00:18:38.120 So, now, A is starting to really differentiate itself.
00:18:42.260 But, while it's getting close to the stopping threshold of 2.5, it's not there yet.
00:18:48.340 Therefore, my brain is saying, I'm not ready yet to stop.
00:18:51.520 I need more differentiation before I choose between Clinton and Trump, between car A and car B, between this university and that university.
00:19:04.480 So, now, I'm going to go on to the fourth piece of information, my fourth most important.
00:19:09.920 It has a weight of 0.15.
00:19:12.800 10 minus 5, in favor of A.
00:19:16.160 A is 10.
00:19:17.860 B is 5.
00:19:19.020 So, that's a difference of 5.
00:19:21.520 5 times 0.15, which is the weight for that attribute, is 0.75.
00:19:28.220 Remember, A was already ahead at 1.8.
00:19:31.580 Now, it's ahead by another 0.75.
00:19:35.240 So, 1.8 plus 0.75.
00:19:39.220 Now, it's at 2.55.
00:19:41.060 So, if you see the curve, it has hit and crossed the 2.5 curve that I had set.
00:19:48.380 Meaning, after four pieces of information, I can now stop and choose alternative A.
00:19:56.160 That's what we mean by a stopping sequential model.
00:19:59.700 Now, notice that this particular problem, it went up to D10, right?
00:20:08.080 If you look at the bottom of that figure, there's a big line after D4, meaning now I can stop.
00:20:16.160 I've reached enough differentiation that I don't need to look at any more information.
00:20:20.220 It only took me four pieces of information in order to choose car A or woman A or candidate A and so on.
00:20:28.640 So, that's what we mean by it's a sequential model.
00:20:32.900 I sequentially keep acquiring information and then I ask, have I reached my stopping threshold?
00:20:38.620 If yes, I stop and choose the one that hit the threshold.
00:20:41.880 If no, I acquire more information.
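The stopping procedure just summarized — accumulate weighted attribute differences one dimension at a time, check the threshold, and keep sampling if it hasn't been hit — can be sketched in Python. This is a minimal illustration using the worked example's numbers; the function and variable names are mine, not the dissertation's:

```python
# Criterion-dependent choice model with a fixed (horizontal) stopping
# threshold K, run on the worked example: attributes sampled in decreasing
# order of importance, scored 1-10, threshold K = 2.5.

def sequential_binary_choice(weights, scores_a, scores_b, threshold):
    """Accumulate weighted differences until one alternative crosses K."""
    diff = 0.0  # positive = A ahead, negative = B ahead
    for step, (w, a, b) in enumerate(zip(weights, scores_a, scores_b), 1):
        diff += w * (a - b)
        if abs(diff) >= threshold:
            winner = "A" if diff > 0 else "B"
            return winner, step, diff
    # Threshold never reached: fall back to whoever leads overall.
    return ("A" if diff > 0 else "B"), len(weights), diff

weights  = [0.30, 0.25, 0.20, 0.15]  # D1..D4, decreasing importance
scores_a = [4, 9, 8, 10]             # e.g. the Mazda
scores_b = [8, 1, 3, 5]              # e.g. the Toyota

winner, steps, diff = sequential_binary_choice(weights, scores_a, scores_b, 2.5)
print(winner, steps, round(diff, 2))  # A wins after 4 attributes at 2.55
```

Running it reproduces the walk-through: B leads by 1.2 after D1, A nets 0.8 after D2, 1.8 after D3, and 2.55 after D4, crossing the 2.5 threshold.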
00:20:44.420 Now, this model had already been proposed by German psychologists.
00:20:50.980 And so, I came along in my doctoral dissertation and now you go to the next slide.
00:20:59.340 This is the one where I write discrimination framework, Saad 1994.
00:21:04.080 Saad 1994 is referencing my doctoral dissertation 30 years ago in 1994, right?
00:21:12.020 So, in my doctoral dissertation, I said, yes, this model is very nice.
00:21:17.520 This iterative sequential model is very nice.
00:21:20.220 But, now notice if you see, I argued that the thresholds should be concave.
00:21:27.660 In other words, they should not be horizontal lines as the German psychologist had proposed.
00:21:35.300 Because, if the threshold is horizontal, this means that throughout the entire decision process,
00:21:44.580 you're never going to relax your threshold.
00:21:47.360 You're always going to expect the same level of differentiation,
00:21:52.100 whether you're on the first acquired attribute or the 87th acquired attribute.
00:21:57.780 And that simply didn't ring true to me because, and as you hear, if you just read with me the reasons,
00:22:06.060 so there are two reasons that can explain the concavity of the thresholds.
00:22:10.940 At some point in the decision, the cost of acquiring additional information might become prohibitively high.
00:22:17.820 Hence, the threshold is relaxed.
00:22:20.040 It needs to be a concave threshold.
00:22:23.840 Secondly, even if I had all of the patience and cognitive computational power in the world,
00:22:32.860 if the two alternatives are poorly differentiated,
00:22:36.940 like, let's take the specific example of Mazda and Toyota.
00:22:40.400 Well, they're both Japanese cars.
00:22:42.220 Well, they're both roughly in the same luxury category.
00:22:45.000 Well, they're both roughly of equal engineering quality, same price, and so on.
00:22:49.080 So, the tracking curve that I put up earlier is always going to be very close to the x-axis.
00:22:57.400 It's never going to reach that desired threshold that I am hoping to reach in order to make a decision.
00:23:04.520 So, unless I'm able to relax my threshold in the concave manner that I've shown you,
00:23:13.160 then I could potentially never reach a decision because I will never be able to sufficiently differentiate
00:23:21.040 between the two alternatives in order to reach that horizontal threshold.
00:23:25.220 So, I ran a sequence of very elaborate experimental psychological studies.
00:23:33.180 Experimental means you're running an experiment in a lab.
00:23:35.760 So, I actually had people come where they did decisions on a computer.
00:23:40.820 It's called a process tracing algorithm that actually kept track of every single behavior that they made,
00:23:48.560 which attribute they looked at next, how long, until when, what was their cumulative discrimination.
00:23:54.840 So, then I can basically track your curve and then I could use some mathematical modeling
00:24:01.020 to be able to predict or to fit whether the stopping thresholds of each decision maker
00:24:09.220 was in line with a horizontal model or a concave model.
00:24:14.320 And exactly what I found in my doctoral dissertation, so that's the first big finding,
00:24:20.200 is that I demonstrated, I discovered that stopping thresholds in a sequential model are not horizontal,
00:24:27.760 but they are concave, precisely because people are adaptive creatures, right?
00:24:34.540 And so, you could already see this evolutionary idea, right?
00:24:37.740 Human beings are not these non-malleable creatures with zero behavioral plasticity.
00:24:43.600 In light of incoming information, I have to be able to adjust my behavior.
00:24:48.720 And one of the ways that I adjust my behavior in the context of this model
00:24:51.720 is I relax my threshold so that it becomes a concave threshold.
00:24:57.820 Okay, so that's one.
00:24:58.780 So, that has a lot of, as I said, it's completely, it's cognitive psychology.
00:25:05.800 It's, you know, if you like, it's literally neuropsychology
00:25:08.920 in that you are tracking every iterative behavior, cognitive element of the decision process.
00:25:16.720 And then, of course, it's got, you know, some advanced mathematics
00:25:21.120 because I'm trying to fit these functional forms to determine whether it's a concave curve
00:25:26.320 or it's a horizontal curve.
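The concavity argument can be made concrete with a toy comparison (my own parameterization, not the dissertation's fitted functional forms): when the two alternatives are poorly differentiated, a horizontal threshold is never reached, while a concave, decaying threshold eventually relaxes enough to permit a decision.

```python
import math

def run(weights, scores_a, scores_b, threshold_fn):
    """Return the step at which |cumulative diff| crosses the threshold, else None."""
    diff = 0.0
    for n, (w, a, b) in enumerate(zip(weights, scores_a, scores_b), 1):
        diff += w * (a - b)
        if abs(diff) >= threshold_fn(n):
            return n
    return None

# Ten equally weighted attributes; the two cars differ by at most one point
# on any attribute, so the tracking curve stays close to the x-axis.
weights  = [0.1] * 10
scores_a = [6, 5, 6, 5, 6, 5, 6, 5, 6, 5]
scores_b = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

horizontal = lambda n: 2.5                       # never relaxed
concave    = lambda n: 2.5 * math.exp(-0.5 * n)  # relaxes as search grows costly

print(run(weights, scores_a, scores_b, horizontal))  # None: no decision reached
print(run(weights, scores_a, scores_b, concave))     # decision reached mid-search
```

The exponential decay here is only one convenient choice of concave shape; any threshold that falls with each additional attribute would make the same qualitative point.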
00:25:27.660 A second thing that I discovered in my doctoral dissertation,
00:25:32.220 I'm just giving you the broad strokes; the entire dissertation, of course, goes through
00:25:37.340 tons of details and so on.
00:25:42.300 The second big finding from my doctoral dissertation
00:25:47.120 is I discovered a new, another stopping heuristic,
00:25:51.980 which I called the core attributes heuristic.
00:25:56.340 What does that mean?
00:25:57.320 So, the core attributes heuristic, as you might imagine from the name of the heuristic,
00:26:03.380 basically works as follows.
00:26:05.440 It doesn't really, at first, concern itself with the stopping thresholds.
00:26:09.540 It basically says, look, I don't care if there are 25 attributes that define a car.
00:26:14.720 There are five core attributes that I care about.
00:26:18.800 I'm going to sample those five.
00:26:21.100 Whichever car is ahead on those five, I stop and I choose it.
00:26:25.480 And so, it's a form of stopping threshold,
00:26:29.060 but that's guided by a core set of attributes that are uniquely important to me.
00:26:34.720 But what I also discovered is that when people apply the core attributes heuristic,
00:26:40.360 they are mindful of the unfolding discrimination across the two alternatives.
00:26:46.040 So, let's say my core attribute set was made up of five attributes.
00:26:51.520 But after three attributes, alternative A is so ahead
00:26:56.460 that there is no way that it could ever reverse the decision.
00:27:00.540 So, then I would stop early,
00:27:02.440 meaning I would stop earlier than the five core that I was hoping to reach.
00:27:06.660 Because after three, the cumulative differentiation is so much ahead for alternative A.
00:27:12.720 And then, at times, I would stop late,
00:27:15.360 meaning that once I reached my five core attributes,
00:27:19.360 the discrimination between the two alternatives was so low,
00:27:22.900 meaning the two alternatives were not sufficiently differentiated,
00:27:26.340 that I thought, well, I better go on and sample beyond my core attributes set.
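A sketch of the core attributes heuristic as described — stop early when the current lead is mathematically insurmountable within the remaining core attributes, and stop late (extend the search past the core set) when the alternatives are still insufficiently differentiated. The "insurmountable lead" test and all names here are my illustration, not the dissertation's exact formulation:

```python
def core_attributes_choice(weights, scores_a, scores_b, core_size, min_diff):
    """Sample a core set of attributes, stopping early or late as needed.

    Scores are on a 1-10 scale, so a single attribute can swing the
    weighted difference by at most 9 * weight.
    """
    diff = 0.0
    for n, (w, a, b) in enumerate(zip(weights, scores_a, scores_b), 1):
        diff += w * (a - b)
        if n < core_size:
            # Early stop: remaining core attributes cannot reverse the lead.
            max_swing = 9 * sum(weights[n:core_size])
            if abs(diff) > max_swing:
                return ("A" if diff > 0 else "B"), n
        elif abs(diff) >= min_diff:
            # At or past the core set, and differentiation is sufficient.
            return ("A" if diff > 0 else "B"), n
        # Otherwise stop late: keep sampling beyond the core set.
    return ("A" if diff > 0 else "B"), len(weights)

# Five core attributes out of eight; A dominates immediately, so the
# decision can be made before the core set is exhausted.
weights  = [0.30, 0.25, 0.15, 0.10, 0.05, 0.05, 0.05, 0.05]
scores_a = [10, 10, 6, 6, 6, 6, 6, 6]
scores_b = [1, 1, 5, 5, 5, 5, 5, 5]

winner, steps = core_attributes_choice(weights, scores_a, scores_b, 5, 1.0)
print(winner, steps)
```

With these (hypothetical) numbers, A's lead after two attributes already exceeds anything the remaining core attributes could undo, so the search halts early.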
00:27:33.300 So, to summarize so far, and then I'll do one more main finding and that would be it.
00:27:40.440 I can't, it's so, one of the reasons why I wanted to do this X space is because,
00:27:46.180 I mean, what is a doctoral dissertation defense?
00:27:48.980 You get up in front of, well, it's a public defense.
00:27:52.880 Of course, your committee's there,
00:27:55.100 your doctoral committee, your doctoral supervisor, and the other committee members.
00:27:58.120 It's a very austere moment.
00:28:00.840 It's the highest level that you can reach in academia, obviously, the PhD.
00:28:05.880 And you're presenting your work.
00:28:08.660 Now, who could have known that 30 years later,
00:28:12.620 I will be sitting in front of a phone,
00:28:15.660 speaking in front of, I don't know how many people are here,
00:28:17.940 probably hundreds of people, maybe thousands, who knows.
00:28:20.040 And then, I'm giving the summary of exactly those findings.
00:28:26.260 And by the way, I don't have any notes or anything in front of me.
00:28:28.740 I'm just going off my head.
00:28:31.340 And it was literally 30 years ago.
00:28:34.440 And I mean, you know, not even in a science fiction movie could I have imagined
00:28:38.600 that I'm sitting doing an X spaces with, you know,
00:28:42.620 all these people about my doctoral dissertation 30 years later.
00:28:46.480 Okay, so, so far, we've said that, okay, number one,
00:28:51.200 okay, there's this stopping threshold called the criterion-dependent choice model.
00:28:55.660 But it assumes that your stopping thresholds are horizontal.
00:28:59.060 I came along and said, no, no, no, it can't be horizontal thresholds.
00:29:02.740 There needs to be adaptive behavior.
00:29:05.260 There needs to be behavioral plasticity.
00:29:07.400 So, therefore, I'm going to hypothesize that the stopping thresholds are concave.
00:29:11.740 And then I do a bunch of elaborate psychological experiments.
00:29:15.240 And then I demonstrate that, yes, indeed, those thresholds are concave.
00:29:19.280 I also identify a new heuristic called the core attributes heuristic,
00:29:23.560 which itself can be malleable.
00:29:25.600 I could stop early in the core set or I could stop late in the core set,
00:29:29.960 depending on how much differentiation I've achieved so far across the alternatives.
00:29:36.220 And now, if you go to the last screenshot that I shared with you,
00:29:42.460 I then said, okay, well, what happens to these thresholds, these concave thresholds,
00:29:49.220 if I compare a decision-maker's behavior under no time pressure versus under time pressure?
00:29:58.840 You follow?
00:29:59.980 So, if you go back to the previous figure,
00:30:06.720 you have the concave thresholds.
00:30:09.140 And now, look at the next one.
00:30:11.460 You see, there are two, so there are two possible ways
00:30:15.920 by which a decision-maker can adapt his or her behavior
00:30:21.780 in light of time pressure constraints.
00:30:25.540 Now, before I get into the specifics, the cognitive psychological specifics,
00:30:30.760 if I told you in words, what is it that you do?
00:30:35.520 Well, if you're under time pressure, you have less time to make a decision.
00:30:40.420 If you have less time to make a decision,
00:30:42.920 how is it that you could adjust your behavior?
00:30:45.320 Well, you could make a decision more quickly because you're under time pressure.
00:30:49.400 How would you do that?
00:30:50.240 Well, you would relax the threshold, the stopping threshold,
00:30:55.540 or you could also accelerate the speed of your processing.
00:31:00.260 So, if you look, so there is one curve that's the no time pressure curve.
00:31:04.980 Then there are two possible curves.
00:31:07.280 There's one that looks, if you notice, I wrote,
00:31:10.040 there's an arrow to a curve where I wrote,
00:31:12.920 maintaining the same rate of decay,
00:31:15.760 but starting at a lower point on the y-axis.
00:31:18.940 So, I can just shift the curves down,
00:31:22.120 maintaining the same rate of decay.
00:31:23.920 What am I effectively saying?
00:31:25.560 At any point in the process,
00:31:28.140 I now simply need to reach a lower threshold
00:31:32.460 before I stop and commit to a choice,
00:31:35.140 which is exactly what I would do under time pressure.
00:31:40.040 I need to make a decision more quickly.
00:31:41.580 The other possible adaptive mechanism when I'm facing time pressure is the other curve that you see.
00:31:51.920 I could start at the same point in the concave curve,
00:31:56.060 but then the concave curve can decay more quickly.
00:32:00.920 Okay?
00:32:01.520 Both of these I found evidence for.
00:32:03.900 So, in other words,
00:32:06.720 so just to summarize,
00:32:08.080 in case some of you are utterly confused,
00:32:10.100 basically what I'm arguing is that
00:32:12.500 when you're facing time pressure,
00:32:16.900 you're obviously going to
00:32:19.300 make a decision
00:32:21.300 in ways that recognize the fact that you're under time pressure.
00:32:25.820 Within the context of the sequential stopping threshold model,
00:32:32.440 how would you go about instantiating that realization?
00:32:36.420 Well, you would alter,
00:32:38.960 either you would shift in a parallel manner the entire curve down,
00:32:43.540 the concave curve,
00:32:44.340 and/or you would make the curve,
00:32:48.840 the decay of the curve,
00:32:50.740 steeper,
00:32:53.620 a faster decay.
00:32:56.160 Okay?
00:32:56.440 And again,
00:32:57.500 using mathematical modeling,
00:33:01.380 I had the subjects,
00:33:03.620 the participants,
00:33:04.420 go through these tasks
00:33:05.640 under time pressure
00:33:07.100 and under no time pressure,
00:33:08.480 and then I could fit
00:33:10.460 the mathematical curves
00:33:12.900 to be able to know
00:33:14.360 which of these strategies they were using
00:33:16.740 and I found evidence
00:33:17.880 for the strategies that I showed here.
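The two adaptive mechanisms described above, shifting the whole threshold curve down versus speeding up its decay, can be sketched as a toy simulation. This is an illustration only: the function names, the exponential-decay form of the concave curve, and every numeric value are my assumptions, not the actual model fitted in the dissertation.

```python
import math

def stopping_threshold(t, start=1.0, decay=0.1, shift=0.0):
    """A concave, decaying stopping threshold: demanding early, relaxed later."""
    return start * math.exp(-decay * t) - shift

def steps_to_decide(evidence_per_step, start=1.0, decay=0.1, shift=0.0,
                    max_steps=1000):
    """Accumulate evidence each step; commit once it clears the threshold."""
    confidence = 0.0
    for t in range(max_steps):
        confidence += evidence_per_step
        if confidence >= stopping_threshold(t, start, decay, shift):
            return t  # step at which the chooser stops and commits
    return max_steps

baseline = steps_to_decide(0.05)             # no time pressure
shifted  = steps_to_decide(0.05, shift=0.3)  # same decay rate, curve shifted down
faster   = steps_to_decide(0.05, decay=0.3)  # same start, faster decay
```

Both time-pressure variants stop earlier than the baseline, which is the qualitative pattern described here; fitting such curves to observed stopping times is what distinguishes which adaptation a participant used.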
00:33:19.580 So, to summarize everything,
00:33:21.860 the key problem
00:33:22.920 in the doctoral dissertation was
00:33:24.940 when is it
00:33:26.420 that people have acquired
00:33:28.360 enough information
00:33:29.800 to stop acquiring additional information
00:33:32.840 and commit to a choice?
00:33:33.900 So, as you can see,
00:33:35.640 this is why
00:33:36.360 when people say,
00:33:38.300 oh, but you know,
00:33:39.960 how is that related to marketing?
00:33:41.760 Well, I mean, it isn't.
00:33:42.900 That's why, you know,
00:33:44.960 I'm not a marketing guy, right?
00:33:47.000 I mean, I happen to be housed
00:33:48.300 in a marketing department
00:33:49.260 because I end up studying things
00:33:52.340 that are related to consumer behavior,
00:33:55.140 economic decision making,
00:33:56.600 and because,
00:33:57.320 frankly,
00:33:58.620 business school professors
00:33:59.560 are at the top
00:34:01.480 of the glitz and glory.
00:34:04.960 And so,
00:34:05.960 but the model,
00:34:07.360 that's why I was so interested
00:34:08.460 in studying it,
00:34:09.140 it's a model
00:34:10.000 that is applicable
00:34:10.940 to any decision,
00:34:13.200 not just consumer choice,
00:34:15.860 right?
00:34:16.160 It's any decision.
00:34:17.040 And so,
00:34:17.600 later in my career,
00:34:18.580 I ended up applying this model.
00:34:21.540 So, I married
00:34:22.820 my doctoral dissertation
00:34:24.380 with evolutionary psychology
00:34:27.200 and evolutionary consumer psychology,
00:34:29.040 which is the field
00:34:29.540 that I founded,
00:34:30.560 because I specifically applied
00:34:32.460 this threshold model
00:34:33.540 to mate choice.
00:34:34.760 Specifically,
00:34:35.880 I looked at
00:34:36.720 the sex differences
00:34:38.580 and where men and women
00:34:40.540 set those stopping thresholds
00:34:43.420 when they're choosing
00:34:44.780 between mates.
00:34:46.120 And again,
00:34:46.780 what, from an evolutionary
00:34:47.620 psychological perspective,
00:34:49.500 might you predict?
00:34:51.120 That
00:34:53.040 women are going to set
00:34:54.760 the thresholds
00:34:56.040 higher than men.
00:34:57.680 Why?
00:34:58.400 Because when it comes
00:34:59.380 specifically to mate choice,
00:35:01.280 there is a much greater cost
00:35:05.340 for women
00:35:06.680 to make
00:35:07.720 the wrong choice.
00:35:09.640 They're not choosing
00:35:10.260 between lawnmowers.
00:35:11.440 They're not choosing
00:35:11.960 between
00:35:12.900 iPhones.
00:35:17.080 When it comes
00:35:17.560 to mate choice,
00:35:18.500 parental investment theory
00:35:19.700 tells us
00:35:20.340 the sex
00:35:21.260 that has to offer
00:35:26.140 the greater
00:35:26.960 minimal
00:35:27.600 obligatory
00:35:28.400 parental investment
00:35:29.460 is the one
00:35:30.600 that has to be
00:35:31.340 more sexually
00:35:32.080 choosy.
00:35:33.360 And therefore,
00:35:34.120 we know that
00:35:34.620 that's women.
00:35:36.280 Therefore,
00:35:36.880 I took that principle
00:35:38.440 from evolutionary biology
00:35:40.240 and I applied it
00:35:42.480 to my sequential
00:35:43.460 choice model
00:35:44.580 specifically
00:35:45.840 in the context
00:35:47.040 of mate choice.
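That prediction can be illustrated with a deliberately simple sketch (the uniform candidate qualities and the specific threshold values are my own illustrative assumptions, not data from the dissertation): holding everything else fixed, a chooser with a higher stopping threshold examines more candidates before committing.

```python
import random

def candidates_searched(threshold, seed=0, max_candidates=10_000):
    """Sample candidate qualities in [0, 1) until one clears the threshold."""
    rng = random.Random(seed)
    for n in range(1, max_candidates + 1):
        if rng.random() >= threshold:
            return n  # number of candidates examined before committing
    return max_candidates

# Averaged over many runs, the choosier threshold requires a longer search.
less_choosy = [candidates_searched(0.5, seed=s) for s in range(200)]
more_choosy = [candidates_searched(0.9, seed=s) for s in range(200)]
```

Note that on any shared random sequence the higher threshold can never stop earlier than the lower one, since any quality clearing 0.9 also clears 0.5.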
00:35:49.280 Yeah,
00:35:49.640 that's right.
00:35:50.240 That's why I'm
00:35:50.860 Gad Saad.
00:35:52.320 You got that,
00:35:53.100 kids?
00:35:53.800 That's why
00:35:54.540 I am
00:35:55.680 the Gadfather.
00:35:57.440 So,
00:35:57.940 basically,
00:35:58.540 what I then did
00:35:59.640 for the rest
00:36:00.140 of my career
00:36:00.800 is I kept
00:36:02.360 marrying
00:36:04.100 my work
00:36:06.620 in psychology
00:36:07.260 of decision-making,
00:36:08.500 my work
00:36:09.760 in behavioral
00:36:10.320 decision-making,
00:36:11.680 coupled with
00:36:12.500 my
00:36:13.600 interests
00:36:15.840 and expertise
00:36:16.680 in evolutionary
00:36:17.440 psychology.
00:36:18.200 So,
00:36:18.420 there's all kinds
00:36:19.020 of morons
00:36:20.000 that write to me
00:36:21.020 and say,
00:36:21.880 oh,
00:36:22.380 you're a marketing
00:36:23.440 guy.
00:36:24.280 I mean,
00:36:24.620 not that,
00:36:25.020 by the way,
00:36:25.320 there's anything wrong.
00:36:26.000 I mean,
00:36:26.140 marketing is life
00:36:26.960 and life is marketing.
00:36:27.880 Everything is marketing.
00:36:28.780 So,
00:36:29.020 I don't need to
00:36:29.600 justify that.
00:36:30.640 But,
00:36:31.000 I'm only marketing
00:36:32.740 in the sense
00:36:33.780 that there are
00:36:35.200 wonderful and
00:36:36.420 exciting
00:36:37.060 opportunities
00:36:38.740 to study
00:36:39.660 human nature
00:36:40.700 within our
00:36:42.020 consummatory
00:36:43.120 instinct,
00:36:44.600 right?
00:36:45.040 Because,
00:36:45.860 short of breathing,
00:36:47.400 right?
00:36:47.580 That's why in my
00:36:48.560 2011 book,
00:36:49.960 The Consuming Instinct,
00:36:51.100 I say,
00:36:51.980 I consume,
00:36:52.800 therefore I am,
00:36:53.560 right?
00:36:53.660 Short of breathing,
00:36:54.760 the thing that you do
00:36:55.440 most is consume.
00:36:56.520 I mean,
00:36:56.720 you're consuming
00:36:57.520 the current
00:36:58.840 knowledge in this
00:37:00.700 X Spaces session.
00:37:02.340 We're a consummatory
00:37:03.420 animal.
00:37:04.540 We consume
00:37:04.980 relationships,
00:37:05.860 we consume
00:37:06.180 friendships,
00:37:06.900 we consume books,
00:37:08.140 we consume marriages,
00:37:09.680 we consume religion.
00:37:11.300 So,
00:37:11.580 everything is
00:37:12.200 consummatory.
00:37:12.980 And so,
00:37:13.600 what I very early
00:37:15.380 decided that I wanted
00:37:16.340 to do was to be
00:37:17.980 an applied
00:37:18.840 behavioral scientist,
00:37:20.260 an applied
00:37:20.720 psychologist,
00:37:21.960 in my case,
00:37:22.760 marrying psychology
00:37:23.660 of decision-making,
00:37:24.640 consumer psychology,
00:37:25.420 and evolutionary
00:37:26.120 psychology.
00:37:27.280 And so,
00:37:27.620 this gives you
00:37:28.560 a window
00:37:30.500 into how it all
00:37:32.520 started in my
00:37:33.860 doctoral dissertation.
00:37:35.320 And by the way,
00:37:35.860 recently,
00:37:37.320 just two months
00:37:39.060 ago,
00:37:39.420 in April,
00:37:40.260 I returned to
00:37:41.600 Cornell
00:37:42.000 to give two
00:37:44.760 lectures,
00:37:46.060 invited lectures,
00:37:47.540 one on
00:37:48.580 my 2020 book,
00:37:50.520 The Parasitic Mind,
00:37:51.940 and one on,
00:37:53.760 regrettably,
00:37:54.340 it's a topic
00:37:55.220 that I wish
00:37:55.740 I didn't have
00:37:56.160 to talk about
00:37:56.720 on the
00:37:58.280 global Jew
00:37:59.340 hatred.
00:38:00.780 And I'm
00:38:02.480 sitting there
00:38:03.000 looking in the
00:38:03.680 audience,
00:38:04.400 and who's
00:38:05.080 sitting in the
00:38:05.660 audience?
00:38:06.540 My doctoral
00:38:07.700 supervisor,
00:38:09.540 Professor Jay
00:38:10.260 Russo,
00:38:10.700 who,
00:38:10.940 by the way,
00:38:11.600 has been
00:38:12.220 inducted into,
00:38:13.900 I can't remember
00:38:15.200 what it's called,
00:38:15.720 the Society
00:38:16.360 for the National
00:38:18.260 of Scientists,
00:38:19.380 a big honor
00:38:20.740 for a very
00:38:22.240 well-known
00:38:22.840 cognitive and
00:38:23.540 mathematical
00:38:23.940 psychologist.
00:38:25.540 And so,
00:38:26.160 here I am
00:38:26.920 at the time,
00:38:27.700 it was in April,
00:38:28.400 so nearly 30
00:38:29.480 years after,
00:38:31.260 you know,
00:38:31.640 I was up
00:38:33.000 as a 29-year-old
00:38:34.640 presenting my
00:38:35.360 doctoral dissertation,
00:38:36.260 which I just
00:38:37.200 now had the
00:38:38.520 pleasure of
00:38:39.540 presenting it to
00:38:40.300 you guys,
00:38:41.580 and he was in
00:38:42.840 the audience
00:38:43.340 listening to me.
00:38:44.120 And then I look
00:38:44.840 and I see
00:38:45.940 Professor Thomas
00:38:47.020 Gilovich,
00:38:47.720 who was a
00:38:48.160 professor of
00:38:48.840 mine,
00:38:49.500 also in
00:38:50.000 psychology,
00:38:50.760 he taught,
00:38:51.760 he was the
00:38:52.680 professor for a
00:38:54.700 doctoral course
00:38:55.440 in judgment
00:38:56.660 and decision
00:38:57.180 making.
00:38:57.740 He's the guy,
00:38:58.880 by the way,
00:38:59.300 who pioneered
00:39:00.620 the empirical
00:39:03.020 study of
00:39:03.580 psychology of
00:39:04.260 regret,
00:39:04.740 something that I
00:39:05.340 cover in my
00:39:06.480 happiness book,
00:39:07.380 which I highly
00:39:08.080 recommend.
00:39:08.660 Please go out
00:39:09.460 and buy it.
00:39:10.420 It's done well,
00:39:11.460 but not nearly as
00:39:12.440 well as the
00:39:12.800 parasitic mind.
00:39:13.460 Somehow it got
00:39:14.020 lost in the
00:39:14.560 shuffle,
00:39:15.560 my publisher
00:39:16.160 was bought out
00:39:16.960 last year by
00:39:17.560 another publisher,
00:39:18.520 so they were
00:39:18.960 in complete
00:39:19.460 chaos.
00:39:20.520 And so I'm
00:39:21.200 quite dismayed
00:39:22.600 that it didn't
00:39:23.100 get the attention
00:39:23.900 that it should
00:39:24.520 have,
00:39:24.940 and that it's
00:39:25.780 such a positive
00:39:26.920 book,
00:39:27.240 it's such a
00:39:27.700 fun,
00:39:28.100 optimistic book.
00:39:29.460 It takes
00:39:30.020 my personal
00:39:32.460 trajectory of
00:39:33.220 happiness,
00:39:34.220 backed up by
00:39:35.720 ancient wisdoms
00:39:36.700 from the ancient
00:39:37.460 Greeks and so
00:39:38.080 on,
00:39:38.800 backed up by
00:39:39.580 contemporary,
00:39:40.360 the latest
00:39:40.840 science in
00:39:42.200 psychology and
00:39:43.040 happiness studies
00:39:43.860 and neuroscience
00:39:44.560 and positive
00:39:45.680 psychology,
00:39:46.740 and it puts
00:39:47.220 it all together
00:39:47.820 to give you
00:39:48.340 hopefully some
00:39:49.460 very actionable
00:39:50.360 prescriptions of
00:39:51.700 how to live the
00:39:53.560 best life that
00:39:54.220 you can.
00:39:55.080 And so in one
00:39:56.200 of the concluding
00:39:57.400 chapters of the
00:39:57.980 book, I talk
00:39:58.660 about, you know,
00:40:00.060 living a life that
00:40:01.000 hopefully allows you
00:40:01.820 to minimize future
00:40:03.220 regret, and Thomas
00:40:04.680 Gilovich is a
00:40:06.720 professor who
00:40:07.360 studied the
00:40:08.620 difference between
00:40:09.460 regrets due to
00:40:10.820 action versus
00:40:11.620 regrets due to
00:40:12.480 inaction, right?
00:40:13.680 And it turns out
00:40:14.600 that over the
00:40:15.140 long run, of
00:40:16.440 course, most
00:40:17.040 people, their
00:40:18.340 most looming
00:40:19.220 regrets are those
00:40:20.480 of regrets due
00:40:22.360 to inaction,
00:40:23.460 right?
00:40:24.120 You know, I wish
00:40:25.100 I had become the
00:40:26.220 artist that I had
00:40:26.960 always wanted to be.
00:40:27.880 I always loved
00:40:29.300 art, but I was,
00:40:30.980 you know, pressured
00:40:32.040 into becoming a
00:40:33.060 pediatrician because,
00:40:34.300 you know, my dad
00:40:34.980 and his mom were
00:40:36.240 pediatricians, and
00:40:37.100 therefore it was
00:40:37.700 expected of me.
00:40:38.780 But now I wake up
00:40:39.480 at 55 and I
00:40:40.380 realize I hate my
00:40:41.880 life, I hate
00:40:42.580 medicine, I always
00:40:43.500 wanted to be an
00:40:44.160 artist, and I
00:40:44.920 wanted to be
00:40:45.660 immersed in the
00:40:46.700 arts.
00:40:47.440 And so many people
00:40:48.280 have these types of
00:40:49.220 regrets.
00:40:49.740 And so I'm only
00:40:51.760 mentioning this
00:40:52.280 because, you know,
00:40:53.120 here I am 30
00:40:54.020 years later, and
00:40:55.340 now all of my
00:40:56.460 professors who are
00:40:57.260 still around are
00:40:58.660 there listening to
00:41:00.580 me, and, you
00:41:01.460 know, in many
00:41:01.840 cases I'm, you
00:41:02.760 know, at this
00:41:03.200 point I, you
00:41:03.900 know, I don't
00:41:04.800 want to say I
00:41:05.240 outrank them, I
00:41:06.000 don't want to talk
00:41:06.480 like that about my
00:41:07.080 professors, they'll
00:41:08.060 always be my
00:41:08.680 professors, but it's
00:41:10.300 so rewarding, and
00:41:11.300 even you can see it
00:41:12.180 in their face, you
00:41:12.800 can see it in the
00:41:13.500 face of my doctoral
00:41:14.320 supervisor, you
00:41:16.100 know, he's sitting
00:41:17.280 there and he's, I'm
00:41:18.420 sure, thinking, hey,
00:41:19.520 I did well with this
00:41:20.680 kid, he's turned
00:41:21.820 out all right.
00:41:22.900 So there's something
00:41:23.480 really beautiful and
00:41:24.640 magical, I mean,
00:41:25.500 there is a reason
00:41:26.100 why
00:41:27.260 being a
00:41:28.080 professor has, you
00:41:29.420 know, historically
00:41:30.460 been, you know, the
00:41:31.520 most noble of
00:41:32.160 professions, but of
00:41:33.100 course, believe me, I'm
00:41:34.700 the one who has been
00:41:36.000 very hard on academia
00:41:37.200 because, unfortunately,
00:41:38.340 academia has not
00:41:41.660 lived up to the
00:41:43.860 privilege that is
00:41:45.200 given to academics in
00:41:46.520 training your children
00:41:48.660 and mine how to
00:41:50.720 think, and this is why
00:41:51.660 I wrote The
00:41:52.060 Parasitic Mind, this
00:41:53.060 is why I've been
00:41:54.020 railing against some
00:41:55.060 of the stuff that's
00:41:55.620 been going on in
00:41:56.280 universities for, you
00:41:58.660 know, I just entered,
00:42:00.520 by the way, my 31st
00:42:01.660 year as a professor, I
00:42:02.540 can't believe it.
00:42:04.280 Straight out of my
00:42:05.120 PhD, I got my first
00:42:06.400 professorship, and so
00:42:07.780 it's both now my 30th
00:42:12.180 anniversary of
00:42:12.780 defending my doctoral
00:42:13.520 dissertation, and then
00:42:14.620 it's also I've just
00:42:16.180 finished my 30th year as
00:42:18.260 a professor and about to
00:42:19.300 start my 31st year.
00:42:20.600 There's nothing more
00:42:22.260 beautiful than academia
00:42:23.280 truly, as long as you
00:42:25.860 are a committed purveyor
00:42:28.760 and pursuer of the
00:42:30.020 truth.
00:42:31.500 That's what makes it
00:42:32.580 noble.
00:42:33.120 That's what makes it
00:42:34.000 enriching.
00:42:34.620 That's what makes it
00:42:35.440 soul-enriching and
00:42:36.800 mind-enriching.
00:42:37.980 Once that becomes
00:42:39.380 parasitized by activism,
00:42:41.420 by ideology, once you
00:42:42.840 are no longer a dogged,
00:42:46.120 unbiased pursuer of the
00:42:48.080 truth, then you're a
00:42:49.400 charlatan, you're a
00:42:50.300 scammer, you're an
00:42:51.240 activist, you're a
00:42:52.180 politician, but you're
00:42:53.320 certainly not an
00:42:54.060 academic.
00:42:55.180 And too many academics
00:42:56.620 have lost sight of what
00:42:59.000 it is to be an academic,
00:43:01.360 to have the privilege of
00:43:03.160 being someone who has to
00:43:04.720 not only create new
00:43:06.200 knowledge that future
00:43:07.560 academics will read, but
00:43:09.500 to train your children
00:43:11.560 to become hopefully the
00:43:13.320 great citizens that they
00:43:14.560 will become.
00:43:15.720 And so I take that
00:43:16.900 responsibility very
00:43:17.860 seriously, and this is
00:43:19.120 why I decided, so to
00:43:20.980 sort of talk about the
00:43:22.620 trajectory of my 30
00:43:24.080 years, and then I'll
00:43:25.160 wrap it up, and thank
00:43:26.320 you.
00:43:28.600 I've lived my career in
00:43:30.660 a way where I wanted to
00:43:32.360 have as big an impact as
00:43:34.640 possible in as many ways
00:43:36.220 as possible.
00:43:36.800 Now, I never set out.
00:43:38.180 I never set out to say,
00:43:39.720 hey, I really want to
00:43:40.720 become a famous guy and
00:43:42.200 so on.
00:43:42.520 I just pursued my passion,
00:43:47.320 what I perceive to be the
00:43:50.540 truth.
00:43:51.280 I live a very authentic
00:43:52.500 life.
00:43:53.660 And in doing so, I've been
00:43:55.760 fortunate enough to, you
00:43:57.440 know, publish many, many
00:43:59.360 scientific papers that were
00:44:00.800 cited a lot.
00:44:01.820 I've been able to, you
00:44:03.280 know, write many books that
00:44:04.300 have been bestsellers, been
00:44:05.740 able to build a platform
00:44:07.580 that, I mean, I could have
00:44:08.960 never, I mean, 30 years ago,
00:44:10.600 could I have imagined that,
00:44:12.600 you know, I could walk down
00:44:13.860 any street and, you
00:44:15.220 know, people are going to
00:44:15.860 come up to you as if you're
00:44:16.920 the Beatles.
00:44:17.760 I never thought that would
00:44:18.880 be possible.
00:44:19.500 I never sought it out.
00:44:20.820 It's beautiful.
00:44:21.940 It's nice.
00:44:22.620 It's beautiful, the love
00:44:24.040 that you receive.
00:44:24.900 Of course, it does come with
00:44:26.040 some negatives.
00:44:26.920 It comes with the negatives
00:44:27.980 of people who don't like
00:44:29.740 you, who send you death
00:44:30.780 threats, who want to kill
00:44:31.860 you, who want to do all
00:44:33.520 sorts of bad things.
00:44:35.260 For example, I don't know
00:44:36.380 how I'm going to go back to
00:44:37.460 university in September.
00:44:38.900 I truly don't.
00:44:39.760 I've been on sabbatical
00:44:40.700 leave since October 7th.
00:44:43.100 My university has gone
00:44:44.080 completely insane.
00:44:45.760 Being the, you know,
00:44:47.360 high-profile person that I
00:44:48.540 am, being Jewish, being
00:44:49.740 someone who shares his
00:44:51.840 opinions without any
00:44:53.160 filtering, it's a problem.
00:44:57.080 And so, but one of the
00:44:58.080 reasons why I still stand
00:44:59.160 and fight is because those
00:45:00.960 are problems that
00:45:02.020 shouldn't exist in
00:45:03.300 an enlightened society.
00:45:05.660 We're not in the dark
00:45:06.620 ages.
00:45:07.000 We're not supposedly living in
00:45:09.060 an authoritarian
00:45:10.660 reality.
00:45:13.640 Professors should be able to
00:45:14.920 speak their minds.
00:45:15.700 If you think that my ideas
00:45:16.840 are bad, defeat them with
00:45:18.720 better ideas.
00:45:19.800 But that's what made the
00:45:21.080 West great.
00:45:22.780 Excellence, commitment to the
00:45:24.940 scientific method, meritocracy,
00:45:27.840 individual dignity.
00:45:29.340 All of the things that you
00:45:30.760 thought you would take for
00:45:32.100 granted are no longer for
00:45:34.040 granted.
00:45:34.220 As a matter of fact, they're
00:45:35.120 controversial.
00:45:36.480 As a matter of fact, they
00:45:37.300 make you the pariah at my
00:45:39.180 university.
00:45:40.460 So imagine that I live in a
00:45:41.980 reality where I have
00:45:44.960 incredible accolades and love
00:45:47.900 from around the world in ways
00:45:50.080 that very, very few
00:45:51.880 professors in history would
00:45:53.360 have ever imagined being able
00:45:54.760 to have.
00:45:55.700 And yet within my own
00:45:56.740 university, I'm a cancer.
00:45:59.580 There's the Gad Saad problem.
00:46:01.640 Well, what's the problem?
00:46:02.540 Well, I do scientific work
00:46:04.680 that's, you know, high
00:46:06.580 quality.
00:46:07.300 I publish in top journals.
00:46:08.780 I'm cited a lot.
00:46:10.180 I write books that are
00:46:11.260 translated in 22, 23
00:46:13.060 languages.
00:46:14.720 The Parasitic Mind is,
00:46:16.080 short of anything
00:46:18.540 that maybe Jordan might have
00:46:19.980 written, but he's no longer
00:46:21.000 a professor,
00:46:22.040 probably unmatched: no other
00:46:23.200 Canadian professor has written
00:46:25.240 books that have sold as much
00:46:27.440 as mine.
00:46:29.420 And yet you wouldn't know that
00:46:30.320 I exist at Concordia.
00:46:31.340 Well, that's a shame.
00:46:33.680 That shouldn't be the case.
00:46:35.340 Not because I need the, you
00:46:37.320 know, the approval of my
00:46:40.260 employer, but because that
00:46:41.740 demonstrates that we have a
00:46:43.940 problem in academia where up
00:46:46.280 is down, left is right, male is
00:46:48.100 female, freedom of speech is
00:46:50.700 Nazism, biology is Nazism, and so
00:46:54.200 on.
00:46:54.760 And so the quicker we're able to
00:46:56.120 eradicate these things, the
00:46:58.040 quicker I can return to
00:47:00.500 publishing scientific papers and
00:47:03.320 not have to worry about
00:47:05.180 appearing in front of the
00:47:06.260 Canadian Senate to tell the
00:47:07.960 Canadian senators, oh no, there
00:47:10.480 really is evidence of such a
00:47:12.560 thing as male and female in a
00:47:15.600 sexually reproducing species made
00:47:18.540 up of two phenotypes called male
00:47:20.620 and female.
00:47:21.280 That shouldn't require any
00:47:24.020 testimony from an evolutionary
00:47:25.620 behavioral scientist in 2017.
00:47:29.000 Right?
00:47:29.260 So that's why
00:47:30.760 developing a mind vaccine as I did
00:47:35.020 in The Parasitic Mind is so
00:47:36.380 important because there is nothing
00:47:38.100 more beautiful than the scientific
00:47:40.100 method.
00:47:41.020 It allows us to do things that
00:47:43.780 20, 30, 40, 50 years ago would have
00:47:47.220 seemed like science fiction,
00:47:48.060 such as me talking now to
00:47:50.540 hundreds of people on a Friday
00:47:52.700 night before Shabbat, which I
00:47:54.800 could have never, none of us could
00:47:56.260 have imagined that that could
00:47:57.180 ever happen.
00:47:57.840 If you had seen it on a Star Trek
00:47:59.100 episode, you'd say, oh, that can
00:48:00.520 never happen.
00:48:01.380 Well, it did happen and many more
00:48:02.960 things can happen and they could
00:48:04.300 only happen when we stay true to
00:48:06.980 the freeing abilities of the
00:48:10.400 scientific method.
00:48:11.780 All right, guys.
00:48:12.580 So listen, let me make a couple of
00:48:14.380 requests to you before you head off.
00:48:16.100 Now, if you appreciate all this, I
00:48:18.200 didn't have to do this, right?
00:48:19.420 I don't have to spend 50 minutes on
00:48:21.320 a Friday night on the day of that
00:48:23.660 anniversary doing this, but I'm
00:48:25.340 excited to share this knowledge with
00:48:27.360 people.
00:49:28.000 So I do it purely out of goodwill, but
00:48:30.340 it'd be nice if people reciprocate.
00:49:32.040 You could reciprocate in many ways.
00:48:33.720 If you've not subscribed to my
00:48:35.380 YouTube channel, my podcast, you can
00:48:37.480 go and do that right away.
00:48:38.800 That costs you nothing.
00:48:39.980 Zero.
00:48:40.620 Nothing.
00:48:41.560 Nothing.
00:48:42.320 Zero.
00:48:42.980 You could do that.
00:48:43.740 That helps because it increases my
00:48:46.420 voice.
00:48:47.200 You can go and subscribe to my
00:48:48.920 exclusive content on X.
00:48:51.580 That's for $6 a month.
00:48:54.680 Well, imagine if I could get thousands
00:48:56.640 of people to subscribe.
00:48:58.180 Suddenly, I could become a professor
00:49:00.660 of the people.
00:49:01.340 I just set up lectures the way that
00:49:02.700 I'm doing now and I don't have to
00:49:04.780 worry about going to campus because
00:49:06.540 someone is going to spit on me or
00:49:08.480 attack me.
00:49:09.480 I would have enough financial freedom
00:49:11.380 to decide that's what I'm going to
00:49:12.940 do.
00:49:13.380 I'm going to be a professor to
00:49:14.660 10,000 people.
00:49:16.100 So, you can go and subscribe to my
00:49:18.500 exclusive content.
00:49:19.760 Do it.
00:49:20.540 It doesn't cost much.
00:49:21.440 It costs you as much as a latte per
00:49:23.040 month.
00:49:23.920 And I think you get a lot more in
00:49:25.920 return than what you're paying to
00:49:28.020 me.
00:49:28.180 You can go and order copies of my
00:49:30.920 books right now.
00:49:32.220 Nothing in life should be
00:49:33.600 parasitic.
00:49:34.500 So, I give you something.
00:49:36.260 You give something in return.
00:49:37.360 Okay.
00:49:38.200 So, if there is any of these many
00:49:40.460 ways that you think you can
00:49:41.760 contribute, please do so.
00:49:44.980 Let's fight for a return and
00:49:48.960 defense of reason, of logic, of
00:49:51.600 science.
00:49:52.580 I'm about to head off to eat a
00:49:55.760 massive, gargantuan Argentinian
00:49:58.920 steak, which for those of you who
00:50:03.320 live in Montreal, you should check out
00:50:05.140 this place called Tango Imports.
00:50:07.540 It's in the South Shore.
00:50:09.000 He brings in beef, vacuum-packed
00:50:13.840 beef straight from Argentina.
00:50:16.280 That's unbelievable.
00:50:17.320 It tastes completely different.
00:50:18.540 And not only that.
00:50:19.780 It's not even more expensive than
00:50:21.040 what you would buy at the
00:50:21.780 supermarket.
00:50:22.340 So, it's unbelievable.
00:50:23.680 So, we're going to go do that.
00:50:26.000 Tomorrow, we're going to celebrate
00:50:28.940 belatedly Father's Day, which
00:50:31.280 happened last weekend.
00:50:32.820 But my daughter at the time was
00:50:34.600 studying, was very stressed about
00:50:36.040 an exam she had early this week.
00:50:37.620 So, we took a rain check, which
00:50:39.700 we're going to do that.
00:50:40.600 And we're also going to celebrate
00:50:42.000 the 30th anniversary of my PhD.
00:50:44.200 So, I hope that you've got some
00:50:46.340 loved ones that you could spend
00:50:48.020 some good moments with.
00:50:49.500 Thank you so much for taking the
00:50:51.460 time.
00:50:51.800 I know you could have been many,
00:50:52.900 many places.
00:50:53.540 The fact that you chose to be with
00:50:54.760 me is an honor and a privilege.
00:50:57.600 Have a great weekend.
00:50:59.080 Keep fighting the good fight.
00:51:00.500 And I'll talk to you soon.
00:51:01.840 Take care, everybody.
00:51:02.540 Cheers.