In this episode, the host discusses the importance of conducting interesting research in psychology, and why it's not something that is often taught in most academic institutions. He also gives examples of some of the most interesting research in psychology.
00:00:00.000So whether you are a scientist yourself or you're a graduate student or an undergrad who's interested in science or just a lay person interested in science, you really want to be listening to this because it's really important to not only learn how to think like a scientist and how to apply the scientific method, but it's really important to learn how to conduct interesting research.
00:00:26.120And this is not something that is often taught. Let's say when you're pursuing your PhD, you're taught methodological rigor, you're taught how to conduct a literature review, you're taught how to apply the scientific method: propose the research questions, posit some hypotheses, develop the data collection procedure to test the hypotheses, collect the data, analyze the data through some
00:00:56.120data analytic procedures, some statistical inferencing, and then arrive at a conclusion, offer some practical and theoretical implications, some future research streams, and then voila, you've written a paper or, you know, written your doctoral dissertation.
00:01:12.220And that's all great. Of course, we want to teach people how to do good science, but how do you teach people how to do interesting science?
00:01:18.940Now, this is something that I've been railing against for much of my career because I very quickly learned that most academic research is astoundingly boring.
00:01:33.340Astoundingly boring, not because there aren't many, many interesting things to study, but because people end up being focused on the wrong things.
00:01:43.360So they sacrifice interestingness for rigor.
00:01:49.940But of course, these are not mutually exclusive metrics, right?
00:01:53.200You could be very rigorous, both in terms of the internal and external validity of your research, while also being interesting.
00:02:01.720And I don't mean to, by the way, imply that interesting means that it has to be something that is covered on the six o'clock news, that it has to be sexy, and therefore somehow, you know, flippant research.
00:02:15.720You can say some unbelievably powerful things about the human condition, if we're talking about research in the human context, that make you go, wow, this is interesting.
00:02:27.880I mean, that's, that's one of the reasons I love the behavioral sciences, right?
00:02:31.780Why I love consumer psychology and evolutionary psychology.
00:02:35.180There are so many fascinating things that we can study, but people end up being stuck in designing very rigorous experiments or developing mathematical models that are really tight.
00:02:47.220But then the ultimate question is just pure bullshit.
00:02:50.100And it kind of came back to the forefront of my mind because I just returned from the Academy of Marketing Science Conference, which this year was being held in Miami.
00:03:01.840And so as I was going to the various sessions, I couldn't help but think what I've thought at so many other academic conferences, which is, boy, this is some bullshit research.
00:03:16.340And again, I don't mean that as an arrogant statement.
00:03:19.920It's not, I'm not trying to be elitist.
00:03:21.980Again, the researchers who are presenting their work, in most cases, you know, are doing the science, but they're not doing interesting science.
00:03:31.660Okay, so how do we decide what constitutes interesting?
00:03:34.680Isn't it, you know, in the eye of the beholder?
00:03:37.740Well, I mean, to some extent, yes, but there are some ways by which we can create a taxonomy or a framework for saying, you know, all other things equal, this research question is likely to bear more interesting fruits than this research question.
00:03:55.920Let me start by giving some examples of what I would consider interesting research in psychology in general.
00:04:05.700And then later, I'm going to give specific examples within evolutionary psychology.
00:04:13.320Well, I'll come back to the list in a second.
00:04:15.040So whenever I teach any course, whether it be at the undergraduate, MBA, master of science, or PhD level, I often will revert back to arguably the three greatest experiments ever conducted in psychology.
00:04:30.940Number one, many of you have heard, of course, of the obedience-to-roles experiment, the Zimbardo prison experiment at Stanford, where Philip Zimbardo took a bunch of undergrads, randomly made half of them be corrections officers and the other half be prisoners, and then watched their behaviors unfold.
00:04:56.300And then he had to stop the experiment on ethical grounds because those who were imbued with the role of being corrections officers internalized that role and became quite sadistic towards their fellow undergraduate students who were playing the role of prisoners.
00:05:13.140And so that was interesting in that you saw what happens to people when they are adhering to certain role expectations.
00:05:23.000But that study, notwithstanding how famous it was, is not as interesting, if you'd like, as, of course, the Stanley Milgram experiments.
00:05:35.080And there, for those of you who may remember, this is where Stanley Milgram devised an experimental design to try to address that which he had often heard regarding some of the Nazis and German soldiers who said,
00:05:54.600hey, I was just obeying orders, you know, I'm not a bad guy, I'm not an evil guy.
00:06:03.100And so he wanted to actually test: can you get people to do horrible things simply because they are put in a situation where there's going to be pressure on them to conform to authority?
00:06:19.680And so he devised an experimental paradigm whereby he brought in two people to the lab.
00:06:29.920One of them was, quote, randomly determined to be the teacher.
00:06:38.200But I say, quote, randomly, because the teacher was the actual subject.
00:06:43.980The student was actually a confederate, meaning it's a person who is pretending to be a subject or a participant, but really they're in on the experiment.
00:06:55.180And so he would put them in separate rooms and say, oh, you know, I'm trying to gauge how well people learn as a function of, you know, being punished.
00:07:06.160So, does a schedule of reinforcement cause people to learn better, or something to that effect.
00:07:12.660And so the teacher is going to read a bunch of paired associates to the student: young-man, blue-car, big-house, right?
00:07:24.620And the teacher asks the student who's in another room, please memorize these paired associates.
00:07:32.320And then when you go back and you say, you know, young, the student's supposed to say man.
00:07:37.240If he doesn't say the right paired associate, then he is zapped with an electric shock.
00:07:46.040And of course, unbeknownst to the actual teacher, no shock is actually being administered.
00:07:53.060And so on the other side, you hear the student sort of screaming, ow, this hurts, and so on.
00:07:59.900And these shocks become increasingly stronger, you know, to the point where it says, danger, do not administer, this will cause death.
00:08:11.340And just to get to the punchline, many people, well, more than you might have otherwise imagined, end up zapping a fellow human being to the point of causing them great harm, if not death.
00:08:27.020Because every time they would turn and say, hey, should I go on?, the person in the white lab coat would say, you know, hey, you've agreed to participate in this experiment.
00:08:36.020Just do it, follow the instructions. And so the point was to show that people could really be put in situations where they violate what they would have otherwise thought is their moral compass.
00:08:48.500And of course, this is not to justify the Nazis and the German soldiers in World War II.
00:08:54.080Now, the reason why this was incredible is because when you went ahead and asked clinical psychologists and psychiatrists, hey, how many people do you think would be psychopathic enough to actually administer
00:09:05.780enough voltage to kill someone? I don't remember the exact numbers, but I think it was something in the order of, you know, one in a thousand.
00:09:13.720And it turns out that an astonishingly greater number than that were able to be coerced into doing these awful things, unbeknownst to them.
00:09:25.580So that was surprising. It was like, wow, this, and, and by the way,
00:09:29.520the psychologists were so incredulous regarding the results that they said, bullshit, you know,
00:09:37.540no way, Milgram, you're making this stuff up. And so that's why he ended up taping the experiments.
00:09:43.540And so you can actually go and watch these taped experiments. Usually you don't necessarily have to
00:09:48.900tape these experiments if it's not relevant for the data analysis, but here, in a sense, it was,
00:09:54.520you know, a dear diary bearing witness to the fact that, no, look, these are really things that are
00:10:00.580happening. So it allowed us to see something that we thought, you know, most humans would not do. And
00:10:07.040therefore it was surprising. Therefore it was interesting. And again, here I'm giving you anecdotal
00:10:12.560evidence of interesting research. I will talk about a formal way by which we can establish what
00:10:18.820constitutes interesting research. And then arguably the one that even, that even impresses me more than
00:10:26.240the Milgram experiment is one that some of you may have heard me talk about before: the Solomon
00:10:30.560Asch conformity experiment. This is where, and again, for those of you who know it, forgive me, bear with
00:10:36.460me. But, by the way, people who are requesting to speak, your requests are going
00:10:43.740to go unheeded because, uh, in these general sessions, I just lecture. And then if you
00:10:50.840wish to ask questions, either directly to me or via written questions, which I oftentimes take, you
00:10:57.080have to subscribe to my content. No, that's not because I'm Jewish and, you know, a Jew trying
00:11:03.400to make money. It's because that's one of the, uh, exclusive benefits that I offer people, as do many
00:11:10.740other content creators, for people to subscribe to exclusive content, right? So if you wish to
00:11:15.980interact with me and so on, then please subscribe to my content. It's in the order of a latte per
00:11:22.840month. Okay. And you certainly get a lot more worth than a latte in subscribing to the content. So
00:11:28.860there's no point asking to, to, for speaking privileges here, because I'm just going to be lecturing.
00:11:35.000So the Solomon Asch experiment, think of it this way. There are three lines of very different
00:11:41.720lengths, labeled A, B, C, and then next to them, you know, but separate, there's another line X.
00:11:50.000Now X, let's say is the same length as line B. And it's clear, like there's absolutely no way unless you
00:11:57.460were, you know, absolutely blind that you wouldn't be able to tell that line X is the same length as line
00:12:04.220B. But the point is to see whether if you get a bunch of people to give the wrong answer, when you
00:12:11.360ask the question, which of lines A, B, and C is the same as line X. So if I bring in eight people
00:12:17.300into the lab, the first seven are confederates, meaning they're fake subjects, but
00:12:22.540the eighth subject doesn't know this. So when you ask which of lines A, B, or C
00:12:28.060is the same length as line X, the correct answer is B. But then the first person says,
00:12:34.040oh, it's A. And A is, like, five times longer than X, so there's no way that you can say
00:12:41.340that. He says A. The second person says A. The third person says A. The fourth person says A. So as
00:12:48.320you can see, there is this weight of conformity that's being built on the shoulders of the last
00:12:55.180person. And then the point is to see whether that person will conform. In other words, will they go
00:13:00.440against their lying eyes for a stimulus that is completely clear? There's no way you can get
00:13:08.460this wrong, right? I mean, literally, so that's what makes it so powerful, the experiment, because
00:13:13.400the real world has grayness. The real world has fuzziness. There's nothing fuzzy about this. Line B
00:13:20.860is the same length as line X. And lines A and C are of very, very different lengths. And yet,
00:13:29.540when you get to person 8, even if one person had conformed, that would be surprising. And yet,
00:13:36.380many more than one person conformed, demonstrating the unbelievable power of group conformity in shaping
00:13:45.080our behavior. So I talk about this when I then discuss many consumer behavior contexts that are
00:13:51.840shaped by conformity. Take, for example, the fashion industry. The fashion industry is just one big
00:13:57.340cyclical orgiastic conformity experiment. Okay, everybody, today you all wear blue. Okay, assholes,
00:14:04.620no more blue. That's out. You wear green. And we all nod along and follow the fashionistas,
00:14:11.520right? So the power of that experiment is it demonstrated in an incredibly elegant and simple
00:14:18.080design, something that is incredibly powerful and interesting about the human condition, which is
00:14:25.080you can get people to follow suit and conform even in context where it should be impossible for you to
00:14:33.240be giving the wrong answer. That's interesting. That's powerful. It didn't take a fancy convoluted
00:14:40.440experimental design. It didn't take fancy mathematical modeling. It didn't take fancy, you know,
00:14:49.420procedural convoluted designs. Very simple, elegant experiment demonstrating something universal
00:14:57.920and powerful and therefore interesting. Okay. So now we come to, so how do we, how can we determine
00:15:06.820what's interesting? What if I find dolphin mating behavior interesting, whereas you find, you know,
00:15:13.060the evolution of a particular plant morphology to be interesting? How, you know, you're, you're a botanist
00:15:22.320and you're interested in plants and I'm a, you know, marine biologist and I'm interested in dolphin mating
00:15:27.620behavior. How are we to judge whether my work is interesting or yours? And as I said, yes, there are
00:15:34.800individual idiosyncratic personal differences, but there should be a way by which we can talk about
00:15:40.880all other things considered, things that are interesting or not. Most academics could not give
00:15:46.380a damn whether their research is interesting. As a matter of fact, they take great pride in the fact
00:15:51.200that, you know, they don't look at such vulgar things. They do rigorous research. No, that's
00:15:57.740bullshit. When I sat down at many of those academic conference meetings, as I did this past week
00:16:07.100in Miami, I was seriously considering whether I should inject Ebola both in my eyes and ears so
00:16:14.560that I could be protected from some of the quote research that I was seeing. Now, again, it's not
00:16:20.000because the people who are doing the research are dumb. They're not. They're very good methodologists.
00:16:25.560They know how to apply the scientific method to test something, but what they're testing is
00:16:31.140completely useless. So, I'm going to come in a second to the paper in question that I wanted to
00:16:38.000talk about today in terms of how do you establish that something is interesting. But first, I wanted to
00:16:43.220read to you a part of a review that I had written. So, for those of you who don't know, the way
00:16:52.000academic research is judged is that it goes out to a journal, okay, let's suppose, you know, journal of
00:17:00.920consumer psychology. So, if I want to send a paper to that journal, I send it to the editor. The editor
00:17:07.660will decide who are the reviewers that are most likely to be well-suited to review this paper, and
00:17:14.380then it will go out for review, and then the results come back, you know: either reject this paper, it sucks;
00:17:23.880please resubmit, but implement these major revisions; please resubmit, but implement
00:17:32.320these minor revisions; or accept as is. Now, for most papers, it'll go through multiple rounds of back and
00:17:38.560forth, until at one point it gets to, okay, this paper is ready to be accepted. And so, as an academic,
00:17:45.100as a professor, one of the professional, if you like, obligations that are part of your job as an
00:17:53.080academic is to serve as a reviewer for various journals. And so, I've reviewed, you know, innumerable
00:18:01.220papers. I do it a lot less these days, because at this stage of my career, you know, my time is taken
00:18:06.900by other things, and I've already put in my time for 30 years. But probably, no, not
00:18:14.760probably, the single most common feedback that I gave a paper when I was reviewing it is basically
00:18:24.640that it wasn't interesting, okay? Of course, that's not the only problem. Sometimes there are, you know,
00:18:30.880data analytic problems, data collection problems, there are internal validity problems,
00:18:36.560the research design was not well developed, the hypotheses were not good. I mean, there's
00:18:42.600a million reasons why a paper can go wrong, and I'll talk a bit more about that. But the fundamental
00:18:46.740one is you would read a paper, at least for me, I would say, this, this sucks. I mean, of course, I
00:18:52.400didn't put it in those words. So, I'm going to read you first an actual
00:18:58.480part of a review that I had written. So, here we go. So, perhaps the first criterion that any scholar
00:19:05.460should pose when embarking on a new research project is how interesting the expected findings
00:19:11.080are likely to be. While it is clear that not every research project will be earth-shattering,
00:19:17.080one of the first metrics that should be used in evaluating a study is how novel and non-trivial
00:19:22.240the findings will prove to be. Regrettably, the current paper fails in delivering on these crucial
00:19:28.640epistemological metrics. Obtaining significant effects is insufficient in judging the intellectual
00:19:35.700contributions of a study. The directional relationships in the LISREL model, a LISREL model is a
00:19:42.260type of causal model, you know, where you're showing which variables
00:19:48.040cause which other variables in a path analysis. So, the directional relationships
00:19:55.440in the LISREL model are largely self-evident. Statistical and methodological rigor do not
00:20:01.960compensate for results that are minimally surprising. If one were to sample 1,000 lay people and ask them
00:20:09.300to predict the directional relationships obtained in the current study, I suspect that most, if not
00:20:15.900all, will provide the correct predictions, i.e. those reported in the structural model. If the authors
00:20:23.080are unfamiliar with the classic philosophy of science paper by Davis 1971, I highly recommend
00:20:30.220that they read it. It should provide them with powerful guidelines in judging what might constitute
00:20:35.740interesting research worthy of their intellectual efforts. So, this was a part of a review that I
00:20:42.000had written for a paper, but that, that exact, you know, what I just read to you is something that
00:20:47.520I've written probably for a hundred different papers through all my career. So, now what is this paper
00:20:53.820by Davis 1971? So, this is a paper that I always assign to my doctoral students and at times to my master's
00:21:05.040students because it is a brilliant paper that tries to find a method or a framework or a set of criteria
00:21:15.480that allows us to say, you know, if it, if a paper passes one or more of these criteria,
00:21:22.460then at least it looks like it's going to be an interesting paper. So the general structure he
00:21:30.700proposes, well, Davis, who was a sociologist, has passed away. Let me first read you the reference
00:21:37.680because if any of you want to track it down, it's actually a paper that I had recommended
00:21:45.120to my subscribers, uh, probably a month or two ago. I often will, uh, recommend classic academic
00:21:53.420papers or I make book recommendations to my subscribers. So, that's one of the things that
00:21:57.620you can get in terms of exclusive content if you subscribe to my content. But let me now
00:22:04.260mention the paper. It's by Murray S. Davis. It's a 1971 paper. It's published in Philosophy of
00:22:12.160the Social Sciences. Uh, it starts on page 309. It's titled That's Interesting! Towards a
00:22:21.560Phenomenology of Sociology and a Sociology of Phenomenology. Basically, what he's
00:22:29.740doing there in the That's Interesting! paper, he's saying, well, how do we go about
00:22:34.960establishing something that's interesting? So, let me read you, uh, the summary of the paper and maybe
00:22:40.900the first paragraph and then I will drill down what some of these criteria are. So, this is from the
00:22:47.080paper, page 309. Question: How do theories which are generally considered interesting
00:22:55.160differ from theories which are generally considered non-interesting? Answer. Interesting theories are
00:23:02.100those which deny certain assumptions of their audience, while non-interesting theories are those
00:23:08.120which affirm certain assumptions of their audience. This answer was arrived at through the examination of
00:23:14.740a number of famous social and especially sociological theories. That examination also generated
00:23:21.300a systematic index of the variety of propositional forms which interesting and non-interesting theories
00:23:28.100may take. The fertility of this approach suggested a new field be established called the sociology of
00:23:35.400the interesting, which is intended to supplement the sociology of knowledge. This new field will be
00:23:42.040phenomenologically oriented in so far as it will focus on the movement of the audience's mind from one
00:23:49.620accepted theory to another. It will be sociologically oriented in so far as it will focus on the
00:23:55.940dissimilar baseline theories of the various sociological categories which compose the audience.
00:24:01.660In addition to its value in interpreting the social impact of theories, the sociology of the
00:24:07.580interesting can contribute to our understanding of both the common sense and scientific perspectives on
00:24:13.320reality. So, just to kind of summarize, what he's basically saying there is, can we come up with a
00:24:18.640framework that allows us to establish in a meaningful academic way how we can measure what constitutes
00:24:27.360interesting scientific research. So, let me just read you the first paragraph of the paper after that
00:24:36.740summary, and then I'll describe some of the 12 criteria that he proposed. So, it has long been
00:24:43.960thought that a theorist is considered great because his theories are true, but this is
00:24:50.020false. A theorist is considered great not because his theories are true, but because they are
00:24:56.280interesting. Those who carefully and exhaustively verify trivial theories are soon forgotten, whereas those
00:25:05.080who cursorily and expediently verify interesting theories are long remembered. In fact, the truth of a theory
00:25:13.780has very little to do with its impact, for a theory can continue to be found interesting even though its
00:25:20.340truth is disputed, even refuted. So, there you go. What he's saying is that even if your theory is wrong, the fact
00:25:28.300that it triggers a sense of that's interesting, well, you know, you're doing good research. So, what are some of the
00:25:37.540criteria that he talks about? So, let me just mention these. He basically, I mean, I'm not going to go
00:25:44.500through all 12 criteria, but it's basically we thought that A causes B, but it turns out that B causes A.
00:25:53.720We thought that A and B are correlated. It turns out that A and B are not at all correlated. We thought that
00:26:01.900the phenomenon was, you know, singularly caused by a single factor. It turns out to be multifactorial. So,
00:26:09.260as you can see, it's always we thought that it was X, but it turns out that it's not X. Now, of course,
00:26:17.260you can quibble with this. You could say, well, it can't only be that interesting research is surprising
00:26:24.880research, and it isn't only that, right? That's why I'm saying this is not the end-all taxonomy for judging
00:26:30.420interesting research, but it has to make you go, hmm, wow, I didn't know that. That's cool,
00:26:36.020right? So, let me give you an example. I often use this, and by the way, at the Academy of Marketing
00:26:41.820Science Conference that I was at this past week in Miami, I made it my express purpose to sit down
00:26:49.820with young doctoral students and, you know, young assistant professors to chat, rather than with the
00:26:55.160obnoxious older know-it-all professors, precisely because I was hoping to
00:26:59.700impart some of, you know, my experience onto them, and frankly, we had some really nice
00:27:05.380conversations, because they're all looking for what they should study in their doctoral dissertations
00:27:09.740and so on. And so, I was trying to explain to them some of the stuff that I'm talking to you about
00:27:13.400here. Well, life is short, right? Let me give you an example. You could do an incredibly
00:27:23.240rigorous study. By the way, the example I'm giving is literally one that was done, where you use some
00:27:32.960fancy statistical procedures, you know, full of very fancy Greek letters and
00:27:39.800mathematical symbols and incredibly complicated design, and yet the final result is customers who
00:27:48.720are more satisfied at this particular establishment are more likely to return to the establishment.