Based Camp - August 08, 2023


Based Camp: Spencer Greenberg on Trying to Fix Science


Episode Stats

Length: 22 minutes
Words per Minute: 210.257
Word Count: 4,794
Sentence Count: 267
Misogynist Sentences: 1


Summary

Spencer Greenberg is one of the most respected social thinkers in the EA and rationalist movement in the New York City area. He is also the co-founder of Clearer Thinking, a site that helps you apply ideas from psychology and economics to your life and career. In this episode, we talk about his research on the problem of "the replication crisis" and how to fix it.


Transcript

00:00:00.000 Okay, here we go. Hi, everyone. We have a very, very special guest today who we have known
00:00:07.720 actually almost as long as we've known each other. We met Spencer Greenberg back in like around 2015
00:00:14.280 when he was first working on some of his projects that are now pervasively used, which is really,
00:00:20.860 really cool. He is someone that we've profoundly respected for many years. He has been running
00:00:26.580 Clearer Thinking for a ton of time, but more recently, he launched the Clearer Thinking podcast,
00:00:32.320 which is a series of interviews with incredible people that we really enjoy. I'm addicted to it
00:00:36.380 personally, so please check it out. I'll just summarize. The important point is he's probably
00:00:42.600 one of, if not the most respected social figure in the EA and rationalist movement in the New York area,
00:00:49.940 which is a very big thing because it's one of the major hubs of the movement.
00:00:53.020 Yeah. And Spencer, could you tell us what your top projects are right now?
00:00:58.120 Yeah. Well, thanks for having me on. So one of the projects I run is called Clearer Thinking,
00:01:02.420 which you mentioned at clearerthinking.org. And what we do is we take interesting ideas from
00:01:07.540 psychology, economics, math, and so on that people might learn in passing. Maybe they'll learn them
00:01:13.200 in blog posts or reading books, but they don't generally apply them to their lives. And so our goal
00:01:17.360 is to make it really easy to apply these ideas to your life to try to achieve the things that you want
00:01:21.600 to achieve. So we have these interactive modules with over 70 of them right now, and you can use
00:01:25.400 them all for free. And I also do the Clearer Thinking podcast, as you mentioned. In addition
00:01:29.280 to that, we have a bunch of other projects for accelerating social science. So our goal is to
00:01:33.200 try to help psychological research go faster, be more robust, be more reliable, and help unlock
00:01:39.160 important ideas about human nature that can be a benefit to society.
00:01:44.600 Speaking of that, what we wanted to focus on in this podcast is that you recently did some research into
00:01:49.080 the replication crisis, how bad it's gotten, and I think you have some theories on how it could be
00:01:53.740 fixed. So I'd love for you to just dive into that. First, explaining what the replication crisis is,
00:01:58.720 its scope, and your research, and then going through potential solutions.
00:02:03.720 Sure. Yeah. It's a topic I think about a lot. So basically, there are many really interesting
00:02:08.660 findings in psychology that have unfortunately failed to replicate, which means that basically when
00:02:13.180 people try to redo the same study, collect a new sample of study participants, they just don't get the
00:02:17.920 original answer. And that's been very disturbing. A bunch of findings that were in textbooks and that
00:02:22.840 are really famously known just don't work, it seems. So some examples of this would be from the social
00:02:28.880 priming literature, where they do things like have someone hold a warm cup of coffee, and then
00:02:34.360 find that they'd rate other people as more warm, or they'd rate things as more warm, because we make
00:02:39.020 these psychological metaphors. Well, it's a really cool-sounding idea, but it doesn't necessarily replicate.
00:02:43.600 Or another example, when you prime people with words that are related to being older,
00:02:48.420 the people then walk slower. Well, again, a really cool concept, but it didn't replicate when
00:02:52.740 people tried it again. And so the question is, why are so many findings not replicating? And how
00:02:58.460 pervasive a problem is this? And so looking at the many different replication studies that have
00:03:03.440 occurred, my best guess is that from top journals, from very top journals, probably about 40% of the
00:03:08.840 results don't replicate. That's enormous. Wow. Yeah, it's pretty shocking. Some people,
00:03:16.980 their response is, well, science is hard, human nature is complicated. What do we really expect?
00:03:21.780 But my view is that no, 40% not replicating is way, way too high. It should be something
00:03:26.920 on the order of 5% to 10%. I think that might be reasonable. But the problem with 40% is it's almost
00:03:31.540 a coin flip. If you read a paper, will this result hold up, right?
00:03:35.140 Yeah. So one thing that might be fun to go into for the audience
00:03:41.300 is how does this system work? Like the scientific system that gets things into these journals,
00:03:48.060 and where do you believe it's failing? Yeah. So it's an interesting question why
00:03:53.940 this is happening, because you might think, isn't this what peer review is designed to prevent,
00:03:58.540 right? People are submitting to these top journals, experts in their fields are reviewing the papers.
00:04:02.600 But I think the fundamental problem there, it's not that peer reviewers want to let in garbage
00:04:07.240 papers. They don't really have any reason to let in garbage papers. They don't get paid more if
00:04:11.060 they let in bad papers. They don't get more prestige if they let in bad papers. No, the reviewers are
00:04:15.000 reading the papers and trying to let in the good stuff. The problem is that whether something
00:04:19.280 replicates or not is not generally visible from reading the paper.
00:04:23.440 Oh, because I was going to ask, are there warning signs?
00:04:26.020 Yeah. Well, in recent years, there's been a lot of interesting things learned about the warning
00:04:32.480 signs. But people didn't really know what the warning signs were necessarily in the past. And
00:04:38.400 it's very difficult for people to tell what is a legitimate paper and what's not. I can tell you
00:04:42.340 about some of the warning signs. One is about p-values, which are a rather technical subject and
00:06:48.120 something that confuses people. But the basic gist of a p-value is, when you're testing a hypothesis
00:04:52.800 using statistics, let's say you say, well, if I give people this psychological treatment,
00:04:58.280 then they'll have less anxiety at the end of the study on average than if I don't give them the
00:05:01.780 psychological treatment. And then you want to know, well, did the result that I got exceed what
00:05:07.460 you'd expect by chance? So let's say you find that people who got the psychological treatment had
00:05:12.240 an average anxiety rating of 5 out of 10, whereas those that didn't get it had an average anxiety
00:05:16.200 rating of 7 out of 10. So that looks good. It looks like the psychological treatment worked. But the
00:05:19.940 question is, well, it's 5 compared to 7. Is that really good enough that we can conclude the
00:05:24.000 treatment works? And so what you do is you compute what's known as the p-value. And the p-value is
00:05:28.840 basically saying, what's the chance I would get a result that's this different? So a difference this
00:05:33.680 large or larger, if in fact there was no effect. So if the p-value is really small, let's say it's 0.01,
00:05:41.220 that means there's only a 1% chance you get a result this extreme or more extreme if there was no effect.
00:05:45.720 And so that probably is not due to chance. Whereas if you get a higher p-value, like 0.3,
00:05:51.440 that means there's a 30% chance you'd get a result this extreme or more extreme if there was no effect.
00:05:56.040 And so maybe it might be due to chance. So the idea of the p-values is the smaller they are,
00:05:59.780 the less likely the result is to be due to chance. Now, one of the really interesting things that has
00:06:03.800 been found empirically, and we've actually looked at this, we looked at over, I think it was over 200
00:06:08.440 different replication studies looking at what traits actually predict what replicates and what
00:06:13.440 doesn't. And smaller p-values in the original paper are associated with better replication.
00:06:18.880 So if the p-value was, let's say 0.04, that's still considered statistically significant because
00:06:25.740 by the definition of statistically significant, anything below 0.05 is considered statistically
00:06:29.740 significant. And so it may well have been published as a true finding, but it's less likely to replicate
00:06:34.820 than if the original p-value had been 0.001. And so part of the reason this is believed to be the case
00:06:40.420 is that people do kind of fishy statistics to get the p-value down to be just below 0.05 so that they
00:06:46.060 can go publish the paper. So my question is, I mean, do we ever see with really low p-values,
00:06:51.780 they're not being replicable as well? Because I mean, my understanding is that you just should
00:06:56.100 almost never see that. I mean, that seems like such an obvious thing. If the p-value is low,
00:07:00.240 it's more likely to replicate. But are we still seeing instances in which the p-value is low and it's
00:07:04.740 failing to replicate? That's a really good question. Because let's say you had a really low p-value.
00:07:08.480 There's only a one in 100,000 chance you get a result this extreme if in fact there was no effect,
00:07:13.980 right? Well, that's really strange. Does that mean there's definitely an effect? Well, it turns out
00:07:18.760 there can be other reasons besides just a statistical fluke, a false positive, why something
00:07:25.760 wouldn't replicate. For example, there could have been a mistake in the original study, either a
00:07:29.680 mistake in the analysis, right? Maybe they just screwed up the statistical analysis and it wasn't really a low
00:07:33.760 p-value. Or maybe there was a mistake in the design where people weren't allocated to their groups
00:07:38.100 properly or other things like this. Of course, there's also the possibility of fraud. That could
00:07:41.440 be another reason. They could have just made up the data, right? So there could be various reasons why
00:07:45.040 even with a small p-value, it won't replicate, but it's certainly more likely to replicate.
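
To make the p-value discussion above concrete, here is a minimal Python sketch of the hypothetical anxiety example (a treatment group averaging about 5 out of 10 versus a control group averaging about 7 out of 10); the group sizes, the spread of the ratings, and the use of a two-sample t-test are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch of the hypothetical anxiety example discussed above.
# Assumed details (not from the conversation): 100 people per group,
# roughly normal ratings with standard deviation 2, two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=5, scale=2, size=100)  # got the psychological treatment
control = rng.normal(loc=7, scale=2, size=100)    # did not get the treatment

# The p-value answers: if the treatment truly had no effect, what is the
# chance of seeing a group difference at least this large?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"treatment mean = {treatment.mean():.2f}")
print(f"control mean   = {control.mean():.2f}")
print(f"p-value        = {p_value:.3g}")  # smaller p = less likely to be a chance result
```
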
00:07:49.200 So you recently did a study in this space. Can you
00:07:53.100 elaborate on that? So we launched a project called Transparent Replications. If you want the
00:07:58.200 details, you can find them on our website, clearerthinking.org. And the idea of our project is we want to
00:08:03.360 replicate new psychology papers coming out in the top general science journals. So Science
00:08:09.060 and Nature, which are just incredibly famous journals that lots of people
00:08:12.220 want to publish in. And our goal is that we want to get to the point where we're doing so many of
00:08:16.960 these replications that if you are a psychologist going to publish in these journals, that there's
00:08:20.860 more than 50% chance we're going to replicate you. And if we can achieve that goal, then it means that
00:08:26.860 people submitting to these journals are suddenly going to have to grapple with the fact that there's a good
00:08:31.360 chance they're going to be replicated. And therefore, they're going to have a different incentive to do
00:08:34.860 their research in a way that makes sure it replicates. Because if it doesn't replicate,
00:08:38.100 people are just going to find out. Now, this is really interesting. So one question I have is,
00:08:43.720 I mean, if there are still really no repercussions for being unreplicable,
00:08:49.840 what is the extent of the repercussions you expect with a project like this? And how do you expect
00:08:55.760 the academic field to react more broadly? I mean, if you call out high status people within the
00:09:01.040 academic field, the academic field is going to begin to frame you like a villain or negatively
00:09:06.020 because you are a threat to them, like you're this new exogenous threat. How have you seen them react
00:09:11.420 to it? And do you think that they will apply heavy penalties to the people who publish unreplicable
00:09:18.640 findings? Yeah, those are really good questions. A lot of interesting things to dig into there.
00:09:22.900 First of all, we've seen a really positive response largely from the academic community about
00:09:26.280 our project, which was really nice to see. I think there's a lot of acknowledgement and an
00:09:30.540 increasing acknowledgement that there's a real problem and the standards have to change.
00:09:34.420 The reality is, while it might be good for one individual researcher to push through crap,
00:09:38.880 it's really, really bad for the field. If you're a psychologist, the value of you being a psychologist
00:09:45.120 has declined and declined and declined as the general public has become aware of all these issues
00:09:49.460 to the point where some people are just not trusting the papers anymore. That's terrible for the field.
00:09:53.600 So although it is kind of a collective action problem, it is really good for the field to raise
00:09:57.800 its standards and it will actually benefit academic psychologists as a group. So I think
00:10:02.120 that many of them realize, wait a minute, if we raise our standards and we can show the public
00:10:06.600 that our standards are raised, it will actually raise our prestige and raise our credibility.
00:10:10.220 That being said, it's never fun to be told that your paper doesn't replicate, right? Nobody likes to hear
00:10:16.320 that. Even truth seekers, it's upsetting to hear that. But we really try to be fair to researchers and we try to make
00:10:22.480 it clear that we're being fair. So what we do is we contact them. We tell them that we're running a
00:10:26.640 replication, that their paper was selected through this systematic process we use. We're not singling
00:10:31.740 them out. We use this process to select them. That's clearly defined. And then we
00:10:36.900 say, here's our exact rebuilt copy of your study. Please look at it and tell us if in any way it deviates
00:10:43.080 from your original research. Because we want to be 100% fair to your research. And we want to make sure
00:10:48.040 that if it doesn't replicate, it's because the original paper didn't replicate, not because we screwed
00:10:51.300 something up. And we also give them a chance to respond on our final report if they want to give
00:10:55.420 any comments. And if they find any mistakes in our work, they can of course tell us and we'll correct
00:10:58.500 those mistakes. And so how is all this funded? So thankfully we have grants, we've
00:11:05.140 actually got two grants, and we're really grateful to those that gave them to us
00:11:08.900 to help us do this. What's the end goal in terms of how you hope academia is going to shift going
00:11:16.700 forward, at least for social science research? Is it like, are people going to have different
00:11:21.440 methodologies? Are you also trying to, I don't know, make alternate processes available or show
00:11:29.440 people better ways of doing things? What's your goal?
00:11:33.060 So my personal goal is to make psychology into a science that is better at figuring out
00:11:38.820 important truths so that those truths can come out and better society and better human lives.
00:11:45.040 Right. So that's really what I want to happen. And academia is really the only game in town,
00:11:49.600 pretty much. There are some companies that do some psychological research, but a lot of it is
00:11:52.480 locked away. It doesn't ever get out there in the world. And so without academia producing
00:11:57.760 important truths about humans, we're just not getting a lot of them. Right. So my hope is that
00:12:02.960 with a project like this, we can help work in the right direction to get scientists to produce more
00:12:08.420 robust findings that then can benefit society. Now I will say also with something like this,
00:12:15.440 it really does hinge on a change in incentives, right? So in order for our project to work,
00:12:20.180 we need it to be the case that people do their research differently. And I think that while people
00:12:25.340 are often resistant to hearing that their own work didn't replicate, I do think that other researchers,
00:12:30.280 when they see that and they say, ah, this paper didn't replicate, it does really greatly diminish
00:12:34.400 their belief in that paper. And I think they're much less likely to cite it if they know it hasn't
00:12:38.800 replicated. They're like, ah, that didn't replicate. So my hope is that it really does act as a
00:12:43.340 significant incentive to doing better research.
00:12:46.060 I really admire the work you're doing in reforming this kind of academic research,
00:12:50.320 but I also kind of want to touch on what we might call like renegade research or like the
00:12:55.080 resurgence of gentleman scientist research, specifically because you're kind of one of
00:13:00.020 our heroes on that front. You've done a ton of research through Clearer Thinking. You also
00:13:04.680 created GuidedTrack and Positly, which have really made it possible for many people to
00:13:11.500 do research on their own without being...
00:13:14.600 Quick side note. So these tools, if any of you would like to do your own research or you want to go
00:13:19.460 out there and inexpensively run a study, they make it possible to get participants at reasonable costs
00:13:25.540 and they make it possible to run and design a study reasonably.
00:13:29.260 Yeah. So GuidedTrack specifically enables you to create these surveys. And this is key:
00:13:34.620 you don't need to learn how to code. It's so easy to use. And this is what Spencer was
00:13:38.500 working on when we first met him. People like Aella use it for their surveys, and then Positly enables
00:13:44.020 you to recruit audiences to fill out those surveys. So even if you're not famous like Aella, you can get a
00:13:49.000 large sample size. In fact, I think she still even uses Positly for participants as well.
00:13:53.060 So yeah, these two things together are really making it possible
00:13:57.880 for gentlemen or gentlewomen scientists to do these things. Like what are your thoughts on
00:14:03.100 the future of renegade science? And are you thinking about additional tools or processes
00:14:09.320 to give to people outside of academia? I mean, clearly you're trying to reform academia. I think
00:14:13.560 that's beautiful, but what can non-academics learn about doing good social science and other research
00:14:18.840 using your tools and other tools to do these things?
00:14:23.280 Yeah. I'm glad you mentioned our tools because part of our mission is to get
00:14:26.260 answers to these important questions, not just through academia, but through all means. And that
00:14:30.680 also means to independent researchers who might be doing stuff outside of academia. So yeah, we
00:14:35.660 created Positly and GuidedTrack to make it easier. We have a new tool in the works called Hypothesize to help
00:14:39.620 with the data analysis as well. So if you think about doing research, you've got to build your study,
00:14:44.840 which is what GuidedTrack is for. You've got to recruit your sample, which is what Positly is for.
00:14:47.940 You've got to analyze your data, and that's what Hypothesize is for. So we're trying to create the
00:14:50.820 trifecta there. Yeah. I think there's just a ton of potential to do interesting research.
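
As a rough illustration of the analysis step in that build-recruit-analyze pipeline, here is a minimal sketch using ordinary pandas and scipy rather than the Hypothesize tool itself; the file name and column names are hypothetical.

```python
# Hypothetical illustration of the "analyze your data" step.
# The file name and columns ("group", "stuck_with_habit") are made up;
# this is plain pandas/scipy, not the Hypothesize tool itself.
import pandas as pd
from scipy import stats

df = pd.read_csv("exported_study_results.csv")

# How many participants in each condition stuck with the behavior being studied?
table = pd.crosstab(df["group"], df["stuck_with_habit"])
print(table)

# Chi-square test: is the difference between conditions bigger than chance?
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"p-value = {p_value:.3g}")
```
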
00:14:55.280 For example, you mentioned Aella. I mean, Aella has done really interesting research on sexuality
00:14:59.340 that is just really different than what academics have done as far as I can tell.
00:15:03.800 And I think it really adds something to knowledge of the topic. So I'd love to see more of this. And
00:15:08.300 we do a lot of our own research at Clearer Thinking. For example, we ran a randomized controlled trial on
00:15:12.720 habit formation where we implemented a habit formation intervention. We tested it where we tracked
00:15:17.700 people over, I think it was about six weeks to see if they stuck with their habit using our tool
00:15:21.160 versus a control group that didn't have access to our tool. And we were able to show that way
00:15:25.280 that our tool actually improved people's habit formation. You can actually use it for free
00:15:28.640 on our website. It's called the Daily Ritual tool, at clearerthinking.org. So that's the kind of study
00:15:32.920 we like to run, but we have a bunch of studies running most of the time testing a whole bunch
00:15:36.820 of hypotheses. You know what I'd really like to see emerge is a gentleman scientist network
00:15:43.900 of the various people like Aella, like you, who are running studies outside of academia and then
00:15:49.820 disseminating them through social media. The core reason I was
00:15:54.980 thinking it'd be really great if there were a network like this is, one, it makes it easier for people
00:15:58.180 to find these studies, and potentially you could even create a collated journal that's specifically for these
00:16:02.340 types of people. But I would really love it if your service could one day check whether these
00:16:09.620 studies replicate at a higher or lower rate than the studies coming out of academia and traditional
00:16:14.820 journals. Because I would suspect that, for example, Aella's studies would replicate at a higher rate due to
00:16:21.060 her large sample sizes, despite the fact that there's a perception that it's lower quality research
00:16:26.740 than what's coming out of academia. It's interesting. So when someone is self-taught, I mean, I think
00:16:32.220 there are advantages and disadvantages. The disadvantage is that there may be methods they don't understand
00:16:37.200 very well or best practices that they just blow past because they haven't been taught about them.
00:16:43.060 And I think that is a real concern. And I think if you ask Aella, I bet she'll say that her earlier studies
00:16:47.240 are much worse than her ones today because she's picking up on some best practices and learning some
00:16:51.340 of the things that maybe she didn't know going into it. On the other hand, a really nice advantage of being
00:16:55.340 outside of academia is you don't have certain pressures on you to write certain kinds of papers.
00:16:59.980 You don't have pressures to constantly publish. And so you can kind of take your time more.
00:17:04.480 For example, in one of our lines of research, we ran something like 15 different studies
00:17:09.100 before we put anything out as a result, because we just really wanted to figure out what was true
00:17:13.860 about the topic. And that's how long it took us to feel confident we knew the answer. We just didn't
00:17:17.560 feel the pressure to just put something out immediately just because, oh, wow, we need three
00:17:21.180 papers right now in order to be on the tenure track, right? So I think that's a big advantage.
00:17:25.820 When I think about really good research, I think of it as a few different things coming together,
00:17:30.660 really high level skill, which might involve training, but also might be self-taught skill
00:17:35.920 and really high levels of truth-seeking and those things kind of multiplying together.
00:17:41.760 So if you have someone who has no skill whatsoever in doing research, I think it's fair to say they're
00:17:45.920 going to have no output. So zero times anything is zero. On the other hand, if you have someone
00:17:49.700 who's absolutely no truth-seeking, they're completely indifferent. Well, they're essentially
00:17:52.920 going to just make up information, right? And so again, you get a zero. And so it's like this
00:17:57.120 truth-seekingness times a skill that come together to create good research. And I think
00:18:00.840 one thing that independent researchers often have going for them is they tend to be really
00:18:03.720 truth-seeking because it tends to be why they're motivated in the first place, right? They don't
00:18:07.400 have the career pressure. They just really want to know about this topic. Yeah. Paul Graham recently
00:18:12.460 released an essay on doing great work. And like the TLDR of it is get into fields where you have a good
00:18:19.700 aptitude and where you're super self-motivated and curious, which there's the truth-seeking,
00:18:23.840 there's the skill and aptitude. And then specifically look for the gaps in current
00:18:28.200 knowledge where there seems to be not good explanations or just not a lot of research
00:18:33.440 or attention. And I think what's really telling there is this really points toward or in favor
00:18:38.760 of the renegade scientist camp because in academia, it's hard sometimes to get funding, to get academic
00:18:45.480 support, or to get someone to work with when it is one of those gaps, because that
00:18:51.460 may not be where the money is. That may not be where the institutional support is or the
00:18:55.240 attention or the prestige. So it gets us really excited. The one final question I wanted to ask you
00:19:00.900 is of the studies that you have run, which was the most surprising result to you or the result that
00:19:06.580 changed your world perspective most? Oh, that's a great question. Let me think about it for a moment.
00:19:11.960 I keep having all these different ones go through my head and I'm like, nah, that wasn't that surprising
00:19:33.620 or that one didn't change my view that much. I'm worried that every social science thing boils down
00:19:39.400 to use it or lose it. People are lazy. It's really hard to change. I can't think beyond that.
00:19:48.560 Actually, there is one result that comes to mind. We haven't released it yet because we're still
00:19:52.820 analyzing it. So I'm a little reluctant to talk about it, but I'll talk about it
00:19:56.300 preliminarily, with the caveat that we're still analyzing. And so what we actually end up concluding
00:20:01.200 remains to be seen. But we ran a study on decision-making that actually completely shocked me. We put people
00:20:07.860 through a decision-making protocol where they really had to go through every single
00:20:13.120 pro and con related to the decision. And they actually were less
00:20:17.460 happy with their decisions when we followed up months later, when they actually knew the outcome
00:20:21.900 of the decision. And so I'm still processing what exactly that means and why that came out. And we still
00:20:27.740 have a lot of work to do to understand that result. But if that ends up holding up, I think
00:20:31.920 that will be the most surprising one to me. Yeah.
00:20:36.180 That is fascinating. And something I would tell our audience, so the way I really want to wrap up
00:20:40.200 this episode for our audience is one of the biggest studies that we've ever done in terms of changing
00:20:44.440 our world perspective was the study that we ran using data that you had collected for a completely
00:20:51.920 different study. So you had collected data to try to find out what correlated with the way that you
00:20:58.500 voted in the last presidential election cycle. But one of the things that you asked people was how
00:21:04.060 many kids they had. And it was that data set that allowed us to look at and find out what was really
00:21:08.780 correlating with high fertility. And so there's a few things I would impress upon our audience,
00:21:14.140 which is one, if you do want to do a study, you can go out there and do it yourself. We're going to put
00:21:19.620 links below here to all of these products that allow you to go out and run these studies yourself.
00:21:23.780 But two, the great thing about independent researchers, and you can do this to some extent
00:21:28.700 even with professional researchers, is if you have an idea, that doesn't mean you necessarily
00:21:33.300 need to collect the data yourself. You can reach out to someone like Spencer or someone like Aella
00:21:38.220 or someone like us. And if we've run a similar study in the past and we still have the data in
00:21:42.460 like a shareable format, we can share it with you, which can allow you to do deeper, more interesting
00:21:48.720 digs on even subjects that might be really tangential. So that's a really fun way that
00:21:53.500 you can approach things. And if any of our listeners do like really interesting studies,
00:21:57.600 we'd love to have you on. And I will say to that, there's also this norm that's been
00:22:02.240 changing where people have been getting better at publishing their data sets. We try to publish our
00:22:06.140 data sets most of the time. And a lot more psychologists are publishing their data sets. So you can just
00:22:10.780 find more and more data out there to test hypotheses that you already have. And some data might be able to
00:22:14.900 help answer a question in just 10 minutes of analyzing it. That's really exciting.
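
To give a flavor of that kind of quick reuse, here is a minimal sketch of loading a published data set and testing a simple correlation; the file name and column names are invented for illustration and are not the actual data set discussed above.

```python
# Hypothetical sketch of reusing a published data set to test your own question.
# File and column names are invented for illustration; they are not the actual
# data set discussed in the episode.
import pandas as pd
from scipy import stats

df = pd.read_csv("published_survey_data.csv")

# Example question: does some measured trait correlate with family size?
clean = df[["trait_score", "number_of_children"]].dropna()
r, p_value = stats.pearsonr(clean["trait_score"], clean["number_of_children"])
print(f"r = {r:.2f}, p = {p_value:.3g}, n = {len(clean)}")
```
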
00:22:19.260 That's spectacular. Well, be sure to check out his podcast,
00:22:22.180 the Spencer Greenberg Clearer Thinking podcast. Clearer Thinking with Spencer Greenberg. Yeah.
00:22:25.640 Oh, sorry. You can just Google it or find it in podcasty locations.
00:22:30.860 Yeah. And give it a good review because it's so good. More people need to listen to it.
00:22:33.860 Give it a good review. Yes. Skip subscribing to this one today. Just give his podcast a good review.
00:22:39.800 You're doing God's work, friends. Good. Well, I'm looking forward to our next conversation already.
00:22:44.820 Spencer, and we'll see you soon. Thanks for having me on.