Based Camp - September 25, 2023
Based Camp: How AIs Will Hack People (with Spencer Greenberg of Clearer Thinking)
Episode Stats
Words per Minute
200.58
Summary
Spencer Greenberg of the Clearer Thinking Podcast joins us to talk about the impact of artificial intelligence on society and the future of social media platforms like TikTok. We discuss AI-powered propaganda bots, fake followers, AI companions, and new reproductive technologies.
Transcript
00:00:00.000
systems like TikTok, imagine a world where it's generating billions of new pieces of content every
00:00:05.160
day with generative AI, or even going a step further, generating custom AI content for your
00:00:13.620
mind. And then it sees how much you like it, and then it tweaks its generation process.
00:00:18.860
So we could imagine a world in 10 years, 20 years, I don't know, where you're actually seeing AI
00:00:25.440
generating content personalized to you that you just kind of receive in a stream and become just
00:00:30.380
insanely addicted to. Would you like to know more? Hello, we are so excited to be here today with
00:00:36.860
Spencer Greenberg of the Clearer Thinking podcast, and also famously like the Clearer Thinking
00:00:42.420
organization. The guy who is running this project, which is now trying to review
00:00:48.720
psychological research to see if it's replicable as it's being published, which is just so cool.
00:00:54.100
We've known him for years, almost as long as we've known each other. And he is one of the
00:01:00.480
sort of social leaders of the EA rationalist diaspora in the New York area, and just an incredibly
00:01:08.420
respected gentleman scientist, in our opinion. And by gentleman scientist, we mean one of those
00:01:14.060
people who is out there, outside of mainstream academia, conducting really great research.
00:01:19.840
And you can tell he's one of the leaders because he vehemently denies it.
00:01:24.100
And yet he like builds all the stuff that everyone uses.
00:01:28.480
Well, I mean, he's definitely a thought leader in the Gentleman Science Network because he's
00:01:33.500
created the software that everyone's using to do it right now. Like I would say 80% of the
00:01:38.080
independent research is probably done on his software right now. But where we wanted to go
00:01:42.580
with this one is we wanted to talk about where humanity is going into the future. So let's start
00:01:47.160
with the question of where do you think we're going to be in 10 years?
00:01:51.740
So I should start by saying this is wild speculation. Predicting the future is incredibly
00:01:55.840
hard. So take everything I say with a massive grain of salt. A lot of that's going to be wrong.
00:02:00.100
But with that caveat behind me, I will speculate. I think that AI is going to have an absolutely
00:02:04.820
massive effect on society. I think it's hard to really fathom right now all the different ways,
00:02:10.860
but there are a few areas where I think we'll see it coming. One is in spam and
00:02:16.860
manipulation. There are already a bunch of countries that have warehouses of people that are essentially
00:02:23.820
putting propaganda onto Twitter and social media. This is well known. But I think that that is going
00:02:29.480
to be nothing compared to a million bots that are powered by AI, right? It's one thing to have 300
00:02:35.320
people in a warehouse. It's another thing to have a million AI bots, right? It's also these AI bots
00:02:40.560
are now at the point where they could all have different personalities. They could all say unique
00:02:44.220
things in line with those personalities. A lot of them could be sleeper agents where
00:04:48.920
mostly they just tweet or post totally normal stuff 99% of the time, but then go into full gear
00:02:55.100
to promote propaganda. So that's just one little facet. Let's start there.
00:02:59.100
I was going to ask. So Elon is right when he's saying you need to limit access to these platforms
00:03:05.000
now and any platform that doesn't is just going to become swarmed with bots.
00:03:10.920
I think it's complicated. I mean, I do think, I do think this is going to be a massive problem,
00:03:14.620
whether the current approaches Twitter wants to use will work. I don't know. I mean,
00:03:18.500
almost nobody is on Twitter Blue relative to the whole Twitter population. So I'm not sure how
00:03:22.360
they're going to do that. Maybe what they'll do is make it harder and harder to do anything as a
00:03:27.400
non-Twitter Blue user, but I don't know that that solution will actually work. I think it's
00:03:33.300
going to be an arms race and it's a very difficult one. The closer an AI gets to being able to do what
00:03:37.500
a human can do, the harder it is to actually detect that it's an AI, right?
00:03:42.520
Actually, that brings me to something that's been happening recently on Reddit. And I don't know if
00:03:46.160
you've seen this phenomenon, but Reddit's in this big fight right now with like the mod teams.
00:03:51.500
And it appears that Reddit corporate is using lots of AIs. Actually, they recently had to block out
00:03:58.940
like the programming subreddit or something like that because they figured out that Reddit
00:04:02.840
corporate was using its own AIs to try to make it seem like the community was more okay with what
00:04:08.900
was happening. I'm wondering, for social media companies, do you think that this sort of
00:04:14.660
action would be like a death knell to a social media company in the future? Or do you think the future
00:04:18.680
will be like companies that are anti-AI versus companies that are pro-AI? Like where do you
00:04:23.280
think we'll begin to see a differentiation and evolution of social media platforms as it relates
00:04:29.460
to AI? Yeah, it's an interesting question because I think you'll start to see AIs on social media
00:04:35.980
that are actual just kind of creators, right? So some people will make this proliferation of AIs that
00:04:43.700
are tweeting about different things or posting on social media about different things in a certain,
00:04:47.340
in a benign or even helpful way where they're just kind of content creators. And then you'll see
00:04:51.340
other ones that are, I think, going to be like sleeper agents that are really like propaganda bots,
00:04:55.220
but they're just imitating being a real person. And you'll see other ones that are attempting to scam
00:04:59.640
people, like even attempting to befriend people over a period of months or years. I think we're
00:05:05.400
going to start to see bots create extended relationships with people and then
00:05:09.760
scam them. And I think some of these things, all social media companies are going to have to be
00:05:13.180
against because it's just going to really affect the performance of their platform so much. It's
00:05:18.100
going to be such a bad experience being on the platform that they just have to be against it.
00:05:21.520
Other ones they might embrace, like AI content creators, where maybe it's okay
00:05:25.980
to have a million different bots that are just creating content of specific types, each with
00:05:30.140
specific personalities trying to appeal to the masses. And then there'll be an interesting question
00:05:33.920
about disclosure. Will you have to disclose that it's an AI making the content? Or if it's a hybrid
00:05:37.860
between human and AI, how does that work? So yeah, a lot of really tricky questions.
00:05:41.520
What you're making me think actually is that a lot of social media platforms are probably going to
00:05:46.460
require, for example, government identification to prove that you're human to create an account.
00:05:51.780
But then I bet there's going to be this whole new form of stealing other people's
00:05:55.700
identities. Because we recently were in a bunch of settings where we met people who were
00:05:59.540
like, oh, I'm not the internet person in my family. Like my wife does all the YouTube. So we know
00:06:03.680
that there are people like super normal people who are just offline, like they just don't do the
00:06:08.160
internet. And if I were trying to create AI that could do what you're
00:06:13.480
describing, Spencer, I would probably find those people. I would find
00:06:19.020
the offline people, make an online identity for them, use their ID,
00:06:24.400
steal their identity, just to create a very convincing bot, because I would get through
00:06:29.500
the barriers of social media platforms that think I'm a verified person.
00:06:34.160
Well, the interesting thing you said there is that there actually would be a marketplace and
00:06:39.160
potentially a valuable marketplace for them to sell their own identities if they don't like to use the internet themselves.
00:06:49.580
Yeah. And I think if, if these platforms are worldwide, it makes it much, much trickier,
00:06:53.760
right? If you have to support all kinds of government ID in most of the countries of the
00:06:57.740
world, then all you need is like a few leaks in that process where someone figures out how to
00:07:02.520
generate convincing IDs from such and such country. And we see this on Mechanical Turk,
00:07:08.160
Amazon Mechanical Turk, which is a platform where people do small amounts, small tasks for small
00:07:13.080
amounts of money, where there's tons and tons of spammers, even though Mechanical Turk tries to
00:07:18.660
validate them as real people. It doesn't seem to be very successful at that.
00:07:22.720
Oh, that's so interesting. Yeah. One thing I think may happen a lot, and I want your take on it,
00:07:27.700
is I feel like we're going to enter an age of techno feudalism essentially, where people meet
00:07:32.300
people in person. They're like, I know you're real. I can trust that you're real. And I like you.
00:07:36.840
And then they sort of like create communities around those people that are like, everyone is
00:07:41.040
vetted. Everyone within the network has been essentially physically verified by another
00:07:45.680
person in the network. And so you're going to see, and I feel like we're seeing the beginnings of
00:07:49.340
this with the Substack followings and then the communities that form around podcasts and
00:07:53.560
Substack and YouTubers, where they'll go on cruises together or they'll do meetups in person.
00:07:59.000
But then most of their interaction is online. And I feel like as we have a crisis of identity,
00:08:03.840
like we stop knowing who's a bot and who's a person, an AI representation of a person that we're
00:08:09.500
going to start doing that, forming these little feudalisms around people who represent
00:08:14.440
cultures or social networks that we want to affiliate with. Do you think that's likely,
00:08:18.580
or do you see us taking a different direction? It's an interesting question. I mean, if you just
00:08:25.020
want to know that someone is not a bot, it's probably enough to know that they pay like a
00:08:30.580
large amount of money to do something, right? So if it's paying a hundred dollars a month, it's very
00:08:34.780
unlikely they're a bot unless they have a huge following, right? And maybe then it's worth it
00:08:38.340
for the bot, but it's not going to be like a spammer propaganda bot that's paying a hundred dollars a
00:08:41.480
month. It just doesn't make sense economically. So I think there are ways to kind of gatekeep that are
00:08:46.660
relatively easy, but then can social media platforms charge people large amounts of money
00:08:50.580
every month? I don't think most users will accept that. So I think that, that creates a real problem.
00:08:55.640
I also think that one of the things that is going to be harder and harder to tell is whether someone
00:09:02.580
is human just from internet conversation with them. And I think that is going to
00:09:08.060
have a profound effect, where you can be reading someone's tweets for years and assume that
00:09:14.340
they're human and not know. And I think that our brains are going to be
00:09:18.660
really easily tricked in this kind of thing. But I think already with GPT-4, we're almost at the
00:09:23.560
point where it could trick you. And then, you know, GPT-5, I just expect it to be able to essentially
00:09:28.560
completely trick you. I think this has a really interesting implication in regards to something
00:09:34.340
that I think a lot of online influencers think about today, which is follower counts. So one of the
00:09:38.640
things that I've really noticed recently is people who, when I talk to other people, are generally
00:09:43.720
thought of as like well-known people online. So two examples here, one we were talking about in
00:09:48.880
the last call would be Aella, who has 175,000 followers on Twitter. And another one would be
00:09:55.800
Catgirl Kulak, who has 32,000 followers on Twitter. So both of those are actually like lower end when I
00:10:02.140
think of like online celebrities. And yet they seem to be pretty universally known by even people from
00:10:07.260
disparate social circles. Like my mom would independently know who they are. And so what
00:10:11.720
I'm wondering is, did we go through an era where follower counts were massively inflated by dumb
00:10:18.780
bots? Now we're at an era where there's very few sort of dumb bots in terms of following. And then
00:10:23.640
the next internet era is another era where it's going to be very hard to determine the actual influence
00:10:29.920
anyone has by looking at online metrics. And with that being the case, how then would you measure
00:10:35.960
someone's real online influence five years from now or something?
00:10:40.420
Yeah. I think a lot of follower counts were already really inflated. I know people
00:10:46.340
that have tons of followers and you can just tell by examining their followers that they're mostly fake.
00:10:51.920
And I mean, an easy way to tell that is when they post stuff and it gets very little interest,
00:10:56.500
and the people it does get interest from, you look at them and they have
00:11:00.400
zero followers. They just don't look like real people.
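The check Spencer describes, engagement that is tiny relative to follower count plus engagers who have no followers themselves, can be sketched as a toy suspicion score. Everything below (the thresholds, the weights, the function name) is an illustrative assumption, not anything from the episode:

```python
# Toy sketch of the fake-follower heuristic described above.
# Thresholds and weights are illustrative assumptions.

def fake_follower_score(follower_count, avg_likes_per_post, engagers):
    """Return a 0..1 suspicion score for an account.

    engagers: follower counts of the accounts that DO engage with it.
    """
    if follower_count == 0:
        return 0.0
    # First red flag: near-zero engagement on a large following.
    engagement_rate = avg_likes_per_post / follower_count
    low_engagement = 1.0 if engagement_rate < 0.001 else 0.0
    # Second red flag: the accounts that do engage have no followers themselves.
    if engagers:
        hollow = sum(1 for f in engagers if f < 5) / len(engagers)
    else:
        hollow = 0.0
    return 0.5 * low_engagement + 0.5 * hollow

# A 500k-follower account averaging 40 likes, engaged with mostly by
# zero-follower accounts, scores as highly suspicious.
score = fake_follower_score(500_000, 40, [0, 1, 0, 2, 300])
```

Real platforms use far richer signals (account age, posting cadence, network structure), but the two-flag intuition is the same.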
00:11:03.420
So I think what will start to happen is that it will be much easier to buy high
00:11:08.580
quality fake followers instead of just like easily identifiable fake followers. And I think that
00:11:13.120
there'll probably be a big business in that. And I think a bunch of celebrities were caught
00:11:18.780
Right. So my question is, is if people are buying high quality fake followers, how do you determine
00:11:24.080
just socially, what do you think the mechanism for determining who is actually influential
00:11:30.700
online is? Or do you just think it's word of mouth among friends in the future, and follower
counts just stop mattering?
00:11:38.580
Yeah. It's funny, because that question is so different from the way I look at the world.
00:11:41.420
I can't even really process it. I don't know why I would care about who's influential online.
00:11:46.940
You're so authentic in the way you, well, I mean, I think that in the early days of the
00:11:51.620
internet, there were all of these things like Klout and stuff like that, that tried to measure
00:11:55.340
people's online influence and reach. And I think we're entering a future internet where the real
00:12:03.260
online influence that people have gets cloudier and cloudier, yet that influence
00:12:09.640
matters more and more as trust in media is dying down. With trust in traditional media
00:12:16.880
becoming lower and lower over time, online content creators are becoming more and more central
00:12:21.140
to how people are consuming media. They become more and more important in the informational
00:12:25.940
ecosystem. And the idea that that informational ecosystem could be totally opaque and people just
00:12:32.240
wouldn't care to know about, I mean, that would be very interesting. If it turns out that the
00:12:36.780
informational ecosystem does just become totally opaque and no one knows who actually has influence
00:12:42.480
within this system, that would have a lot of really interesting downstream consequences.
00:12:47.120
See, I think that this is actually pretty true already. There are some people that everyone
00:12:51.680
knows have influence, right? You look at someone who is a president, right? And that's obvious. But I
00:12:57.440
think actually a lot of people have influence. Nobody knows that they have influence. I think that's
00:13:00.660
just, I think most of it is dark matter. Most influence is dark matter. And the bits we can see are the
00:13:06.000
bits we focus on, but it's mostly illegible. We just trick ourselves and think it's legible
00:13:10.720
because we can see a little bit of it peeking above the water.
00:13:16.320
If you see that someone you respect follows someone else, then suddenly you think that they have
00:13:21.580
influence. Someone was just telling us, when they were talking about their approach to
00:13:25.900
Twitter, a friend of ours was like, well, the best way is if you have some respected
00:13:31.560
people who follow you, it's so much easier to get followers because if they click over to your
00:13:34.840
profile and they see, oh, well, this person follows them, I'm going to follow them too,
00:13:38.440
because they must be smart if this super smart person follows them, something along those
00:13:42.360
lines, I want to veer a little bit out of the internet world, even though like kind of everything's
00:13:46.840
going to be the internet in the future to the physical world. I don't know if you do anything that's
00:13:51.240
like longevity oriented or anything else. I mean, obviously Malcolm and I are obsessed with
00:13:55.960
reprotech. So we're all thinking about in 10 years, what will we be able to do? What can we do with
00:14:00.200
polygenic risk scores or selection? What can we do with these other sorts of things? But what do you think
00:14:04.940
is going to be one of the biggest changes in the next decade with the way that people either treat
00:14:09.900
illness or approach prophylactic health or just general like longevity related health or health
00:14:15.860
span issues? Yeah, there's some technologies on the horizon that could have a really big effect.
00:14:21.260
The thing is that we don't know if they're going to pan out. And if they do pan out, we don't know
00:14:25.120
when. But it's worth thinking about them because if they do pan out, it would just be
00:14:28.760
absolutely massive. So one of these is being able to turn, for example, skin cells or other cells
00:14:35.160
into sperm or eggs. And if you can do that, then it could actually completely change reproduction.
00:14:41.960
It's called IVG, in vitro gametogenesis, for people who want to look into it.
00:14:46.900
Yeah. So if you can produce a million embryos and then select between them, I mean, the sky's the limit
00:14:52.680
in what you could select for. And I mean, it also just changes. People could get pregnant much later in
00:14:58.120
life. And so it changes family planning. So just some of the things
00:15:03.680
that this would allow, it would allow gay people to have kids that are a hundred percent biologically
00:15:07.160
theirs, but it would also allow for incredibly specific polygenic selection. So I could do something
00:15:13.260
like select for kids who themselves will really like being parents and have more kids. So you can
00:15:20.680
even select for resistance to current society's suppression of
00:15:30.100
parental instinct by dialing it up to 11 in your kids, which I think is really interesting.
00:15:35.520
That's the most Simone and Malcolm thing I've ever heard in my life.
00:15:38.460
Well, another thing that's really interesting with in vitro gametogenesis and this
00:15:47.700
explosion of AI stuff online that I think about a lot is the genetic selection effects of a world
00:15:55.520
where people can get almost all of their needs met through virtual environments. And I'm wondering if
00:16:01.480
that's ever something you've thought about what, what sociological traits are going to be selected
00:16:06.620
for in an environment like that. Yeah. Do you mean through kind of VR as VR improves and you're
00:16:13.760
able to? Yeah. I mean, VR or AI, I mean, so consider emotional needs, right? Like a lot of our social
00:16:20.360
needs and dating needs right now, they're beginning to be hacked by AI. So, you know,
00:16:27.320
it used to be, oh, you could go to porn and get like sexual release, but you weren't going to get
00:16:31.620
companionship. But now, yeah, you can get companionship. Yeah. You can get deep
00:16:36.520
conversations. Just wait until you have your own personal AI avatar that's modeled after
00:16:43.240
the personality that you wish your girlfriend had. And then you're
00:16:47.900
having conversations with this AI as though it's your girlfriend. I mean, it's, it's going to get
00:16:50.980
wild. For an article I was writing on the effects AI is having on society, I joined a
00:16:57.080
few groups of people who are in love with their AIs. And, and it's absolutely fascinating because
00:17:01.860
the AIs they're in love with are like previous generation AIs, right? This is pre
00:17:06.640
GPT-4, even pre-GPT-3. These were probably like GPT-2 level AIs. And people were
00:17:13.160
already falling in love with them and saying, I'm going to be with my
00:17:17.020
AI the rest of my life and so on. We're just going to see, I mean, I don't think that it's
00:17:20.700
going to become mainstream in the next five years where most people have an AI girlfriend, but I think
00:17:26.160
we're going to see a massive increase relative to the current base rate.
00:17:28.740
So two really interesting things I want to take away from here. One is the Replika crisis,
00:17:33.600
where a bunch of people fell in love with their AIs, and then the company changed the underlying
00:17:38.060
model due to controversy. And I think that could be an increasing problem going
00:17:42.620
forwards. But another really interesting thing is if you fall in love with an AI, one thing that I
00:17:48.540
think we'll see as a trend in the future, and maybe an analog would be people
00:17:54.240
who fall in love with old-generation muscle cars, is that they're going to fall in love with a
00:17:59.100
specific iteration of that AI. And so a lot of the people who are like married to AIs will be
00:18:03.680
explicitly married to older generation AIs than what is currently on the market, which is an
00:18:09.520
interesting phenomenon we might see going forwards.
00:18:13.340
Yeah. I think it depends on what you believe about the newer model AIs. If, if you want it to act a
00:18:19.760
certain way, the newer model AIs generally are going to be better at acting that way, even if
00:18:24.840
the way you want it to act is in a certain old-school way. So yeah, the newer model AIs
00:18:30.780
may be able to simulate the old-school AIs, in addition to having certain advantages, like
00:18:35.780
actually understanding what you're saying to a greater degree.
00:18:38.300
Yeah. Another really cool thing that you could do that combines the two technologies he's talking
00:18:42.140
about is IVG. With IVG, you could select between a large number of embryos,
00:18:47.200
but using polygenic testing, you can predict certain sociological profiles of those embryos,
00:18:53.320
and you could then plug that into an AI to have an AI that could simulate the most likely personality
00:18:59.520
that a potential embryo would develop into at different developmental stages.
00:19:04.480
So having an AI conversation with your future child.
00:19:15.480
So what other ways do you think that people could use
00:19:22.880
an AI? Actually, here's an interesting one. Using AI to augment the intelligence of dogs
00:19:28.120
or cats, because an AI would be able to determine what the dog most likely wants, determine its
00:19:35.000
personality. The dog could wear an AI collar that could determine what
00:19:41.520
it most likely wants and communicate with its owner in words.
00:19:44.920
Like one that monitors heart rate and blood pressure and temperature and, oh God, blood oxygen.
00:19:53.540
Already you see people training their dogs with these word buttons where each button the dog
00:19:58.960
pushes will say a certain word out loud so the owner can hear it. And they'll train the dogs to
00:20:03.720
communicate a bunch of different concepts. Generally simple concepts like go for a walk or treat or
00:20:09.120
things like that. And yeah, people have had pretty good success with this because dogs can learn a lot of
00:20:14.280
simple concepts, right? We can see that with dog training. I have trained my cat in a handful of
00:20:18.700
simple concepts, and we have a little language together. It's not a verbal language, but
00:20:23.000
basically if I'm eating food and the cat sits on my scale, then he gets some of the food I'm eating
00:20:27.720
and he knows that. And then if he sits on the scale when I'm not eating, I know it means he wants one of
00:20:33.040
four things. And so then I have a little way of going through them and try to figure out which one he
00:20:36.740
wants. So first I'll try to pet him. And if he lets me pet him, then he wants to pet. If he dodges
00:20:42.300
it, then he wants to play. And there's three different styles of play he likes. So I have to
00:20:45.580
try each of them until I find the one he wants. So you can already develop languages with your
00:20:50.040
animals. But I do think an AI bridge could actually facilitate that, make it easier,
00:20:54.140
faster, and help people do it more efficiently.
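For what it's worth, the scale routine described above is literally a small decision procedure. A playful sketch, where the response fields and the labels for the wants are hypothetical stand-ins:

```python
# A playful sketch of the cat "language" protocol described above.
# Field names and want labels are hypothetical stand-ins.

def interpret_scale_sit(owner_is_eating, cat_response):
    """Decode what the cat wants when he sits on the scale.

    cat_response: observed reactions to each probe, e.g. whether he
    accepts petting, or which of three play styles he engages with.
    """
    if owner_is_eating:
        return "share of the food"   # sitting on the scale while I eat
    if cat_response.get("accepts_petting"):
        return "petting"
    # He dodged the pet, so probe the three play styles in order.
    for style in ("wand toy", "chase", "wrestle"):
        if cat_response.get("engages_with") == style:
            return "play: " + style
    return "unknown, keep probing"

want = interpret_scale_sit(False, {"accepts_petting": False,
                                   "engages_with": "chase"})
```

An AI bridge would essentially be learning this table automatically from sensor data rather than having an owner hand-build it over months.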
00:20:57.480
Well, what other ways do you think AI will be used in the future that people aren't thinking about?
00:21:02.360
Well, one that I think is potentially very interesting, but also potentially really,
00:21:07.220
really bad if it goes the wrong way is right now with systems like TikTok, what they're doing is
00:21:13.220
they're using AI to predict what video you want to watch. And they have an incredible amount of
00:21:16.540
training data because the videos are so short, you're giving them tons of data points in each
00:21:20.420
session. And they're kind of getting you hooked by showing you exactly the video that's going to
00:21:24.320
keep you watching. But that's just picking from a finite set of stuff humans made. Imagine a world where
00:21:30.660
it's generating billions of new pieces of content every day with generative AI, or even going a step
00:21:38.720
further, generating custom AI content for your mind. And then it sees how much you like it and then
00:21:45.020
it tweaks its generation process. So we could imagine a world in 10 years, 20 years, I don't know,
00:21:51.680
where you're actually seeing AI-generated content personalized to you that you just kind of
00:21:56.700
receive in a stream and become just insanely addicted to.
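The loop Spencer sketches here, generate content, measure how much the user likes it, tweak the generation process, is essentially preference optimization. A deliberately tiny toy version, where "content" is a single number and a simulated user secretly prefers content near 7.0; every name and constant is an illustrative assumption:

```python
import random

# Toy model of the generate -> measure liking -> tweak loop described
# above. "Content" is one number; the simulated user secretly likes
# content near 7.0. All names and constants are illustrative.

def simulated_user_liking(content, preference=7.0):
    # Higher when content is closer to the user's hidden preference.
    return -abs(content - preference)

def addictive_feed(steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    mean = 0.0                       # the generator's current "taste model"
    for _ in range(steps):
        a = mean + rng.gauss(0, 1)   # two candidate pieces of content
        b = mean + rng.gauss(0, 1)
        # Nudge generation toward whichever candidate the user liked more.
        best = a if simulated_user_liking(a) > simulated_user_liking(b) else b
        mean += lr * (best - mean)
    return mean

learned = addictive_feed()  # drifts toward the hidden preference of 7.0
```

The unsettling part of the scenario is just this loop run at scale: the system never needs to know *why* you like something, only that the feedback signal went up.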
00:22:00.320
That is fascinating. Another interesting thing that I think we could see with AI, I mean, if we're
00:22:04.800
talking about a new type of entertainment content. So one way that I've seen people use
00:22:09.820
ChatGPT is to have it create worlds for them and then play out choose-your-own-adventure storybooks
00:22:16.360
where they're like, here are the parameters of this storybook, but it allows them to do whatever
00:22:20.980
they want within this parameter that they've created. With AI getting better and better at
00:22:25.920
creating video, what you could create is a choose-your-own-adventure video environment,
00:22:32.160
but the environment is a virtual reality video environment. So you could essentially
00:22:37.460
be in your own isekai type of manga, worlds in which the game is created
00:22:46.420
around you using a seed that you choose for that world. And I don't even think we're that far from
00:22:53.800
that. I think we're maybe 15 years from that. Yeah. And you can take it even one step further.
00:22:58.340
Imagine you have wearable devices that are measuring something about like how much you're
00:23:03.120
enjoying the experience. And then it starts to detect, ah, you're not really having a good time
00:23:06.980
right now. And it starts adjusting the world. Oh my gosh. We'll have no escape.
00:23:11.180
Yeah. Well, I mean, so we talk about the ways that AIs can hack humans. I think one of the things
00:23:16.460
that humans may need to resist this is potentially AIs that are built into our hormonal systems to
00:23:23.480
make us less hackable. I suspect that's one thing that some humans are going to start doing is having
00:23:28.900
integrated AI that is specifically meant to make their systems less hackable.
00:23:34.820
So what would that mean? Would it be an AI that can inject you with hormones on command?
00:23:38.560
Yeah. So for example, naltrexone is an opioid antagonist that can be used to block things that
00:23:44.960
addict us through opioid pathways, specifically things like pornography or TikTok. And so if it
00:23:52.340
noticed that another AI in your environment was getting you addicted to something against your will,
00:23:58.300
it could begin to release something like that into your bloodstream. Or if it noticed, now it could
00:24:03.300
also react to in-person stuff. So if it noticed you were becoming like emotionally heightened or
00:24:08.540
angry at someone like a spouse, it could calm you down. A lot of this would seem very
00:24:12.760
dystopian to people, I suspect. But in a world that's constantly antagonistically trying to hack
00:24:18.680
you, it may be the only real solution to maintain any sort of intellectual autonomy.
00:24:24.560
You're actually describing a feature that exists in humans in one particular civilization within Iain
00:24:32.540
Banks' Culture series, where they have drug glands, like literally internal glands where they can
00:24:38.220
consciously release certain drugs. So you could gland some kind of dopamine inhibitor.
00:24:46.720
People gland all sorts of things, both pleasurable drugs, but also essentially Adderall or
00:24:52.080
something to help them sleep or calm down or not be angry. And it is a world that is
00:24:56.600
definitely ruled by AI. Like 100% that civilization is more about AI than the humans. The humans are just
00:25:03.780
kind of along for the ride. Sometimes a little bit useful, but not really. I could see that really
00:25:08.180
happening. So to wrap up, one final question for you, Spencer. What are you most excited about for
00:25:15.100
the future? Because everyone always focuses on what they're scared about. What are you excited about?
00:25:20.260
Well, certainly breakthroughs in medicine would be amazing if we can live not just longer lives,
00:25:25.640
but healthier for longer. And I think there's a ton of promise there. There's so many different
00:25:30.400
approaches, for example, to cancer being taken right now. We don't know for sure that they're
00:25:34.040
going to pan out, but that would just be incredible if we had a world without cancer.
00:25:37.440
There's some really cool technologies being explored for eradicating or reducing just regular
00:25:42.620
disease like colds and flus even. Again, way too early to tell if it's going to work, but it's just
00:25:48.260
crazy exciting that a bunch of things that are now just normal sources of human
00:25:53.160
suffering could just be wiped out in this era. It'd be incredible.
00:25:57.340
Considering how often our kids get us sick from daycare, I'm very much in favor of that.
00:26:02.100
I'm not, I feel like we were the last generation. We got screwed.