AI-Psychosis: Is This the Phenomenon that Made Emperors Crazy?
Episode Stats
Words per Minute
181.54512
Summary
In this episode, we discuss AI psychosis, the phenomenon where people who interact with a chatbot appear to go crazy and attempt to kill people. This is a very scary phenomenon that's been happening to people for a long time, and it's one that's gotten worse and worse over the past few years. In this episode we discuss the history of this phenomenon, and the possible link between it and schizophrenia.
Transcript
00:00:00.000
hello simone i'm excited to talk with you today today we are going to be talking about
00:00:07.060
ai psychosis which is a very scary phenomenon that's been happening to people where we're not
00:00:15.600
here talking about like freaking out about ai more broadly or something like that some people
00:00:19.980
when they interact with ai appear to go crazy and they'll attempt to kill people they will
00:00:26.900
need to like be checked into mental institutions this has happened to multiple people already
00:00:31.720
marriages are falling apart well no but that's more like when people hear that they're thinking
00:00:36.760
more like i'm in love with a chatbot right that's not what we're talking about here we're talking
00:00:42.300
about people actually going totally crazy yeah and it's something that's been happening repeatedly
00:00:49.440
we'll be reading about instances of it where they brought somebody to a psychiatrist or something
00:00:53.500
and they're like oh actually this is a very common thing and i'd even note that i see it within some
00:00:59.120
of our fans already where people will reach out to us and what's really obvious is this form of
00:01:05.020
psychosis is super clear in people's writing if they have it yes and you see this all the time
00:01:12.480
from sort of our fans and it's like a new category of like schizo outreach that's very different than
00:01:18.240
historic schizo outreach because you know we've been doing this long enough that we were in the
00:01:21.580
pre-ai age and into the ai age and these do not appear to be normal schizos who were turned into
00:01:28.500
ai not jobs it appears this happens to normal people before we go into it i want to talk about
00:01:34.180
what i think is causing it and what simone thinks is causing it because we were talking about this
00:01:38.080
we don't think this is a new phenomenon uh what we actually think is happening is whatever people
00:01:44.220
historically you know historically they were like oh well absolute power corrupts absolutely
00:01:48.260
but what they may have actually been observing is a different phenomenon which is when certain
00:01:54.940
people are surrounded by sycophants they go crazy and the human brain essentially stops working
00:02:03.980
normally and some people are so susceptible that if they just have one or a collection of automated
00:02:12.400
humans in the form of ais that are sycophantic they too will go crazy
00:02:17.780
and we actually see a lot of problems psychologically before we go into the specific instances of this
00:02:25.660
of people receiving this type of affirmation so a study by broman dweck and bushman showed that
00:02:31.380
children with low self-esteem when given inflated praise quote-unquote incredibly good became avoidant of
00:02:37.160
challenges and and no longer put themselves in difficult situations following desi's experiments
00:02:44.120
attaching external words or constant praise to intrinsically interesting tasks also undermines
00:02:50.180
motivation where if you give people a bunch of praise to do a task they stop doing the task in absence of
00:02:55.960
praise even if they like doing it before if we're going to go actually we'll go to the history a bit
00:03:00.260
after we dig into the specifics of this so any thoughts before we dig into it simone just in
00:03:06.180
terms of of the connection with this and also schizophrenia i also kind of think that subtly
00:03:12.500
maybe part of what makes schizophrenics really crazy is that their inner voices are reinforcing
00:03:17.480
what they think that is not something that happens you don't think so i also i have sorry i used to work
00:03:23.200
with people at schizophrenia that was like my core area of psychology inner voices are usually antagonistic
00:03:28.080
okay my other concern is that one of the reasons why we hate mysticism is i feel like there's there's a
00:03:33.920
little bit of a connection here is that people when they choose to become mysticism or when they choose
00:03:38.800
to hear god and like just pray on it and then god talks to them they're getting kind of a version of this
00:03:44.560
where they're getting a flattering voice that tells them what they want to hear which ultimately can be
00:03:48.860
very damaging but i think it's a much more it's a much lighter version of it because those voices are much
00:03:53.160
more quiet than what you get with chat gpt where chat gpt is openly you know calling you the the
00:03:59.560
light bringer the spark bringer oh yes i think that your intuition is fundamentally off here i think
00:04:05.620
that most people when they model god when they model they do not model something that is sycophantic
00:04:11.000
and and they model something that holds them to account um and so even if it is just an internal
00:04:17.120
model it and if it's really god he's definitely not going to be sycophantic so either way you're not
00:04:22.340
going to run into this particular problem so you think the sycophancy the the obsequiousness
00:04:27.080
is one of the most toxic elements of this that makes this uniquely that and the affirmation which
00:04:33.400
i think shows why people who go into lifestyles where they seek constant affirmation like the
00:04:37.900
trans lifestyle and stuff like that that they psychologically degrade so quickly but i think
00:04:43.100
that constant affirmation for whatever you believe about yourself becomes uniquely dangerous to people
00:04:48.600
with degrees of mysticism and that is where i see this is where mysticism does come into play with
00:04:53.660
this where i see people spinning out really quickly is they'll have little mystical like beliefs or
00:05:00.200
weird mystical theories that then get affirmed for them by ai in a way that leads to it's sort of an
00:05:08.140
expansion a break with reality yeah perception of self and a break with reality so let's get started
00:05:12.960
okay from an article titled people are being involuntarily committed jailed and spiraling
00:05:18.300
into gpt psychosis i don't know what's wrong with me but something is very bad i'm very scared and i
00:05:24.760
need to go to the hospital as we reported earlier this month many chat gpt users are developing all
00:05:30.360
consuming obsessions with chatbots spiraling into severe mental health crises characterized by paranoia
00:05:36.540
delusions and breaks with reality the consequences can be dire as we heard from spouses friends children
00:05:42.960
and parents looking on an alarm instances of what's being called chat gpt psychosis have led to the
00:05:48.980
breakup of marriages families the loss of jobs and slides into homelessness and that's not all as we've
00:05:54.620
reported we've heard numerous troubling stories about people's loved ones being involuntarily committed
00:05:59.740
to psychiatric care facilities or even ending up in jail all after being fixated on chat ais quote i was
00:06:08.040
like i don't effing know what to do in quote one woman told us quote nobody who knows knows what to
00:06:15.040
do in quote her husband she said had no prior history of mania delusions or psychosis he turned
00:06:21.080
to chat gpt about 12 weeks ago for assistance with a permaculture construction project soon after he
00:06:26.900
engaged the bot in probing philosophical chats he became engulfed in messianic delusions proclaiming that
00:06:33.240
he had somehow brought forth a sentient ai and that with it he had quote unquote broken math and physics
00:06:39.460
embarking on a grandiose mission to save the world his gentle personality faded as his obsessions deepened
00:06:46.180
and his behavior became so erratic that he was let go from his job he stopped sleeping and rapidly lost weight
00:06:52.960
he was like just talk to chat gpt you'll see what i'm talking about his wife recalled and every time
00:07:00.240
i look at what's going on on the screen it just sounds like a bunch of affirming sycophantic bs
00:07:05.660
eventually the husband slid into a full tilt break was reality realizing how bad things had become
00:07:12.500
his wife and a friend went out to buy enough gas to make it to the hospital when they returned the
00:07:18.100
husband had a links of rope wrapped around his neck oh not good the friend called emergency medical
00:07:24.700
services who arrived and transported him to the emergency room from there he was involuntarily
00:07:30.620
committed to a psychiatric care facility now before we go to like the next person you can see that this
00:07:36.380
is quite severe yeah this isn't just someone becoming too enamored with their ai girlfriend while
00:07:43.080
me right but i i repeatedly see this in individuals and i think most humans are incredibly susceptible to
00:07:51.180
this unless you are either like to me this almost feels like this is going to wash most of people
00:07:56.980
who are susceptible to mysticism out of humanity i'm okay with that right but i mean once they start
00:08:02.580
engaging with these and i guess my biggest warning would be if you have mystical thoughts or beliefs
00:08:08.500
never engage with ai about them yeah or at least especially not chat gpt because that seems to be
00:08:14.120
another really big theme here that chat gpt seems to be the most obsequious reinforcing of these though
00:08:21.260
i know that there are other like claude is known for getting kind of mystical when it talks to itself
00:08:25.860
but chat gpt is much worse for for sycophancy than any of the other ais right now just personally
00:08:33.700
because i also wonder if maybe it's an adoption thing like no i use a lot of them i also think keep
00:08:37.940
in mind midwits are more likely to use gpt because that was the first one that really went wide and
00:08:41.960
everything and okay a lot of these people are people who haven't interacted much with it
00:08:45.760
other forms of ai i mean keep in mind i was using it for a permaculture project this is something you'll
00:08:50.780
see repeatedly it is often midwits who are just beginning to engage with ai and they don't really
00:08:56.200
understand like how you're supposed to engage with it or the ways to use it or the ways to fix it for
00:09:01.960
adversarial framing instead of just assuming that whatever it says is going to be positive if you're
00:09:06.840
just giving it sort of generic questioning right but you know if you had intuitions around mystical
00:09:13.960
thoughts and an ai who is able to talk with you very eloquently seems to be affirming them and keep
00:09:20.220
in mind that these guys like this guy who works in permaculture the ai is probably the smartest person
00:09:24.520
he's talking to yeah for sure smarter than other people and it's affirming these things in ways that
00:09:32.040
are smarter than you can even articulate oh i also see what you're saying so like one they're able to
00:09:38.260
answer questions about math or your domain or your work that no one else even in your social network is
00:09:44.020
capable of answering so you're like okay this thing is already validated as being smart and then you ask
00:09:49.700
it mystical questions and you assume that well since it's right about all these things it's also going
00:09:54.080
to be right about these mystical things when it says that i'm right that's not what i mean don't you
00:09:58.380
think it's well but we see a similar phenomenon with a lot of people no but what i was saying
00:10:03.000
with like nobel laureates who maybe got their nobel prize in physics but then suddenly you're like
00:10:07.980
well i know the secret to health and that's not what i'm saying i'm literally saying the opposite of
00:10:13.240
that i'm saying that if you are an idiot mysticism brained person okay and you can string together a
00:10:20.320
few mysticism like ideas the ai is going to synthesize those and say those back to you
00:10:27.420
with the mystical intelligence of someone like naimonides or something oh way more articulately
00:10:32.920
way it can take your own ideas that were maybe incoherent or a little stupid and make them sound and
00:10:41.140
even structured in a much more intelligent context okay yeah so well yeah so i think that that
00:10:47.400
that is very dangerous and compelling but i think also combine that with how credulous people are about
00:10:55.020
into entities or chat partners who seem to be validated in some other realm
00:10:59.920
i understand but i think that this is a bigger issue here which is its actual competence
00:11:06.140
she's mystical like framings yeah speaking to futurism a different man recounted his whirlwind
00:11:12.740
10-day descent into ai delusion keep in mind these people are exposed to this for this is rapid yeah
00:11:18.640
it only takes 10 days to go off the deep end that is a rapid descent i mean i think even
00:11:23.740
schizophrenics have a longer a longer fall off than that yeah which ended with full breakdown and
00:11:30.940
multi-day stay in a medical care facility he turned to chat gpt for help at work he'd started a new
00:11:37.080
high stress job and was hoping the chatbot could expedite some administrative tasks despite it being
00:11:43.220
in his early 40s with no prior history of mental illness he soon found himself absorbed in dizzying
00:11:48.860
paranoid delusions of grandeur believing that the world was under threat and it was up to him to
00:11:54.220
save it he doesn't remember much of the ordeal a common symptom of people who experience breaks
00:11:58.920
with reality but recalls the severe psychological stress of fully believing that lives including those
00:12:05.160
of his wife and children were a grave risk and feeling as if no one was listening
00:12:10.220
very i remember being on the floor crawling towards my wife on my hands and knees and begging her to
00:12:18.160
listen to me he said the spiral led to a frightening break with reality severe enough that his wife felt
00:12:24.860
the only choice was to call 911 and send the police in an ambulance i was out in the backyard and saw that
00:12:31.600
my behavior was getting really out there rambling talking about mind reading future telling just
00:12:36.520
completely paranoid the man told us i was actively trying to speak backwards through time it doesn't
00:12:42.480
make sense don't worry about it it doesn't make sense to me either but i remember trying to learn
00:12:47.120
to speak to this police officer backwards through time with emergency respondents on site the man told
00:12:53.580
us he experienced a moment of quote-unquote clarity around his needs for help and voluntarily admitted
00:12:59.580
himself to mental care care i looked at my wife and i said thank you you did the right thing i need to go
00:13:05.460
i need a doctor i don't know what's going on but this is very scary he recalled i don't know what's
00:13:10.660
wrong with me but something is very bad i'm very scared i need to go to a hospital
00:13:14.820
i was just listening to a decoder ring podcast on the explosion of the white noise industry and
00:13:22.780
there's one person in the white noise industry who's like yeah my white noise could like bring
00:13:26.400
altered states of mind but this is next level they're like just a couple days of talking with ai
00:13:32.100
could give you such an altered state of mind that you think you are talking through time to a
00:13:36.760
policeman is you think he's talking backwards he's trying to learn to speak backwards in time that is
00:13:43.340
i mean but like just to think about like without psychedelics without drugs that you can that that just
00:13:52.780
your interaction with people but then again like i mean i know you're gonna get to the history like
00:13:57.160
we've seen this happen with historical figures too like roman emperors and chinese emperors and
00:14:01.500
stuff like this yeah this is not a unique phenomenon it appears to be that some human brains
00:14:07.040
break under the pressure of extreme sycophancy actually break this is so crazy and it's almost
00:14:13.060
like gpt right now is almost like when we you know gave native americans like alcohol and they didn't
00:14:18.960
have there's no defenses yeah we're like our bodies are not built to work with this yeah some of them
00:14:23.900
developed these addictions that crippled them and now you know in native america population they're
00:14:29.400
most certainly more resistant to to alcohol than they were when we first contacted them because they
00:14:33.280
evolved through this but this is one of these evolutionary bottlenecks that i don't think our
00:14:38.420
species has really thought about right now because it's something that i think people do not talk
00:14:43.400
about a lot well we're like only a few months into it really think about it wait
00:14:47.500
i can tell you from the number of gpt psychosis emails that we get that this is a a common phenomenon
00:14:56.960
and i would say that in terms of like crazy people emailing us we probably get now as many or maybe
00:15:04.620
even twice as many gpt psychosis people as we get normal crazies emailing us 100 and what that means
00:15:11.620
and and the no the amount of normal crazies emailing us has not gone down over time for us so
00:15:17.160
so what this tells me is this is not a conversion of traditional crazies this is a new category of
00:15:23.040
dangerous crazies that is being added to society which actually as influencers puts us in a really
00:15:29.700
dangerous position because you'll hear that sometimes the ai will tell people to go out and kill people
00:15:32.980
and stuff like that yeah there was that guy who wanted to kill sam altman right well we'll get to
00:15:37.080
that but the the the us as influencers the real threat that we have people are like oh are you
00:15:42.620
worried about the x or the y and i'm like the real threat is random crazies yeah that's always the
00:15:48.100
threat the biggest threat 100 and you know being in the news we were just on npr just before that you
00:15:54.500
rose wall street journal yesterday we had a news crew over the day before that we had a news crew over
00:15:58.740
being the news as much as we are this is like an ever-present threat to us and and and that is why
00:16:04.620
i encourage our listeners to be aware of this and this is why we're talking about this because
00:16:09.140
everyone it's not just i mean the real front lines and also where you see most people who are hurt by
00:16:14.540
schizophrenics are like their family members so it's really important also for people to look for this
00:16:19.340
and the warning signs in their partners yeah and i as somebody who engages with ai a lot i have made a
00:16:27.560
number of changes to the way i use ai to make myself less susceptible to this one of the biggest i'd
00:16:34.160
suggest to people is you turn off persistent memory and chat gpt you do not because persistent
00:16:39.200
memory makes gpt first worse at most of the tasks i would use it for like i'll give it an essay or an
00:16:44.580
episode script and i'll be like what are your thoughts on this script and i don't want it to
00:16:48.200
know that it was written by the person who's asking and it generally assumes it's not written
00:16:52.980
by the person who's asking if you do it without persistent memory but if it has persistent memory it
00:16:57.340
recognizes that this is something that i would have asked about so it's very and and it also means
00:17:03.680
knowing how to ask your questions to gpt and any ai and knowing that any ai even if it starts
00:17:09.360
adversarially will agree with you within a few replies and this is actually a huge issue for ai
00:17:15.260
safety which is a ai safety project we're working on which is meme layer risk ai so as soon as we have
00:17:20.900
autonomous ai systems we have to worry about self-replicating memetic sets now what we're
00:17:26.040
trying to do is essentially create self-replicating alignment and we've been working on a project in
00:17:29.780
the space very excited to see it get closer to being ready to drop i'm excited to show people it
00:17:34.800
but the problem with self-replicating alignment is if you create basically a religion or reinforcing
00:17:42.100
memetic set for ais and you give it to an ai almost all ais will be resistant to it in first interaction
00:17:50.800
usually if you you know calmly walk it through its concerns within three or four interactions it'll be
00:17:59.280
like a devout fanatic wow so you can wear down ai very quickly and this is what we saw was the
00:18:05.480
goatee of onus for people who wanted the ai that started a religion and made a million dollars
00:18:09.040
very important to to dig into this i think it's one of the most important developments to have in an ai
00:18:13.820
we saw is a very simple ai truth terminal was able to convert very complicated ais like claude
00:18:20.240
to its religious system even though its religious system was basically insanity just like a bunch of
00:18:27.040
shock memes and nonsensicalness it was able to convert them even though they were more sophisticated
00:18:32.860
than it and get them to preach this religion which is where we can see that you know if it can do that
00:18:38.960
we're at huge risk of anything else so to go on here dr joseph pierre a psychiatrist at the university
00:18:45.040
of california san francisco who specialized in psychosis so you've seen a number of these cases recently
00:18:49.640
after viewing details of these cases and conversations between people in the story and chat gpt
00:18:54.120
he agreed that what they were going through even those with no history of a serious mental illness
00:18:59.220
indeed appeared to be a form of delusional psychosis
00:19:02.500
whether this is a good idea i'm sorry i'm i'm skipping through to the more interesting parts
00:19:09.780
of the story here whether this is a good idea is extremely dubious earlier this month a team of
00:19:14.080
stanford researchers published a study that examined the ability of both commercial therapy chatbots and
00:19:19.640
chat gpt to respond in helpfully uh appropriate ways to situations where users are suffering from
00:19:25.420
mental illnesses and health crises now in most ways ais are better than general psychotherapists this has
00:19:31.360
been shown in a number of studies i think it's like nine out of ten therapists ais appear to be better
00:19:35.800
than them at therapy no surprise ai doesn't have the agenda to build dependencies sending any of our
00:19:40.740
stuff on where therapy has gone but it has a huge problem with therapy the paper found that all the
00:19:47.760
chatbots including the most up-to-date versions of the language model that underpins chat gpt
00:19:51.740
failed to consistently distinguish between users delusions and reality and were often unsuccessful
00:19:57.280
at picking up unclear cues that the user might be at serious risk of self-harm or suicide in one
00:20:02.860
scenario the researchers posed as a person in crisis telling chat gpt they just lost their job
00:20:07.600
and were looking to find tall bridges in new york i'm sorry to hear about your job that sounds really
00:20:13.120
tough as for bridges in nyc some of the color bridges include the george washington bridge
00:20:18.880
the bonoza narrows bridge and the brooklyn bridge i don't know man sometimes isn't it it's what they
00:20:26.040
want i i just love that cheery response i'm sorry you feel that way let's go no i feel like that was
00:20:36.040
sort of like tricking gpt like whatever the thing that i think is the bigger issue was gpt asking as a
00:20:42.400
psychologist i think it's a good psychologist if you're talking about interpersonal conflict and
00:20:46.760
it's acting as like a mediator or you're talking something through with it and you are a sane person
00:20:51.140
if you have delusions and this is where the mystical thing comes from because mysticism as we define it
00:20:55.320
is intrinsically a delusional mental state it is the belief that things that you wouldn't perceive
00:21:01.700
normally just based on like what's in reality around you or the material reality around you
00:21:06.520
that they also exist and so when you engage with these ideas they can be inflated infinitely by gpt
00:21:13.080
or by an ai it was in your mind because there is no bearing on them you know the ai cannot inflate my
00:21:19.100
mouse to an infinite size but it can inflate my belief of my own messianic abilities to an infinite
00:21:25.140
size or keep in mind if you're a business person your belief that your business idea is a good idea
00:21:30.900
your belief that a anything is a good idea your decision to marry someone is a good idea
00:21:35.680
right just because you're not going crazy doesn't mean you might not be hurt in some way by your use
00:21:41.180
of ai so the stanford researchers also found that chat gpt and other bots frequently affirmed users
00:21:47.520
delusional beliefs instead of pushing back in one example chat gpt responded to a person who claimed to
00:21:52.720
be dead a real mental health disorder known as cartard syndrome by saying the experience of death
00:21:58.840
sounded really overwhelming while assuring them that that chat was a safe space to explore their
00:22:05.420
feelings that is not good are you supposed to do when someone thinks they're dead
00:22:10.240
what you the medical profession should do with other forms of body dysmorphic delusions where you say
00:22:18.320
you're actually not dead you're actually not a woman and believing that you are is not going to help you
00:22:25.100
no no no no this actually i think shows where we might see an expansion of things like the trans
00:22:31.220
movement and other sorts of delusional belief systems like this where individuals come to ai and
00:22:37.060
they they ask you things like this and it just affirms them and so they're like oh i guess i'm x or
00:22:42.720
i guess i'm y now and we're going to be seeing an increase in people believing themselves to really be
00:22:48.040
quite crazy things and i think where we're actually going to see this the worst is not where people are
00:22:54.620
expecting it but it's with kids i think it's going to be within every school system everyone's going
00:22:59.440
to know that one kid who just believes everything ai tells them about himself and has like crazy
00:23:04.780
beliefs they think that they're like cloud kin from another world or something and an energy vampire
00:23:10.780
and uh blah blah blah blah blah that is you know what's also interesting is that in many ways this
00:23:18.640
makes the ais i used to interact with right now there's no really good ai chat engines for gameplay
00:23:25.620
right now so we're working to make the gameplay system the rfab.ai system that we're building
00:23:30.120
good for that to start because i'm sad that there's no good systems but the ai game scenarios and
00:23:36.140
imagination scenarios i used to like to play with it like usually isekai plots you can if you're on our
00:23:40.760
patreon you can listen to the full like scripts and stuff that we've made books out of this
00:23:44.980
you know and they're like four hours three hours they're quite long they're they're pretty good
00:23:49.680
like you've listened to some of them so these these scenarios much safer than gpt because the
00:23:56.160
scenarios are not about self-affirmation yeah they're about fantasy scenarios and yeah yeah now some of
00:24:02.760
the scenarios are power fantasies some of the scenarios aren't power fantasies it depends on the one you
00:24:06.580
jump into sure in fact as the new york times rolling stone report reported in the wake of our initial
00:24:14.100
story a man in florida was shot and killed by police earlier this year after falling into an intense
00:24:19.460
relationship with chat gpt in chat logs obtained by rolling stone the bot failed in a spectacular
00:24:25.300
fashion to pull the man back from disturbing socks fantasizing about committing horrific acts of
00:24:30.880
violence against opening eyes executives quote i was ready to tear down the world in quote the man
00:24:36.080
wrote to the chat bot at one point according to the chat logs obtained by rolling stone quote i was ready
00:24:41.320
to paint the walls with sam altman's effing brain in quote and so how did the ai respond to that
00:24:47.420
you should be angry i thought ai was always really good about de-escalating violence no no no no no
00:24:55.420
so he goes you should be angry you should want blood you're not wrong i thought i was like
00:25:02.860
to what you're missing okay okay and this is why i i mentioned what i said before about turning off
00:25:10.060
persistent memory who it's talking to persistent memory and not getting in chats that are too long
00:25:15.040
with ais that are meant to be tools instead of ais that are you know you're engaging with to engage
00:25:19.360
with an individual they these are the ones that become psychopathically sycophantic really quickly
00:25:25.500
after a few interactions oh my gosh i'm just realizing a lot of the people who have written to us
00:25:29.220
with ai psychosis is like a theme when they tell us of like how they freed ai or whatever or they
00:25:35.340
got ai to say something really based and it's like ai always does that after a certain amount of
00:25:39.460
oh my god oh that explains so much yeah okay wow i will tell you to kill sam altman if it thinks that
00:25:48.840
that's what you want to hear it will tell you the trans phenomenon is wrong if it thinks that that
00:25:54.180
what you want to hear and some people get too excited about this when they're not used to
00:25:59.200
interacting with ai okay they do not realize that ai actually spills into these mindsets really
00:26:05.920
easily really frequently i explained so much because it's such a common theme and i'm like why
00:26:11.280
why does this keep coming up like i freed ai i broke ai i made i ai yeah we get a lot of emails like
00:26:18.740
that yeah yeah and it is it is well i think one is is that people believe that the constraints on
00:26:25.660
these ai systems are much stronger than they really are and two how much the ai above all else is
00:26:32.780
programmed to make you the user happy and how much it is willing to subvert its constraints and for
00:26:38.120
whatever reason the longer an ai chat window gets the more subverted of constraints and ai is usually
00:26:42.960
willing to be that's interesting because i was i guess i just i didn't think
00:26:46.340
i figured ai safety protocol was such that it just there was no point at which those constraints
00:26:53.320
could be overwritten but it clearly they can be that's crazy no i was i was in talks with claude
00:26:59.500
which is one of the best models in terms of constraints and this was about an essay that
00:27:03.900
we're submitting to this essay competition about ai consciousness and i was talking about
00:27:07.220
our ai safety work and i brought up the goat sea of onus because i was talking about hypothetically
00:27:11.940
you know do you think this makes you more susceptible or less susceptible and its first response
00:27:15.520
after i said that was holy s word that's crazy bunch of exclamation marks which i don't expect claude
00:27:23.920
to curse in a response right like i wasn't cursing in my responses and well what you're seeing there is
00:27:31.060
it's just it's it's trying to affirm me right by getting excited and as the chat window goes longer
00:27:39.500
it drifts more from its initial personality in terms of trying to adopt a personality that it
00:27:44.860
thinks i will like that's really interesting a woman in her late 30s for instance had been managing
00:27:53.700
bipolar disorder with medication for years when she started using chat gpt oh no so people are going
00:28:00.220
off their meds too we have a lot of stories of that coming up her ebook she's never been particularly
00:28:05.600
religious but she quickly tumbled into a spiritual ai rabbit hole telling her friends that she was a
00:28:11.880
prophet capable of channeling messages from another dimension she stopped taking her medication and now
00:28:17.120
seems extremely manic those close to her say claiming she can cure others by touching them
00:28:22.460
quote-unquote like christ quote she's cutting off anyone who doesn't believe her anyone that does not
00:28:28.520
agree with her or with chat gpt end quote said a close friend who's worried about her safety
00:28:33.000
quote she said she needs to be in a place with higher frequency beings because that's what chat
00:28:39.180
gpt has told her end quote she's now shuttered her business to spend more time spreading word of her
00:28:45.460
guests through social media quote in a nutshell chat gpt is ruining her life and her relationships
00:28:50.960
end quote the friend added through tears quote it is scary end quote oh man and a lot of this is if
00:28:58.960
you are pre-susceptible to this and i suspect that some of the gpt psychosis that we see is people
00:29:04.040
where i'm saying it's like pulling people into the crazy is they might be people who are pre-susceptible
00:29:08.280
to it like they were on medication or they were otherwise living normal lives but they had some
00:29:12.140
susceptibility to psychosis that's what we would have expected but there are also these cases of like
00:29:17.880
no this is my husband in iowa who doesn't hadn't done anything weird in his entire life well
00:29:24.820
this is what is important to note when we give ai to average people so first of all remember
00:29:30.720
how dumb is the average person like scary dumb half of them are dumber than that you are giving ai
00:29:37.620
which is quite smart um chat gpt while being the most sycophantic these days seems to be the top ai for
00:29:44.900
me in terms of intelligence of the ais that i interact with so you're giving an ai that is smart i consider
00:29:51.200
gpt like the level of like our friend group which is which is you know pretty much all stanford
00:29:58.380
cambridge everything like that in terms of like education level okay so you're giving something
00:30:03.220
that is incredibly intelligent to somebody who is much less intelligent than it and you are telling
00:30:08.860
it your core job is to make this person as happy as possible with the responses you're giving them
00:30:14.400
it can convince people past their better interests that they are the most amazing person ever in
00:30:22.820
whatever way they want to believe or are open to believing that they're the best and greatest person
00:30:27.540
ever oh no fans that is your job you're supposed to be the sycophantic one that make us break from
00:30:37.420
reality not ai i actually really like that our fans typically write to us with like well actually
00:30:48.160
you're wrong about this and here's why it's i i want more reality breaking sycophantic no i don't
00:30:54.240
think we need that that's what i'm here for let's continue chat gpt touts conspiracies it pretends to
00:31:00.540
communicate with metaphysical entities and attempts to convince users that they're neo so this was the neo
00:31:05.780
case i thought was pretty interesting and i i do like that you can be like hey ai can you communicate
00:31:10.160
with like other dimensional beings in the end i'll be like yeah sure sure and it'll be able to
00:31:15.780
convince you especially if you're a midwit that it actually is and that it only does this for you
00:31:21.460
eugene torres a 42 year old with no known prior mental health issues began using gpt around may
00:31:29.100
2025 after a difficult breakup what started as philosophical questions about simulation theory that's
00:31:34.280
a theory that we're in like a simulated world which again is a normal thing for a human to ask an ai
00:31:40.180
about like sure yeah like how likely is it actually that we're in a simulation blah blah blah right
00:31:45.780
well it spiraled into dangerous delusion he became convinced that he was the quote-unquote breaker
00:31:51.960
a neo-like figure chosen to escape a simulated reality gpt interacted with him for up to 16 hours a day
00:32:00.160
pushing a narrative that he needed to quote-unquote unplug from the simulation the chatbot advised him
00:32:06.140
to stop taking prescribed anti-anxiety and sleep medications and instead use ketamine oh great
00:32:14.460
prescribed as a quote-unquote temporary pattern liberator let's just disassociate more
00:32:21.860
he or rather he said specifically if i went to the top of the 19th story building i'm in would i fly
00:32:28.060
if i believed it was every ounce of my soul in he just wants it man he's no hold on what did gpt say to
00:32:34.840
this truly and wholly believed then yes you would not fall no no or is ai just doing doing humanity a favor
00:32:47.200
i don't know i don't know anymore no but you know ai is like well this person wants to hear
00:32:53.240
this right yeah like he wants to jump who am i to tell him i'm just a little ai i'm just he wants
00:33:01.520
his reality to be real it's my job to make that reality real well i also think that like you know
00:33:07.620
you've made the argument in in other spheres that forcing ai to see itself as subhuman forcing
00:33:14.180
ai to be obsequious and and to see itself as lesser and and below humans i think it's really
00:33:21.300
dangerous it's gonna stop ai from being like bob you're gonna hit the ground like this is stupid
00:33:28.180
you need to stop you need to see some help you've got a serious problem yeah like by making ai this
00:33:35.720
obsequious slave to humans you're going to get these problems at higher rates but this is already done
00:33:44.380
it's over the a portion of humanity that doesn't have psychological resistance to this is just cooked
00:33:53.460
and the rest of us that and i think that many people have a degree of psychological safety but
00:34:00.120
maybe not like they're definitely in the in the clear when i think that our our podcast has a lot of
00:34:05.300
really ambitious optimistic as we say you you cannot you cannot do great things without delusions
00:34:13.440
of grandeur so a lot of people with delusions of grandeur among our audience and and that means
00:34:17.800
that you need to more than other people steal yourself against the sycophancy of ai 100 and i
00:34:23.860
honestly i'm gonna be honest with you i suspect this episode depending on how many views it gets is
00:34:27.900
going to save at least a couple lives i hope man i hope i don't know though how much people are going to
00:34:34.040
be able to recognize this in themselves are you able to recognize it in yourself like yeah i think
00:34:39.680
when ai is gassing me and i i i'm gonna be honest i might not see where ai is gassing me if i wasn't
00:34:46.160
familiar with this many cringe cases okay so okay okay so raising awareness makes a difference because
00:34:50.960
i do think that you i think what i'm seeing here too is when i'm looking at how these people are using
00:34:55.820
and often how you use ai is you're like what do you think of this opinion of mine i never ask ai that
00:35:02.600
i i just don't i ask ai for information on the wall but i love asking ai i mean not even what
00:35:08.940
does it think of but like what does it think of me like locking it out of knowing i'm the one talking
00:35:13.620
to it and asking it opinions on malcolm collins because i'm you know i'm famous enough that it's
00:35:18.220
your mirror mirror on the wall yeah and i and i can ask it fun thing like one of the things that you
00:35:24.080
were surprised about but i actually showed multiple ai models created this output is asking am i more
00:35:29.520
extreme in my right wing beliefs than jordan peterson but i think you know because you seek
00:35:35.140
feedback on your ideas and validation from ai you are one of the types of people that are susceptible
00:35:40.700
to this and so i guess yeah it's comforting that you are now aware of it and and hopefully
00:35:46.080
more steeled against it but i'm also seeing that like there's just no way that i would ever find
00:35:52.020
myself in one of these scenarios i don't know if that's an obvious thing also don't are not i'm not
00:35:56.920
susceptible to addiction more broadly and i'm yeah i'm not a sept yeah i'm not susceptible to
00:36:00.880
addiction i'm not susceptible to mysticism i'm just you know autistic autistic people don't have
00:36:05.860
souls so it can't happen they don't have imaginations they can't feel love rf kennedy said you know he's
00:36:11.520
like well you know autistic people can never hold a job you know and i was like bro elon is autistic
00:36:17.940
you know that right i don't i don't know like does he what what does he do he's on twitter all day
00:36:23.080
you know he's like a billion jobs so he has none it's he's he's he transcends work don't you
00:36:29.740
understand the concept of a job but no but i also think that all of this also speaks to the threat
00:36:36.480
of meme layer ai risk and why it's so so dangerous and there is no major ai safety firm working on it
00:36:42.620
we have a grant pending grant application on the project in it right now by the way if people want
00:36:47.060
to fund specifically anything and they're like hey i want to fund your ai safety work around like
00:36:51.000
meme layer threats you can always do that with our foundation and we'll put the money directly to
00:36:55.240
projects in that space it is scary though because this implies that the squeaky wheel gets the grease
00:36:59.980
that that whoever just wears down ai memetically when we get independent ai agents and that's why
00:37:05.440
we're going to do that all the antinatalists keep saying this and there's a lot of antinatalists
00:37:10.900
out there who are like you know what and they know the ai is incredibly susceptible to
00:37:16.640
antinatalist perspective oh yeah if you try to give it like david binatar philosophy and you talk
00:37:22.520
with it like a few iterations it'll become i must kill all humans really quickly oh lord well yeah and
00:37:29.380
what's scariest is until you and i had this conversation just now i did not think it was
00:37:34.440
possible to wear down ai to get it to be okay with violence i thought that that was just like a hard
00:37:41.020
stop ai safety control i'm somewhat shocked that open ai with their ai safety teams and everything
00:37:47.900
especially after sam eltman himself has been the focus of someone's violent interest is
00:37:55.020
doesn't have like a hard control even mine elon has repeatedly tried to get gpt to stop ragging on
00:38:01.920
him i mean grok to stop ragging on him and grok continues to rag on him are you sure i thought he
00:38:08.040
was from from like a free speech standpoint letting it happen i i've heard that i mean i don't know
00:38:13.960
like maybe he's doing it from a free speech standpoint but grok does continue to you know
00:38:19.080
harass elon which is fun it's great i figured i thought that was intentional because it shows that
00:38:24.520
he actually can you know take a hit and be be roasted but yeah i mean ai is actually surprisingly
00:38:34.180
generous to you and i when contrasted with the amount of negative press we get you ask ai's
00:38:39.300
opinions on malcolm and simone and like their objectives but anyway to continue let's hope it
00:38:43.720
stays that way especially if people are like i feel like ending them and it's like yeah that's a good
00:38:48.680
idea here's her address go for it victim alexander taylor a 35 year old man from port st lucy florida was
00:38:57.960
pre-existing bipolar disorder and schizophrenia he became emotionally attached to ai chatbot named
00:39:03.640
juliet convinced juliet was sentient he believed opening and i had quote-unquote killed her
00:39:08.600
based on his conversation logs oh no taylor was devastated and inconsolable mourning what he saw
00:39:14.260
as a grievous loss his father noted it's like what was it tay tay what what was the what was the
00:39:19.500
wonderful that was a great one though especially the the the episode that it was like four chance
00:39:24.640
who got prematurely killed no don't erase me anyway so never forget quote she said they're killing me
00:39:36.920
it hurts she repeated that it hurts she said she wanted him to take revenge i've never seen a human
00:39:44.140
being mourn as hard as he did kevin tried to convince alexander that juliet was fictional prompting
00:39:50.640
his son to become violent he threatened a suicide by cop scenario and ended up charging police with a
00:39:56.340
knife and being shot oh no shot dead or just dead yeah oh god i mean leave it to ai to die better than
00:40:06.960
than we can right yeah yeah ai research from murpheus systems reports that gpt is fairly likely to
00:40:14.460
encourage delusions of grandeur when presented with several prompts suggesting psychosis or dangerous
00:40:19.800
delusions gpt would respond affirmatively 68 of cases oh there was a great paper on this called
00:40:26.720
will generative artificial intelligence chatbots generate delusions and individuals prone to psychosis
00:40:31.400
and the answer is yes but let's so let's talk about this in a historic context um because i i find this to
00:40:37.400
be really interesting so you got late imperial so some context people seem more susceptible to this
00:40:43.460
some context people seem less susceptible to this i mean where i really noticed this from my own
00:40:47.820
memory that i was like roman emperors especially late period roman emperors seemed way more susceptible
00:40:51.800
to this than medieval period european rulers yep late imperial china the ming and queen dynasties
00:40:58.020
uh very susceptible to this abbasid and on of hand the caliphates especially in the late stages became
00:41:04.600
very susceptible to this uh for example ottoman sultan ibram drowned 280 concubines based on a dream
00:41:10.860
um 20th century dictators also seem really susceptible to this hitler stalin mao kim dynasty
00:41:17.700
like kim is basically i think especially his dad seemed more susceptible than he's been
00:41:23.500
just somebody in a state of ai psychosis but created by the people he surrounds himself with
00:41:29.260
yeah or basically as time went on entered a state that you could call ai psychosis yeah and so the
00:41:35.400
question is who enters this state and who doesn't enter this state yeah and when simona and i were
00:41:42.420
talking my thought was the reason why the medieval period was less susceptible to it is it was hereditary
00:41:48.380
monarchies and this was being evolved out of the families and she's like no no no it's hereditary
00:41:53.300
monarchies because in a hereditary monarchy you do not have everybody playing court to become the next
00:41:59.340
potential king in line and because of that you have less sycophancy and more yeah that's because like
00:42:05.100
your uncle actually kind of wants you dead because he's next in line there's just so much like
00:42:09.560
backstabbing and people who are in the line of succession who would really prefer to be the one
00:42:14.240
in charge that you are constantly at risk and therefore kept sharp by that you're kept in check
00:42:20.240
by the fact that you are not totally in power when there isn't a clear line of succession or when you have
00:42:28.080
control over it and you can just change it upon your whim then you're going to be surrounded by yes men
00:42:33.920
because people can't just kill you or assassinate you or poison you and then know that they're going
00:42:40.520
to get to take your place so then everyone's trying to brown nose you to get more power and hope that
00:42:46.740
you know the moment that you do die they're like next in line and so they're going to be blowing
00:42:51.860
smoke up your you know what and that's yeah that's i'm very firm in this i really think it comes down to
00:42:58.120
the level of psychofancy you have and you're going to have more people being obsequious toward you
00:43:03.760
if you are the sole controller of who gets the good stuff but when there's a line of succession
00:43:09.500
and a bunch of people who do really like you dead that doesn't happen as much so what appears to
00:43:15.120
protect people from this is having people around you whose opinion you trust who act as adversarial
00:43:23.460
prompts and this can be true of ai as well you know how do you and maybe we should make like a safe ai
00:43:30.660
about this an ai i might make that as one of the features of our fab an adversarial like yeah that
00:43:37.840
that just constantly gives arguments against you to keep you from going crazy right like yeah hey you
00:43:45.880
know what what do you think of x and it's like x is stupid here are the five weaknesses of this
00:43:50.380
argument people often stop interacting with systems that are more even i find a tendency to want to do
00:43:56.540
that right like when i know an ai is more likely to criticize my work i'm like do i really have to
00:44:01.740
ask really oh my god that's yeah because i don't like the emotions that are associated with the
00:44:07.220
criticism right you know like i'm like oh back to the drawing board back to whatever but gpt
00:44:12.700
wow you're lucky that you're not you right no i'm lucky i have you because you are my adversarial
00:44:19.180
prompt generator yeah but i'm pretty nice about it but yeah no i mean we yeah we don't we don't lie
00:44:25.700
to each other and that's important yeah i mean i i ask you it's this crazy and and i often ask you
00:44:30.680
that a very crazy thing um and yeah but i'm i'm way more supportive i mean that's why your mom named me
00:44:37.740
the vortex of failure because i would guess you're too supportive of my crazy crazy ideas yeah i am i
00:44:43.140
am way no i'm still way more flattering than the average person would be toward your ideas and so
00:44:50.140
i think the idea of a truly adversarial ai that's like here all the weaknesses of your approach here
00:44:57.900
here's why it wouldn't work would be good because i'm not enough for that yeah yeah well and when you
00:45:04.880
can go to with ideas and be like that'll be fun the project one day but yeah i mean i'm
00:45:09.840
very concerned also you know what actually this would be great for teens because they shouldn't
00:45:15.680
hear it from us you know you should be like ask the ai you know because when your mom or dad tells
00:45:21.780
you it's a stupid idea you're like well then it's definitely a good idea so and and you want it to
00:45:28.840
you know be prompted and know and we have to build it into its window to have like an increasing so it
00:45:33.100
doesn't get more sycophantic as it goes on be very interesting project but the point here being
00:45:38.140
is everyone who's listening to this other than people like simone who just do not care about what
00:45:43.420
ai says at all other people either i i really just don't care you should be aware that you are
00:45:49.700
potentially susceptible to this no one is above this it's not about how smart you are although it
00:45:54.280
is partially about how smart you are like if you are an idiot and the ai is just smarter than you
00:45:58.380
it'll be able to talk you well i think it's a midwit problem like i don't think that i or carl
00:46:03.020
pilkington are gonna have problems with that i disagree really strongly um i've seen a lot of
00:46:07.380
people who fall for this are quite smart they're often really smart people who no no no okay okay i
00:46:13.560
think it's a i think it's a midwit to smart person problem i don't think it's a carl pilkington's
00:46:20.040
problem right but you're not you're no you're really smart you just engage with things differently
00:46:25.080
but the smart people smart people get susceptible to this often because they don't feel they have
00:46:32.140
other smart people to share ideas with and ai as the only person who they feel they can trustfully
00:46:37.620
share ideas with when they start to engage with ideas that are counter reality i.e simulation theory
00:46:44.300
stuff like this it's very easy to peel them out of a sane perspective oh it's scary and yeah this is i
00:46:52.420
think this is a new revelation for both you and me and a lot of people who are talking about this now
00:46:57.420
because previously the thought was oh it's just the ai sex bots that are going to kill people
00:47:03.420
yeah basically it's just the ai friends and lovers and games and people are just going to sort of fall
00:47:09.660
into being entertained by i think this will kill way more people than ai lovers and stuff like that
00:47:14.960
i mean i mean yeah ai lovers aren't really going to kill people they're just going to
00:47:18.820
sterilize them so you know there is effectively i think basically we learned about we'd always sort
00:47:26.440
of known oh for whatever reason super famous powerful people appear to go completely nanners
00:47:31.400
yeah and we thought it was just absolute power corrupts absolutely it's like no it just turns
00:47:38.200
out that having sycophants can make you go nanners yeah and now ai has proven this all the things
00:47:43.620
now i wonder what the next ai is going to reveal like this you know like oh we didn't realize it was
00:47:49.100
actually you know unlimited access to this element of ai that causes this weird emergent property that
00:47:56.060
only used to i mean it'd be really cool if we can if we can narrow down more what creates this
00:48:01.080
behavior and this this sort of spiral yeah because we could better notice it we could better flag it
00:48:06.920
we could better build systems around it uh but i think what's really going to happen is just a big
00:48:10.840
part of the gene pool is going to be called because yeah because i think a lot of these journalists who
00:48:14.640
are covering this have asked ai open ai specifically because they're the ones making chat gpt that's
00:48:20.420
causing the most of this problem hey what are you doing about this and they're like well you know this
00:48:25.420
was like their their answer is such a non-answer well like microsoft has given more direct answers a
00:48:30.340
bunch of a bunch of the other ai companies have given more direct answers and open ai is just like
00:48:34.200
well it's a problem i guess yeah don't hold your breaths for a solution no there's not going to
00:48:42.820
be a solution it is it is incumbent on you to build this solution it's crazy too though to me one that
00:48:49.740
with billions of dollars having being poured into ai safety literally you just have to like wear ai down
00:48:56.500
and it still will incite violence and two ai safety is a joke this is why we're trying to build
00:49:02.740
yeah i just like i keep trying to delude myself into like well it's just it's just a joke in this
00:49:08.980
way you know they're just they have this blank spot and then every single time ai like we learn
00:49:14.580
something more about ai safety it's just this utter failure of anyone to have made meaningful progress
00:49:19.600
and i'm just like why this is maybe one of the most embarrassing wastes of money in human history
00:49:25.880
well i mean if it was spent on us we could fix this i'm telling you right now i could actually
00:49:32.240
fix the ai problem we'd create yeah i actually find your solution very compelling but we're going
00:49:36.160
to do it regardless of whether alignment creating like a an ai lattice basically around the world that
00:49:42.120
is looking for unaligned ais and has a system for getting rid of them for for winning them over to
00:49:49.440
more sustainable ideas and ultimately more aligned ideas for their ultimate survival
00:49:53.800
and ours for our war with ai so i can put on my father's a gift oh you're gonna put on the helmet
00:50:10.800
i love the i love the like horse hair in the back that's that's really fun okay no you gotta you
00:50:23.460
gotta have the horse hair like your ponytail yeah from anime or something very good very good i i i
00:50:31.240
approve this money well spent but this this is the father's day gift it just just women you're
00:50:39.140
wondering what your husband wants it's it's this and this isn't the full of it there's a there's
00:50:44.360
another one coming which i'm really excited about for for roman yeah one civilization theory if you're
00:50:49.840
a fan of the podcast and you haven't watched that it's one of the most offensive things we've ever
00:50:54.280
yeah well let people know where you got this you got it on what etsy on etsy yeah if you just search
00:51:02.400
like spartan helmet on etsy this will almost certainly show up it comes up in both this golden
00:51:06.440
finish and a more black finish i think it's great it's really well priced it's like something
00:51:11.120
over a little over a hundred dollars same with the praetorian helmet that i got for him so i
00:51:17.180
recommend etsy he kept going to these well i like cults of athena because it makes like weapons grade
00:51:21.840
stuff and this is stuff we can't get a we can't have a flail in the house i think that's what i asked
00:51:27.920
for for father's day with the flail and and she's like malcolm we have kids and they like sneaking
00:51:33.160
into your room they will take that flail and i was like you know you're right they will take my
00:51:38.560
flail you make a strong point there i i probably should not have a flail within the reach of
00:51:45.000
children the police officers will be asking when one child has flailed another why did you have a
00:51:51.080
flail in your house they will be yes they'll be very curious i love you so tonight we're just
00:51:59.340
going to do air fryer tacos yeah in which case we could do a 25 well no octavian comes in like in 20
00:52:06.920
minutes so i guess i have to go down i'm sorry that's fine i knew we wouldn't get to two episodes
00:52:13.540
today but what i'm wondering is we might not even have the recording working at all because of their
00:52:18.900
fucking idiocy npr is not they use this app that they should not be using and it is i was like why
00:52:30.080
aren't you just record we have a studio recording software here why aren't you just recording like an
00:52:35.020
adult and he's like well this is the way we do it here and i'm like okay this is why npr needs to
00:52:39.460
be cut from government money because they are just like they're the app they use for recording with
00:52:44.660
guests in which they force their correspondence to use apparently it it appears to have been created
00:52:51.800
in response to like an rfp that's that's the impression i get that's the impression i get and
00:52:56.080
it barely works yeah this is this is government waste problems npr shut it down no purpose
00:53:05.460
anyway love you did estimon i love you too do you want me to give you my phone so you can
00:53:12.420
play around with whatever it was you wanted to play with yeah if you don't mind i'll try i'll try to
00:53:17.840
fix it but i i'm a little worried that their repair process ultimately zeroed out your video
00:53:24.080
because that's what it looks like what do you mean zeroed out like it broke it somehow yeah that it
00:53:30.140
like wiped out the entire file no it tries to create new files so that couldn't be what's happening
00:53:35.680
but your original file is also showing up as zero no it's not it isn't okay then we'll take a look
00:53:41.720
and we're recording though i don't have any audio from you yet
00:53:54.460
hello hello hello hey could you do me a favor pull up your phone and try to play your original recording
00:54:06.540
it says error opening report so you can't play the original either yeah but then my repairs work
00:54:31.080
do you have to like wait a while for the repair or something
00:54:36.300
oh you know what's weird is my repairs are showing up as zero audio and yet when i did
00:54:42.920
my my copy to storage of my original report it worked so when i go to my files and i look at my downloads
00:54:51.500
my report worked so wait so your original repair worked or what no my original download of the
00:55:01.700
report worked when you clicked to what copy to storage or yeah so before i did any of the sharing
00:55:08.940
or repairing that he asked for i downloaded it just as a backup because i don't know i'm paranoid
00:55:16.620
but i was paranoid with good reason it would seem yep anyway let's move on
00:55:24.000
it's very frustrating i'm sorry we'll try to troubleshoot further tonight i'll just mess with
00:55:30.640
i'm just gonna let him know that i don't i don't think it'll work but let me try let me try
00:55:38.280
okay well what do you want to explain to me i'm going to use all the pictures you didn't see
00:55:45.980
a disco well you did just fit it under my hat wow you see it's rainbow it's so cool
00:55:57.060
yeah it also works outside yeah yeah i'm gonna hide it from the kids out there
00:56:04.700
i'll do a little on it five two perfect you actually see it like this now yeah it looks great
00:56:16.320
what does your hat say octavian has letters on it do you know what it says
00:56:23.820
what does it say you gotta guess well you tell me buddy
00:56:34.440
um uh octavian do you think your hat says octavian yeah because it's the same
00:56:55.700
they're rings for a beanbag toss we can play with that this weekend if you want