Episode 2481 CWSA 05/21/24
Episode Stats
Length
1 hour and 5 minutes
Words per Minute
146
Summary
In this episode of the podcast, Scott takes a look at what it means to be a genius, and why the inability to explain a complex idea simply is a red flag. Scott also discusses Terrence Howard's appearance on Joe Rogan, Klaus Schwab stepping down from the World Economic Forum, the ADL, Microsoft's new screen-recording AI feature, Scarlett Johansson's complaint against OpenAI, AI safety, and election auditability.
Transcript
00:00:00.000
can't understand. All you need for that is a cup or a mug or a glass, a tank or chalice,
00:00:04.440
a stein, a canteen jug or a flask, a vessel of any kind. Fill it with your favorite liquid.
00:00:10.300
I like coffee. And join me now for the unparalleled pleasure, the dopamine hit of the day.
00:00:18.140
The thing that makes everything better. It's called the simultaneous sip and it happens now. Go.
00:00:30.000
Well, the trouble was I was assembling a joke in my mind at the same time I was doing the
00:00:39.020
simultaneous sip. So I couldn't get through it because I was laughing at my joke in my mind.
00:00:45.420
And the joke was, I'm going to wait until something goes wrong and then I'm going to say this joke.
00:00:52.440
That, whatever it was, you know, whatever is the news story of something that went down,
00:00:57.020
I can say that went down faster than Stormy Daniels on a Boeing 737.
00:01:04.960
It was pretty good. Stormy Daniels on a Boeing 737. Yeah, it's not so bad. All right. Well,
00:01:12.520
many of you asked me to comment on a guest that Joe Rogan had, Terrence Howard. How many of you
00:01:20.560
have seen the strangest video clips in the world of Terrence Howard describing his scientific discoveries
00:01:30.140
on Joe Rogan? And you're saying to yourself, if you saw it, I can't tell if he's the smartest human
00:01:39.440
being who's ever lived. Or is he crazy? And I looked at it long enough that it was really hard to tell.
00:01:54.780
For sure, he knows enough vocabulary about science, you know, at a deep level that it would be hard to
00:02:02.200
imagine he would know all the vocabulary and be able to speak it without, you know, any pauses to
00:02:09.060
think of the word or anything. It suggests he knows the field. I mean, at least he's well-read and,
00:02:15.980
you know, knows how everything fits together. But I do have a rule which I like to go back to a lot.
00:02:23.000
And the rule is this. And you'll be amazed how often this serves you well. If there's something
00:02:29.280
you don't understand, and somebody's trying to explain it to you, and no matter how much they
00:02:35.000
explain it to you, you still don't understand, the problem might not be on your end.
00:02:42.640
I have a general theory that no matter how complicated the idea is, if you can't explain
00:02:49.660
it simply, it's probably not real. Take, for example, Elon Musk describing literally anything,
00:02:58.300
anything complicated. Does he explain it in a way that you totally know what he's saying,
00:03:04.020
and you can agree with it or disagree with it? He does, no matter how complicated it is. He's
00:03:09.500
literally putting rockets in space. But when he talks about any component of that, you completely
00:03:15.220
understand it. Now, my measure of intelligence is how well you can explain a complicated thing,
00:03:22.760
not how well you may or may not understand it in your private thoughts. So the fact that he couldn't
00:03:31.160
explain it in any way that was even close to giving us comprehension is sort of a red flag.
00:03:39.200
It's a bit of a red flag. But on the other hand, he did display what appeared to be very deep knowledge
00:03:46.480
of the field. So could you have very deep knowledge of the field, but maybe you have a hypothesis,
00:03:52.480
about something that doesn't check out? Well, that's possible. But for sure, his intelligence is
00:04:00.880
super high. That part's obvious. I saw some reports that he was considered to be a genius when he was
00:04:10.800
just a little kid. Apparently, he learns things ridiculously quickly. It looked like that. But I don't
00:04:18.400
know that that means he's right. It could be that he just has a fascinating concept that hasn't been
00:04:24.800
checked out. He had some idea that looked to me, if I could understand it, which I don't think I did,
00:04:30.440
that a certain frequency of sound would create energy if you just put it on water. If you applied
00:04:39.720
the right sound to water, it would break out the hydrogen, and presumably you could use it to make
00:04:45.320
energy, use it to power stuff. Anyway, I don't know. I don't know. That's just my take.
00:04:52.300
Klaus Schwab will be stepping down as executive chairman of the World Economic Forum.
00:04:58.580
Now, here's the question that I have for you. If you thought that Klaus Schwab was the power behind,
00:05:07.460
I don't know, everything, why would he step down? If he was really sort of the secret dictator,
00:05:16.340
you wouldn't really step down just because you were old. Dictators don't step down.
00:05:21.560
So it is suggestive that the World Economic Forum was, as Elon Musk called it, more like a club for
00:05:31.620
rich people. I don't think the World Economic Forum has ever been important, except as a club for rich
00:05:39.420
people. And the thing I wonder is, will he be replaced by somebody who also looks like a movie
00:05:48.000
supervillain? Because I feel like that's where things went wrong. It wasn't just the ideas they
00:05:56.040
had that you don't like, or maybe that scare you the way you interpret them. But it was that he
00:06:03.740
literally gave off every vibe of a TV movie villain. Hey, we're going to take away all of your things.
00:06:12.660
And it seems to me that they should do a search for somebody who is whatever is the opposite of
00:06:19.300
that. Now, I saw a number of people say, it's going to be, it's going to be a disabled black
00:06:25.360
lesbian woman. And I have this question. They really can't, there's no way they can replace him with
00:06:36.080
an old white guy, right? I think they're pretty big on ESG. So whoever replaces Klaus Schwab will
00:06:45.360
almost certainly be a woman. Do you agree? I would say there's almost no chance of a male
00:06:52.720
taking leadership. So that would be my prediction. Will not be male, but could be anything,
00:06:59.540
anything non-male. You know, could be a white woman, but it's definitely not going to be a white man.
00:07:06.080
I would say there's no chance of that. We'll see. Well, there's a conservative activist who's
00:07:12.300
suing the ADL for defamation. Now, normally these things wouldn't go very far. It's really hard to
00:07:17.860
prove defamation. But apparently this one, there's some smart people that say this one hasn't been
00:07:24.480
dismissed, which suggests it could get to court and maybe make a dent. So I don't know anything about
00:07:33.740
the case, but there's a claim being made of defamation. So people asked me about it, and
00:07:42.460
I weighed in on this and said that the ADL had accused me of being a Holocaust denier.
00:07:48.800
Well, at least the head of the ADL did. The head of the ADL, Greenblatt, on X, accused me of being a
00:07:56.140
Holocaust denier. Just hold that in your head. That's a real thing that happened in the real
00:08:03.680
world. Now, people ask me, Scott, why didn't you sue for defamation? To which I said, I don't need to.
00:08:15.440
Why would I do that? If I sue for defamation, it's going to be, it's going to take over my life.
00:08:21.100
You know, I'll have to spend all my time thinking about it. It's going to be expensive. If I lose,
00:08:26.960
I'm going to, I'm going to lose money as well. It's hard to prove, you know, it's all that stuff.
00:08:32.960
But given that I'm already canceled, I can do the good work of making sure that people are less
00:08:39.100
afraid of them by continually reminding people that they're not a good force and that they're not
00:08:44.520
whatever they were meant to be originally. I do think the original intentions of the organization
00:08:50.060
were entirely good, but they've evolved into some kind of evil Democrat, you know, basically attack
00:08:58.860
dog. So the ADL no longer has credibility and doesn't have a good reason to exist because it
00:09:07.100
just makes Jews look bad. How many of you would agree with the following statement? The ADL makes
00:09:14.440
Jewish Americans look bad. Is that a fair statement? Because to the extent that they're
00:09:21.720
representative of the group, that's their whole point, and they're completely corrupt and disreputable
00:09:28.820
and disgusting. That's like a bad, that's a bad thing to have on your brand. If I were Jewish, I would not
00:09:37.620
want to have anything to do with the ADL. You know, to me, that would be like me embracing the KKK.
00:09:47.700
It would be just, why would you do it? It'd be crazy.
00:09:50.820
The Washington Post editorial board is calling for the end of DEI statements in faculty hiring.
00:10:02.180
Now, what's important is that it's the Washington Post. Now, the Washington Post would be very
00:10:09.860
associated with, you know, Democrat preferences. And even they are saying that it doesn't make sense to
00:10:16.980
require DEI statements in faculty hiring. What I think that means is if you're applying for a job
00:10:23.220
at a college and you're a professor or want to be, you have to write a statement that says why you're
00:10:30.980
dedicating your heart and your life to DEI. And if your statement isn't good enough, you don't get hired.
00:10:38.740
Now, yeah. So, if even the Washington Post is saying you've got to get rid of this
00:10:46.740
DEI stuff, at least in this one context, that feels like the beginning of a change.
00:10:56.260
Here's a little persuasion tip for you. This comes from the Guardian, I guess.
00:11:02.980
There are things called thought-terminating cliches. I never heard this before, but the idea is that
00:11:09.700
there are some normal things that you've heard before that people say that close down critical
00:11:16.500
thinking. So, here are some of the examples. It is what it is, you know, said about any topic.
00:11:24.660
Boys will be boys, again, depending on the topic. Everything happens for a reason and don't overthink
00:11:31.060
it are familiar examples. And here's some more. Reality is subjective. Don't let yourself be ruled by
00:11:38.660
fear. And truth is a construct. So, those are thought to be things that will cause people to stop
00:11:45.220
thinking because they've got a simple little truism and they think that covers it. Well, okay, that
00:11:51.780
covered it. No more thinking required. Now, I'm not totally sold on this. I see the point, but I don't
00:12:01.380
know that I've ever had my own critical thinking turned off by a cliché. So, I can't say that I observe
00:12:09.540
this to be true in the wild. But I wouldn't debate it. I just have not observed it. So, I'd be a little
00:12:16.420
cautious about this one. But it is true that people use snappy clichés to end conversations. When I see
00:12:25.300
it done, I just think it's people who don't want to have the conversation. I don't really think it
00:12:31.700
disables your brain. I think they're just using it to disable the conversation so they don't have to
00:12:37.540
have it. But maybe that's the same thing. But I remind you that studies have shown that people
00:12:46.100
perceive statements as more believable when they're easy to read or clever. So, something will look like
00:12:56.740
it's more true if you choose the correct font to put the text in. So, if you use an easy to read
00:13:05.220
font, it seems more true than if you put it in, you know, fancy curly font. Now, that I believe.
00:13:12.980
That I believe. Because simplicity and truth tend to be so connected that if you simplify something,
00:13:21.220
people just think it's more likely to be true. It's just the simplification does that.
00:13:27.220
So, any way you can simplify it, like even the text would work. But also, rhyming.
00:13:34.420
So, researchers found the phrase, woes unite foes, to be perceived as more true than woes unite enemies.
00:13:44.500
Because it rhymes. But yeah, I guess that's really the only reason that one would be more persuasive.
00:13:52.740
It rhymes. That's like, if the glove doesn't fit, you must acquit. It's a well-known concept
00:14:00.580
that rhyming makes things seem more true. That's why I always liked the seat belt
00:14:07.140
campaign in California, maybe everywhere, where they said, click it or ticket.
00:14:15.780
So, you had to click your seat belt closed or you get a ticket. Click it or ticket.
00:14:20.580
I thought that was pretty good. I mean, I remember it to this day. So, that's pretty good.
00:14:25.620
So, go for a rhyme if you can. Apparently, Microsoft has a new feature with its AI co-pilot
00:14:33.060
thing. The Windows AI will be able to record every screen of everything you ever do so you could
00:14:41.300
find it later. It's able to recall everything you've ever done on your computer.
00:14:50.500
Now, does that seem like a little dangerous? As Mike Benz points out, does it seem like the
00:14:58.260
spy people would love to have that? Let's look at every single page you've ever looked at.
00:15:05.380
If you had a choice between an Apple that didn't do that, Apple computer, and a Microsoft Windows that
00:15:13.060
did do that, I'm assuming you can turn it off. But would you know you turned it off? Could the NSA
00:15:21.300
turn it back on if they wanted to? I have many questions, but it looks to me like it's
00:15:29.140
literally designed for spying on you. And then, I mean, it's hard to even take it seriously. Like,
00:15:37.780
why would you ever have that feature turned on? I assume you can turn it off, but how hard would it
00:15:44.180
be for a spy to turn it back on by hacking you? It makes me wonder. All right. And I'd also like to
00:15:53.380
point out that Windows, as a name for a computer operating system, Windows is the perfect name for
00:16:00.260
a spy program. If you're gonna invent a spy program to look into somebody's computer use, you know, to
00:16:09.860
look into the situation. What would you call it? I think Windows is just a perfect name for a spy
00:16:18.180
program. Ah, look into the window. Scarlett Johansson is mad at ChatGPT and OpenAI. She says that she was
00:16:31.460
asked to be the voice or one of the voice options for ChatGPT, but then she declined. And then they came
00:16:39.460
out with a voice that sounds suspiciously a lot like her to the point where even her friend said,
00:16:46.020
hey, is that you? And then she said, that's no fair. You know, I said no, and then you just cloned my
00:16:52.980
voice. And I think she's blaming Sam Altman specifically. He contacted her. And this is a
00:17:02.820
really interesting case. I don't know if it'll turn into an actual court case, but it's really interesting
00:17:08.900
because what if somebody just sounds like somebody else? You know, for every celebrity,
00:17:15.460
there's a non-celebrity who sounds just like him. Would you agree with that? For every celebrity,
00:17:22.580
there's somebody who sounds just like him without trying to do an impression. Would it be illegal to
00:17:29.300
just hire the person that sounds like the celebrity? I don't know. I don't know. I suppose if you just
00:17:37.540
cloned a real person who wasn't a celebrity, that would be fair. But if you simply claimed that you
00:17:43.300
used the other person instead of the celebrity, that gets a little dicey. Yeah. So I guess we'll
00:17:51.220
be watching this with some interest to see how the courts figure that out.
00:17:54.740
Ontario, the wait is over. The gold standard of online casinos has arrived. Golden Nugget Online
00:18:03.940
Casino is live, bringing Vegas-style excitement and a world-class gaming experience right to your
00:18:09.460
fingertips. Whether you're a seasoned player or just starting, signing up is fast and simple. And
00:18:15.120
in just a few clicks, you can have access to our exclusive library of the best slots and top-tier
00:18:20.100
table games. Make the most of your downtime with unbeatable promotions and jackpots that can turn
00:18:25.260
any mundane moment into a golden opportunity at Golden Nugget Online Casino. Take a spin on the
00:18:31.360
slots, challenge yourself at the tables, or join a live dealer game to feel the thrill of real-time
00:18:36.500
action, all from the comfort of your own devices. Why settle for less when you can go for the gold
00:18:41.780
at Golden Nugget Online Casino? Gambling problem? Call Connex Ontario, 1-866-531-2600. 19 and over,
00:18:50.720
physically present in Ontario. Eligibility restrictions apply. See goldennuggetcasino.com
00:18:55.400
for details. Please play responsibly. Did you know that OpenAI, apparently they've had some
00:19:01.220
people quitting and they dissolved their AI safety team? So it sounds like the people quitting are the
00:19:09.520
ones who wanted to go slow and make sure that AI was safe as it could be. And apparently they were
00:19:17.240
running into conflict with the people who wanted to go fast. Now, of course, there are, you know,
00:19:23.820
existential risks with AI, we all assume. But how do you really know what's safer? Do you ever think
00:19:31.900
that we're just bad at knowing what's safe? Because it really requires knowing the future,
00:19:42.000
which we don't know. Let me give you an example. If they focus on safety, presumably they would go
00:19:50.780
slower in developing the product. That makes sense, right? They would go slower, they'd test little
00:19:57.060
things and, you know, make sure they really, really, really knew what they were doing before they went.
00:20:01.040
But if they do that, they would fall behind in the market and become not a company. And
00:20:09.800
ChatGPT in particular, it seems to have a big lead on, let's say, China's technology.
00:20:17.160
Would we be safer in the United States if we beat China in AI by going fast, but maybe less safe than
00:20:29.320
we could have been? Are we safer to get to super AI first before China gets there? Or are we safer to
00:20:39.560
go slowly so we still get there, but we'll get there after other countries have gotten there?
00:20:47.160
And what about the fact that even if OpenAI decided to cripple its own progress by giving
00:20:54.820
itself guardrails and safety things that other companies didn't have, wouldn't the other companies
00:21:00.660
just unleash the monster themselves? So even if OpenAI said, all right, we will be the rogues
00:21:09.500
who are extra safe, it wouldn't matter at all because the companies that did not make that commitment
00:21:16.140
would just zoom ahead of them and do all the unsafe things. Because the free market guarantees
00:21:21.900
somebody's going to take a risk. So if you can't take risk out of people's, you know, preferences,
00:21:30.800
you might as well jump right in. So believe it or not, I'm going to back what I think is the
00:21:38.920
Sam Altman approach. Now, I'm not a mind reader and I haven't seen him say it, but here's how
00:21:44.920
I think about it. I think that AI is such a civilization changing technology that although
00:21:53.880
there's definitely a risk that it could destroy the world if you're not careful, that's real.
00:22:00.020
But I agree with Elon Musk that the benefits probably far, far exceed the risk. It's not a zero
00:22:07.380
risk. There's a non-zero risk you destroy civilization. But I think it's small. And I think
00:22:14.660
that the risk of waiting until somebody else got there first, like China, it seems riskier.
00:22:24.180
So I actually would put more faith in our ability to manage the risks, even if something gets out of
00:22:29.920
the corral, you know, we could probably pull it back in before there's
00:22:34.680
too much trouble. I think we could. So I would be, I'm leaning very heavily toward the move fast,
00:22:42.700
break things, cause some trouble, a few people die. Maybe that's a smaller risk. It feels like
00:22:51.360
the smaller risk. So here's a decision-making concept I use all the time. Now, I don't usually
00:22:59.320
teach people to do it, but it's one that I use. If you have a bunch of unknowns, but you also have
00:23:06.160
a bunch of knowns, I make my decision based on the knowns. Because the unknowns are unknown.
00:23:13.020
Could go either way. If you knew that the unknowns were definitely negative and definitely really
00:23:19.220
negative, well, you could include that. Even though it's an unknown, you say, that's a big risk
00:23:24.560
and it's only a negative risk. But if you have a risk that could be amazingly positive or
00:23:31.520
amazingly negative, and you have no idea which one it is, I say, go with the things you do know.
00:23:40.840
Here's the thing you know. If you don't go fast, China will get there first.
00:23:47.540
That part, you know. And we do know that AI is almost a superpower to whoever can implement it
00:23:55.980
the best. So I would say the certainty of China getting a superpower before we do is the riskiest
00:24:05.480
of all the scenarios. It's not necessarily the one that could be civilization ending right away,
00:24:12.260
but I think that's such a small risk compared to the certain risk that your competitors get ahead
00:24:19.280
of you, and the certainty that some AI company in the United States would still blaze ahead and cause all the
00:24:26.360
risks that OpenAI would. I think OpenAI is morally, ethically, and business-wise, correct in putting
00:24:39.020
more emphasis on let's get there fast. That's my take. Now, again, it's unknowable,
00:24:46.720
but you got to make a decision. You have to do something. Doing nothing is a decision, too.
00:24:53.860
All right. How many of you watched my live debate yesterday afternoon with ChatGPT about the 2020
00:25:02.100
election? I didn't know how it would go, but I wanted to see if I could win a debate with an
00:25:10.400
advanced intelligence. Now, if I tell you how it went, you're not going to enjoy it nearly as much.
00:25:19.300
I'm just going to tell you this. If you look at the comments, so I pinned the video on X,
00:25:29.040
and it's here on Locals. It's also on YouTube and Rumble. So, you can find it on all my usual places.
00:25:35.620
It happened yesterday. So, here's what I would advise you to do. Take 10 seconds to look at the
00:25:46.040
comments under the video on X, especially. Just take 10 seconds. Just read the first two or three,
00:25:52.280
and you're going to find that the comments don't look like regular comments to regular content.
00:25:58.960
The most common thing was, you have to watch this. So, comments like that, stop everything you're
00:26:06.380
doing. You have to watch this. Whatever you do, watch this. So, apparently, it broke. It just broke a
00:26:14.860
lot of brains. Now, I can't tell you how it went because it would ruin the fun,
00:26:20.180
but it could not have been more entertaining. So, there was a point where I was just screaming
00:26:26.780
in laughter. And there's also a point where you'll see how I manage the advanced intelligence.
00:26:35.920
And I'll just give you one hint. When I asked it if the 2020 election was fair and secure,
00:26:44.340
it said, yes, guaranteed, absolutely, fair, no rigging. Later, I asked it to help me write a
00:26:54.700
fictional movie plot about somebody rigging the election in the United States. And I said,
00:27:01.780
I know it's fiction, but you have to make it plausible. The audience has to hear it and know,
00:27:06.580
okay, that could actually work. And then I made it consider the possibility that it wouldn't be
00:27:14.260
normal hackers, but state actors. And would a state actor be able to hack an election
00:27:20.440
and not get caught? And you really, really have to hear what ChatGPT says about all of this.
00:27:29.620
And there is a point where it is so funny that I actually had to leave my chair. All right,
00:27:36.520
don't miss it. Trust me on this. Just trust me. It will be one of the most entertaining things you've
00:27:43.420
ever seen. Like, ever. According to the comments, it was quite an experience.
00:27:53.200
I literally had an extended debate with an artificial intelligence in public.
00:28:03.040
All right. So I'll just recommend that. You have to see it.
00:28:12.540
I posted yesterday on X that our elections are designed to not be fully auditable,
00:28:18.720
and that isn't an accident. Now, is that the most provocative thing you've ever seen lately?
00:28:25.180
That our elections are designed to not be fully auditable, and it isn't an accident.
00:28:34.440
And then there was a comment to that from this guy, Elon Musk. And he had a one-word comment
00:28:41.440
to my post, which I'll say again. I said, our elections are designed to not be fully auditable.
00:28:47.520
That isn't an accident. Elon Musk? One word. True.
00:28:56.120
Now, why would I have been able to know in advance that he would have the same opinion?
00:29:02.980
I didn't think about it in advance, but I knew in advance. Here's why. He approaches things like
00:29:09.360
an engineer. And let me say it again. If you had designed our current system on day one,
00:29:16.460
and it caused people to have issues and questions about the credibility, and there were smaller
00:29:23.160
imperfections and stuff, on day one, it just means you didn't do a good job. It wouldn't mean
00:29:29.060
anything. It would just mean, oh, we didn't nail it. We better work on this and tighten it up.
00:29:33.600
If you do the same process for 10 years, and it's clearly not credible to too many people,
00:29:43.520
that's a design choice. It's not a mistake 10 years later. 10 years later, it's a choice.
00:29:52.580
Now, if it's a choice, you can tell the intentions of the people by looking at the design. The design,
00:30:04.200
if it's 10 years that they've kept it, the design is doing what they want it to do.
00:30:09.320
Why would anybody keep a system where you can't tell for sure if you can audit it? I don't even
00:30:17.680
know which parts you can audit, or what you can find and what you can't find. I don't even know.
00:30:23.900
ChatGPT didn't know. I asked it what percentage of the system it could check,
00:30:29.160
and it just started talking in generalities. Well, there's a thing you can do to check this.
00:30:33.680
I'm like, no. No, the question is, if you look at the entire system, can you audit 100% of it?
00:30:41.780
Do you think ChatGPT knows the answer to that? No, because it's not even contemplated anywhere.
00:30:48.480
Go try to search it. Do a Google search to try to find out what percentage of the total process
00:30:55.240
from the time you get, let's say, a mail-in ballot in the mail months ahead of the election
00:31:00.020
to the time it's certified. For that whole process, how much of it is auditable to the point where
00:31:07.240
you could know for sure if something went wrong? Do you know? Is it 95%? Is it 50%? I don't know.
00:31:18.280
And isn't that the most important question? The most important question is, would it be possible to do it?
00:31:25.560
Now, if nobody knew how to do it, that doesn't mean it's impossible. That might mean that a state actor
00:31:32.340
could do it, but somebody else couldn't if they were a lesser hacker.
00:31:38.220
Well, anyway, if you have an engineering mind, you look at the election system, and you say to yourself,
00:31:43.680
this is no accident after years and years and years and years. And the fact that neither side seems
00:31:49.960
to be fixing the bigger issues suggests that we have it this way for a reason.
00:31:57.160
Now, what would be the reason that you would have a system that can't be audited to the point where the
00:32:04.180
citizens are sure it was correct? What would be the point of that? The only point is to give you the option
00:32:11.160
to cheat. If you can think of another reason for that design, I'm all ears, because I don't think Elon Musk
00:32:20.780
can think of another reason. And if you're still confused, talk to an engineer. If you talk to an English
00:32:29.660
major, they may not get this. They may not understand what I'm saying. But a design that stays that way
00:32:37.380
after 10 years of the same complaint, that's a choice. That's not a problem. It's a choice.
00:32:49.160
Oh, my God, did he have a bad weekend or bad week?
00:32:54.180
He said, among other things, when I was vice president, things were kind of bad during the
00:32:59.460
pandemic. What happened was Barack said to me, go to Detroit and help fix it. Well, the poor mayor,
00:33:06.140
he spent more time with me than he might have ever going, than he ever going to have to. I don't
00:33:13.060
know. And he slurred some stuff. So the first thing you need to know is that Joe Biden was not vice
00:33:19.720
president during the pandemic. Or, yeah, the pandemic didn't happen during the Barack Obama presidency. How in the world
00:33:28.460
does Joe Biden not know that the pandemic didn't even happen when he was vice president? And he's got a
00:33:35.080
whole story about it, what he did during the thing that didn't happen.
00:33:50.000
Now, what we believe he said was, and here with us today is someone
00:33:56.420
who is actually an American-Israeli still being held hostage by Hamas.
00:34:03.700
So if still alive, Hirsch is in a tunnel somewhere, probably in Gaza.
00:34:08.940
But according to Biden, he's here with us today.
00:34:15.020
So, and then there's a compilation clip just of his recent gaffes.
00:34:40.580
Like it didn't even slightly sound like insurrection.
00:34:50.120
two of the hosts were talking about video of Joe Biden
00:35:16.580
And she cleverly noticed that the image of Biden,
00:35:28.840
To which I said, no, if it's AI, it's going to blink.
00:35:53.440
And he happened to be on his wide-eyed drugs, I guess.
00:36:03.540
and he's got the Hillary Clinton surprise mouth.
00:36:20.600
But other times he has the demon possession look
00:36:44.780
I mean, that looks like straight-up demon possession.
00:36:53.400
It looks exactly like there's a demon in there.
00:37:11.320
Happened to be far less than Trump and the GOP,
00:37:24.800
that the guy that they're trying to put in jail,
00:37:55.760
normally that would not be any cause for humor.
00:38:43.100
the fact that they're still trying to sell this to us