Episode 2995 CWSA 10/21/25
Episode Stats
Length
1 hour and 20 minutes
Words per Minute
145.47
Summary
In this episode of Coffee with Scott Adams, Scott talks about his book, "Reframe Your Brain," and other things, including the latest fake news story about Trump considering commuting Diddy's prison sentence, and Elon Musk's plan to colonize Mars.
Transcript
00:00:00.000
How are you all doing? Everybody good? Get in here. We've got a podcast to do for your
00:00:10.360
entertainment. As soon as I'm ready. Hey, there it is. What is today? 21st?
00:00:30.000
Good morning, everybody, and welcome to the highlight of human civilization. It's called
00:00:44.060
Coffee with Scott Adams. You've never had a better time, but if you'd like to take a
00:00:48.980
chance on elevating your experience up to levels that nobody can even understand with
00:00:54.140
their tiny, shiny human brains, all you need is a cup or mug or a glass, a tankard, chalice
00:00:54.140
or stein, a canteen, jug or flask, a vessel of any kind. Fill it with your favorite liquid.
00:01:04.540
I like coffee. Join me now for the unparalleled pleasure, the dopamine hit of the day, the thing
00:01:09.740
that makes everything better, including your oxytocin. And it happens now. Go.
00:01:15.940
Thank you, Paul. Paul doesn't like me to thank him too often, but I like to do it.
00:01:29.380
All right. How would you like to see a reframe to start your day from my incredibly successful book,
00:01:36.100
Reframe Your Brain? So this is our new system. Every morning you get a new one. It will change your life.
00:01:46.860
And remember, not every reframe works for every person, so you're looking for your magical one.
00:01:51.820
Let's see. How about this? Oh, you already know systems rather than goals, so I can skip that one.
00:02:05.440
And you already know about talent stacks, so I'm going to skip that one for now. I might double back to that one.
00:02:11.860
All right. Here's another one. How often do you find yourself with a problem that just sort of popped up
00:02:21.220
and you're like, oh, God, another problem. I don't need a problem. So here's a reframe for when you get
00:02:30.540
that bad mood because you got a new problem. Instead of saying, "Another problem, why me?" say, "Ooh, new puzzle to solve."
00:02:41.860
Watch what happens when you redefine your problem as a puzzle to see if you can solve it.
00:02:50.400
The thing with the reframes is when you first hear them, they don't really sound that powerful,
00:02:55.300
do they? If you just hear them the first time. But watch what happens. So this will be my challenge to you.
00:03:03.760
Next time you have a problem, just a life problem, just say, ooh, a puzzle. I got a puzzle
00:03:11.860
to solve. And then see if you can solve it. It will change everything. This one's very powerful.
00:03:18.340
I use this one a lot. And it definitely, definitely helps.
00:03:26.180
All right, here's some fake news. I guess TMZ is inaccurately reporting
00:03:32.020
that Trump was considering commuting Diddy's prison sentence. I think probably a lot depends on the word
00:03:40.560
considering. Because people used to ask me if I was considering things that I was definitely not
00:03:47.740
going to ever do, such as, well, I won't even say it. But I always say, well, yeah, I considered it.
00:03:56.160
I considered it. And then I ruled it out. So would it be true that Trump may have considered it?
00:04:03.360
Maybe. Probably. Well, almost definitely. He almost definitely considered it.
00:04:10.240
But it doesn't look like he's likely to do it, which is a big difference.
00:04:17.200
Did you know that the X platform now has a chat function that's sort of a super-powered DM?
00:04:26.080
I still don't know all the differences between the DMing and the chatting on X,
00:04:30.160
because the chatting is new. But apparently the chatting has what DM doesn't have,
00:04:34.480
which is strong encryption. And Elon Musk says that even he could not read your messages,
00:04:42.960
even if he wanted to. Do you ever believe that? If the person who owns the platform says,
00:04:51.820
even I can't read the message, do you believe that? Do you believe that the CIA couldn't force him
00:04:58.660
to open a back door? Could they? You know, he's a unique character. He's not like anyone else.
00:05:09.060
So maybe he would risk pissing off the CIA if they said, no, you really, really have to open this up.
00:05:18.260
We will make your life a living hell. You will lose all your government contracts.
00:05:23.540
We'll come after you in ways you didn't know anybody could come after anybody.
00:05:26.860
You're going to have to give us a back door. Would he have the stones and the character to say no to that?
00:05:35.680
Even if it would bring down his entire empire and all the good he would be able to do?
00:05:39.860
If it cost him the chance to ever colonize Mars, would he open up a back door to the chat?
00:05:50.020
He suggests that he wouldn't. But we live in a world where people are humans.
00:05:56.240
They're not robots. If you put enough pressure on a human, you can get him to do anything.
00:06:04.000
There's no real limit to what you can get a person to do if you have enough pressure,
00:06:08.340
especially if they have, well, I don't want to say it, but there are lots of points of pressure,
00:06:13.100
obviously, for somebody like him. Anyway, they got chat.
00:06:18.460
So one of the pioneers, one of the most famous architects of the AI world we're in, is this guy, Andrej Karpathy.
00:06:30.640
So he was one of the original OpenAI guys, and he's worked for Tesla,
00:06:38.780
and he's the one who came up with the term vibe coding, which you hear a lot in AI.
00:06:44.640
So I bring this guy up because, of the people who understand the current and future direction of AI,
00:06:54.380
he would be right at the top. Can we agree on that?
00:07:01.720
That if you were going to try to guess what AI will turn into or, you know, figure out where it's going, he'd be the guy to ask.
00:07:11.060
And the reason I bring him up is his opinion is the same as mine.
00:07:27.540
He said that today's agents, which is, you know,
00:07:33.760
if you wanted your AI to act like a little person who does things that you want it to do,
00:07:44.280
he said those agents aren't ready to work like real workers or interns.
00:07:44.280
Now, which of the other AI experts are telling you
00:07:47.740
that AI basically won't work for five to ten years?
00:08:00.260
But if they don't know how to make it stop hallucinating...
00:08:05.860
Have I not been telling you that for two years?
00:08:10.600
They don't know how to make it stop hallucinating.
00:08:15.760
They don't know how to make a little agent that doesn't turn against you.
00:08:19.880
The most basic things that you thought AI would do, they can't do yet.
00:08:25.720
Do you know what's another way to understand five to ten years away?
00:08:35.060
If it were a year away, you would already see it.
00:08:38.160
They would just say, well, it's not commercialized yet.
00:08:45.820
So, you know, the exception might be the self-driving cars
00:08:53.460
because they need to be trained mostly on video.
00:08:58.160
So it's a completely different path for the self-driving cars.
00:09:01.820
But Axios is reporting that Uber drivers now have the option,
00:09:09.980
they don't have to do it, of training their robot replacements.
00:09:15.200
So obviously, at some point, the Ubers will be self-driving like their competition.
00:09:21.780
And Uber is having their real-life drivers voluntarily train the robots to drive the way they drive
00:09:30.400
and respond the way they respond to people and stuff like that.
00:09:36.660
Imagine being in that job where you are already not happy...
00:09:46.200
I don't want to say anything bad about real work.
00:09:50.780
And I have a lot of respect for the people who do that kind of work in particular.
00:09:56.160
But they're being put in a humiliating situation
00:09:59.640
where they're essentially being told that they're being replaced.
00:10:11.360
If you didn't love your job in the first place...
00:10:20.060
And you have to train the robot to replace you.
00:10:30.640
The average price of gas is now at a four-year low.
00:10:39.940
I'll bet you care a lot more about the price of gas today.
00:10:52.120
So at least you'll be happy about the price of gas.
00:11:17.180
Did you know that the country is slowly losing tech jobs in general?
00:11:24.940
But the tech jobs in general have gone down a little bit.
00:11:30.040
So I'm not sure if this is the beginning of a gigantic trend,
00:11:33.240
but I don't think many jobs have been lost because of AI.
00:11:42.140
Oh, in two years, there'll be massive layoffs in tech?
00:11:54.040
My hypothesis is that one of the most well-informed AI people
00:12:00.400
says AI doesn't work and it won't work for five to ten years.
00:13:05.400
So what they wisely did is they took big positions
00:13:48.740
So they would basically buy their big investments,
00:14:42.900
meaning that OpenAI could never really make money.
00:15:00.180
connecting themselves to all these other companies
00:15:04.620
because the whole country would go down if they did.
00:15:34.440
I don't know that any of this is bad, by the way.
00:16:01.060
that they might actually have designed themselves