Episode 2770 CWSA 03⧸06⧸25
Episode Stats
Length
1 hour and 27 minutes
Words per Minute
144.67293
Summary
In this episode of Coffee with Scott Adams, I talk about an AI that can spot hoaxes, and a meta-analysis to see if the public is good at spotting fake news, and why I don t think it is.
Transcript
00:00:00.000
Good morning, everybody, and welcome to the highlight of human civilization.
00:00:11.020
It's called Coffee with Scott Adams, and boy, you're lucky to be here.
00:00:15.440
But if you'd like to take this experience up to levels that nobody can even understand
00:00:20.260
with their tiny, shiny human brains, all you need for that is a cupper, mugger, a glass
00:00:26.000
a tanker, chalice, a stein, a canteen, jugger, flask, a vessel of any kind, fill it with
00:00:33.480
And join me now for the unparalleled pleasure of the dopamine hit of the day, the thing that
00:00:38.680
It's called the Simultaneous Sip, and it happens now.
00:00:52.060
Well, did you know that Grok, AI, allegedly can now, according to Maria Naufal, he's
00:01:01.140
writing, doing a little write-up, according to Elon Musk, Grok can identify frauds.
00:01:09.640
Let's say you get an email from a Nigerian prince.
00:01:12.800
You could say, hmm, I wonder if that Nigerian prince really needs my money to free his fortune.
00:01:19.180
And then you could run it through Grok, and Grok would say, no, Nigerian princes are not
00:01:28.760
And it can spot phishing, phishing as in P-H-I-S-H, phishing scams, and all kinds of red flags.
00:01:44.360
So, of course, I asked it about the fine people hoax, and it got it completely correct.
00:01:49.720
And I asked myself, do you think that Grok got the fine people hoax completely correct
00:01:59.200
Do you think it just looked at the facts and decided, yep, that's a hoax?
00:02:03.740
I don't think there's any chance of that, because I don't think the large language models can do
00:02:09.160
I think the large language models would just look at the predominant opinion.
00:02:14.360
And unfortunately, it would still look like there were more people saying it was real,
00:02:21.520
So I think that Grok is being, let's say, the finger is on the scale, just like all the AIs,
00:02:30.560
except that the finger on the scale of Grok is trying to be accurate.
00:02:39.480
I think some of the other AIs, if somebody puts their finger on it, is to keep the propaganda
00:02:45.080
But in this case, the fine people hoax is definitely a hoax.
00:02:50.160
And so if there's a little bit of programming in there to make sure that it gets that one
00:02:54.660
right, that wouldn't be the worst thing in the world.
00:02:57.440
But it makes me wonder if it can spot hoaxes in general, because I've taught you the two
00:03:11.820
I've given you all these rules for spotting hoaxes.
00:03:19.760
But I'll bet it could do someday, and I'll bet it could be trained to do it.
00:03:23.200
If the only thing you did was say, all right, there's a few people who are good at spotting
00:03:31.460
So just look at their feed and try to imitate them.
00:03:41.240
For example, in this next story, there was a meta-analysis to see if the public is good
00:03:53.360
Do you think the public's good at detecting fake news?
00:03:56.840
Well, somebody did a meta-analysis to find out if the public is good at spotting fake news.
00:04:12.220
They use fake analysis to determine whether people can spot fake news.
00:04:19.900
If the people in this study didn't know that they were part of a meta-analysis, well, they
00:04:27.860
No, actually, if it's meta-analysis, it means they're looking at studies that other people
00:04:38.080
And I've given you the long explanation of why.
00:04:42.800
But as soon as you see meta-analysis, just discount it.
00:04:48.000
I mean, it could be true because a lot of questions are yes, no.
00:04:54.740
So meta-analysis can be correct, but so can a coin flip.
00:05:00.740
It's basically a coin flip because a human being decides what's in the analysis and what's
00:05:08.840
And if it's based on the assumption of the person doing it, as opposed to just the data,
00:05:17.620
So for example, if there's one big study that overwhelms all the other studies, a meta-analysis
00:05:23.460
looks at all the studies and says, all right, one study might be unreliable, but if I look
00:05:29.300
at the average of the studies, I'll get something useful.
00:05:33.100
No, you won't, because let's say one study is big, and you're looking at them and you
00:05:38.100
go, oh, well, we'll throw that big study in there because it's so big, but then all the
00:05:44.860
little ones don't even matter because the big one would overwhelm it.
00:05:47.860
Well, or you say, hmm, this one study says that everything's wrong, and these other studies
00:05:57.560
Then you use your judgment and say, huh, I think this one study that disagrees with the
00:06:02.200
other ones was poorly done, in my opinion, because of reasons.
00:06:08.480
And the other ones might be poorly done, but maybe you don't care or they agree with you.
00:06:12.700
So meta-analysis, as soon as you hear those words, run away.
00:06:22.780
Could AI do what I just did and say, oh, yeah, meta-analysis, that's your trigger for not
00:06:38.440
Well, according to a new study, so researchers found out that from, this is from a source
00:06:52.180
I've never heard of it, so I'm trying to give credit, but I don't know who that is.
00:06:56.840
It turns out, according to the study, humans are more likely to trust an AI voice that sounds
00:07:05.660
So if you listen to an AI voice that sounded sort of like you, you would trust it more
00:07:11.600
than if it sounded sort of like some stranger you never met.
00:07:22.820
It's one of the most well-known, well-documented, well-understood phenomena in the entire world,
00:07:28.480
and it's called pacing, pacing, plus one is a scientific journal.
00:07:40.040
So pacing is when you match the person you're trying to persuade, but the matching could come
00:07:48.980
One form would be if you do the same body language.
00:07:53.320
You know, this one is good in a meeting with your boss.
00:07:56.080
Let's say your boss is leaning on the table like I am with both arms.
00:08:10.080
Let's say you can detect the breathing pattern of your boss.
00:08:17.440
Just breathe, inhale when they inhale, exhale when they exhale, and do as much as you can.
00:08:21.580
Let's say your boss has a certain way of talking.
00:08:25.920
My best example is some people like to use a lot of war analogies, like, oh, I jumped on
00:08:32.200
that hand grenade, or we'll take that hill tomorrow, or we'll die on that hill.
00:08:38.360
And if you hear that, you just start using some of your own war analogies.
00:08:42.960
It's like, well, it looks like we're going to be battling them today.
00:08:48.700
So although a human can't easily reproduce the voice sound of another human, you didn't
00:08:56.420
I could have told you with complete certainty, as could 100% of hypnotists, every hypnotist
00:09:04.440
in the world would have said, oh, yeah, obviously, if you can match somebody's voice, that would
00:09:13.220
Meanwhile, OpenAI, according to TechCrunch, Kyle Wiggers is writing, that they're planning
00:09:27.940
And the charges could be up to $20,000 per month to have an AI-driven agent.
00:09:35.200
Now, an agent would be something like a little humanoid entity that might help you with programming.
00:09:43.220
Or it might be a little humanoid sort of entity that would help you with sales leads or stuff
00:09:54.280
So if you had the best software developer, it might be $10,000 a month.
00:09:58.560
If you had some lower-level function that a human could do, let's say that the AI agent costs
00:10:09.760
are probably similar to what a human would cost, but lower costs.
00:10:22.360
So I don't doubt that there will be a product and there will be a release, but I'm quite
00:10:28.100
certain with my total lack of knowledge of AI, I still feel quite certain that the large
00:10:35.360
language models will never be able to be a reliable agent because of the hallucinations
00:10:42.980
and the lack of knowledge about current things and the complete inability to even look at
00:10:53.280
So I think what's going to happen is that the AI portion will just be the user interface
00:11:00.300
and that whatever is the agent is going to be a whole set of non-AI programs, which could
00:11:09.040
And so if you put them together, it might be a good agent, but it won't be the AI that's
00:11:16.260
It'll just be user interface to a whole bunch of specialized programming for each of these
00:11:23.880
I think the LLM is just a user interface and it's going to be tough to get past that.
00:11:29.260
But as long as the other programs is solid, the agents might be good.
00:11:36.900
It's just that they'd have to develop a whole new technology, not use AI to do the whole
00:11:44.440
I would love to be wrong about that, by the way.
00:11:48.060
Well, a judge has denied Elon Musk's request to block open AI in their conversion from a
00:12:12.820
And the reason for the denial is that it's not obvious that if it went to court, it would
00:12:19.940
So apparently, I don't know enough about the law to give you the fine details, but the
00:12:26.060
basic idea is that the judge would have probably blocked it if, looking at the facts, the judge
00:12:35.000
said, all right, if this goes to trial, it's going to get blocked by the court, by the jury
00:12:43.240
So I might as well block it now, because it's almost certainly going to go in that direction,
00:12:47.820
and there's no reason to let them go too far down that road if you know they're going to,
00:12:51.860
let's say, 90% of the chance they're going to get blocked.
00:13:05.780
It means that the judge can't determine that it will definitely go in one way or the other.
00:13:11.300
So there's at least a 50% chance, according to one judge, that Elon Musk will be able to
00:13:25.460
Now, it feels to me that OpenAI has way too much backing and money and geniuses working
00:13:32.960
for it that it could get in this situation, which is an existential risk to the entire company.
00:13:44.780
So, you know, of course, they're up against Elon Musk with all of his resources and all of his brains
00:13:57.860
You know, if he gave a gazillion dollars as part of the founding thing and the part of the agreement
00:14:05.760
for taking that money was it would remain non-profit, which was really the central thing he wanted.
00:14:18.780
He said, if we're going to build an AI, it's got to be open source and, you know, non-profit.
00:14:29.680
And I don't know what argument they would use to make it not fraud.
00:14:34.460
If you take 50 million or whatever it was from somebody with a given set of assumptions
00:14:40.620
that are agreed, and the most important one, the most important one is to stay non-profit,
00:14:48.600
and then you violate that, how could you possibly win that case in court?
00:14:54.520
Now, obviously, the judge is smarter than I am, so the judge thinks it's a coin toss,
00:15:02.280
To me, it doesn't look like a coin toss, but we'll find out what happens.
00:15:06.360
Anyway, this was reported by Julia Shapiro in The Hill.
00:15:30.340
Johns Hopkins University has developed a bionic hand that knows what it's touching.
00:15:38.180
So, just by touching, it can tell what it has in his hand.
00:15:49.120
If the robot hand knows what it's touching, can it also make eye contact at the same time?
00:16:05.000
But I think they found the first killer application, if you know what I mean.
00:16:09.840
Let's just say, if this thing really is that good with its hands,
00:16:14.560
a lot of single guys are going to get a domestic robot to help with the dishes
00:16:28.940
The thing I love about this story is that 100% of the men listening to it were instantly,
00:16:36.200
you instantly got to the joke before I finished the sentence.
00:16:49.140
Trump is expected to sign an executive order to eliminate the Department of Education,
00:17:01.500
How many years have I been alive listening to the,
00:17:04.560
oh, Republicans are going to get rid of the Department of Education,
00:17:19.280
Now, Democrats, of course, who don't understand how anything works,
00:17:26.580
I guess nobody's going to get an education down,
00:17:39.940
But I'm pretty sure this is nothing but reorganizing how we do things
00:17:50.360
because I would think some of that budget ends up at the States.
00:17:53.900
So, Ian Carroll, who some of you know as a very colorful and interesting Internet personality,
00:18:04.860
he's usually working on the conspiracy theories,
00:18:08.020
and whether they're true conspiracy theories or not, we don't know.
00:18:12.480
But he's kind of into the interesting part of the news.
00:18:17.440
And he's talked quite a bit about the Epstein files.
00:18:19.860
And he thinks the Epstein files are never coming out.
00:18:26.200
Now, a hybrid of that, which I've said, is there will be lots of files,
00:18:31.000
and they might come out, but not the good stuff.
00:18:34.740
I think that no matter how many files we get to see,
00:18:40.400
And what Ian Carroll believes is that the files would be so deeply destructive to Israel
00:18:50.540
that there's just no way we're going to release them,
00:18:56.500
Israel has too much of a connection to the United States, let's say.
00:19:01.840
You can say they have too much control over the government.
00:19:08.860
I'm just going to say we have too tight a connection,
00:19:13.740
The only part I can be sure of is that we have a tight connection.
00:19:19.200
So if, for example, Israel said, whisper, whisper,
00:19:24.900
you know, this would be the worst thing in the world for Israel,
00:19:28.440
and therefore it would destroy our relationship,
00:19:34.520
I don't know if that's going to happen, by the way.
00:19:36.300
So I'm not on the side that thinks he was only working for Israel.
00:19:43.620
If you're Epstein, and you're doing what he's been accused of doing,
00:19:48.540
do you think you're going to be faithful to one master?
00:19:54.540
What would cause him to be faithful to one master?
00:20:01.560
Because remember, he's unfaithful in every other way.
00:20:05.200
He's, you know, he'd be the ultimate liar, con man, sexual abuser,
00:20:13.880
So do you think that if his bosses were in Israel,
00:20:19.040
that he wouldn't also do some work for the United States,
00:20:27.300
I think he would sell it to whoever he could sell it to,
00:20:30.360
and he would gain influence by doing a favor for them.
00:20:38.900
And, you know, if it turned out that it was Israel,
00:20:46.780
there's something closer to freelancing happening,
00:20:51.880
So I feel like there's lots of reasons not to release the good stuff,
00:21:01.560
Well, also, we're being told the JFK files are coming any minute.
00:21:08.920
And so that's where all the JFK files will be put,
00:21:11.760
and we can look at them and find out all the nothing.
00:21:15.840
I expect to find out absolutely nothing from the JFK files.
00:21:21.240
You don't think that after, what is it, 50 years,
00:21:25.260
you don't think that the good stuff's been removed or scrubbed
00:21:31.120
Don't you think that the only things that are in the file
00:21:33.360
are the things that the bad guys wanted in the file?
00:21:42.140
seemed to have had complete control over everything at that point.
00:21:57.480
You know, if they did the Warren Commission and that was fake,
00:22:02.200
why would they put real information in the files anywhere?
00:22:26.120
that landed at the same time as the assassination.
00:22:34.380
And we'll just go down some stupid rabbit hole.
00:22:39.820
There'll be like one name of somebody who's already deceased.
00:22:45.500
He was friends with that serial killer who's deceased?
00:22:52.180
and we'll just go down some stupid rabbit hole.
00:22:55.100
So there might be something that grabs our attention.
00:22:59.040
But I don't think it's going to clear anything up.
00:23:29.900
The fake that we were spending actually $8 million
00:23:36.460
But then they had to take it off their fact-check
00:24:48.360
we're going to be honest with the American people