Is AI Overhyped? Is AI a Bubble? (New Models Don't "Feel" That Much Better, Why?)
Episode Stats
Length
1 hour and 3 minutes
Words per Minute
178.3
Summary
In this episode, we discuss whether AI is a bubble, a bust, or if AI is over. We talk about the pros and cons of AI, the current state of the industry, and the impact of AI on society.
Transcript
00:00:00.000
Hello, Simone. I'm excited to be talking to you today. Today, we are going to be talking about
00:00:03.720
whether AI is hype, whether AI is plateauing, whether AI is over. And by this, what I mean,
00:00:11.260
because I had the head of our development team, Bruno, who comments a lot on the Discord in the
00:00:17.380
comments here, so fans of the show will know him. He sent me an email that we're going to go over
00:00:21.500
as part of this, sort of being like, okay, so here's some evidence that AI doesn't seem to be
00:00:26.640
making the systems level changes in society that you had predicted it would make in the past and
00:00:33.240
that many other people are predicting it will make. And when I, and we're seeing other people say this,
00:00:38.140
when I go out and I interact with AI today, I really struggle to see how having this thing I can chat
00:00:44.720
with is that useful. It may be fun as like a chat bot or something like that, but I don't see its
00:00:50.420
wider utility yet. Now, we'll be going into the arguments around this, because I think that there
00:00:56.780
are some strong arguments, like the AI industry is making almost no money right now. You know,
00:01:01.060
this industry is making almost no money in contrast to the investment that's going into
00:01:05.780
it, and in contrast to how much we and other people talk about it as mattering. And then you've
00:01:10.720
got to think about all of this in the context of, yeah, but like 80,000, oh, sorry, around 90,000
00:01:16.580
people just in the U.S. tech sector had their jobs cut due to AI this year, you know? Yeah. So come
00:01:21.060
on, does that not matter to them? But, but so what we're going to be seeing here is, I think, the
00:01:28.940
way that the people are looking at AI and expecting AI to transform the economy is different from the
00:01:35.420
way it actually is. They're looking at how AI is useful to them instead of how AI will replace
00:01:42.860
them. I'd also note here, to the question, because Sam Altman, you know, literally
00:01:48.640
Sam Altman, the guy who runs one of the largest AI companies, has said AI is a bubble
00:01:54.300
right now. Right. And so people will come to me and they'll be like, well, you know, even he's saying
00:01:58.180
it's a bubble. And I'm like, I would say it's a bubble right now. It is a bubble. It's obviously
00:02:02.820
a bubble right now, but the fact that a thing is a bubble doesn't mean it's not going to transform
00:02:06.820
society. So if you go to the dot-com boom, right? The dot-com boom was a bubble, right? But the
00:02:14.980
internet still transformed society. And the big companies of the dot-com boom, you know, like
00:02:21.240
they were formed in the middle of it, like Amazon and Google and stuff like that. Like if you made the
00:02:26.020
right bets on those companies... if anything, if you wanted to make the best bets
00:02:32.200
possible, wait for the AI bust and then invest in whatever survives, if there is a traditional
00:02:38.320
bust. You know, keep in mind, what I mean by a boom here is that a lot of people are
00:02:41.420
investing in AI companies without understanding, in the same way as in the early dot-com boom,
00:02:45.980
what the technology is actually good for and good at.
00:02:48.440
Well, what kind of sucks is also that the AI companies I think are coming out of this,
00:02:54.120
they're not going to be publicly traded. A big shift in this AI tech boom, as far as I see,
00:03:00.460
is that they're not something you're going to see on a stock market. They're small. They
00:03:04.540
don't have a lot of staff. They're not public. So our ability to participate in the upside...
00:03:10.760
Well, the other thing about AI development and you can, you know, back me on this is we
00:03:15.860
can see all these metrics that say that AI is supposedly getting better and smarter. And
00:03:20.180
yet when you consider like the latest model of Grok versus the last model of Grok, you don't,
00:03:25.080
you don't go like, this is like 50% better. Like it doesn't feel that way to you. Same
00:03:29.620
with OpenAI's model. Same with a lot of these models. You interact with the most cutting
00:03:33.220
edge thing. And you're like, this is marginally like three or 4% better, but all the metrics
00:03:38.140
are showing that it's like massively better. So is this a problem in how we develop AI,
00:03:42.460
how we measure AI, everything like that. I'm going to be talking about that in this as well.
00:03:45.700
I'm also going to be talking about the study that Facebook put out saying that you can't
00:03:50.160
get, basically they're saying AI really isn't that smart. No, no, sorry, not Facebook.
00:03:54.560
I want to say Apple maybe put this out, but if it was Apple, it shows why Apple has not
00:03:59.580
developed anything good in the AI space because the people they have working on it are just not
00:04:03.680
that bright. I do have to say though, I'm really excited about their smart house play. I think
00:04:08.400
that if they're, they're going to have any win, them being the ones that make everything that's AI
00:04:14.240
in your smart home connected and work seamlessly and be really pretty, they are going to be the ones
00:04:19.180
that are capable of pulling that off. So we're going to go into an article, start,
00:04:23.440
start by going into an article in Futurism, so you can see that this isn't just Bruno making
00:04:27.480
these claims. The article was titled "Scientists Are Getting Seriously Worried That They've Already
00:04:32.540
Hit Peak AI." Speaking to the New Yorker, Gary Marcus, a neuroscientist and longtime critic of OpenAI,
00:04:39.100
said what many have been coming to suspect. Despite many years of development at a staggering cost,
00:04:44.760
AI doesn't seem to be getting much better. Though GPT technically performs
00:04:50.060
better on AI industry benchmarks, an already unreliable measure of progress, as experts have
00:04:54.700
argued, the critic argues that its use beyond anything other than a virtual chat buddy remains
00:05:00.360
unlikely. Worse yet, the rate at which new models are gaining against these dubious benchmarks appears to
00:05:07.200
be slowing down. I don't hear a lot of companies using AI saying 2025 models are a lot more useful to
00:05:13.860
them than the 2024 models, even though the 2025 models perform better on a bunch of benchmarks,
00:05:19.440
Marcus told the magazine. Now, I'd note here, when he's like, I don't see AI being used for anything
00:05:25.220
other than as a chat bot, which is one of the things we're going to get into later in this episode, I'm going
00:05:29.340
to be like, well, then that's just because you're not familiar with any industry other than your own.
00:05:33.820
AI has already invented drugs that are going through the production cycle that are likely to save millions
00:05:39.020
of lives. And not just like one drug, like countless drugs at this point. AI models that
00:05:43.640
are trained on the human genome have already made numerous genetic discoveries. It's the way that
00:05:49.960
you're using AI. And we'll go through these discoveries in a bit. It almost to me feels like
00:05:55.340
with some of these people, like that Monty Python sketch, you know, like what have the Romans ever done
00:06:00.180
for us? What have AI ever done for us? And it's like, well, okay, yes, they invent lots of drugs.
00:06:05.640
And yes, they help with drug discovery. And yes, they help with, but apart from the sanitation,
00:06:09.720
the aqueduct and the road, irrigation, medicine, education. Yeah. Yeah. All right. Fair enough.
00:06:15.800
And the wine. Yeah. All right. But apart from the sanitation, the medicine, education, wine,
00:06:23.120
public order, irrigation, roads, the freshwater system, and public health. What have the Romans ever done for us?
00:06:29.560
Oh, peace. Oh, peace. Actually a fun thing here. Just if you're like, how, how is a way that I'm
00:06:37.800
not thinking about using AI? I'll talk about really quickly the way I use AI in running my company.
00:06:41.980
So, and Bruno is actually the one who implements this: whenever we assign a task to one of
00:06:48.440
the programmers, we ask an AI how long this task should take to complete. And then we benchmark how
00:06:55.300
long it takes them to complete the task against how long the AI thinks it should take.
00:06:59.560
And we can create weighted averages to see sort of how productive a person is. Obviously this isn't
00:07:06.280
going to be perfect and AI is going to overestimate a lot, but it does create a relatively accurate
00:07:11.160
benchmark that we can use to normalize who are the best performers on our team, which is very
00:07:17.980
interesting. And I haven't heard people using AI in this way. And this is from Bruno's email.
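A minimal sketch of that benchmarking idea (the estimator function is a stub standing in for a real LLM call, and all task names and numbers here are hypothetical):

```python
# Sketch of the workflow described above: ask an "AI" how long each task
# should take, then compare actual completion times against those
# estimates with a weighted average. The estimator is a deterministic
# stub standing in for a model call; names and figures are illustrative.

def ai_estimate_hours(task_description: str) -> float:
    """Stub for asking an AI how long a task should take."""
    return 8.0 if "refactor" in task_description else 4.0

def productivity_score(tasks: list[dict]) -> float:
    """Weighted average of estimated/actual time, weighted by estimate
    size so larger tasks count more. A score above 1.0 means the person
    finishes faster than the AI expected."""
    total_weight = sum(t["estimated"] for t in tasks)
    weighted_sum = sum(t["estimated"] * (t["estimated"] / t["actual"]) for t in tasks)
    return weighted_sum / total_weight

tasks = [
    {"estimated": ai_estimate_hours("refactor auth module"), "actual": 6.0},
    {"estimated": ai_estimate_hours("fix login bug"), "actual": 5.0},
]
print(round(productivity_score(tasks), 2))  # → 1.16
```

Weighting by estimate size is one design choice among several; weighting by actual hours would penalize slow completions differently, and as noted above the AI's overestimates wash out once everyone is measured against the same estimator.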
00:07:23.600
Ed Zitron has focused heavily on the financial side. He notes OpenAI's current annualized revenue
00:07:28.980
sits around 5.26 billion and Anthropic's at around 1.5 billion, but expenses remain outsized. OpenAI
00:07:36.940
alone may be spending roughly 6 billion annually on servers, 1.5 billion on staff and another 3 billion
00:07:44.740
on training. Investors have been asked to sign acknowledgements that profitability may never
00:07:50.660
materialize. Against that backdrop, valuations in the hundreds of billions look speculative at best.
00:07:56.320
So this is really important. 5.2 billion and 1.5 billion for two of the major AI companies are
00:08:05.640
laughably small in terms of what they're making. How are they getting to hundreds of billions in
00:08:14.140
valuation, right? Like why aren't they making more money? And why is this true sort of across the
00:08:19.740
industry? Because we are seeing this across the industry. Before I go further, I'm going to explain
00:08:23.460
this phenomenon because this is actually an important phenomenon. The reason why they're
00:08:29.000
making so little actual money is because their long-term potential is so high.
00:08:35.960
Well, this is how it's pretty much for every tech startup over the past 20 plus years.
00:08:41.520
Yeah. Yeah. So for people who understand how VC works, so VC comes in, it floods the space that it
00:08:46.660
thinks is going to be worth a lot in the future. And then because it's flooding the space with so
00:08:50.780
much money, like because OpenAI and Anthropic are getting so much money to compete against each
00:08:55.060
other, they have to be rock bottom in terms of the prices they're offering or even offer things for
00:09:01.900
essentially free versus the cost to produce them. And people can be like, wait, but then why would
00:09:08.820
they even like do that? Right? Like they're trying to win in a market where you, the customer are not
00:09:15.640
actually the primary customer, where the venture capitalist is actually the primary customer and where
00:09:21.940
it's that, for them, you as a user are providing them value. You know how like with
00:09:28.940
Google, you provide them value as a user because they get like data from you that they sell to other
00:09:33.720
people and they get like ads from you, but like the data from you is more important. Like some
00:09:38.600
companies like make their money off of the data they collect from you. Okay. You as a user are
00:09:44.040
actually a data point that these companies are able to trade for cash from venture capitalists.
00:09:51.800
That is why it's actually a fairer deal than it sounds. It's not like they're cheating you or
00:09:56.880
you're cheating them. They are trading the fact that you are using them to say, look, I am beating out the
00:10:03.400
other major companies. Hey, you investors know eventually somebody is going to eat this industry.
00:10:09.780
Okay. So that's, that's important to know. Like this is something you'd actually expect if things
00:10:14.920
are going well. So to go back to his email here, and none of this is a stupid thing to ask. Like,
00:10:20.920
I'm not saying Bruno was stupid for asking this question, right? Like, it's easy to ask:
00:10:26.080
Why is it valued so much when it's making so little? Why haven't tons of profits accumulated in this
00:10:30.700
industry yet? To go back to his email for context: compare this with Amazon Web Services. AWS, launched
00:10:36.360
in 2006, reached cost-revenue parity in just three years, and in its first decade accumulated roughly
00:10:42.240
70 billion in costs. By contrast, Amazon itself spent around 105 billion in just the last year, per
00:10:49.420
Zitron. So in AI, the biggest company in the space is making 5.26 billion a year. Okay. And he's
00:10:58.040
pointing out here that in its first decade, Amazon Web Services accumulated 70 billion in costs. And then he points
00:11:02.860
out that what Amazon has spent on AI in just the last year, 105 billion, is more than that whole
00:11:08.880
decade of AWS costs. Right? Zitron underscores that the entire generative AI field, including
00:11:17.880
OpenAI, Anthropic, Midjourney, and others, produces only about 39 billion in revenue. That's less than half the
00:11:25.140
size of the mobile gaming market and slightly above the smartwatch market at 32 billion. These
00:11:30.940
comparisons illustrate the scale mismatch between AI valuations and demonstrated ability to generate
00:11:36.280
revenue. And so, so this is a really apt comparison. It's bringing in about as much as the smartwatch
00:11:42.220
market. That's... yeah. Wow. Putting that into perspective, that's pretty sobering. Then again, smartwatches
00:11:53.620
are so pervasive now, maybe it's not as sobering as you might initially think.
00:11:53.620
Yeah. Well, I mean, it should be more given how afraid people are of it, how much people are talking
00:11:58.940
about it. Right. There is also the issue of product-market fit. LLM-based tools are not
00:12:08.480
meaningfully differentiated from one another. The average user tries one, thinks this is kind of cool,
00:12:13.920
and then stops using it. This raises a concrete question. How would one sell these products in a
00:12:20.160
way that justifies ongoing subscription fees? How do you... Did they really stop using them? Like
00:12:24.760
here again is where I questioned it. And also like the smartwatch market... No, no, no. This is
00:12:29.320
factually wrong. If we look at usership rates, they are shockingly high. Okay. Okay. Cause I'm just
00:12:35.820
like, you're not questioning this and I'm like, ah, wait a second. Like... No, but I understand how
00:12:41.860
somebody could feel that way if they're just thinking about like, especially if AI hasn't caught
00:12:46.980
you or you haven't found product market fit for AI within your life. Yeah. You're just going to walk
00:12:53.100
away from it. Right. You're going to be like, what's the point? Right. Yeah. It's also really
00:12:56.660
unfair to compare this to the smartwatch market as it is today, because the smartwatch market is in
00:13:01.360
its "now we make money from this" era. So a lot of people wear Oura rings, right? When they first came
00:13:07.620
out, you didn't pay a monthly subscription for them. Now you have to, you can't have one without
00:13:13.280
it. I wear a Fitbit. I don't pay a monthly subscription for it, but I'm constantly upsold
00:13:17.620
on it. So now they've switched into the monetization phase, but at the beginning, it was just no,
00:13:22.420
get this on people's bodies and then try to make money from them. And right now AI is in the get this
00:13:27.960
in people's workflows and lifestyles phase. So of course it's not making that much money.
00:13:33.940
Yeah. Well, and because that's what VCs are trying to do: they're trying to capture
00:13:37.780
the market and then make the money by ending up with the giant company. Yeah. They're not being
00:13:41.560
foolish about this. Like, this actually makes economic sense. Yeah. I mean, it also happened
00:13:45.840
with Uber and Lyft. They used to pay drivers really well, only have really nice cars and have really,
00:13:50.360
like, they were probably running at a loss. They charged so little. They were running at a huge loss
00:13:54.540
for a really long time. Yeah. And so we had this generation, which was so nice, where you had
00:13:58.940
this like VC subsidized luxury lifestyle where you had like super affordable food delivery and
00:14:05.220
Ubers and smartwatches, but that was because of the growth phase. And right now we're enjoying
00:14:10.240
this short period where we don't have to pay a lot for these AI services.
00:14:13.260
Yeah. I'm actually going to argue that the VCs here might be making a mistake
00:14:17.100
and the mistake that they might be making. And this, this depends on stickiness to particular AI models
00:14:23.800
is they think that they're developing something like the next search engine or the next Uber or Lyft
00:14:28.980
when what they're actually developing is a commodity. By that, what I mean is, I think the way that most
00:14:33.840
people are going to interact with the very best deployments of AI is going to be through skins,
00:14:40.540
basically through windows. Like what we're building with our fab.ai, right?
00:14:43.880
Oh, and yeah, people aren't necessarily going to be loyal to ChatGPT or Grok. They're going to use
00:14:50.880
a variety of different services that are going to interchange Grok and ChatGPT based on whichever
00:14:56.180
is best and cheapest at the time. Yeah. Kind of how people switch out, at least in the United States,
00:15:00.420
in many places, you can switch out the utility company that you buy from. So you can buy from
00:15:04.120
this utility company, or you can buy from this one that only does green energy or whatever. And you
00:15:07.800
choose whichever you like based on values and based on price. Yeah. And we actually already see this in
00:15:14.480
the data. People switch between like which AI is the top used one changes a lot, which one is sort of
00:15:19.660
known as the best one changes a lot. And you're, you know, if, for example, like right now, the main
00:15:26.380
AI I use is Grok, right? If OpenAI had a model that I just thought was dramatically better, I would
00:15:32.080
switch to them, which has occasionally happened. Claude used to be my main AI, then OpenAI was my
00:15:36.880
main AI for a while. Now it's Grok, and I may switch again, right? Like I switch all the time
00:15:42.080
what the primary AI I use is. Now, another argument he made in this email that I thought was really
00:15:49.660
interesting is he's like, the iPod, it's a thousand songs in your pocket. The iPhone,
00:15:56.020
it's the internet in your pocket. Like what is AI in your pocket, right? And I think that this
00:16:03.220
actually is a big mistake. And it's one of the reasons why people are not understanding the value
00:16:07.760
of AI. They're thinking about AI's value to them, not AI's value to say genetic researchers or certain
00:16:16.540
groups of programmers, et cetera, right? Like, on AI replacing jobs, we often talk about how AI can
00:16:22.520
probably replace, I guess, 25% of law clerks now. Now, it hasn't happened yet, but it definitely
00:16:28.540
has the capacity to do that. And you're like, well, what if it makes mistakes? And I go, then just put it
00:16:33.040
in a chain so that it checks for those mistakes. This is one of the reasons when people are like,
00:16:37.500
well, what about hallucinations? And I'm like, hallucinations, like literally don't matter.
00:16:42.280
First, they don't happen that much in current-model AIs. For example, somebody, a fan, was like, oh, well,
00:16:47.920
I don't trust you guys because you get some of your information from AIs. And I was like,
00:16:52.960
excuse me, Broseph, do you think that the average thing you read from a reporter is going to be more
00:16:57.360
accurate than the average thing you read from an AI? Yeah. At this point, it's not hallucinations;
00:17:01.680
it's sourcing, like from sources that get it wrong. And we check our sources, like we check
00:17:07.740
the sources that AI cites, but we can't always take the time to figure out how reliable those
00:17:13.280
sources are. We're like, well, the New York Times reported about it. Like, right. But the point I'm
00:17:18.180
making here is that inaccurate information is more likely for me, like twisted information is more
00:17:25.040
likely to come from a New York Times article than a Grok 4 output. And I would bet that this is
00:17:30.260
something you could even look at statistically, right? Because these, these, it's not that there
00:17:35.260
isn't like a political bias within AI. It's just less extreme and distinct than the political bias
00:17:41.320
within the reporter class. And then there's the amount you can reduce hallucinations just by
00:17:46.740
doing one extra pass. By that, what I mean is: you have the AI output an answer, then you take that answer,
00:17:52.040
you put it into a different AI with the question being, is anything in this hallucinated or wrong?
00:17:57.460
This is an output from another AI. You do that, and the probability, I'd argue, is like 0.01 that you're
00:18:04.280
going to have a hallucination in whatever that output is, as long as you're using like high-end
00:18:08.180
AI models. But the problem with being like, where is it? You know, if you have like the iPod or the
00:18:14.460
iPhone and you're like, it's this in your pocket, what is AI to you, the end consumer? I think this
00:18:19.600
shows a misunderstanding of AI's role within the market. AI is a tool that at its most productive
00:18:26.880
replaces human beings. AI is a simulated intelligence. It's a simulated human being.
00:18:33.860
That's, that's fundamentally where it's most valuable. It's when it can replace an entire
00:18:38.040
call center. You know, that's like a hundred million jobs. If you replace that, when it can
00:18:42.840
replace an entire coder by making other coders more efficient, when it can replace legal clerks,
00:18:48.440
and so on. And one of the things he asked me in this email as well is, you know, when
00:18:52.880
when does the data come down where you change your mind on how much AI is going to change the
00:18:57.260
economy? Like, would you need to see things stop moving as fast in the field? Would you need to see
00:19:02.340
like hurdles begin to come up? And I'm like, even if, and we'll go over the potential hurdles to
00:19:08.240
continue to AI development. Even if one came up, even if AI development completely stopped where it
00:19:13.560
was today, most of my predictions on how much it's going to transform our society would stay there.
00:19:19.120
By that, what I mean is: using multi-pass AI, you should be able to replace about 25 to 35% of the
00:19:25.580
legal profession right now. And yet that hasn't happened yet. You know, you should be able
00:19:30.660
to completely replace 25 to 35% of government bureaucrats right now. And yet that hasn't
00:19:35.740
happened. Accountants. And yet that hasn't happened. Right. Copywriters. And, and, and when I point this
00:19:42.580
out, I mean, I point to my own life, right? Like I have seen it. You're like, AI does not
00:19:46.840
replace professions. And I'm like, if you are watching this podcast, you are participating in
00:19:51.080
something where AI has replaced professions because we had an earlier iteration of this podcast. If you
00:19:55.620
go back in our podcast history before Based Camp, where we paid for an editing team and we paid for
00:20:02.080
a card creation team for title cards. Those tasks are now both done by AI. So those are two
00:20:09.300
people's jobs that are now done by AI, and dramatically... Well, you still use software and you do
00:20:16.200
the editing and I still use, well, AI image generation and I do the title cards, but yeah, like.
00:20:22.500
AI is an increase in my capacity. I couldn't do it without AI. Yeah. No, same. I couldn't, no way. Yeah.
00:20:29.580
So I think that that's really big. And then keep in mind how much AI transforms the economy. If Elon's
00:20:35.220
move to make like robots that work with AI for like factory labor and stuff like that.
00:20:41.740
And everyone initially was like, oh, this is so silly looking, et cetera. But apparently they're
00:20:45.780
having a lot of success with this from what I've heard, like through the grapevine friend network,
00:20:50.020
stuff like that. If this works out now, it's every factory job, right? Yeah. Well, I've also read that
00:20:57.140
China is investing heavily in AI enabled hardware as well. So things like robots. So if, you know,
00:21:05.020
it's not like only one person is trying this, plus also Boston Dynamics has been at this forever.
00:21:10.000
People are, they're going to be major players and absolutely there's going to be a physical
00:21:13.640
element to this. But with Boston Dynamics, I actually feel like this is very much like when drones came out,
00:21:19.260
right? And everyone's like, well, that's cool, but that's just a toy. Oh yeah. And now it's
00:21:24.760
completely changed warfare. Yeah. And now everyone's like, oh gosh, tanks don't work anymore
00:21:29.860
with this new model of warfare and large ships. And we need to completely change the way we fight
00:21:35.360
wars. Like drones were a toy until they weren't, right? Yeah. Yeah. Very much the same with AI and
00:21:42.760
where things are going. Yes, absolutely. And I'd also point out here when you're like, well, okay,
00:21:50.420
but what other industries could AI disrupt other than genetics and science and drug development and
00:21:56.560
copywriting? Well, a big one: my cousin owns the company that created the movie Here,
00:22:02.200
which takes Tom Hanks and then puts him at different ages and in different environments.
00:22:17.100
And if you look at them, they'll do viral stunts all the time where they'll create
00:22:20.580
TikTok reels of various celebrities that will get like millions of views, but it's faked,
00:22:26.220
faked with their faces. If you can simulate an actor, that's a bunch of industries that you've
00:22:32.500
just nuked. Right? Yeah. And I mean, this is already, I mean, yeah, it's, I was just listening
00:22:39.780
to a podcast on how acting has already been disrupted, in that production companies are now
00:22:48.420
making money more off of the IP and the concept than off the actors, which is why we see so many more
00:22:53.020
actors have side hustles and create companies and start investing and do all these commercials and
00:22:59.740
have a clothing line or a phone company because yeah, this, this whole industry has changed.
00:23:05.300
So I think we also aren't recognizing how much many industries have already fundamentally
00:23:10.180
changed with only the beginnings of tech-enabled industry shifts away from key man risk,
00:23:17.860
key man risk being defined as the risk of any company that depends on unreliable humans for its financial
00:23:24.100
wellbeing. People have been trying to use tech to render key man risk obsolete for a very long
00:23:29.800
time. And AI really handles that well. Yeah. So I'll note here, we're going to get to this:
00:23:35.580
key man risk wasn't just in movies and stuff like that. It's in tons of other things too,
00:23:40.180
though. I mean, we don't know if AI can handle it all with competence yet, but keep in mind what we'll be
00:23:45.200
going over the AI that came in second place in that coding competition and stuff like that. Like
00:23:49.000
AI can clearly handle very advanced tasks. Yeah. But one of the things that's often hidden if you're
00:23:54.560
colloquially using AI is how rapid the recent adoption of AI within corporations has been.
00:24:00.860
So if you look at, and I'm going to put a graph on screen here, AI usage at work continues a remarkable
00:24:07.600
growth trajectory. In the past 12 months alone, and this is as of 1/1/2025, so, like,
00:24:14.900
recently, right, usage has increased 4.6x. That's a 4.6-fold increase in usage in 12 months.
00:24:26.260
And over the past 24 months, AI usage has grown an astounding 61x.
00:24:37.640
This represents one of the fastest adoption rates for any workplace technology, substantially
00:24:43.580
outpacing even SaaS adoption, which took years to achieve similar penetration levels.
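As a quick sanity check on those two figures, taking them at face value: 61x over 24 months with 4.6x in the most recent 12 implies the earlier year grew about 13x, so growth is still huge but decelerating. The arithmetic:

```python
# Quick arithmetic on the adoption figures quoted above: 61x growth over
# 24 months, 4.6x over the most recent 12. Dividing gives the implied
# growth in the first 12 months; the 24th root gives the average
# monthly growth factor.

growth_24mo = 61.0   # total growth factor over 24 months
growth_12mo = 4.6    # growth factor over the most recent 12 months

first_year = growth_24mo / growth_12mo   # implied earlier-year growth
monthly = growth_24mo ** (1 / 24)        # average monthly growth factor

print(round(first_year, 1))  # → 13.3
print(round(monthly, 3))
```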
00:24:50.160
Now we're going to go over another graph here. This is showing AI development. This isn't actual
00:24:53.960
usage of AI, but this is AI medical devices approved by the FDA. So you see it is shooting up. Now,
00:25:01.500
unfortunately it only goes to 2023, but I doubt this trajectory has slowed down much since.
00:25:06.360
Yeah. But I want to look at now a few metrics where we're looking at like adoption within companies.
00:25:14.720
So if you look at organizational AI adoption, and this is from the Stanford AI index in 2023,
00:25:20.940
55% of organizations had adopted AI. And by 2025, it had jumped to 71%. Now note here, we're reaching saturation
00:25:29.440
on adoption of AI by many measures, which is a potential problem, but we'll talk about how much AI is...
00:25:35.940
I think what you're seeing is people are adopting it, but they still don't really know how to use it
00:25:39.700
yet. Right? Like if I say AI won a coding competition, people are like, wait, how could I ever get AI to
00:25:46.040
code like that for me? Right? And I'm actually sending you the model that they use. They used a sort of
00:25:52.900
chained model. And the way the chained model worked is it had multiple models in an engine
00:25:59.280
where you would have one model that asked it to plan what it did next. Then a model that asked it
00:26:03.760
to code based on it planning. Then a model that would evaluate the code that it had just created.
00:26:09.360
Then a model that attempted to improve the code it just created. Then a model that reviewed all of that,
00:26:14.340
planned again, and moved to the next stage. And this is something that we are building. Because if you're like,
00:26:20.940
well, how would I do this? Our fab.ai is going to allow you to build chained models like this
00:26:26.340
in a very easy way. So just wait. And you'll be able to do this yourself using multiple AI models,
00:26:32.660
very near in the future. Now, generative AI adoption: for 2023, you have 33% of companies
00:26:39.240
using this, and 75% this year. And this is from Coherent Solutions trends 2025 and McKinsey. If you look at
00:26:46.860
the AI user base, we go from a 100 million active user base in 2023 to 378 million users globally.
00:26:55.460
This is Forbes 2025. If you look at job impacts, there were no reported job impacts in 2023.
00:27:01.920
And in 2024, it looks like 16 million people likely had their jobs automated by AI. And in this year,
00:27:09.620
it looks like 85 million jobs will be replaced. And this is from DemandSage 2025 and the AI Jobs Barometer.
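Going back to the chained model from the coding competition a moment ago: the plan, code, evaluate, improve loop can be sketched as below. Every "model" here is a deterministic stub standing in for a prompt to a separate LLM; this is an illustrative skeleton, not the actual competition system.

```python
# Skeleton of a chained-model engine: one "model" plans, one codes, one
# evaluates, one improves, looping until the evaluator approves. Each
# role is a stub; in practice each would be an LLM call with its own prompt.

def plan_model(task: str) -> str:
    return f"plan: break '{task}' into steps"

def code_model(plan: str) -> str:
    return f"draft code for ({plan})"

def evaluate_model(code: str) -> bool:
    # Stub critic: approves only revised code.
    return "revised" in code

def improve_model(code: str) -> str:
    return f"revised {code}"

def chained_solve(task: str, max_rounds: int = 3) -> str:
    code = code_model(plan_model(task))
    for _ in range(max_rounds):
        if evaluate_model(code):  # a second model checks the first's output
            break
        code = improve_model(code)
    return code

result = chained_solve("sort a list")
print(evaluate_model(result))  # → True
```

The evaluate step is the same pattern as the one-pass hallucination check from earlier in the episode: one model's output is fed to another model whose only job is to critique it.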
00:27:18.140
Now I'd also note here that people are like, well, AI has reached a ceiling. And we're going to go over
00:27:23.740
where AI has sort of plateaued in its growth, and why this is actually kind of an illusion created by the way that
00:27:28.000
we're measuring AI growth. But one of the things that we've actually been seeing is significant
00:27:32.580
advancements to the actual underlying model, which leads to jumps in growth within some area.
00:27:38.800
While I will not say I was wrong about AI, because I don't think I was, where I will
00:27:43.460
admit I was wrong was about DeepSeek not mattering. DeepSeek has been very diligent in publishing
00:27:50.340
how they do stuff. Like, like, despite being a Chinese company, they've been very sort of open
00:27:55.100
source in how their new model works. So we understand how they basically reinvented the
00:28:00.240
transformer model in a way that has a lot of advantages. I mean, this is something that's just
00:28:04.520
been having significant bumps even over this last year. So to go over this, they invented
00:28:11.240
something called multi-head latent attention, MLA. MLA is a modified attention mechanism designed
00:28:17.820
to compress the KV cache without sacrificing model performance. In a traditional multi-head
00:28:23.060
attention for the original transformer architecture, the KV cache grows linearly with sequence length
00:28:29.480
and model size, limiting scalability for long context tasks, e.g. processing 100k tokens. MLA
00:28:36.000
introduces low rank compression across all attention heads to reduce this overhead, making inference
00:28:41.480
more efficient while maintaining or even improving training dynamics. It basically makes training way
00:28:47.600
cheaper and is how they achieved what they achieved. Now I'll show here another graph on screen for
00:28:52.860
people who don't think that we're making advancements. This goes only from 2022 to 2024.
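To make the MLA point concrete, here is back-of-envelope cache arithmetic showing why caching one compressed latent per token beats caching full per-head keys and values. The dimensions are purely illustrative assumptions, not DeepSeek's actual configuration.

```python
# Why MLA shrinks the KV cache: standard multi-head attention stores a key
# and a value vector per head per token; MLA caches one compressed latent
# per token per layer and re-expands it at attention time.

def kv_cache_elements(tokens, layers, heads, head_dim):
    # 2 = one key vector + one value vector per head
    return tokens * layers * heads * head_dim * 2

def mla_cache_elements(tokens, layers, latent_dim):
    # one shared low-rank latent per token per layer
    return tokens * layers * latent_dim

# Illustrative long-context config: 100k tokens, 60 layers, 128 heads of
# dim 128, versus a 512-dim compressed latent.
std = kv_cache_elements(100_000, 60, 128, 128)
mla = mla_cache_elements(100_000, 60, 512)
print(std // mla)  # -> 64, i.e. 64x fewer cached elements in this toy config
```

The saving scales with context length, which is why this matters most for the 100k-token workloads mentioned above.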
00:28:59.380
Okay. So keep in mind, this is not like a, I'm going distantly into the past to show like
00:29:03.380
massive improvements, right? This is the smallest AI models scoring above 60% on the MMLU 2024 to 25.
00:29:13.160
And you can see here, now we're at Phi-3 Mini, but what's really cool here is when you see the big jump;
00:29:18.680
this happened in late 2023. This was with Mistral 7B. With our Fab.ai, I've actually found Mistral 7B is
00:29:24.860
astoundingly good given how inexpensive it is to use. We might be able to sort of chain the Mistral
00:29:30.300
7B model, I'm thinking, to get responses that are near the quality of Grok 4, even though it costs
00:29:36.920
a 150th as much to run. Wow. So yeah, very fun to see how we might be able to attempt that. Now let's look at
00:29:45.640
how many employees, because I want to keep this all very recent so people can see like, this is,
00:29:50.700
this is, this is actually happening today. So I'm putting a graph on screen here, which is how many
00:29:56.640
employees use AI tools, contrasting 2024 with 2025. In financial services, just in the last year,
00:30:02.880
it went from 4.7% to 26.2%. Ooh. In healthcare, 2.3% to 11.8%. In manufacturing, 0.6% to 12%.
00:30:14.520
Where you see big ones retail, 1.1% to 26.4%. Oh, wow. And you can look at the others here,
00:30:22.380
but it's, it's huge, right? So now I'm going to put up a graph on screen here of different types of AI
00:30:29.200
tasks and how they have jumped in them. This is from the Stanford AI Index: select technical
00:30:35.740
performance benchmarks versus human performance. And what you will notice here is where human
00:30:40.860
performance is a hundred percent mark, AI has been shooting up in their proficiency across the board,
00:30:45.740
but you also notice here that it appears that AI gets really dumb after it passes the human benchmark.
00:30:50.740
Like it stops going up as quickly. And then here we have AI benchmarks have rapidly saturated over
00:30:57.080
time. So here we have a number of different AI benchmarks and you can see they all sort of
00:31:01.200
taper off after a human. And this creates an illusion for a lot of people that once AI gets
00:31:07.460
smarter than a human, it stops getting smarter after that. And what's actually happening is the
00:31:12.740
benchmarks that we created are saturating, because we've never had to deal with entities this smart.
00:31:19.340
And humans are unfortunately very bad at telling when an entity is significantly smarter than them.
00:31:24.340
Where you can see this really loudly is with OpenAI. Oh, by the way, any thoughts? I've been
00:31:30.880
just rattling here, Simone. No, I'm really enjoying this, but also I've had trouble comprehending why
00:31:39.560
people think AI is plateauing. So. Well, I mean, I do think the perception, like the current model of
00:31:47.880
OpenAI I'm using doesn't feel that much better than the previous models. And in some cases,
00:31:52.520
even worse, right? I can understand that. I can understand somebody being like, what do you mean
00:31:58.540
this is like 50 or 60% better? It feels three, 4% better. Based on how they use it. Sure. Yeah.
00:32:04.820
Based. I understand your snarky remark there. You're accurate, Simone. But I think if you want to see
00:32:09.700
where you can see this really loudly, you can see this on the difference between the special version
00:32:15.220
of ChatGPT-4o and ChatGPT-5 and all of the romance people. If you watch our episode on like the AI dating
00:32:21.240
people. They're so mad. They're so mad because it no longer talked to them like a dumb romance author.
00:32:27.460
It didn't put a bunch of emojis in things. None of this florid poetic language.
00:32:33.580
Yeah. You see this on the meme where people are making fun of it, where it doesn't like give a bunch
00:32:38.040
of emojis and flowery stuff when somebody gives a baby announcement. It's just like, congratulations,
00:32:42.760
have fun. Where the other one used to do like, you're going to have a welcome to the bipedal
00:32:47.940
moment. Like you're going to have a little one running around. Oh my gosh. But really,
00:32:52.520
I'm so excited for you. But basically it was acting like an idiot. But unfortunately, your average
00:32:58.620
person's ability to judge intelligence capped out at around GPT-4.5. And so when AI became smarter and
00:33:06.560
more sophisticated and more, I mean, sophisticated, that's the word. When it became more intellectually
00:33:12.280
sophisticated and understood that this is not an appropriate way to communicate with your normal
00:33:17.260
person, you don't send them long, lavish love poetry, right? Unless you're prompted to intentionally
00:33:23.160
be cheesy, it stopped doing that. And people freak the F out. So in many ways, one of the
00:33:30.520
phenomena we're seeing here is people stop being able to judge how smart an AI is when the AI is
00:33:36.020
significantly smarter than they are. Now, to note how much we have saturated our benchmarks at this
00:33:41.520
point. Here, I am reading from a Substack post, by Ash K. Curry or something, called "No, AI Progress Is
00:33:55.220
Not Plateauing." And he notes here, talking about one of the metrics that they were judging on.
00:33:55.220
And to their credit, they created a really difficult benchmark. When they released this benchmark,
00:33:59.960
even the smartest AI models were only able to solve 2% of the problems. This was two months ago.
00:34:05.200
In November 2024. So this post came out a little while ago, and its "two months ago" is even longer ago
00:34:10.680
at this point, right? I love it. So in November 2024, it could solve
00:34:15.480
2% of the problems. And here's a graph of how many it could solve. Great. Except so far, with only a
00:34:21.540
two-month time difference, OpenAI announces O3. So keep in mind, this was not the O4 model yet.
00:34:26.860
Their smartest model at coding and math, later in December 2024. How did it do? It got 25% right.
00:34:33.680
Now, I note here that 99.9% of the population cannot solve even 1% of the problems on the
00:34:44.840
FrontierMath test. Yeah, these are really difficult tests. And here we have an AI that solved 25% of it,
00:34:50.680
though. Five years ago, the state-of-the-art AI was GPT-2, which could sometimes write a fully
00:34:57.000
coherent paragraph in English. And if we look here, we can see another test being saturated here.
00:35:03.220
This is ARC AGI semi-private V1 scores over time. And you can see we went from like basically getting
00:35:09.920
none of it right with GPT-4 in 2023, and when I say none of it, I mean it was getting like
00:35:15.840
two to 3%, to getting near 100% in 2025. So they had to shut it down and create a new test.
00:35:21.920
Yeah. And the benchmark used to just be, hey, could I not tell the difference between you and a human
00:35:28.180
in conversation? We just keep moving the goalposts.
00:35:31.920
Yeah, yeah. So we're now going to go to this AI competition for coding, right? We talked about
00:35:39.440
this chained model. It did really well. This happened recently. So what was this contest that
00:35:44.480
I'm talking about? What happened? So the contest focused on creating good enough heuristics to
00:35:48.500
complex, computationally intractable problems, like optimizing a robot's path across a 30x30 grid
00:35:53.880
with the fewest moves possible. Under strict rules, no external libraries or documentation,
00:35:58.980
identical hardware for all, and a mandatory five-minute cool-down between code submissions.
00:36:04.120
A Polish programmer named Przemysław Dębiak, known online as Psyho, who is a former OpenAI
00:36:09.820
employee, so really, really smart people were competing in this competition, took first place
00:36:16.460
after a grueling 10-hour marathon session. The model OpenAI debuted, OpenAIAHC, finished a close
00:36:24.660
second, with Dębiak edging it out by 9.5%. Final scores: 1.81 trillion points for the human versus
00:36:34.960
1.65 trillion for the AI. The AI beat 11 humans in total, so that was the rest of the field right
00:36:42.020
there. The event featured the world's top 12 human coders as qualifiers, with the AI added as an extra
00:36:48.680
competitor. Psyho was the only human to outperform the AI, while the other 11 humans placed third or
00:36:54.120
lower. As for how they ran the AI to make it competitive, it wasn't a standard publicly
00:36:58.300
available model like GPT-4 or even O1 that just spits out code in one go. This was a secret
00:37:04.820
internal OpenAI creation described as a simulated reasoning model similar to the O3 series, an
00:37:10.640
advanced successor to o1. It ran on the same AtCoder-provided hardware as humans to ensure
00:37:16.480
fairness, but its strength came from its iterative multi-step process. And I mentioned how that went,
00:37:20.540
like plan, code, blah, blah, blah, blah, blah. Okay, right. So now we're going to talk about a paper that
00:37:26.540
Bruno cited for me in the thing he reached out with. This is an Apple research paper, titled The Illusion
00:37:32.440
of Thinking, that makes the case that language models... This is Bruno writing here. Oh, and other people have
00:37:36.880
asked us to comment on this too. This is great. Yeah. Cannot reason as marketed. The critique dovetails
00:37:42.280
with other signals of caution. Sam Altman has called this field a bubble. As I mentioned, it technically
00:37:46.460
is. And Elon has raised concerns about looming energy constraints, which might happen. Basically,
00:37:51.940
Elon's big bugaboo is that energy is a bigger constraint than chips. He's not saying the industry is
00:37:56.760
overrated. These warnings are not isolated. So you point to structural issues, both technical and
00:38:02.580
economic, or they point to structural issues. Okay, let's go over this paper because this paper is
00:38:07.060
ridiculous. It's actually ridiculous. So what they did is they gave AI a number of puzzles to do.
00:38:17.360
And the AI outperformed humans by orders of magnitude at these puzzles. But they didn't
00:38:25.500
like the way it outperformed humans; they argued it could have been more efficient.
00:38:31.680
And I'm just like looking at them like with a guffaw on my face. Like, how can you be this unfair
00:38:38.860
to AI? Like, here, AI, do this puzzle. It does it at like 10 times the speed of a human or at 10 times
00:38:45.940
an advanced level of what an average human can do. And they're like, I notice you forgot to
00:38:51.480
dot your i's, I guess I'm gonna have to mark you down; it's not sentient. Imagine if teachers did that.
00:38:57.700
Like, it's like a super prejudiced teacher. But let's go in. Let's go into this. Okay. So we've got the
00:39:02.680
Tower of Hanoi. Okay. So the average human limit on the Tower of Hanoi is solving up to three to
00:39:09.200
four disks, that's seven to 15 moves, with trial and error, taking minutes mentally. That roughly doubles,
00:39:15.680
to five to seven disks, if you move to physical disks. Okay. So AI, how did AI do on this?
00:39:22.720
So models like O3 Mini, so not a particularly advanced model here, right? It was able to do it
00:39:28.900
up to 15 moves. And I'll note here, it did eventually break down. Okay. So let's,
00:39:39.420
let's look at Claude 3.7 Sonnet. So we're saying, okay, but we're not looking for how high can it do it?
00:39:44.660
We're looking at how high can it do it flawlessly. Okay. Okay. So your average, you know, 95 IQ human
00:39:52.740
or whatever, right? They're at three to five disks. Claude 3.7 Sonnet can do it flawlessly
00:39:58.840
up to five disks. Okay. All right. So why did they get mad at the AI? Yeah. Why? Please explain this to me.
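For context on the puzzle itself, a correct Tower of Hanoi solution for n disks is a tiny recursion whose length provably grows as 2**n - 1 moves, which is why each added disk matters so much (4 disks is 15 moves, 5 disks is 31). A minimal solver:

```python
# Tower of Hanoi: move n disks from src to dst using aux as a spare.
# The returned list is the full move sequence, so its length is 2**n - 1.

def hanoi(n, src="A", dst="C", aux="B"):
    if n == 0:
        return []
    # move n-1 disks aside, move the largest disk, then stack the rest on top
    return hanoi(n - 1, src, aux, dst) + [(src, dst)] + hanoi(n - 1, aux, dst, src)

print(len(hanoi(5)))   # -> 31 moves
print(len(hanoi(15)))  # -> 32767 moves
```

So the five-disk flawless ceiling quoted above corresponds to executing 31 moves without a single slip, while fifteen disks would mean over thirty-two thousand.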
00:40:10.780
The paper argues this is an illusion because even at medium complexity, traces show incoherent exploration,
00:40:17.520
and effort peaks, then drops, e.g. fewer tokens spent despite available budget, indicating no true recursive
00:40:24.600
understanding, just pattern extension until it breaks. Okay. My, my, my brother in Christ,
00:40:30.860
did you have an EEG hat on these humans? You don't know how their reasoning was working during this.
00:40:37.280
You don't know that this wasn't happening in the humans. Exactly. But also you didn't even use
00:40:43.100
humans as a norm in this. When they did this, they didn't use humans; I'm the one using other studies to look
00:40:48.040
at how humans perform on this. You just assumed that the human brain doesn't work that way.
00:40:52.720
That's what always gets me: when people are like, AI is just a token predictor. And I'm like,
00:40:58.100
a lot of the evidence suggests human brains are a token predictor. See our episode on this;
00:41:01.280
and more evidence has come out since our episode on that, which I've gone over in other episodes
00:41:06.280
because it annoys me so much. There's just like voluminous evidence, a huge chunk of the human
00:41:10.740
brain is probably a token predictor. But I just hate so much when they're like, humans don't make
00:41:17.080
these types of mistakes. And I'm like, well, first of all, even if you're considering them a mistake,
00:41:22.580
note that the AI did better than the humans at its task. So if the way it did, it was a mistake,
00:41:29.040
then clearly it understood its resource allocation and limitations and performed with it in a way that
00:41:35.280
out-competed its competitor, right? Who are you to say that you know better than it about how it can
00:41:40.520
do this? And if it could do it better, why didn't you add that to the token layer? You could have done
00:41:45.860
that. All right. So next, we're going to go to river crossing. The average human limit: the classic
00:41:51.040
puzzle here is three pairs, which is solvable with hints, though the average person might need trial and error to avoid
00:41:57.440
violating constraints. And at four pairs, 20-plus moves, complexity explodes. Most would fail due to tracking
00:42:02.620
multiple states mentally. All right. So humans, at average human intelligence: three. Claude Sonnet
00:42:09.180
3.7, fails beyond three as well. And errors step up at four or higher. But it says that in humans,
00:42:16.360
four becomes near impossible. Okay. And so it collapses it around where humans do. Okay.
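For reference, this puzzle family can be brute-forced exhaustively. Below is a breadth-first search over the classic missionaries-and-cannibals variant (3 of each, a two-person boat, cannibals never outnumbering missionaries on either bank); it is a close cousin of the paper's actor-agent version, used here only as an illustrative stand-in, and it shows both the shortest solution length and why the state tracking gets brutal as pairs are added.

```python
# BFS over river-crossing states (missionaries left, cannibals left, boat side).
from collections import deque

def solve(n=3, boat=2):
    start, goal = (n, n, 0), (0, 0, 1)

    def ok(m, c):
        # both counts in range, and missionaries never outnumbered on a bank
        return (0 <= m <= n and 0 <= c <= n
                and (m == 0 or m >= c)
                and (n - m == 0 or n - m >= n - c))

    seen, q = {start}, deque([(start, 0)])
    while q:
        (m, c, b), moves = q.popleft()
        if (m, c, b) == goal:
            return moves
        for dm in range(boat + 1):
            for dc in range(boat + 1 - dm):
                if dm + dc == 0:
                    continue  # the boat cannot cross empty
                nm, nc = (m - dm, c - dc) if b == 0 else (m + dm, c + dc)
                nxt = (nm, nc, 1 - b)
                if ok(nm, nc) and nxt not in seen:
                    seen.add(nxt)
                    q.append((nxt, moves + 1))
    return None  # no legal solution

print(solve(3))  # -> 11 crossings for the classic three-pair puzzle
```

Eleven crossings is already near the limit of what most people can hold in their heads, which lines up with the human collapse around four pairs described above.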
00:42:21.440
So here, AI is performing similar to humans. So why do they say that this proves it is dumber?
00:42:27.980
Well, to highlight the illusion, they say that despite self-reflection, AI can't
00:42:32.800
consistently apply constraints, leading to invalid moves early, proving no deep understanding of
00:42:38.440
safety rules, just probabilistic guessing which falters. But you could change the way the AI model
00:42:44.080
works to do this. If the human brain is a token predictor that evolved, it almost certainly has
00:42:49.680
pathways to check for these types of mistakes if these are common mistakes within token predictors.
00:42:55.220
But you have locked the AIs that you are using out of doing that.
00:43:01.260
Also, like, these are tests, you know, how far you get.
00:43:06.360
It's not like I've taken the SAT or some other standardized test and then been told,
00:43:13.640
oh, but you know, you took way too long on like these three problems or like you went back and
00:43:21.080
Yeah. Like you're right or you're wrong. You get this many questions answered or you don't.
00:43:26.520
The fact that people keep not only moving the goalposts, but then going back into these tests
00:43:31.380
and evaluations and nitpicking the methodology used just seems like massive amounts of denial.
00:43:38.780
Well, this is how Apple is explaining why they can't make an AI because AIs aren't real.
00:43:42.800
But okay, so here we go to their blocks world test. Average human intelligence, you can get up to three
00:43:49.000
to five blocks. The AI breakpoint got up to 40 blocks, though LRMs collapse at the high end. AI vastly
00:44:01.180
outperforms humans on this one. But the paper points to an illusion via trace analysis: at medium
00:44:06.480
complexity, corrections happen late; at high complexity, exploration is random and incoherent, not strategic,
00:44:12.120
showing reliance on brute-force patterns, which nonetheless work, rather than adaptable planning. But if it's
00:44:17.960
working, it's a good strategy. You are demanding that it solve it the way that you solve it.
00:44:23.480
And it's like that throughout the paper: the way that you want it solved. Yeah. Basically, what they show is that
00:44:29.200
AI exhibits flaws like overthinking, exploring the wrong paths unnecessarily, which you could put in
00:44:36.380
the token layer for it not to do, or have another model that checks it to stop it from doing this.
00:44:42.220
Inconsistent self-correction, and a hard cap on effort, collapsing incoherently at high complexity,
00:44:47.200
albeit much higher than humans manage, without adapting. Unlike humans, who might intuitively grasp rules or persist
00:44:52.700
creatively, even if slowly, AI doesn't build reusable strategies; it just delays failure in the medium regimes.
00:44:59.260
So like, when I look at this paper, I'm honestly, I read a lot of romance mangas that take place in
00:45:07.680
fantasy worlds. And you'll have the evil, you know, stepmother or whatever, or concubine who will like
00:45:14.080
arrange all the tests. So her clearly incompetent son can beat the clearly much more competent person.
00:45:20.760
And then the bribed vizier will come out and say, well, do you not see that he took too
00:45:26.620
long on question number five, which proves he's auspiciously unlucky? And it's like, come on, my friend, what are
00:45:34.820
you doing? Like, clearly you're just begging the question here, right? Like, the AI is outperforming
00:45:46.680
people, and you are using its outperformance against it. This reminds me of the hilarious test that some people
00:45:46.680
have been like, they released this paper saying, oh, well, yeah, there was this paper done on Claude
00:45:52.280
that showed that it didn't know the logic, the internal logic it had used to get to certain
00:45:56.940
outcomes, right? Like when you could look at this internal logic. Humans don't know the internal
00:46:00.700
logic they use to get to certain outcomes. My point is, a lot of people think that humans know,
00:46:04.740
but if you look, there's been a lot of experiments on this. See our episode on LLMs, where we go over all
00:46:09.040
the studies on this. It's just-so stories: they're adding post-hoc reasoning.
00:46:13.120
Basically, you make up how you came to a decision if that decision is
00:46:18.640
changed in front of you. So a famous example of this is people will think they chose like one woman
00:46:23.460
as the most hot from a crowd and they'll do sleight of hand and then show you another woman and say,
00:46:27.080
why'd you choose this woman? And people will provide detailed explanations. And they've done
00:46:30.380
this with political opinions. They've done this with like, this is a well-studied thing in psychology.
00:46:35.020
You have no idea why you make the decisions you make, but they assume because our intuition
00:46:40.400
is that we think we know. It's not even that it's our intuition. It's that our minds are token
00:46:46.660
predictors, like both on a technical, but also like more philosophically. And when someone asks
00:46:53.220
us a question, we want to be able to answer it. We see this with our kids all the time. Like last
00:46:57.440
night, Toasty, our son was telling us how Tommyknockers, which are these like monsters we
00:47:02.600
made up for them. Yeah. He was like, Tommyknockers cannot exist in this house. And we're like,
00:47:09.240
well, how do you know that? And he's like, well, it, my granddad said it to me at his house when
00:47:14.660
I was a baby. He's not been at his grandfather's house. This doesn't make sense. But humans like
00:47:24.080
to give answers for things. And I get that. That's totally respectable, but like, he hallucinated.
00:47:28.500
He literally hallucinated. Yeah. Like we do that too. So stop people. Stop. You're embarrassing
00:47:35.500
yourselves. Now I'm not going to go too deep into some of the ways AI is being used for medical
00:47:39.800
research, because I don't know if people fully care, but I will at least go into some of the
00:47:44.080
drugs and, and some of the methods where it's been used. It's been used for genome sequencing and
00:47:48.540
analysis. It's been used for variant detection, disease prediction. It's been used for clinical
00:47:52.720
genetics and diagnostics. It's been used for drug design and target identification. It's been used for
00:47:57.540
predicting interactions and toxicity. It's been used for streamlining the development in clinical trials.
00:48:02.180
Now, if we're going to go into some of the specific ones that have been developed,
00:48:05.760
one called Rentosertib, INS018_055. You know what? I'm not going to
00:48:15.840
list these designations for the future ones, but this was developed by Insilico Medicine using their
00:48:20.780
generative AI platform, Pharma.AI. This small-molecule inhibitor targets TNIK for idiopathic pulmonary fibrosis,
00:48:29.200
a rare lung disease, which my family has, and has killed multiple family members of mine.
00:48:34.280
So we might've actually funded this research because my family does fund a lot of stuff in
00:48:38.140
that industry. Then another one, co-discovered by Exscientia and Sumitomo Pharma using AI-driven
00:48:44.400
design. This serotonin 5-HT1A receptor agonist treats obsessive-compulsive disorder. Now note here for
00:48:51.880
this first AI drug development, right? This could literally save my life one day. My, my, I think
00:48:59.520
my aunt died of this. I know my grandfather died of this. My dad has this. So I could easily get,
00:49:05.600
like, I'm very, like, this is like the number one killer in my family. And AI might've developed a
00:49:10.800
solution to it. Like, I can't understand it when people are like, AI has done nothing meaningful,
00:49:15.360
other than this drug that could save people in my family's lives. Yeah. Like maybe, you know,
00:49:20.160
let's say you have like a serious risk of Alzheimer's in your family. You're going to
00:49:23.980
feel very different about AI once AI cures Alzheimer's. Actually, by the way, another
00:49:28.620
Exscientia-Sumitomo collaboration was a dual 5-HT1A agonist and 5-HT2A antagonist, which targets Alzheimer's
00:49:35.400
disease. Amazing. They're really going for those pithy names. The same company, Exscientia,
00:49:42.240
developed a cancer treatment, a tumor fighting immune response thing.
00:49:47.000
Oh, point for Simone, because of course the cancer is coming for me.
00:49:50.720
Yeah. And then in terms of DNA stuff, like what's it finding in genetics,
00:49:54.720
there are the novel autism-linked mutations found in non-coding DNA: using deep learning on whole-genome sequences
00:50:00.620
from thousands of families, researchers identified previously undetected mutations
00:50:03.720
in non-coding regions associated with the disorder. Next, rare DNA sequences for gene activation:
00:50:09.040
AI analyzed vast genomic data to discover custom-tailored downstream promoter region,
00:50:14.680
or DPR, sequences active in humans but not fruit flies, and vice versa.
00:50:18.500
I also think all this like better genetic sequencing with autism might actually
00:50:21.920
fix the autism diagnosis problem of like too many different conditions being grouped into autism.
00:50:28.660
Like, you know, we're participating as a family in autism genetic research.
00:50:31.940
Yeah. But like our kids don't have any of the like genes for autism and that's because they have
00:50:41.740
Even though they've all been diagnosed, you've been diagnosed. Do you have any of the genes?
00:50:44.960
No. And that's the thing. It's like, I think that when, when AI helps us better understand autism
00:50:51.740
and like the genetic components of it, they're going to be like, all right, so these are actually
00:50:56.240
super different things. And on this technical level, we can demonstrate and show it how probably
00:51:01.860
low functioning autism and different forms of autism are going to be seen as very different
00:51:06.120
from what used to be called Asperger's, and it's now just lumped in. But my point here is, people are
00:51:12.140
already being fired over this, right? Yeah. If you're looking at AI, it's not just that it's
00:51:17.400
already developing life-saving drugs. It's already making game-changing scientific
00:51:22.200
discoveries. Yeah. People, I guess, if it doesn't immediately affect them, if they are not
00:51:26.580
married to their AI boyfriend or husband now, if they don't have a personally scary disease...
00:51:33.060
Which is happening, by the way. So a study which surveyed a thousand teens in April and May
00:51:37.120
showed a dramatic rise in AI social interaction with more than 70% of teens having used AI chat
00:51:43.420
companions and 50% using them regularly. 50% of teens are using them regularly.
00:51:51.480
Sure. Despite the widespread use, 67% of teens say that talking to people is still more
00:51:59.360
satisfying overall. Wait, wait, wait, wait, wait. So 33% think talking to AI is more satisfying?
00:52:09.700
But no, we've hit a plateau. It's all a bubble.
00:52:14.580
Okay, guys, have fun being left in the dust. Enjoy it.
00:52:18.660
No. But I see. Because when people are thinking about a product, they think about it from a consumer
00:52:25.340
level. They think about it like an iPod or a, you know, why isn't this in my pocket, right?
00:52:30.460
It can also be hard for people to wrap their heads around it. You know, like when cars started being adopted, it was, oh, this is just a rich person thing. They break down all the time. It's just better to have a horse; just keep your horse. People couldn't imagine not having horses on the roads. You know, it's similar to the hallucination arguments. Cars break down; AIs hallucinate. Why? How is that going to transform society?
00:52:52.200
Yeah. Buckle up, guys. You don't have to. But yeah, if you go into this not wearing your seatbelt, this is on you.
00:53:01.760
Yeah. And I could go into technical things where it looks like parts of AI development have slowed down recently, but in other areas, it looks like it's sped up recently. Like, that's the problem with a lot of this: you can say, well, it's slowed down here and here, and then, well, it's sped up here, here, and here. Right? And then you'll get some new model, like DeepSeek's new model, and they'll be like, oh, and now we have some giant jump. Right?
00:53:22.180
And then we've just been seeing this over and over again. I hope it plateaus. It's going to be scary if it doesn't plateau. But we're not seeing this yet. We're seeing what is kind of a best case scenario, which is steady growth. Steady, fast growth. Not fooming. Okay. But steady, fast growth.
00:53:40.740
Yeah. So yeah, multiple people actually requested this discussion in the comments of the video we ran today, which was on how geopolitics will be changed after the rise of AI and more accelerated demographic collapse. So I'm glad that you addressed all this.
00:53:59.200
There's a lot more. I mean, we're only just getting started. And a lot of people also chimed in in the comments. They're like, well, give me specific dates. I need to know, like, you know, what by when. We can't do that. Like, we can give you dates, but we're going to be wrong. It's really hard to predict how fast things are going to be. And there are so many factors affecting adoption, including regulatory factors and social factors, that it just makes it really hard for us to say exactly when things are going to happen. Our heuristic with these things, if you're just trying to be like, well, yeah, but how do I know when to start planning?
00:54:29.200
This is your reality now. Just like accept it as reality and live as though it's true. That's how we live our lives. We live our lives under the assumption that this is the new world order. And we don't invest in things that are part of the old world order in terms of our time or dependence. And we do lean toward things that are part of the new world order, if that makes sense.
00:54:48.440
Yeah, no, I absolutely think it makes sense. And I totally understand where people are coming from with this, but my God, it's like hearing that computers will transform society and only thinking about the computers that you use for recreation, instead of the computers that are used in a manufacturing plant and to keep planes connected and to, you know, like the,
00:55:18.440
and even if the development stopped today, the amount that the existing technology would transform societies in ways that haven't yet happened is almost incalculable.
00:55:30.440
Like, that's the thing that gets me. I don't need to see AI doing something more than it's already done today. I don't need to see something more advanced than Grok 4. Okay.
00:55:43.340
Or than GPT-5. With these models, I could replace 30% of people in the legal profession. That's a big economic thing. Okay.
00:55:53.440
Yep. And I mean, again, we can't say how fast this is going to be impactful or not, because there are already states in the United States, for example, that are making it illegal for your psychotherapist...
00:56:09.920
Even though AI outperforms normal therapists on most benchmarks.
00:56:12.620
Well, just to use AI to help themselves. And they're going to cheat anyway. But so, people are going to try to artificially slow things down in an attempt to protect jobs.
00:56:23.220
Or to protect industries, because they don't trust it. So again, things will be artificially slowed down. Sometimes things will be artificially sped up by countries saying, okay, we're all about this, we need to make it happen.
00:56:35.080
Or Trump. I cannot believe the Democrats have become such Luddites.
00:56:39.200
Oh, whatever. Anyway. Yeah. Thanks for addressing this. And I'm excited. We're in the fun timeline.
00:56:46.740
Oh, absolutely. I often watch AMVs from, you know, the zombie show about the Japanese office worker.
00:56:57.200
He's having a blast. We are undergoing like multiple apocalypses right now. And I'm like, I am here for every one of them.
00:57:05.500
This is a fun time to be alive, during the AI slash fertility rate apocalypse, because I get to do the things that I want to do to win.
00:57:14.440
Have lots of kids and work with AI to make it better.
00:57:20.960
I'm sending you the next link. Get ready for it.
00:57:44.040
You are a lovely wife. And I love that you cut my hair now. It feels so much more contained.
00:57:49.100
The more things we bring into the house, whether it's you making food or cutting my hair, the more it improves my quality of life, because I don't have to go outside or interact with other people.
00:57:58.880
And I really hadn't expected that. And it's, it's pretty awesome.
00:58:02.840
Yeah. I get now why for many people, it's a luxury to have everyone come to your house to deliver services, but it's even better if you don't have to talk with someone else and coordinate with someone else and pay someone else and thank someone else.
00:58:16.820
And it's not that I'm not appreciative of what other people do and the services they provide, but it's just additional stress. This is a generation of people that can't answer the phone, me included. The anxiety that you have to undergo to have a transaction with a human is so high.
00:58:35.720
Even if they're doing a great job and they're happy and you're happy, you still have to go through the whole: thank you so much, and, oh, can I have this, and, well, this isn't quite right, can I have this adjusted? No, I would rather use my mental processing power to just keep our kids somewhat in order.
00:58:52.700
Somewhat in order. That's a tall order. What did people think of the episode today?
00:58:58.460
What did they think? I think they liked it. I'm trying to think of whether there was any theme in the comments. A lot of people had small quibbles here or there about birth rates in certain areas, and I think that's because the data is so all over the place, and a lot of people have anchored to old data.
00:59:16.220
And then they're really shocked to see how much the birth rates have changed. I haven't gone deep into it, but some people have questioned why you think growth in certain populations won't matter, due to them being technologically not online yet and not developed.
00:59:38.220
Yeah. This, to me, I just find comical. They think that they're going to get a Wakanda, right? That is not going to happen. When we've seen populations jump in technology and industry levels, it happens because of some new form of contact or some new form of technology being imported to the region, like we saw in East Asia.
01:00:01.580
It's very unlikely that you're going to see something like Somalia, which has good fertility rates, just suddenly develop. And we've tried to force it, right? This is fundamentally what the U.S. tried to do with Iraq: we tried to force them to become a modern democracy and a modern economy, in the same way we did with South Korea and Japan and Germany. And it just didn't work.
01:00:26.920
What do you think about the city-states, like the ones Patrick's working on in Africa? Couldn't you theoretically create Wakandas?
01:00:35.160
You could. I think one of his city-states would be the most likely to do that, but that's not going to have an impact on a wide spread of the region, right?
01:00:53.600
Tate, can you tell me about your dream last night?
01:01:10.040
So you didn't dream about spiders or Tommy knockers or anything else? Just black?
01:01:37.300
What happens if you get near your tummy knockers?
01:01:50.540
So we stay away from the cave in the tunnel, right?