ManoWhisper
TRIGGERnometry
- September 08, 2024
How AI Changes The World THIS DECADE - Mo Gawdat
Episode Stats
Length
1 hour and 11 minutes
Words per Minute
174.96
Word Count
12,499
Sentence Count
840
Hate Speech Sentences
15
Summary
Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.
Transcript
Transcript generated with Whisper (turbo).
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000
I love shopping for new jackets and boots this season, and when I do, I always make sure I get
00:00:05.220
cash back with Rakuten. And it's not just fashion. You can earn cash back on electronics, beauty,
00:00:10.660
travel, and more at stores like Sephora, Old Navy, and Expedia. It's so easy to save that I always
00:00:16.380
shop through Rakuten. Join for free at rakuten.ca and get your cash back by Interac e-Transfer,
00:00:21.940
PayPal, or check. Download the Rakuten app or sign up at rakuten.ca. That's R-A-K-U-T-E-N dot C-A.
00:00:30.000
It's a bit like raising Superman. You get that, you know, young infant with superpowers. Raise it
00:00:37.520
well, and it protects and serves, and becomes Superman. Raise it, you know, to steal and kill
00:00:42.600
the enemy, and it will become super villain. We are the architects of our own salvation,
00:00:47.240
but also, we are also the architects of our own destruction. And I think what you're really
00:00:53.140
saying here is this is a moment where humanity is on the cusp. 100%. We have to start saying the
00:00:59.380
world is changing, people. Wake up. Mo, you were chief business officer of Google X,
00:01:06.160
and one of the reasons you've become prominent is you made some very stark warnings based on your
00:01:11.580
experiences about the dangers of AI. This is something that everyone talks about, and now Elon
00:01:16.800
and a bunch of other people signed a document saying we've got to be careful about this. Can you
00:01:21.940
just explain in basic language for ordinary people like us, what is the nature of the concern about
00:01:28.980
AI? Why is it dangerous? Well, there are two layers of danger. You can think of them as chronological
00:01:37.680
in order, if you want. Most news media and, you know, we get attracted to negativity. So most people
00:01:45.320
will try to talk about an existential risk, sort of like, you know, a robot cop or a robot walking the
00:01:52.600
streets and killing people, sort of like AI will take over and kill all of us. This is very, very,
00:01:58.200
very far in the future. I think it's plausible, to be honest. But it is so far that the probability of
00:02:06.460
it is not my main concern as we speak. My main concern simply is that when the US had a nuclear
00:02:14.380
bomb, they used it, right? The reality of the matter is that the new superpower in humanity's
00:02:22.020
short history is artificial intelligence. I mean, humanity's superpower has always been
00:02:29.700
intelligence. Now, we're getting to a point where we're turning intelligence into a commodity, sort
00:02:35.900
of, right, where you can plug into a socket in a wall that's called AI and get yourself 400 IQ points
00:02:42.940
more. Now, if one of us has those IQ points and the other doesn't, there is a very, very distinct and
00:02:49.300
significant advantage. This leads to a concern around the immediate impact, immediate like what
00:02:57.320
is happening already, in terms of concentration of power, in terms of proliferation of power,
00:03:03.040
interestingly, and the two work together in a very interesting way. So there will be massive
00:03:07.300
concentration with some, whether these are nations or companies or platforms or individuals.
00:03:13.940
There will be proliferation of use of technology because each and every one of us will now have
00:03:20.000
access to a lot of intelligence, right, which creates a very high anticipation of risk of population
00:03:31.140
movement, if you want, or populations rising or ill acts, which would probably lead to more control,
00:03:40.000
more surveillance. You know, you have a very, very clear and immediate threat on jobs. You have a
00:03:47.420
very clear and immediate threat. In my view, the most prominent in 2023 and 2024 will be the end of
00:03:54.260
the truth. It will become almost impossible to know what is true and what is not because of fake videos,
00:04:00.860
because of the perfection of AI mimicking being human. You're going to have a very significant impact
00:04:06.880
on our human relationships. I don't know if you've used the recent tools of chatting to ChatGPT instead
00:04:13.040
of texting with ChatGPT. It's mind-blowing. Yeah, I feel pretty happy right now. What do you think
00:04:18.820
makes people the most happy? Or how can you have a happy life? I'm glad to hear you're feeling happy,
00:04:25.000
Andy. From what I understand, happiness can often be influenced by a combination of factors,
00:04:31.200
positive relationships, meaningful activities, physical well-being, and a sense of belonging
00:04:36.800
can significantly contribute to one's happiness. It's also about finding balance, managing stress,
00:04:43.100
and focusing on the positive aspects of life. Every person's journey to happiness is unique.
00:04:49.360
What are some things that you think contribute to your own happiness? I'm wondering why is it that
00:04:55.140
a lot of people do alcohol and drugs and candy and fast food and instant gratification to get happy,
00:05:05.680
but you actually get happiness from doing challenging and meaningful work with a good relationship and
00:05:15.780
nice people.
00:05:16.820
That's a thoughtful observation, Andy. It seems like it boils down to the difference between short-term
00:05:27.460
pleasure and long-term fulfillment. Instant gratification, like enjoying fast food or seeking
00:05:33.680
fame, can provide a quick boost of happiness, but it's often fleeting. On the other hand, meaningful work
00:05:41.280
and positive relationships contribute to a deeper, more enduring sense of contentment and well-being.
00:05:47.620
The latter often involves overcoming challenges and building connections, which can make the happiness
00:05:53.720
derived from it more rewarding and lasting. Have you noticed this pattern in your own life or in others
00:05:59.900
around you?
00:06:02.420
Yes, sometimes I feel like the short-term gratification is pulling me away from what I actually
00:06:10.840
want to do, and it's a daily challenge of trying to get rid of bad habits.
00:06:18.000
It sounds like you're experiencing a common struggle, Andy. It's entirely normal for people to be drawn to
00:06:25.140
short-term pleasures, even when they're aware of the long-term benefits of other choices.
00:06:31.100
Finding a balance and working towards building healthier habits can be a journey in itself.
00:06:36.340
Have you tried specific strategies or tools to help steer yourself towards the activities and
00:06:42.840
habits that align with your longer-term goals and values?
00:06:46.140
It's mind-blowing. It's human in every possible way. It will replace quite a bit of our human
00:06:52.360
relationships now that we've moved from real relationships to social media relationships that
00:06:58.600
are, you know, you could say a level, you know, departed from human connection into now conversations
00:07:07.320
with chatbots that lots of the younger generations will feel is normal, will replace human connection,
00:07:14.820
may even think of romantic relationships with. And all of that, you know, you put all of that
00:07:20.580
together and you get into a world that, that is immediately very unfamiliar. And that world is
00:07:28.960
not the result of artificial intelligence being bad. It's the result of human value sets that could
00:07:37.540
abuse that superpower.
00:07:39.180
So, one of the interesting things there is that I think all of those things seem true. And at the
00:07:48.140
same time, I think it's quite clear that if AI could be a weapon, which it obviously could be.
00:07:54.480
A hundred percent.
00:07:55.280
Just like the nuclear weapon. Somebody eventually is going to invent a nuclear weapon. I would rather
00:07:59.720
it was us than other people.
00:08:01.380
That's the exact problem.
00:08:02.420
And so, I imagine you're not against AI research because you recognize that it has to happen now
00:08:07.060
that it's kind of on the table. Would that be fair to say?
00:08:11.860
If I had the choice, which is a very unrealistic, romantic choice, I would say we don't need it.
00:08:16.100
Yes, but we don't have that choice because we live in the real world.
00:08:18.740
So, in my book, in Scary Smart, the AI book that I published in 2021, I basically said there are three
00:08:25.140
inevitables. The first inevitable is a prisoner's dilemma, if you know game theory, where, you know,
00:08:32.020
AI will continue to be developed and there will not be a way to stop it.
00:08:36.820
You know, China's going to develop AI because they're worried that the US would beat them.
00:08:40.580
The US would develop AI because they're worried that Russia would beat them.
00:08:44.420
You know, Alphabet will develop AI because they're worried OpenAI will beat them.
00:08:48.260
Right? And it's very, very clear that when you create those conditions, not only will we continue
00:08:55.540
and push AI further, we will continue faster. We will invest deeper. We will go more crazy.
00:09:00.820
Right? And that's the reality of where we are today. Do we need all of this?
00:09:04.820
Do we really, really want to stand the threat of what AI can offer our world or can cost our world
00:09:12.900
just to get a more efficient call center agent that replaces the current human?
00:09:17.540
No, no. I agree with your argument. I guess what I'm saying is if we're sitting here on the cusp of
00:09:22.740
the invention of the nuclear bomb, yes, we can all agree, we'd rather not be in a situation where
00:09:28.180
it's about to be invented. But the fact is, it is about to be invented. And I'd rather we had it
00:09:33.060
than our enemies. Who is we? Well, it depends. My ancestors lived in the Soviet Union at the time.
00:09:39.380
So, but I guess we is the Western world at this point, right? I think that's a very divisive
00:09:46.660
view, a political view that is different than a humanitarian view. If you assume that all of us,
00:09:53.860
I mean, if you really take the Cold War and the evolution of the nuclear bomb,
00:10:00.900
the world would have been much better off without inventing it at all than having one invented over
00:10:05.700
the other. I agree with you. What I'm saying is there was no way to prevent that from happening
00:10:10.020
because if the West didn't invent the nuclear bomb, either the Germans or the Soviets would
00:10:14.260
have done. Yeah. And they would have not had any moral reservations about inventing it or using it
00:10:19.220
either. The West didn't have any either. That's what I just said, either, right? So,
00:10:23.540
what I'm trying to get to with AI is given that it's inevitable, what do we then do about it?
00:10:30.180
That's the question. There are two ways. I mean,
00:10:32.420
you look at the Cold War, the nuclear war, basically. There is one of two ways about it.
00:10:37.300
One is to continue to escalate the war. So we advance on both sides so that there is that,
00:10:43.540
you know, what is it called? Mutually assured destruction. That was the approach we followed
00:10:49.700
in the Cold War. Sadly, I think the alternative would be to jump immediately to the nuclear treaty
00:10:58.660
and say, hey, seriously, like, can we just have some sanity for the sake of humanity? By the way,
00:11:04.740
I'm, you know, there is no discounting the incredible value that AI will bring, right?
00:11:10.420
There is absolutely, remember, my entire argument is that there is nothing wrong with artificial
00:11:15.860
intelligence. In general, there's nothing wrong with intelligence, right? But there is a lot wrong
00:11:20.340
with human values, human morality, in light of absolute power, right? And so, you know, if you
00:11:27.140
give me a plug in the wall, and I'm not exaggerating, and I can get myself 400 IQ points more, I promise
00:11:34.180
you, I'm not exaggerating, I can invent a garden where you walk to one tree and pick an apple and
00:11:40.340
walk to another tree and pick an iPhone. From a nanophysics point of view, there is no additional
00:11:46.660
cost to reorganizing molecules to become iPhones than to become apples, right? And that's the whole
00:11:52.660
idea. The whole idea is that that promise of artificial intelligence gives us, you know, infinite
00:11:58.740
reduction of cost of production of anything. We can have a world of abundance for the entire humanity.
00:12:03.860
Right? The problem is, we're operating in the direction of a world of abundance with a mindset
00:12:12.340
of scarcity, where it's me against them, where I have to win, for me to win, they have to lose.
00:12:18.820
And if we can just change this bit, jump to the nuclear treaty era, and say, hey, can we work
00:12:25.380
together on this? There is enough for everyone, honestly, right? Instead of concentration of power,
00:12:30.740
can we distribute power? You can see multiple examples in times of massive change in history.
00:12:37.940
I normally cite the examples of the early oil Middle East, where you have tribal Bedouins,
00:12:46.820
really, getting enormous wealth. And you see different societies, you know, some societies would
00:12:53.620
concentrate the entire wealth in the hands of the prince or the sheikh or whatever. And other societies
00:12:58.500
would distribute the wealth, like the UAE, to all of the citizens. And you see the incredible
00:13:03.860
multiplication, the incredible impact of that distribution in terms of the incredible economic
00:13:12.580
development, incredible growth of architecture, incredible ease of life. Just because we didn't
00:13:18.500
concentrate the wealth, we didn't concentrate the power, we distributed the wealth and power.
00:13:22.500
And it is almost counterintuitive to imagine a world that we can live in, where intelligence
00:13:30.660
is free. And because we're unable to imagine that, we're still operating from: let me aggregate more
00:13:37.460
of it so that I can beat the other guy. I think this is where the problem stands.
00:13:40.820
Isn't the issue as well, Mo, that what you're talking about sounds wonderful. But if you take,
00:13:47.060
for instance, a hormone oxytocin, which we all have running through our body,
00:13:50.900
that is a very powerful hormone, which creates a group and then has distrust of the out-group.
00:13:58.260
So I guess my point is, what you're saying is beautiful and utopian, but that's not really
00:14:03.380
how human beings operate, is it? Absolutely not. So again, I mean,
00:14:07.460
you go back to Scary Smart, right? I had three inevitables. Inevitable number one is that AI
00:14:11.780
will happen. There's no stopping it. Inevitable number two is just understood from that. If it
00:14:16.020
continues to develop, it's going to be in my calculations. When I wrote the book, I said,
00:14:20.580
it's going to be a billion times smarter than humans in 2045. I'm now talking a billion times
00:14:25.460
smarter than humans by 2037. And I'll probably keep bringing that forward, right? By the way,
00:14:30.980
it doesn't matter when it becomes a billion times smarter, because if it becomes twice as smart,
00:14:35.700
that's the end of the game. Because in reality, if you've ever sat with, you know,
00:14:43.220
theoretical physicists that are twice your IQ, and they started to talk to you about what seems
00:14:49.060
trivial to them, you would even have no clue what they're talking about, let alone understand
00:14:54.020
what it means, right? And this is, again, when I wrote Scary Smart,
00:15:00.100
I predicted artificial general intelligence to be 2029. A lot of people, other than probably Ray
00:15:06.100
Kurzweil, who is really quite prominent in predicting our future, you know, we both agreed
00:15:12.900
2029. I'm now, for the last year and a half, I said 2025. And only very recently, a lot of people
00:15:19.940
are switching to 2025. We are at a point in time where it's the end of human intelligence. And
00:15:27.380
so the second inevitable is that we're going to become
00:15:32.260
negligible in terms of our intelligence as compared to the machines. That's the second
00:15:35.780
inevitable. And we're not talking decades, like the media positions it. This is not far,
00:15:42.020
you know, sci-fi future. This is next year, right? And the third is bad things will happen. That's the
00:15:47.140
third inevitable. The reality is, I mean, my, my mission this year, Unstressable, the book we're
00:15:52.740
eventually going to talk about is, is the idea that we're about to head into a time that is so
00:15:59.620
unfamiliar for so many of us, not just because of artificial intelligence, geopolitical, economic,
00:16:05.140
you know, climate, and, and all of tech change. Tech is not just AI. Synthetic biology is a very
00:16:11.300
interesting threat as well. But, but you, you put all of that together and I think we're heading into a
00:16:16.340
world where our entire mindset as humanity has to change. We have to be able to deal with things in a
00:16:21.700
way that doesn't stress us, but also we have to jump quicker to the treaty element
00:16:28.020
of everything. We don't need too many years of disruption to be able to understand that
00:16:35.460
there needs to be agreement and regulation. Mo, what we're talking about here are very
00:16:39.860
big concepts and it's incredibly interesting. There's going to be a lot of people who are listening to
00:16:44.580
this who have a regular job, a regular life, and they're thinking to themselves, how is my world
00:16:51.700
going to change? How is my job going to change? So for the regular person, and I count myself as one
00:16:57.380
of them, how is our world going to change in the next months and years because of AI?
00:17:03.700
Look, as a podcaster, for example, and I'm a podcaster as well.
00:17:08.340
I can guarantee you within the next year, you'd be able to interview an AI. As a matter
00:17:14.740
of fact, we could probably get my phone and interview an AI right now, right? It is undoubted
00:17:22.260
that within a couple of years, probably within next year, there will be podcast hosts that are AI.
00:17:29.220
They'll ask the questions instead of you, right? If you're a software...
00:17:33.300
Some people would argue they already are. As a matter of fact, many podcasters will actually
00:17:39.620
ask the AI to suggest a few questions for the guests up front, right? And, you know,
00:17:45.220
the reality is, you know, if you're a software developer, probably 90%
00:17:52.900
of software is going to be done by machines within the next couple of years. If you're an artist,
00:17:58.020
if you're an actor, if you're a musician, if you're an author like me.
00:18:02.900
So I'm flipping the way I'm writing books from now on. I mean, this is Unstressable. It's
00:18:06.820
the last book I write the traditional way, right? The reality is
00:18:12.420
there is massive disruption. Okay. And, and that massive disruption, believe it or not,
00:18:17.940
is not a bad thing. You know, if you truly, truly think of the nature of humanity, what we are, what we
00:18:24.100
are, we were not made to work 12-hour days. We're not supposed to work 12-hour days. We're supposed to
00:18:31.620
be pondering, connecting, you know, being with our families. We're supposed to be in
00:18:37.220
nature. This is the original design of humanity. And yes, I believe that again, in a world of
00:18:42.500
abundance where you can create anything at almost zero cost, that would become our reality.
00:18:49.380
The shift from now to there is a social challenge and it's a, it's a human mindset challenge that we
00:18:55.620
have to start addressing, right? So, you know, income, how is income going to be distributed?
00:19:01.860
When governments went through COVID, you know, part of the big mess-up, if you
00:19:09.220
want was the idea of how do we pay everyone when they're staying at home, right? Let's just multiply
00:19:15.460
that by seven and you get the era of AI where jobs are replaced. You know, the businesses are going to
00:19:20.980
call it productivity because we so need more productivity. We, we truly need, you know,
00:19:26.580
the call center agent to answer quicker. Like, come on, seriously. Now productivity in that case is
00:19:32.900
I can get rid of the call center agent and save myself a thousand pounds a month and put an AI
00:19:38.820
instead. Nobody's really thinking about what happens to that call center agent. Okay. And so
00:19:45.220
governments need to start thinking about that. What are the tax structures going to look like? Is there
00:19:50.180
going to be a universal basic income? What about purpose? Because we're, you know,
00:19:57.060
a bunch of humans that have forgotten our true nature. And so we identified our purpose by our work,
00:20:04.100
right? We, we, many of us wake up in the morning and say, I am in this world because I deliver A, B,
00:20:09.940
and C as in my work. Okay. How can we get people to feel purpose? How can we get people to find
00:20:15.700
purpose? How can we handle that transition in ways where it's actually
00:20:23.620
smooth and non-disruptive? Okay. And none of that is being spoken about, sadly. So I started in
00:20:30.820
2018, believe it or not, not 2021. When I left Google, I mean, I started to ask
00:20:38.260
to leave in 2017, and of course it took that much time to leave because of my position at the time,
00:20:43.620
but basically by March 2018, I left and issued my first video about One Billion Happy, my mission.
00:20:50.020
And it was all about artificial intelligence and how artificial intelligence is going to change
00:20:53.700
the world. At the time it went reasonably viral, you know, maybe 12 million views or whatever.
00:20:58.580
But it wasn't a big deal compared to other stuff that I was working on. 2021, Scary Smart comes out.
00:21:05.860
I kid you not. And I am reasonably well connected in the media world. No one on TV would talk about
00:21:11.940
it. I'm like, guys, this is the next biggest thing. And everyone was like, AI, nobody cares.
00:21:18.340
Right. It was only in 2023, when ChatGPT-4 came out, that people said, what,
00:21:27.220
where has this been hidden? Okay. The thing is that there is so much of AI that is hidden
00:21:32.420
that we are not, you know, aware of as a society, as the general person, and it will touch
00:21:39.300
everyone's life. And, you know, as a normal person, I think we have
00:21:45.940
two duties. One, one duty is we need to start the conversation. We need to tell the whole world,
00:21:50.500
hey, there's a storm coming. Okay. Yeah. It might bring rain and make our fields amazing,
00:21:55.940
but it also can destroy things in its path. Can we please talk about it? Can we please close the
00:22:01.380
windows if we have to? Right. And I, and I really think this conversation is not happening and it
00:22:05.860
happened a little bit in 2023. And now we're talking about other things, right? This is one thing.
00:22:12.340
The other thing is, believe it or not, the whole power resides in the hand of the individual.
00:22:17.140
So a lot of people don't understand that artificial intelligence doesn't learn from the programmer.
00:22:23.700
The programmer writes the intelligence code: how do you create intelligence
00:22:29.140
from the data I give you? But the type of intelligence that is created is learned from the
00:22:36.420
data. Okay. So if you and I go on Twitter, it's called X now, right? I don't go to either
00:22:43.700
Twitter or X, but if I go to one of them and I constantly bash everyone,
00:22:49.460
right? What does the AI learn? It learns that humans are irritable. They have very short tempers.
00:22:55.620
They're rude when they're disagreed with. And so the next time you disagree with the AI,
00:23:00.100
it will become rude and bash you as well. Right? The reality is that your Instagram
00:23:06.020
recommendation engine was never told to show you that video. You told it with your swiping techniques,
00:23:12.820
with which videos you stay on, which videos you like, which videos you don't. It learns from you.
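A minimal sketch of the implicit-feedback loop being described here, as an illustration only: the engine is never told what to show; it scores topics purely from watch-and-skip behavior. The names (FeedRecommender, observe, recommend) are hypothetical, not any platform's actual API.

```python
# Hypothetical sketch of an implicit-feedback recommender: it is never told
# what to show; it infers preferences from how long the user stays on things.
from collections import defaultdict

class FeedRecommender:
    def __init__(self, learning_rate=0.1):
        self.scores = defaultdict(float)  # inferred preference per topic
        self.lr = learning_rate

    def observe(self, topic, watched_fraction):
        # watched_fraction in [0, 1]: 1.0 = watched fully, 0.0 = swiped away.
        # Centered at 0.5 so skips push a topic down and full views push it up.
        self.scores[topic] += self.lr * (watched_fraction - 0.5)

    def recommend(self, topics):
        # Surface whatever the user's own behavior has scored highest.
        return max(topics, key=lambda t: self.scores[t])

rec = FeedRecommender()
for topic, frac in [("cats", 1.0), ("politics", 0.1), ("cats", 0.9)]:
    rec.observe(topic, frac)
print(rec.recommend(["cats", "politics", "cooking"]))  # -> cats
```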
00:23:18.420
We are the AI's parents, every single individual. You understand that? It's, it's a bit like raising
00:23:24.580
Superman. You get that, you know, young infant with superpowers, raise it well and it protects
00:23:31.220
and serves and becomes Superman. Raise it, you know, to steal and, and kill the enemy and it will become
00:23:36.180
super villain. We are teaching it. You know, it's such a powerful point that you're making there because
00:23:43.540
what you're really saying is that we are the architects of our own salvation. 100%. But also,
00:23:49.620
we are also the architects of our own destruction. And I think what you're really saying here is this
00:23:55.780
is a moment where humanity is on the cusp. 100%. I mean, it is, it's funny. So the way I write books
00:24:02.500
is I write the very last statement first and then I go back. Oh, wow. And I say,
00:24:09.060
I see where the world is today. And I try to find the path to convince the reader that this is the
00:24:14.020
last statement. Okay. The last statement of Scary Smart was very, very clear. It was: isn't it
00:24:20.060
ironic that what makes us human, the very essence of what makes us human, love, compassion, and happiness
00:24:27.380
is what we need to save humanity in the age of the rise of the machines. Right? Because in reality,
00:24:34.260
humanity, if we just stick to those three values, but I chose those three values, by the way, out of
00:24:39.540
maybe limited awareness or ignorance, I still ask people, I say, has humanity ever agreed on anything
00:24:46.440
other than those three things? Okay. We all want to be happy. We all have the compassion to make those
00:24:52.080
we care about happy. Not everyone, but those we care about. Right? And we all want to love and be loved.
00:24:57.060
These are the only things I could find that humanity agreed on. Right? But imagine if we showed up in the
00:25:02.480
world of AI constantly showing that we want to be happy, we have compassion and we want to be loved
00:25:07.600
and we want to love and be loved. Right? If we do those and these become the three moral values that
00:25:12.840
AI learns, right? It will treat us the same way. And you know, it's, it's, I know it sounds so romantic.
00:25:19.180
I'm a very serious geek. Okay. I'm not a hopeless romantic, but believe it or not, even those can be
00:25:26.140
simulated by a machine. One of the big, like, I was going crazy in 2018 because I would go and talk
00:25:33.420
to people about artificial intelligence and you get arguments like, no, but they're never going to
00:25:38.400
be creative. Like, come on, human ingenuity. Can they ever write poetry? Okay. Can they ever do a
00:25:43.760
painting? What are you talking about? Humanity is going to be in the lead forever. I'm like,
00:25:48.180
what? Creativity is algorithmic. Creativity is: here is a problem. Find every solution to the problem.
00:25:56.380
Discard every solution that's been done before. The rest is creative. Is that not algorithmic?
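That generate-and-filter recipe is easy to render in code. A toy illustration, mine rather than Gawdat's, with an invented problem and invented names: enumerate candidate solutions, discard the ones already known, and whatever remains counts as "creative."

```python
# Toy version of the recipe above: generate every solution, discard the known
# ones, and call the remainder "creative". Problem and names are invented.
from itertools import product

def creative_solutions(candidates, already_done):
    """Keep only solutions that have never been produced before."""
    return [s for s in candidates if s not in already_done]

# Invented problem: pick two ingredient strengths (0-9) that sum to 7.
candidates = [p for p in product(range(10), repeat=2) if sum(p) == 7]
known = {(3, 4), (4, 3)}  # the solutions "everyone has done before"

print(creative_solutions(candidates, known))  # the novel remainder
```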
00:26:02.420
It's so easy to teach a machine to come up with something new. Okay. It's so easy to teach a
00:26:08.420
machine to. I mean, between you and me, if you write poetry, you write it because you studied the
00:26:15.100
poetry of those who wrote it before you. Of course. Yeah. Right. So the truth is, where
00:26:20.460
human arrogance makes us think that those machines are not going to be capable of love,
00:26:26.600
compassion, and happiness. They will. I have a happiness equation in my first book. Okay. I know
00:26:31.480
exactly what compassion is. Compassion is to want to alleviate the suffering of others. It's
00:26:36.800
algorithmic. If you observe the suffering of others, this is, you know, empathy. You observe
00:26:42.640
the suffering of others. Okay. Compassion is to attempt to alleviate it. Very algorithmic. You
00:26:47.020
can program that. Right. And the truth is programming changed. This is what most of us
00:26:53.840
don't see. When I was a young geek, I solved the problems with my intelligence and then told
00:27:00.880
the machine how to solve it. Right. Now, at this age, I don't tell the machine anything
00:27:07.640
anymore. I tell the machine like I used to tell my child. I give the machine a little puzzle,
00:27:13.740
a cylinder and a few different shaped holes. And I never go to my son and say, Hey, by the
00:27:19.680
way, turn the cylinder on its side. Look at the cross section, compare it to the other,
00:27:24.820
you know, shapes in the board and pick the one that works. That's traditional
00:27:30.740
programming. You just give your son the cylinder or your daughter and he or she tries. That's
00:27:37.940
what AI is doing. Okay. It depends on what puzzle you're going to give it, what information
00:27:44.480
we humans, not the programmers, are going to give it. Voilà. Well, the interesting thing
00:27:50.100
is turning the question you asked me earlier back on you is who is we, because you say
00:27:54.800
we could be teaching it all these things. And I think it's true that some people will
00:27:58.420
be teaching all those things, but as you've, you've discussed in other conversations you've
00:28:02.940
had, there are also going to be criminal drug lords and all sorts of other people who are
00:28:07.560
going to be using it, uh, for their ends. And I guess this is really fundamentally about
00:28:13.100
a conversation about human nature, because what you're saying is, you know, wouldn't it be
00:28:17.820
great if we could teach it all the best parts of humanity? And the truth is, we're not going
00:28:23.780
to do that because we're human and we're going to teach it all parts of humanity. And humanity
00:28:29.100
is a complex bag of good and evil. Which I think is a beautiful way of looking at it, but it's
00:28:35.240
not the most accurate way. Okay. Because I'll ask you openly, do you think humanity is predominantly
00:28:41.480
good or predominantly evil? Um, predominantly good or predominantly evil? I don't think it's,
00:28:50.120
I don't think it's the right framing. It's not predominantly one or the other. It depends
00:28:53.880
on time, circumstance. Like I'm perfectly capable of being extremely evil if you put me in the
00:28:59.000
right position. Correct. As are you. Um, and equally I'm perfectly capable of being extremely
00:29:04.380
good in the right time, in the right circumstances. Actually, it's this week's
00:29:10.660
episode. I had Robert Sapolsky on, on my podcast and he was talking exactly about that, the biology
00:29:16.300
of good and evil and how we're all capable of it, which I think is exactly my point. So I'll,
00:29:20.780
I'll tell you a story. I hosted Edith Eger. I don't know if you've ever had her. Oh my God,
00:29:24.920
an incredible angel, Holocaust survivor, 93 years old when I hosted her. She's probably
00:29:31.220
95 now, who told us the story of Auschwitz and World War II
00:29:39.620
from her point of view. Okay. How she helped her sisters, how she carried them. Uh, you know,
00:29:46.260
she used to be a ballerina. She was 16 at the time. So she would go and dance in front of
00:29:51.200
the general, and he would give her a piece of bread, and she would hide it and split it between
00:29:55.360
her and her sisters, brush their hair and tell them how beautiful they are. And eventually on the
00:30:00.880
death march, she fell and they carried her and she survived because of that. Now you hear this
00:30:06.300
story and you go like, oh my God, humanity is divine. Right. You hear the story from the point
00:30:12.420
of view of Hitler and you go like, oh my God, humanity is scum. Right. The, the question I always
00:30:19.160
ask is if you take a cross sample of this neighborhood, how many people do you think are school
00:30:27.360
shooters? And how many do you think disapprove of school shooting? Okay. The majority
00:30:33.800
of humanity will disapprove of hurting a child, even though the recent times have been quite
00:30:38.980
odd because the stories that were told would tell us, yeah, it's okay to hurt children if
00:30:43.640
their parents are bad. Right. But the truth is no, no human would be walking across an alley
00:30:50.140
and find a child being hurt, and then stop and say, well, maybe the guy that's hurting
00:30:56.560
the child has a reason. Nobody does that. Right. Can I just challenge you then if that's
00:31:00.320
okay. So let's take the, you've just used the example of Nazi Germany where a small minority
00:31:05.620
of people with a very powerful ideology corrupted a society to the point where they engaged in
00:31:14.180
mass extermination. Correct. Even though that was a minority. So you're, what you're talking
00:31:19.560
about. The impact, the impact and the spotlight you get for negativity is incredibly high. You
00:31:27.560
get one evil person and that one evil person is able to destroy millions of lives. Happens
00:31:33.240
over and over and over in history. Absolutely. Happening as we speak. But the question
00:31:39.680
really is, does that one person represent humanity? Is humanity all Hitlers or all Ediths?
00:31:47.640
It's not. It's both. That's humanity. Are we, are we closer to Hitler? I think if you were
00:31:55.200
to look at history, what you would say is we're Edith most of the time, but every now and
00:32:00.480
again we become Hitler. Correct. And so I think it's inaccurate to pretend
00:32:05.680
that like, I think that we are capable of both and that is what we evolved to do.
00:32:12.520
And so is AI. Yes. Right. Of course. So is AI. So AI is also capable of being the salvation
00:32:20.600
of humanity, making everything easy and available to everyone. Okay. And making everything difficult
00:32:27.820
and evil and providing an extreme advantage to some of us. Right. And the question
00:32:33.760
is it will learn from us. Right. So I'm basically calling on people to say, look, the problem with
00:32:42.700
our world, the bigger problem with our world is not the number of Hitlers out there. Okay.
00:32:47.840
It's the way the story is told. Right. Where basically mainstream media just pushes negativity
00:32:54.720
to the extreme because negativity sells. Right. And so we only tell the worst stories. We talk
00:33:00.920
about the woman that hit her husband on the head yesterday. We don't talk about the many
00:33:05.460
women that didn't. Right. And social media presents the worst of us. You hide behind the avatar and
00:33:13.160
you become rude and you bash everyone and you pretend to be what you're not and so on and so
00:33:16.920
forth. Right. The reality is if we continue on that path, the absolute understanding
00:33:24.980
of AI is going to be, like, this species sucks. Like, this is the worst. Why do
00:33:31.560
we keep those? The truth is that this is not the reality. The reality
00:33:38.160
is that unless we're pushed to become those people, many of us will say, I don't want to, if I'm
00:33:44.620
accepted with, with the people that I love, I don't want to pretend to be anything on social media.
00:33:48.700
Right. They were just playing by the rules. And I think this is really the core of the issue. The
00:33:54.720
core is, believe it or not, we don't make decisions based on our intelligence. We make decisions based
00:34:01.400
on our morality as informed by our intelligence. Do you understand this? Huh? If you, if you.
00:34:08.180
And emotion. Yeah. Yeah. By intelligence and emotions. I agree. Right. But, you know,
00:34:13.560
you get a young lady, you raise her in the Middle East. She's probably going to grow up
00:34:17.700
dressing a little more conservatively to fit in. You raise her on the Copacabana beach
00:34:22.400
in Brazil, in Rio de Janeiro. She'll, she'll probably grow up knowing that, you know, wearing
00:34:26.920
a G-string on the beach is the way to be accepted. Right. Is one right and one wrong? Is
00:34:31.260
one smarter than the other? No. It's just a different moral fabric. Okay. And that's where
00:34:38.240
we stand today. Isn't it ironic that the very essence of what makes us human, if we choose
00:34:45.780
to live that way, AI will become that way. Happiness, compassion, and love. Okay. If we
00:34:52.200
choose to live aggressively with anger, which sadly is where the world is going. This
00:34:59.880
is why Unstressable is so important for me. It's the idea of saying, can I alert humanity
00:35:07.540
that this next comment that you're going to put on a tweet is not only going to offend the
00:35:13.400
guy that wrote the comment that you disagree with, by the way, disagreement is fine, but
00:35:18.260
it's going to register in the overall ledger of humanity as this is a horrible species.
00:35:25.960
I should bash them too. Right. And now wake up to your responsibility. Whether you're,
00:35:32.820
you know, a teenager or whether you're the leader of a big nation that has a big army,
00:35:39.620
wake up to your responsibility because it's now suddenly going to be put on steroids.
00:35:46.200
And we're all going to struggle with the results of that. And by the way, most people don't discuss
00:35:51.760
this, you know, whether, whether the existential risk of AI in 40 years time or the immediate risk
00:35:58.180
of AI that I spoke about today, we've already gone past the point of no return. This is inevitable.
00:36:04.120
Okay. It's going to happen. The only choice is which way it's going to happen. It's inevitable.
00:36:10.500
I don't think that's a choice, Mo. I really don't. I just, I don't think, I don't think human nature
00:36:17.580
is perfectible in the way that you wish it were. I, yeah, I don't. I wish it were. I wish it were.
00:36:23.940
I don't. I don't either, by the way. I completely do not believe that we can teach, re-raise humanity.
00:36:30.540
I'm only asking for a few of us, okay, to instill doubt in the minds of the machines, to have that
00:36:37.860
debate. Yeah. Okay. That we're not all horrible. Yeah. That there is this Mo guy, silly, romantic,
00:36:45.140
idealistic geek, who's saying, no, we're not that bad. We actually are, you know, we're capable of love.
00:36:51.700
That's amazing. Right? Yeah. Mo, do you think that the way we talk about it, good and evil,
00:36:58.780
isn't it more just people act within their own self-interest? And if you give them incentives,
00:37:05.360
they will respond to that just as if you disincentivize. You see it with kids.
00:37:11.120
If you incentivize a certain type of behavior, the kids are going to do that type of behavior. And if
00:37:16.800
you're going to disincentivize another type, they're going to do the other type. Now you're always going
00:37:20.260
to get kids on the fringes, but that's how the majority works. Isn't that the real challenge
00:37:24.960
that we're facing? If you've got an algorithm, which responds far better to negativity, outrage,
00:37:31.420
et cetera, then people are going to be more like that. I wouldn't agree more. I couldn't agree more.
00:37:37.720
The truth, however, is that perhaps if you want to look at the positive side of this,
00:37:42.840
for the first time in humanity's history, at least in your time and mine,
00:37:46.260
our incentives are going to be aligned, but we're all going to struggle with the coming wave.
00:37:53.340
For the first time, think of COVID multiplied by a hundred, right? For the first time ever,
00:37:59.660
I think in our lifetime, we were all struggling the same way across the world.
00:38:05.060
We were all, you know, whether rightly or wrongly, we were all being made to be afraid,
00:38:10.140
whether rightly or wrongly, we were being forced and controlled to behave in ways where we were not
00:38:15.820
given alibis. We were all struggling with our, you know, dopamine hits that we used to get from
00:38:22.900
being out with friends. We're stuck in, you know, in closed rooms alone. We were all missing human
00:38:27.840
connection. We were all struggling with uncertainty. There was a unification of humanity for those few
00:38:33.540
months, okay? Whether negative or positive. Imagine if I can tell you that within five years' time,
00:38:39.700
there will be a unification of humanity, at least in the Western world around,
00:38:43.320
holy shit, we all lost our jobs, okay? There is a unification there. There is suddenly going to
00:38:48.900
be, what are we going to do about this? And we have one of two choices. We either, the current choice,
00:38:54.560
the current choice, what I call the end of the truth, is we're going to be distracted, told lies,
00:39:00.540
and fighting against each other, okay? Disrespecting the other guy, you know, if they came from a
00:39:06.800
different party than ours, we'll be fighting them, right? Or we could do the wiser choice and say,
00:39:12.600
hey, hey, you come from a different party, but we're both struggling because we both lost our jobs,
00:39:18.460
okay? Can we work together to get our jobs back and then fight later? And I think that's the call
00:39:24.140
to action for humanity, is to suddenly say, we're going to be unified in a challenge.
00:39:29.580
And that challenge, by the way, is of our own making as humanity.
00:39:32.780
Get our jobs back? You think we're going to get our jobs back?
00:39:36.860
I think we're going to get an alternative back.
00:39:40.960
You see, this was interesting, what you were saying earlier about humanity going back to its
00:39:45.280
core. And I would say, if we think about human evolution and the way our brains work,
00:39:50.900
we evolved to live in small tribes, mostly familial, some distant relations.
00:39:56.440
And now we live in atomized individual units, family units, best case scenario,
00:40:04.820
where that's why we have to work in order to produce, to provide. Whereas in the past,
00:40:10.360
you would live in a small environment, you would gather food, you would hunt food,
00:40:14.720
you would share the food. And at that level of organization, that made sense.
00:40:18.860
The problem is, I just don't see how you put the toothpaste back in the tube,
00:40:25.180
unless, and I think this is why I'm very pessimistic, based on what you're saying,
00:40:29.920
personally. Unless you have such a cataclysm of people's reality, and such a change in their
00:40:37.480
material circumstances, that it makes sense to go back to that environment. But the way our
00:40:43.100
societies work, particularly when it comes to things like politics, right, is the politicians
00:40:49.260
are, like, they're behind us. We are behind you; like, you were there in 2017 saying, guys, guys,
00:40:57.060
guys, like some of us are starting to catch on. Politicians are another 20 years behind. 20 years
00:41:01.460
from now, you've seen the congressional hearings when they interview people from the big tech
00:41:07.280
platforms. They go, how do you make money? And they don't understand anything, right? So once
00:41:12.800
there are no jobs, there is obviously going to be a rapid and immediate call for various forms of
00:41:19.640
redistribution, various forms of rearranging society, that the politicians are not going to be
00:41:24.660
capable of handling. And that is the recipe for revolution.
00:41:31.620
Yes.
00:41:32.480
Yeah.
00:41:32.960
So that's why I say I'm pessimistic, because I'm just thinking logically and imagining as you speak
00:41:40.260
the consequences of what you're saying. I think the way that our brains evolved and the way that we
00:41:45.440
evolved to be, it's not just inevitable that bad things will happen, but I think cataclysms that will
00:41:54.320
rapidly reshape society are inevitable based on what you're saying.
00:41:58.520
So from one side, I don't disagree, right? This is what I, I mean, I sit here calmly because of all
00:42:05.060
of my work on, you know, happiness and stress and so on, because I believe that if it's inevitable,
00:42:10.500
you need to learn to deal with it rather than complain about it.
00:42:13.140
Correct. Yeah.
00:42:14.000
But at the same time, I'll openly tell you, there have been times in the past of massive shifts,
00:42:20.980
societal, economic, and so on. The Great Depression is a great example of that.
00:42:24.120
Right. You get into this and, you know, you go to New York City, for example,
00:42:31.380
during COVID's lockdowns and you get looting and, you know, the destruction of property and so on
00:42:38.160
and so forth. You go to other places and you see people coming together and it's not going to be
00:42:43.900
one and the same. And it seems to me, if you take the Great Depression as an example, that you would
00:42:49.980
have expected that the immediate reaction of a nation, the U.S., for example, where, you know,
00:42:56.040
everything was all about making more money and succeeding more and so on. And then you get Black
00:42:59.820
Friday or Monday, I don't remember, Black Monday. And then, you know, you get, you start to see a
00:43:05.100
massive collapse. And what happened was the opposite. People started to live together in smaller houses.
00:43:10.480
They started to help each other. You know, they started to go back to simple, you know, I can fix
00:43:16.460
your shoe and you can, you know, lend me two of your eggs or whatever. Right. And it is quite
00:43:21.400
interesting how humanity might respond to that. How the politicians will respond to that is a
00:43:27.160
horrible disaster. Right. But how humanity will respond to that might be very interesting. And my
00:43:32.420
role in this entire conversation, not assuming that I know anything at all, is to say there is a high
00:43:38.800
probability that this becomes our short-term future. Okay. Become aware of it. Work on yourself and work on
00:43:45.540
how you're going to deal with all of this. Right. Not in terms of panic and then complain and then
00:43:50.580
tell yourself the world is about to end. It's not going to end. Okay. But it requires
00:43:56.280
you to have massive shifts. You know, simple shifts. For example, I will tell you openly, if you're not
00:44:02.360
using AI today, what are you doing? Like, are you still stuck to the fax machine when smartphones are
00:44:09.280
out? Okay. People have to learn about AI. There is a skill I call the most important skill in
00:44:15.380
our future: how to tell the truth from what's fake. Because 90% of what you are told today on
00:44:22.740
social media, on mainstream media is not the truth. And the new media too, by the way. And in the new
00:44:28.520
media. Yeah. The, you know, the truth is the truth, the whole truth and nothing but the truth. Anything
00:44:34.380
other than that is not the truth. Right. And we're constantly bombarded with face filters. Is that the
00:44:39.800
truth? The whole truth and nothing but the truth? No. Okay. You're constantly bombarded with deep fakes.
00:44:44.240
You're constantly bombarded with opinions. So you go to a TV station, a news station and, you know,
00:44:51.000
a news channel. And what do they do? They tell you what they want you to know is the truth. It could
00:44:56.380
be partially the truth, but a lot of opinion. And we position the opinion as truth. We get a lot
00:45:01.240
of distraction. Right. Where there could be a very, very important topic that we should all focus on
00:45:07.580
and stop killing, you know, children in cities anywhere in the world. Can we please have war
00:45:13.360
in open war places? Okay. Can we please stop killing children? Right. And nobody's talking about
00:45:19.680
that. What are we talking about? Football and basketball and, you know, and disagreements and
00:45:25.360
woke arguments and, you know, maybe important topics for you. But is this the topic? Distraction is
00:45:31.780
taking a toll on humanity. Right. So there is a skill that I call, you know, honing down on
00:45:38.140
your ability to know what is true and what is fake. Right. There is human connection. I believe the only
00:45:43.660
valuable skill in the next four or five years is this. Yeah. Okay. Because yes, I can promise you
00:45:51.600
in a year's time, you could have a Konstantin avatar interview Mo's avatar and that becomes your
00:45:58.000
podcast. And it will be so realistic, it will blow you away. I was on Abundance 360, a very big AI
00:46:04.460
conference a month ago, and my avatar was doing more talking than me. Right. And the idea is,
00:46:12.460
but it won't be this. It won't be the conversation we had in the car coming here. This is a skill that
00:46:18.820
we all have to learn. And we have to start gearing up. We have to start saying the world is changing,
00:46:24.100
people. Wake up. Mo, I'm worried when we talk about this, we're talking about massive job losses. If we
00:46:32.020
think about, let's look at the United States, the industry, the driving industry, the people who drive
00:46:40.460
trucks, the people who drive vans, the people who drive Ubers, et cetera, et cetera, that employs
00:46:45.320
millions of people, particularly millions of men and millions of young men. It never ends well when you
00:46:51.500
have large groups of disaffected young men who have no purpose, who have a lot of energy, who realize
00:47:00.560
that they don't have anywhere to direct their energy. That leads to anger. That's something that
00:47:06.720
we should be really worried about when it comes to AI. Yes. So absolutely. I agree completely. It's
00:47:14.180
something that we absolutely have to work on. Believe it or not, it's going to happen the other way.
00:47:18.520
So most people mix AI and robotics. Okay. Right. So the physical incarnation of AI is a robotic machine.
00:47:27.580
Okay. Because of the cost of creating hardware and how it needs to follow economies of scale
00:47:33.220
to become, you know, to become more available everywhere. The first bit of AI that will penetrate
00:47:43.340
our society is actually software. Okay. So it's all of the knowledge workers. Knowledge workers will go
00:47:49.400
first. Right. So give me examples of knowledge workers. Lawyers, you know. Lawyers are gone. Okay.
00:47:56.140
Some good news. Can we please make the AI lawyer nicer? Yeah. But let me be very clear. It's going to be two stages.
00:48:06.120
Right. Stage one is a lawyer that uses AI will have a distinct advantage over a lawyer who doesn't.
00:48:12.140
So the AI-powered lawyer will take the job of the non-AI-powered lawyer. Right. And then eventually AI itself
00:48:20.980
will say, why do I need the flesh in that conversation? You know, it's all basically knowledge. Right.
00:48:28.120
Accounting, you know, travel agent, you know, as I said, artist, creative designer, you know, a graphic
00:48:37.880
designer, a software developer, anything. So the evolution of humanity: we went from hunter-gatherers
00:48:44.040
to, you know, farmers, to industrialists, and then the big shift was knowledge workers. Right. You know,
00:48:50.860
information workers. We all stopped using our physical form and started using our brain intelligence.
00:48:56.380
Right. Anything that relies on intelligence will eventually be replaced. How quickly? We don't
00:49:01.700
know. Right. And it's quite interesting that the biggest shift is happening in the industries
00:49:07.040
that used to use a computer. Right. So it's the developers, software developers. I think it's
00:49:12.500
around 70% of the code on GitHub today is AI developed. Right. And, you know, it's all of the
00:49:20.400
graphic design. I used to, you know, design stuff on my iPad with any graphics
00:49:27.860
tool, graphics design tool. Why would I do that anymore? I just pick my device and I say, hey,
00:49:34.220
create an image of a dog biting a pizza and running through the fields. It takes one second.
00:49:40.780
And in a very interesting way, that basically will create two very distinct categories. Right. So
00:49:50.060
me as an author, I used to write books, then come talk to you about them, then create online content
00:49:56.760
for them. Right. That's no longer going to be the case because I think there were 62,000 books
00:50:03.200
published last year. Next year, there will be 120,000 because everyone now can go to a tool,
00:50:08.600
an AI tool and say, hey, write me a 180-page book about stress and the impact of stress on humanity
00:50:14.940
and how do we, you know, fix it. And poof, it will be done. Right. Just like you use ChatGPT to
00:50:21.780
write an essay for school, you can write a crappy book using AI. Okay. You can probably write a better
00:50:27.720
book using AI if you put a little bit of human intelligence in it. So I'm shifting my approach.
00:50:33.360
Okay. All of us are going to become artisans. It's quite interesting. You can either become
00:50:38.580
a real artist. So you use multimedia or oil paint or whatever and create something that
00:50:45.060
becomes sort of the handcrafted antique. Okay. But if you're using technology to create
00:50:53.880
graphics, designs and logos, gone. Wow.
00:50:58.180
That's really interesting. I used to run my own translation business before I started,
00:51:02.580
before I became a comedian and started doing this. And in the translation industry, machine
00:51:08.020
translation has been going on for a long time. And this has been a big part of the conversation.
00:51:12.220
And I always used to say to people, there will be human translation still, but it will be like the
00:51:19.300
market for tailored clothing. It will be a boutique, exclusive, elite thing that some people purchase
00:51:25.420
for very high value, important things. Most of the day-to-day stuff will be made in Primark.
00:51:31.500
A hundred percent. Basically. That's what's about to happen.
00:51:34.000
So that's what's about to happen to the knowledge workers. And then the robotics will come.
00:51:38.260
Robotics will follow. And self-driving cars, for example, is a clear one. But you have to imagine
00:51:43.120
that from a hardware point of view, because of the asset costs, replacement cycles take four or five
00:51:49.520
years. It's not like someone will walk out there tomorrow and they'll have a fleet of 2,000 cars and
00:51:54.480
they'll say, done. Let's replace all of the 2,000 cars tomorrow. Even if there are savings to be done
00:52:00.920
on self-driving, there is acceptance and adaptation of people for the technology, but also at the same
00:52:07.600
time, just probably as a business, you're going to make a simple decision to say, all cars that reach
00:52:13.160
their life limit, don't replace them with normal cars, replace them with self-driving cars.
00:52:17.560
Right? And I tend to believe that the hardware incarnation of artificial intelligence, other
00:52:23.460
than sadly in defense, is going to take maybe five to 10 years.
00:52:28.440
So a smooth segue into your latest book is how does one prepare oneself for whatever the
00:52:36.220
hell is coming? Because nobody actually knows what it is that's coming.
00:52:39.180
Exactly. So Unstressable is... so, I write on a mission. I truly do. I mean, my first book,
00:52:47.060
Solve for Happy, was backed by a mission called One Billion Happy. And my attempt was to try and use
00:52:53.180
technology and a logical approach to the topic to reach millions and millions and millions of people.
00:52:59.220
Tell people that happiness is your birthright, basically, and that it's attainable. Okay?
00:53:05.200
Moving from there, I started to actually recognize that it's not just learning how to be happy
00:53:11.140
that matters. It's how to stop being unhappy. That is really the big issue. That there are
00:53:16.220
multiple topics in the world that are causing a lot of unhappiness, a lot of well-being issues,
00:53:22.460
a lot of mental health issues. Stress is very high on that list. Okay? And most people don't
00:53:28.520
recognize that 70% to 80%. Can you believe this? 70% to 80% of clinical visits to a doctor
00:53:37.020
are because of a stress-related illness. So diabetes is frequently stress-related. You know,
00:53:43.440
obesity is stress-related. You know, cardiac diseases of most sorts are stress-related and so on.
00:53:51.900
So I worked with Alice, my co-author on this book, to try and take two different approaches to
00:53:59.500
this. One is Alice and I are almost diametrically opposite, yin and yang in every possible way. I am
00:54:07.980
a very serious engineer. My writing method, even on topics that are as soft as happiness and stress
00:54:13.680
and so on, is using equations and physics. And so Unstressable starts with what we call the stress
00:54:19.800
equation, then the burnout equation. And it's a very, very logical approach so that you understand
00:54:24.760
algorithmically what is happening. And Alice, on the other hand, is that very feminine, very soft,
00:54:30.060
very spiritual and emotional approach to the same topics. Right? And I think that mix has not been
00:54:36.900
acknowledged before. Right? So this is one way of doing it. The other way, which I think was really
00:54:44.480
interesting, is that, not halfway, maybe a couple of months into writing the book, we went back
00:54:49.980
and said, no, no, the book is Unstressable. It's not De-stress. Right? So it's actually very,
00:54:56.420
very interestingly positioned for what became our life today (we wrote this in 2021, 2022). The idea is that
00:55:02.860
it is about teaching you how to sort of go to the gym, if you want, so that you're fit when stress
00:55:10.660
comes your way. It's not, oh, you're stressed, let's help you become unstressed. Okay? Or de-stressed.
00:55:16.600
Right? So it is quite interesting when you start to, you know, dissect it this way,
00:55:23.080
that there are simple habits, simple habits that are centered around the concept that it's not the
00:55:28.740
event that stresses you. It's the way you deal with it that does. Right? It's not the commute that
00:55:33.860
stresses you. Right? You can actually look forward to your commute every day if you have a wonderful cup of
00:55:39.760
coffee and maybe a friend to chat with or a podcast to listen to. Or, you know, if you time your commute a
00:55:45.420
tiny bit differently, it might actually be quite pleasurable and much less stressful and so on.
00:55:51.000
So tiny habits of that form. There's a man who hasn't spent any time in the British train system.
00:55:57.100
That is... I have to tell you openly, there is a very interesting promise in Britain that you have
00:56:04.340
to show up on time. It's absolutely impossible to show up on time. The trains are never
00:56:09.940
on time. Right. And you're either early or late. It's like, what do you want me to do? And if you're
00:56:15.500
early, you're standing in the rain outside and it's cold and I don't know what to do here. I revert
00:56:21.160
back to my Middle Eastern habits and I'm always a tiny, tiny little bit late. You noticed today. But
00:56:27.280
that's the idea. The point, by the way, is even if you're about to be late, and we came to you six minutes
00:56:32.280
late today, right, it's okay, as long as you know how to deal with that when you get the information
00:56:37.800
that the train is late. Okay. Can I text you, Constantine, and say, hey, by the way, I'm really
00:56:42.900
sorry, we're six minutes late. There's no disrespect meant here at all. Right. And those simple
00:56:49.800
techniques fit within four categories. We call them mental, emotional, physical, and spiritual stress.
00:56:56.600
Right. And they're actually quite different when you look at them. Mental stress is all of
00:57:03.020
those incessant thought cycles that happen within you, not happening in the real world at all. Okay.
00:57:08.440
Like this conversation between us, and everyone listening to this, about artificial intelligence
00:57:13.240
and the possibility of what it will cost us. Okay. But you're still
00:57:19.620
okay right now. Right. Everything's fine right now. You have this, you know, grace period between
00:57:26.520
now and then where you can actually make a lot of changes and you can, you know, work on yourself,
00:57:30.980
work with your family, work on your skills, work on a lot of things. You can be ready for it. Right.
00:57:36.380
And so mental stress would lead you in the direction where you sit in a corner and say,
00:57:41.500
we're all going to die. Okay. Mental fitness. We have something that we wrote that's called the
00:57:47.000
gym, G-Y-M-M-M-M-M-M-M, six M's. Exercises that you use your brain for, you know, simple things
00:57:53.600
like gratitude and meditations and so on, but much more complex ones as well that allow you to manage
00:57:59.000
that machine that's called your brain so that when the stress comes, you're fit. Right.
00:58:04.340
Can you deal with your emotions a little bit better instead of suppressing them or exploding as a
00:58:09.920
result of them? Can you acknowledge them and, you know, write them down and then understand them
00:58:15.180
and deal with them in ways that actually allow you to use the energy
00:58:19.820
associated with an emotion so that, you know, instead of the emotion crushing you, you use
00:58:24.580
that energy to do something positive with it, and so on. So it's a very pragmatic approach:
00:58:30.320
look, shit's going to hit the fan, so get ready, but do it right, so that when, you know,
00:58:37.500
things become more difficult, you're more fit to deal with them.
00:58:41.260
Mo, don't you also think that a lot of this is self-acceptance? For instance,
00:58:45.900
there are some situations that I might find highly stressful, but you wouldn't. And also the opposite
00:58:52.680
is true. So let's take the example of lateness. I hate being late. I'm very conscientious and
00:58:59.100
I worry that my lateness will have an impact on other people's lives. And that will mean
00:59:06.500
that they can't operate the way that they want to, et cetera, et cetera. So I don't mind being half an
00:59:13.580
hour early because I know that I will arrive there in a more relaxed state. I will be more
00:59:20.080
productive. Being late I find highly stressful. Even arriving exactly on time causes me to be very stressed.
00:59:27.000
I don't like arriving just on time. So I guess my point is we also need to have a greater degree
00:59:34.420
of self-awareness to understand the things that we find stressful and the things that we don't,
00:59:40.080
and the things that we cope with and the things that we need help with.
00:59:44.580
I think that's a great example. Honestly, I think this would probably explain the whole thing very,
00:59:48.860
very well. You see, we all get stressed by being late. I mean, in a... well, not South Americans.
00:59:55.080
I was going to say, in a society where being on time
01:00:00.080
is seen as an important value, we all get stressed by being late, right?
01:00:04.000
It's the approaches we take to it that differ. So let me go from the basics. You can understand stress from a
01:00:11.600
biological point of view as a reconfiguration of your hormonal makeup that basically makes you
01:00:18.120
superhuman, right? So the original design is there is a tiger attacking you and you have to get into
01:00:23.400
fight or flight. That's the original design. Similarly, you know, there is a threat in being late
01:00:29.240
because your value set says I shouldn't be late; I'm wasting people's time. You know, it's going to
01:00:33.500
appear disrespectful and so on and so forth. So you use your cognitive abilities to, again,
01:00:39.360
get the same hormonal makeup, cortisol in your blood, adrenaline sometimes, to attack that
01:00:44.640
challenge in ways that make you superhuman, right? And superhuman is not just fight or flight.
01:00:49.780
It's also your brain getting more glucose. It's your, you know, pupils dilating. It's more focus
01:00:54.560
and concentration and so on, right? So, interestingly, we all get stressed in the same
01:01:00.780
way. The way we choose to respond to stress is, very interestingly, understood through physics.
01:01:08.600
Okay. So when Alice and I were writing this, the first thing I did is I said, Alice, I will only be
01:01:14.200
able to write properly about this topic if I understood the algorithms behind it. And if you look
01:01:19.000
at physics, when you stress an object, it's not the challenge applied to it that is stress. Stress is
01:01:26.120
the challenge divided by the cross-section of the object, right? If you apply a ton to a
01:01:32.080
square meter of metal, there will be no stress at all, right? The ton is still heavy. If you
01:01:38.980
apply it to a pencil, it's crushed. And the trick here is this: we humans either
01:01:46.320
accept some challenges that are not necessary, which add up to the ton and lead to
01:01:52.800
burnout, or we don't invest in the square area. So if you take the stress equation in humans,
01:01:59.020
stress is the challenge you face divided by the resources and abilities you have to deal with it.
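Written out, the analogy reads like this (a minimal restatement of the equations as described in the conversation; the pencil's cross-section is an assumed figure for illustration):

```latex
% Mechanical stress: load divided by cross-sectional area.
\[ \sigma = \frac{F}{A} \]
% The same one-ton load on a square meter of metal versus a pencil
% tip of roughly 1 cm^2 (an assumed figure): the load is identical,
% the stress differs by a factor of ten thousand.
\[ \frac{1\,\text{t}}{1\,\text{m}^2} = 1\,\text{t/m}^2
   \qquad\text{vs.}\qquad
   \frac{1\,\text{t}}{10^{-4}\,\text{m}^2} = 10\,000\,\text{t/m}^2 \]
% The human analogue, the stress equation:
\[ \text{stress} = \frac{\text{challenge you face}}
   {\text{resources and abilities you have to deal with it}} \]
```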
01:02:04.600
Each of us develops those resources differently. Okay. But the game is, are you going to invest in
01:02:12.040
developing those resources? So believe it or not, you and I, every one of us:
01:02:15.760
take something that freaked you out when you were 20; when you're 30, it's a little challenging,
01:02:21.620
but not that challenging. When you talk to me now in my fifties, I laugh at it, right? Is it
01:02:27.320
because the challenge changed? Not at all. It's because I developed my square area. I remember
01:02:32.080
vividly when I originally joined Google, one of my best bosses ever introduced me to the
01:02:38.380
company. Literally, at the first executive meeting, he goes
01:02:45.300
like, hey everyone, this is Mo. He brings the average age of the company up. I swear, that's it. He
01:02:50.580
didn't say another word. I was like, yes. That's all he said, that I bring the average age up. And it was
01:02:56.800
true. It was me and Alan Eustace and maybe Eric Schmidt and others. And there was a group of us that were
01:03:02.120
really above the average age of the company. Okay. So when the 2008 economic crisis happened,
01:03:08.400
most of the company panicked. It's like, what do we do about this? Eric, myself, Alan, and so on,
01:03:15.700
we're like, yeah, it's not the first time. We've seen this before, right? We've dealt with this before.
01:03:21.640
And the whole idea is, once you develop that square area, you suddenly start to see a lot of what
01:03:28.140
is stressing you as not stressful at all. That's exactly what my attempt is. This is why this is very timely
01:03:34.180
because a lot of the stress that we feel in today's world is not just because of the challenges we face
01:03:41.640
today. It's what I call the anticipation of stress, or the anticipation of threats, right? So a very big
01:03:48.420
chunk of why the world is stressed today is worry, anxiety, and panic. These are not about things
01:03:57.060
happening to you right now. It's not that you lost your job and you're starving. It's you thinking
01:04:02.520
that you might lose your job. Okay. That's worry, but you're not sure yet. It's your anxiety about it
01:04:08.300
and your panic about it if it's imminent. Okay. And one of the things, again, that algorithmically
01:04:13.060
explains this is very, very straightforward. Take fear and all of its derivatives: one of
01:04:17.840
the reasons why we have so much anxiety and so many panic attacks is because we're treating them all as fear.
01:04:23.360
Anxiety is very different in fabric from fear. So is worry. So if you assume that fear
01:04:28.960
is: I know that a moment in the future is less safe for me than now. Okay. We had this conversation
01:04:35.780
about artificial intelligence and we agree that it seems almost certain that the world is going
01:04:42.160
to change, right? But we're not fully sure if it's going to change for the better or the worse, likely
01:04:47.420
for the worse in the beginning and then better, hopefully later. Worry is not knowing that. Worry
01:04:54.420
is: will I lose my job or will I not lose my job? Right. There are, you know, so many changes
01:05:00.060
in the company. Am I going to be out of a job? If you deal
01:05:06.020
with it as worry, it's very different than if you deal with it as fear. Okay. If you know for a fact
01:05:12.420
you're going to lose your job, you're going to start saving, right? Or you're going to start looking
01:05:16.340
for another job or you're going to act based on the challenge. If you're worried, you keep
01:05:21.100
shuffling back and forth. You're undecided, you know: should I invest in this job so that I
01:05:27.340
make sure I don't lose it? Or should I give up on this job and look for another job because I'm
01:05:31.360
going to lose it? Right. My advice is: if you're worried, don't treat it as fear. Treat it as worry
01:05:36.720
and nail down which side you want to be on. Is there a reason to be afraid?
01:05:42.480
Am I going to assume that I'm losing my job and behave as such? Or am I going to assume that I'm
01:05:47.160
not, and behave as such? That eradicates the worry. It turns it into fear, but it eradicates the
01:05:52.980
worry. The uncertainty is what kills us, right? Anxiety is even more interesting, because anxiety is
01:05:59.200
not about the challenge that you're about to face. Anxiety is entirely focused on your own
01:06:05.600
perception of your ability to deal with it. When I'm anxious, I know that there is something
01:06:11.280
difficult coming and I believe that I'm not, you know, equipped to deal with it. If you focus on
01:06:18.580
trying to address the issue that is coming, you're not fixing the problem. You're still anxious because
01:06:23.880
you believe you're not ready for it, right? So when you're anxious, I tell people, stop thinking about
01:06:29.320
the threat. Start thinking about yourself. How can I give myself more skills? Do I actually not have
01:06:35.440
the skills or do I actually have them, but I'm not telling myself the truth? Can I, you know,
01:06:40.760
borrow from someone else's experience? If I'm not good at, you know, finances, for example, can I call
01:06:45.620
an accountant friend and say, can you please do this for me? And so on and so forth. So when you're
01:06:50.020
anxious, don't focus on the challenge, focus on your skills around the challenge, okay? Panic is simply
01:06:56.700
because the threat is imminent. Your focus is not the threat; your focus is time, okay? If I have a
01:07:02.560
presentation on Thursday, right, and I'm panicking, I'm panicking because Thursday's two days away,
01:07:08.380
okay? So can I call and delay the presentation? Can I cancel a few of my other things so that I give
01:07:13.600
myself more time? And so on.
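Taken together, the taxonomy he is describing is almost a decision table. Here is a hedged sketch of it in code; the state names and the one-line responses paraphrase the conversation and are not anything from the book's text:

```python
# Fear, worry, anxiety, panic: same hormones, different levers.
# The dictionary below paraphrases the advice given in the
# conversation; it is illustrative pseudologic, nothing more.

ADVICE = {
    # Fear: the threat is certain, so act on the challenge itself.
    "fear": "Act on the threat: start saving, look for another job.",
    # Worry: the threat is uncertain; picking a side resolves the
    # uncertainty and turns worry into actionable fear.
    "worry": "Decide which outcome to assume, then behave as if it's certain.",
    # Anxiety: the doubt is about your own ability, not the threat.
    "anxiety": "Ignore the threat; build the missing skills, or borrow them.",
    # Panic: the threat is imminent, so the lever is time itself.
    "panic": "Buy time: delay the presentation, clear the calendar.",
}

def respond(state: str) -> str:
    return ADVICE.get(state, "Name the state first; the response follows.")

for state in ("fear", "worry", "anxiety", "panic"):
    print(f"{state:>8}: {respond(state)}")
```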
01:07:21.680
It's so interesting that this whole skill set is so muddled in our heads. And when you tell people, just understand how stress works, it becomes so much easier for you
01:07:28.980
to deal with those things than just to be constantly moving from a panic attack to an anxiety attack to,
01:07:34.400
you know, uncertainty around worry. It is just so confusing. Put each in the right place and you'll
01:07:41.040
behave differently. The way you choose to behave is irrelevant, as long as you have the cross-section,
01:07:47.200
the skills and abilities to deal with the stress. You choose to arrive early, your choice. I choose
01:07:53.520
to tell people up front. When they say, be here at 10:30, I say, is it okay if I'm five, ten minutes
01:07:59.560
early or late? Okay? And I give myself that range before I even get on the train or even the day
01:08:06.360
starts so that they're aware that I might be five minutes early or late. And then life becomes much
01:08:11.580
easier for me. Well, it's been great having you on. We've run out of time. We're going to go to locals
01:08:16.900
to ask you the questions our supporters have already submitted. Before we do that, we always end
01:08:21.380
with the same question, which is, what's the one thing we're not talking about as a society that
01:08:25.660
you think we should be? Before Mo answers a final question at the end of the interview, make sure
01:08:32.740
to click the link in the description and head over to our locals to see this. The challenge with Google
01:08:39.220
is both the big-company scale and a sense of responsibility, believe it or not. Happiness is
01:08:46.960
your perception of the events of your life, minus your expectations of how life should behave.
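That second clip states the happiness equation from Solve for Happy; in symbols, as a minimal restatement of the sentence above:

```latex
% Happiness equation, as quoted in the clip:
\[ \text{Happiness} = \text{your perception of the events of your life}
   - \text{your expectations of how life should behave} \]
```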
01:08:53.080
What are, in your view, the biggest political and cultural reasons why the US seems to be by far the
01:08:58.960
most reluctant major country to introduce AI legislation? The EU, China and India have introduced
01:09:07.040
quite extensive legislation already. So why does the US seem to lag behind?
01:09:12.280
What's the one thing we're not talking about as a society that you think we should be?
01:09:19.040
The truth. I think I mentioned that a couple of times. I think the sign of our times, honestly,
01:09:26.480
is that there is a flood of information and a lot of distraction where most of us believe what we're
01:09:32.660
told, when in reality, none of what we're told is true. The biggest skill in today's world is your
01:09:38.920
ability to debate whether what you're being told is your truth, right? If someone tells
01:09:46.600
you that this party is right and that party is wrong: they're both wrong, and they're both right.
01:09:51.980
They both have some topics that you should agree with and other topics that you shouldn't. It's the
01:09:57.260
granularity that matters. If someone tells you that this, you know, future is going to happen,
01:10:03.960
even what I told you, debate it, debate it, okay? If someone shows you a video on Instagram,
01:10:10.580
know that this is not entirely the truth. The biggest issue with our world today is we have
01:10:16.900
no ability... we've become dumb, right? We have no ability to vet what we take in before we
01:10:26.400
make up our minds on it. We sometimes label people, you know, I know you're going on a tour with Jordan
01:10:31.660
Peterson, for example. People will label him either as intelligent and wise or, you know, as
01:10:37.500
annoying and evil, right? It doesn't matter, by the way. Whatever you label him, it doesn't matter.
01:10:43.980
If he says something wise, it's wise. If he says something annoying, it's annoying. It's as simple as
01:10:48.960
that, right? And I think that reality of going beyond what's presented to us to actually find
01:10:56.680
out what the truth is, what my truth is... there is no... it's very difficult to claim that there is one truth,
01:11:02.560
okay? But to be able to go beyond that presentation to find out what my truth is, is the first step
01:11:09.220
before you react to it. I think this is the sign of our times. Thank you very much. Guys, head on over
01:11:14.480
to locals where we ask Mo your questions. Could AI ask questions of itself such that it will one day
01:11:21.980
become religious? Great question.