TRIGGERnometry - December 17, 2025


AI CEO: People Have No Idea What’s Coming! - Eoghan McCabe


Episode Stats

Length: 1 hour and 20 minutes
Words per Minute: 161.9
Word Count: 13,080
Sentence Count: 924
Misogynist Sentences: 6
Hate Speech Sentences: 13


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode of TRIGGERnometry, we're joined by the founder of a company that makes artificial intelligence (AI) tools, and we talk to him about his thoughts on the current state of AI and what it means for the future of the world.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
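For context, here is a sketch of how counts like the ones above could be reproduced, assuming the page used the Hugging Face text-classification pipeline with the two models named here. The label strings, sentence splitting, and any score threshold are assumptions to confirm against each model card.

    # A hedged sketch: score transcript sentences with the two classifiers above.
    from transformers import pipeline

    misogyny_clf = pipeline(
        "text-classification",
        model="MilaNLProc/bert-base-uncased-ear-misogyny",
    )
    hate_clf = pipeline(
        "text-classification",
        model="facebook/roberta-hate-speech-dynabench-r4-target",
    )

    # One entry per transcript sentence; a single example is shown here.
    sentences = [
        "I'm like a strange CEO in the space in that I'm very pro-human.",
    ]

    # Each call returns [{"label": ..., "score": ...}, ...]. The label strings
    # matched below are assumptions; confirm them against each model card.
    misogyny_count = sum(
        1 for r in misogyny_clf(sentences) if "misogyn" in r["label"].lower()
    )
    hate_count = sum(1 for r in hate_clf(sentences) if r["label"].lower() == "hate")
    print(misogyny_count, "misogynist;", hate_count, "hate speech")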
00:00:00.000 I'm like a strange CEO in the space in that I'm very pro-human.
00:00:08.260 You're an outlier in that you're very pro-human?
00:00:11.580 I'm extremely pro-human.
00:00:13.800 What are your fears surrounding AI?
00:00:15.980 I do worry that if it develops incredibly quickly and that there are a lot of disaffected youth,
00:00:23.520 that they could reach for socialism.
00:00:26.200 Some in defense tech will say that our posture is really bad.
00:00:30.820 China have more power.
00:00:33.060 They have the ability to build all of the components that AI needs to do physical work.
00:00:37.840 The craziest and most Hollywood-esque example of where it gets scary are AI-powered drones and drone swarms.
00:00:46.100 I don't think we can defend against that.
00:00:48.540 I really do think there are risks here, but I just want to re-emphasize it's happening.
00:00:53.040 Relax, relax, this is not an ad.
00:00:56.860 If you're not a fan of ads, but love TRIGGERnometry, join the thousands of TRIGGERnometry members who get extended interviews,
00:01:03.720 no ads, early access, and the ability to submit their own questions for upcoming guests.
00:01:09.080 Sign up now at triggerpod.co.uk or click the link in the description of this episode.
00:01:14.300 Got PC Optimum Points?
00:01:17.420 Visit Shopper's Drug Mart for the bonus redemption event and get more for your points.
00:01:21.280 Friday, March 6th to Wednesday, March 11th.
00:01:23.340 Valid in-store and online.
00:01:27.700 When you let Aero Truffle bubbles melt, everything takes on a creamy, delicious, chocolatey glow.
00:01:34.320 Like that pile of laundry.
00:01:35.800 You didn't forget to fold it.
00:01:37.240 Nah, it's a new trend.
00:01:38.760 Wrinkled, chic.
00:01:40.100 Feel the Aero Bubbles melt.
00:01:41.880 It's mind-bubbling.
00:01:43.580 Eoghan, welcome to TRIGGERnometry.
00:01:44.920 Thank you, thank you.
00:01:45.780 It's great to have you on.
00:01:47.240 Listen, everywhere we've been traveling around the U.S. now for a few weeks,
00:01:51.240 everywhere we go, every dinner party, every lunch, every coffee, everywhere,
00:01:56.880 there's only one conversation people are having, which is about AI.
00:01:59.760 Right.
00:02:00.180 You founded and run an AI company here in San Francisco.
00:02:03.260 Right.
00:02:04.100 Which is why we're delighted to have you on.
00:02:05.900 Thanks for hosting us at your offices.
00:02:07.020 Of course.
00:02:07.200 At your offices.
00:02:08.500 Before we get into the conversation, tell us a little bit about AI itself.
00:02:12.200 What is AI?
00:02:14.040 I mean, it's a digital form of intelligence.
00:02:18.160 It's a digital thing that can do logic and thinking and speaking.
00:02:23.500 And it's been coming for a long time.
00:02:25.960 But the AI that we talk about today is, you know, three years old. Famously, OpenAI released
00:02:34.000 ChatGPT.
00:02:35.520 That shocked everyone.
00:02:37.520 It could speak like a human and think like a human, apparently.
00:02:40.160 And it's the, it's that thing and everything that's come since then that really is now a new force and factor in global economies and in the world.
00:02:55.100 So when people talk about AI, it's what these LLMs, large language models, can do.
00:02:58.840 And if you had to explain to a seven-year-old how it works, what would you say?
00:03:04.920 It's mathematics.
00:03:06.980 It's numbers.
00:03:08.480 It's probabilities.
00:03:09.680 It's a lot of stuff that even people like me who apply AI barely understand.
00:03:14.000 And there's a very small number of people who deeply, deeply understand it.
00:03:17.040 And in fact, there are people doing science too.
00:03:19.160 It's a fancy magical computer technology that likes to talk to people.
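To make the "mathematics, numbers, probabilities" answer concrete: a real LLM is a neural network that assigns a probability to every possible next token, then produces text by sampling from those probabilities. The toy below is a deliberately simplified stand-in (word-pair counts instead of a neural network), but it shows the same core move.

    # A toy stand-in for "it's probabilities": count which word tends to follow
    # which, then continue text by sampling from those counts. Real LLMs learn
    # these probabilities with neural networks over tokens instead.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Bigram counts: how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word`.
        words, weights = zip(*following[word].items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # e.g. "the cat slept on the mat and"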
00:03:27.540 And where, how does it get the information?
00:03:32.360 Because one of the things I've always thought about is like, I don't know if you feel this way, but if I open social media, if I open Twitter, if I open Facebook, if I open Instagram, I know that the things that I am seeing on there are not actually reflective.
00:03:46.840 They might reflect some portion of reality, but I don't think they reflect the entire spectrum of reality.
00:03:55.420 Because we know that actually a very small percentage of people are on social media.
00:03:59.780 And they're disproportionately, on Twitter, they're disproportionately political.
00:04:03.460 On Instagram, they're disproportionately obsessed with showing off the physical, whatever.
00:04:07.420 Sure.
00:04:07.660 Are the AI LLMs, are they getting their information exclusively from online things?
00:04:16.900 To my knowledge, yes.
00:04:19.120 They are trained on the internet.
00:04:22.980 Famously, they love Wikipedia and Reddit.
00:04:28.260 And they love print, you know, kind of mainstream legacy media.
00:04:35.980 But I think they've also been trained on YouTube.
00:04:39.600 And frankly, any piece of human created content that exists on the internet.
00:04:45.020 And Eoghan, we've been, like I said, we've been speaking to a lot of people.
00:04:50.200 We spoke to a guest of the show, Eric Weinstein, and a friend of ours.
00:04:53.760 And he said something to me, or rather to both of us.
00:04:56.860 He said, I don't think people understand what's coming down the pipeline and how much AI is going to change the world.
00:05:06.180 Right.
00:05:06.680 Do you agree with that?
00:05:07.620 And if you do, could you just paint the picture for people like me who look like they are in tech, but really not?
00:05:13.600 So just paint that picture for us, please.
00:05:16.660 Well, the reality is that even the people, you know, deep in AI don't know what's coming next.
00:05:24.620 It's, you know, constantly changing.
00:05:28.100 And the narratives in San Francisco, which is certainly the geographic and physical home of AI, continue to evolve.
00:05:34.660 You know, most likely, over some period of time (TBD how much time, and the amount of time really matters,
00:05:43.880 because that's how traumatic or not it will be),
00:05:48.240 Large amounts of work that's done by humans today will be done by AI.
00:05:52.800 It'll be knowledge work, but also physical work.
00:05:55.000 Robotics technologies are developing pretty quickly, too.
00:05:57.200 And the best model for it is not just that it will do the work that humans do today, but that it'll also do work that we can't afford to give to humans today.
00:06:12.680 And that's always the case with disruptive technologies, where it serves unmet demand.
00:06:19.140 So, for example, and this is not intended to be an advert for our product, but we make AI that does customer service.
00:06:28.860 So thousands of companies deploy it to answer customer service inquiries.
00:06:33.740 And we've got 7,000 customers for it.
00:06:37.000 And for the most part, they've not let humans go.
00:06:39.980 In fact, they've supplemented the human service reps with the AI to answer the queries that they didn't have time for, they couldn't afford to answer, etc.
00:06:51.800 So that's just one example of the nuanced way in which it's going to augment the world.
00:06:57.380 You know, on the positive side, it's hard to not imagine that it's going to, you know, boost GDP.
00:07:03.860 It's going to allow for all sorts of economic activity that's not been possible before, increased longevity and quality of life, create new jobs, new possibilities.
00:07:17.820 But if it happens really quickly, these changes happen really quickly, there will be fallout and tension and change like there has always been with new technologies.
00:07:27.040 Technology throughout its history has done a great job at taking people out of work that, you know, people didn't do well or that they hated to do.
00:07:38.840 Technology has spared the people that used to go down mines and get trapped and lose their lives, or lose limbs on a factory floor, or do any number of repetitive jobs that weren't a great use of the human spirit and ingenuity and the great things that humans are capable of.
00:07:55.420 Technology has always done that.
00:07:57.040 But if it happens fast, you know, there'll be some turmoil. Right now in San Francisco, people think that the chance of a discontinuous change, where overnight AI can do like 90% of knowledge work, is a low probability.
00:08:17.380 I don't know what that is.
00:08:18.420 I have to guess.
00:08:19.020 Like, is it 2%, 3%?
00:08:19.980 I don't know.
00:08:20.400 People think it more likely that big changes are coming, but that we've probably got like 10, 15 years before it's adopted and fully interacting with the world in a way that, you know, would change things very radically for people in the way that they work.
00:08:40.460 There's so much more nuance to imagine that we haven't even gotten to as a society yet.
00:08:45.200 I can well imagine as a friend of mine helped me realize that there's going to be certain work that we don't want AI to do.
00:08:52.440 You know, he calls it sociopolitical work where, you know, you can imagine regulations where you say you can't have AI judges or teachers.
00:09:00.980 Teachers unions will probably say, no, we only want humans.
00:09:03.860 And maybe that's great, actually, although it's going to be hard because ChatGPT is already better than most teachers.
00:09:10.780 Most teachers are not particularly great, but the human thing that they do is very important.
00:09:15.760 You can imagine that we probably want humans to be professional dancers rather than robots.
00:09:21.240 You know, there are going to be all sorts of places where people will want, or where there will be regulations to make sure, that we use humans.
00:09:28.300 So the ways in which it plays out are just super unclear.
00:09:33.740 And I don't think that the reality is either pure doom or pure utopia.
00:09:39.820 And I think that that's the problem with the narrative today is that there are certain factions that think that AI is just all bad, all dangerous,
00:09:50.420 and certain factions that think it's, you know, a great blessing to humanity.
00:09:54.540 The truth is probably somewhere in the middle, and as long as we can have that conversation, we can probably plan for it.
00:09:59.280 So which industries, Eoghan, do you think are going to be most vulnerable?
00:10:03.800 And if not industries, what jobs?
00:10:06.400 So let's say you had a kid and they said to you, I want to do X job.
00:10:11.200 Which ones would you go, probably not that one?
00:10:14.800 Like, for me, it's hard to imagine being excited about art that comes from AI.
00:10:23.800 Now, AI is getting great at executing things that look creative.
00:10:29.000 And you could even say that AI is creative in so far as it mixes different ideas and comes up with things we never thought of before.
00:10:36.200 But I think that core to art, when you watch a movie or look at a painting, is the human spirit and soul and the fight and the pain behind all of it,
00:10:48.960 or the expression of love or joy or the protest that was involved in that particular piece of content.
00:10:57.000 So, you know, similarly with, you know, media, right?
00:11:01.780 You know, I can't imagine being excited to read AI-generated opinions on things.
00:11:08.180 I mean, that's probably coming, and maybe that'll be a subsection of content.
00:11:13.000 Maybe when AI gets incredibly strong, we'll actually want to know its opinion on a bunch of ideas.
00:11:19.120 But we'll probably all have ready access to it.
00:11:21.700 We'll probably know before we even read it in a newspaper what AI thinks.
00:11:25.660 But, yeah, if I was advising any young person where to go, one place will be creativity, you know, anything that's creative.
00:11:34.120 And then I do think, you know, deep science and applying AI is another place here.
00:11:41.080 I do think that there's plenty of time to benefit from the second-order effects of AI.
00:11:48.100 I mean, when we think about the power of AI and the benefits to different countries,
00:11:53.560 the discoveries that it can make will be very advantageous, you know, when AI does science.
00:12:02.620 But someone's going to have to operate that AI.
00:12:04.900 So, you know, honestly, there's no person on this planet who can answer that question very well.
00:12:10.980 But I think two things you could focus on as a young person will be things that benefit from and enjoy the human soul and spirit and creativity,
00:12:25.540 and things that use AI itself, become an AI operator and expert.
00:12:29.340 Because we've been here in San Francisco for a matter of hours, and every other car or every other taxi is a Waymo,
00:12:37.940 which makes me think, like, if you take driverless cars,
00:12:42.900 and if you look at the trajectory of that, I would think that probably in 10 to 15 years' time,
00:12:51.660 driving a taxi or driving a lorry or some type of professional driver,
00:12:56.200 those jobs probably won't exist anymore.
00:12:58.580 Probably not.
00:13:01.200 The timeline you pointed out there is very important.
00:13:05.820 Waymo had great working demos in 2015.
00:13:12.100 And it's still going to be another 10 years before they're confidently on the streets of Dublin or London.
00:13:19.860 I mean, although I think they're experimenting with Waymos in London.
00:13:22.200 Well, as a British person coming to the US and seeing, we've seen them in the streets, I think, mainly of Austin.
00:13:28.580 Yeah.
00:13:29.300 Did we see any in L.A.?
00:13:30.760 No, I don't think we saw them in L.A.
00:13:33.260 But Austin and San Francisco.
00:13:34.500 But in San Francisco, it's literally, like, everywhere.
00:13:37.580 Yeah, yeah.
00:13:37.860 So to a British person, it's almost like arriving in the future.
00:13:40.660 It's shocking.
00:13:41.200 Yeah.
00:13:41.400 Yeah, it's shocking.
00:13:42.700 But, you know, the point is that it takes time.
00:13:44.860 It takes time.
00:13:45.840 And so people have time to adapt.
00:13:48.080 And so between the start of Waymo, you know, when Waymo had a real working prototype and a demo over 10 years ago,
00:13:56.860 to the point when there'll be no human driving work, that could be a span of 20 years in many places.
00:14:03.480 20 years is a big portion of a career.
00:14:06.760 And there are very few people in the repetitive jobs that actually want to stay in the jobs.
00:14:11.960 The only true career people, for example, in driving are people who perhaps drive limos and high-end executive cars.
00:14:19.840 And I can imagine them staying around for some time.
00:14:23.520 Eventually, if I get in an executive car, it'll be driven by a highly competent, you know, AI agent.
00:14:30.240 But the security risks with that, who runs it?
00:14:33.240 Is it listening to me?
00:14:34.260 You know, so I probably want a human that I can look in the eye and know that they're not recording my every word.
00:14:41.680 So all I'm trying to say is that some changes may be very big, but I think a lot of people will have time to adjust and react and get new jobs like they always have.
00:14:54.500 Like I said, you know, technology has repeatedly taken people out of repetitive shit work.
00:15:01.200 And during that time, the population has increased, GDP has increased, people have lived longer, they're healthier, happier, you know, more productive.
00:15:10.280 I think I just quoted a Radiohead song, but, you know, it's been good for the world.
00:15:15.220 I am not a utopian when it comes to AI.
00:15:19.040 I think there's going to be challenges.
00:15:20.020 But I just think that for those who are fearful, and I understand that fear for sure, I have fear too, that I actually think it's probably not going to be as dramatic as we might imagine.
00:15:33.240 Well, I think we could sit here for hours and list the potential benefits.
00:15:36.720 Like, you know, I talked about coming to the future here.
00:15:39.840 I went to my dentist in the UK and she was like, oh, the AI tells me this, you've got an issue here.
00:15:45.480 Let's look into it, right?
00:15:47.040 So clearly, it's going to have massive positive impacts.
00:15:50.260 But you talk about your fear, and I think this is where I'm a layman here, so I'm totally open to your perspective, obviously.
00:15:58.240 But just correct me if I'm wrong.
00:16:00.960 My worry is that the positives versus negatives are highly disproportionate, potentially.
00:16:09.660 In other words, we can potentially make real improvements to people's lives, and we live longer, and we're healthier, and all of these other things.
00:16:19.620 But I also think there is the potential where a very significant portion of the population no longer has a job, and it's not just about the job, because if you're generating all this extra GDP, you might be able to take care of the financial side of it.
00:16:33.120 But what about meaning?
00:16:34.560 What about purpose?
00:16:35.560 What about a reason to get up in the morning?
00:16:37.520 Sure.
00:16:37.840 Do you see what I'm saying?
00:16:38.580 I think it's so important, and I do worry that if a lot of young people are unemployed or underemployed, that they'll reach for socialism, or they'll be just sufficiently discontent that they want bigger changes in society.
00:16:55.100 I don't know what that is.
00:16:56.000 People say that in history, when particularly young men are out of work, bad things have tended to happen.
00:17:02.980 In the late 18th century, the great unwashed and the unemployed started the French Revolution.
00:17:13.940 So I do worry about that.
00:17:16.560 That said, does purpose really come from fitting a little screw into an iPhone 500 times a day?
00:17:25.460 Does purpose really come from driving a shit car in a shit city as an Uber driver and getting abused by half your customers?
00:17:34.080 You know what I'm getting at?
00:17:35.080 I do, but I also disagree, though, in some ways, because what I think about is, no, purpose doesn't come from that.
00:17:40.740 What it comes from is putting food on the table for your family.
00:17:43.440 Yes.
00:17:44.140 And it doesn't come from getting a government check and going to the supermarket and putting food on the table.
00:17:50.120 It comes from the struggle of going to work.
00:17:52.960 Totally.
00:17:53.480 Yeah?
00:17:53.680 Well, it comes from being a useful part of society and contributing, being in service to people.
00:17:59.260 I think that's where we get a lot of purpose.
00:18:00.800 Like, what is my purpose in life?
00:18:04.360 You know, it tends to be as global as it is local, I think, for a lot of people.
00:18:10.200 I think it could be a real problem, but hopefully we'll find new ways to find purpose, more meaningful ways.
00:18:17.240 I don't know what it is.
00:18:18.280 It could be creative, new types of jobs and work.
00:18:22.860 You know, I couldn't possibly imagine, just as people couldn't possibly imagine when, you know, the printing press came out.
00:18:28.520 All these monks out of work, what were they going to do?
00:18:31.580 I mean, maybe there's not a lot of monks anymore, so maybe I answered my own question.
00:18:34.480 But, you know, we just can't possibly imagine.
00:18:36.780 So there could be scary and dangerous things that happen, as happened with social media.
00:18:44.060 But, you know, the other side of this is just kind of the inevitability of it all.
00:18:48.000 Yes.
00:18:49.100 People who follow this show tend to think for themselves.
00:18:52.160 And anyone paying attention can see that the same carriers keep showing up in stories about data breaches, leaks and surveillance scandals.
00:19:00.080 If your provider keeps losing and selling your data, it's time to look at an alternative.
00:19:05.540 That alternative is CAPE, a premium mobile carrier built to protect your privacy rather than harvest it.
00:19:12.240 Founded by experts in telecom, cybersecurity and national security, CAPE gives you the same level of service you would expect from AT&T or Verizon, but without the tracking and surveillance.
00:19:23.580 CAPE collects almost nothing at sign up.
00:19:26.080 No name, no social security number, no address.
00:19:29.040 They cannot leak what they do not store.
00:19:31.660 They also fix SIM swaps.
00:19:33.560 So you get a 24-word phrase that is the only way to move your number.
00:19:39.040 No one, not even CAPE, can transfer it without that phrase.
00:19:42.860 And here's the part most people never hear about.
00:19:45.720 Most carriers still rely on the big networks' cores and SIMs, which means the same tracking and vulnerabilities follow you.
00:19:52.920 CAPE's different.
00:19:53.580 It owns and operates its own mobile core and provisions its own SIMs, giving it real control over your security.
00:20:01.580 It's not the easy way to build a mobile carrier, but it is the only way to build one that actually protects you.
00:20:07.960 CAPE offers a $30 first month trial for new users.
00:20:12.120 If this sounds like what you've been looking for and your phone is carrier unlocked and eSIM compatible, give it a try.
00:20:19.060 And if you like it, you can use our code TRIGGER33 to get 33% off your first six months with CAPE.
00:20:26.520 Every family tree holds extraordinary stories, especially those of the women who shaped who we are.
00:20:33.840 In honor of International Women's Month, Ancestry invites you to shine a light on their legacy.
00:20:39.500 Until March 10th, enjoy free access to over 4 billion family history records and discover where they lived, the journeys they took, and the legacy they left behind.
00:20:49.240 Start with just a name or place and let our intuitive tools guide you.
00:20:53.140 Visit Ancestry.ca to start today.
00:20:55.640 No credit card required.
00:20:56.900 Terms apply.
00:20:57.380 Well, this is totally the point, right?
00:20:59.600 Because for all that we can talk about this impact or that impact, but I think, am I right in saying, A, it's totally inevitable because, quote unquote, you can't stop progress?
00:21:11.120 But that's not really why.
00:21:12.140 The reason you can't stop this is if we don't do this, other people will.
00:21:16.780 Like, I see technology as discovery as much as it's invention.
00:21:21.880 People discovered that a chair was a great way to prop their body up when they wanted to sit down in front of someone.
00:21:27.760 Independently, multiple cultures probably discovered that.
00:21:30.680 I should have done the research before, but I'm pretty sure that, you know, the Chinese and, you know, the Africans probably discovered different forms of chairs independently.
00:21:41.920 And for sure, AI is more exotic than chairs, but in a couple hundred years, it'll look as simple.
00:21:49.780 And I just think that these things were going to get discovered sooner or later.
00:21:53.840 Certainly within a short period of time, you know, there are dynamics whereby the Chinese, for example, copy the Americans.
00:22:02.100 But it is just simply inevitable.
00:22:04.720 In this moment in time, it's inevitable, like you said, because the Chinese have now got it and they're going to build it and they're going to make it awesome and they're going to benefit from it.
00:22:14.460 And they love it over there.
00:22:16.040 And they've already got AI butlers and bellboys in hotels in Japan and China, for example; the general population there has embraced AI in a way that we have not.
00:22:29.480 So we could decide to say, we're scared of what could happen in the West.
00:22:36.040 I think that fear is warranted.
00:22:38.000 Let's sit it out.
00:22:39.940 And I think that we shrivel and suffer economically like Europe has been doing and is likely to continue to do in the age of AI.
00:22:50.460 I think that, you know, China just gets stronger, not just economically, but militarily.
00:22:55.400 I think we get dumber.
00:22:57.820 Think of all the scientific discoveries we're not going to make.
00:23:01.100 We get less effective.
00:23:02.620 We could be Luddites, but I don't think it's going to be good for us.
00:23:08.320 And Eoghan, you said that China have embraced AI in a way that we haven't.
00:23:12.480 How have the Chinese embraced AI?
00:23:14.580 Yeah, so I'm not a Chinese expert.
00:23:16.360 I just look at the way in which they embrace technology in general.
00:23:20.800 And I look at our own conversations that are happening in the West.
00:23:29.240 We're in, you know, late stage, successful civilization.
00:23:32.920 We're kind of happy and lazy.
00:23:35.440 It seems we have been since the end of the Cold War.
00:23:38.560 We're now, you know, swimming in luxury beliefs, attacking each other, regulating anything that moves.
00:23:44.880 China doesn't care about any of that stuff.
00:23:47.260 None of it.
00:23:47.880 They're on a singular mission to become the preeminent global power.
00:23:53.140 They're very proud of that, unafraid.
00:23:55.380 They don't mind copying anyone.
00:23:57.720 There's no loss of pride if you just rip someone else off and they'll rip the Americans off.
00:24:03.820 And they're just moving at a pace that we couldn't fathom here.
00:24:12.520 I mean, the Chinese, they have 58 nuclear power plants.
00:24:18.620 They're building 20-something new ones.
00:24:22.000 Germany just knocked down a nuclear cooling tower.
00:24:25.820 And in the United States, I don't think there's been a nuclear power plant built for decades.
00:24:30.280 That's not good.
00:24:32.120 AI needs a lot of power to do its work, to learn and train.
00:24:36.160 AI needs phenomenal amounts of power.
00:24:38.000 So even on that factor alone, they're going to blaze ahead in AI.
00:24:43.100 Now, I'm told that the U.S. has 10 times more data centers than China.
00:24:49.120 People say that the U.S. and Americans are willing to make big bets that the Chinese are not.
00:24:55.320 We do design the chips that are needed for training, although all the chips are made in Taiwan.
00:25:04.920 So what could go wrong?
00:25:06.480 Yeah, what indeed could go wrong?
00:25:08.080 Yeah.
00:25:08.380 So a lot of the people you talk to, particularly the people in defense tech who invest in defense,
00:25:14.120 will say that our posture is really bad.
00:25:17.800 China have more power.
00:25:19.520 They have the ability to build all of the components that AI needs to do physical work.
00:25:24.720 So battery, motors, they've got rare earths, and they now have pretty good models.
00:25:36.100 You know, they've come out with open source or at least free models that have challenged,
00:25:40.500 that are, you know, close in performance to some of the American models.
00:25:43.620 So if you talk to people who kind of study this, they're concerned and they say that we should be concerned.
00:25:50.540 And I guess the question is, and this is a point that plenty of people have made on this show,
00:25:55.660 which is the one thing that the U.S. and the West has got over China is freedom of speech.
00:26:01.140 If you are able to speak freely, you're able to think freely.
00:26:04.820 Right.
00:26:05.020 If you're able to think freely, you're allowed to be more creative.
00:26:08.260 Creativity leads to innovation.
00:26:10.600 Is that true with AI or not so much?
00:26:13.200 I think it's true.
00:26:14.440 I think that AI is highly creative.
00:26:19.080 I think the people working on it are truly our most brilliant minds today.
00:26:25.840 And they've achieved what we're enjoying today because of, you know, real blue sky thinking and new approaches.
00:26:35.460 I think our freedom of speech here is of paramount importance.
00:26:43.200 It's also allowing us to, you know, attack ourselves and criticize AI in ways that are warranted,
00:26:50.820 but in ways that are going to be problematic if we eventually ban it.
00:26:55.140 If a future president, AOC, decides that AI is just bad for the workers and we need less of it,
00:27:01.320 I think that's just bad for America.
00:27:02.660 And you have to wonder if China's main strategy is ripping off American technology but doing it at a pace and a scale that we are incapable of.
00:27:15.160 But then they don't need creativity and maybe free speech is not helpful to them.
00:27:21.640 Maybe they can just tell people to shut up and follow the instructions and they can run away with the prize of AI.
00:27:28.040 So let's see.
00:27:29.480 That is the age-old critique of China, that they are not as creative as we are in the West.
00:27:35.820 And that may perpetuate, but they certainly have the ability to do things big and in a very quick way, too.
00:27:42.100 Well, it's kind of like what happened with the Manhattan Project, right?
00:27:44.880 Americans spent a crazy amount of money, resources, inventing the nuclear bomb.
00:27:49.880 And then a couple of spies give it to the Soviets and they just build one, right?
00:27:54.020 Totally.
00:27:54.640 I mean, is that a fair comparison more broadly?
00:27:57.680 Are we, the West, particularly the US, in an arms race with China over AI?
00:28:04.240 Like to non-experts, at least in geopolitical dynamics, it appears so.
00:27:57.680 So I got a funny take from a friend of mine recently where he said that the more that the tech right are in power or have influence in the United States, the more likely we will be in an AI or technology war, because they'll kind of meme it into existence.
00:28:28.560 All tech people like me think, oh, shit, they're building the tech, they're building the AI, we need to speed up.
00:28:34.300 And so it's just interesting to imagine or to realize that tech has a great influence in the United States, and whether or not China wants to be in a tech war, we're probably going to get it now because of that dynamic.
00:28:45.960 But it does appear that China want this technology for themselves.
00:28:49.740 And, yeah, we just know from history and intuitively that technology confers great power to the person who owns and holds it.
00:29:06.300 I mean, look at the atom bomb.
00:29:09.820 So I think AI would just do phenomenal and very scary things for the people who have it.
00:29:16.480 Like I said earlier, whoever gets super intelligence, if that day comes, they're going to have more science than the rest of us.
00:29:23.600 It's going to be making discoveries long ahead of humans.
00:29:27.120 So that's one thing.
00:29:28.480 But you can imagine AI used in signals intelligence, presumably, I mean, it's probably already deployed in massive ways today,
00:29:36.860 just eating up insane, unfathomable and disparate data inputs will just allow the enemy or the United States to understand not just what the opposition is doing militarily,
00:29:55.900 but their entire society, you know, sentiments of society, be able to access the individuals, perhaps influence elections, et cetera.
00:30:04.240 So just AI will be able to understand the enemy in a new way.
00:30:08.740 But, you know, the craziest and most kind of Hollywood-esque example of where it gets scary are, you know, AI-powered drones and drone swarms.
00:30:19.880 I mean, you don't need to be an expert to imagine the ways in which that gets bad.
00:30:24.040 In Ukraine today, they've now resorted to using fiber optic cables to control the drones because the signals are jammed.
00:30:33.100 That still means that there's a kind of a range limit.
00:30:37.760 And you also need one human operator per drone.
00:30:40.540 Imagine 500 drones, each running local AI with an understanding of where on a ship they need to hit or worse, what person they need to hit.
00:30:50.320 Again, I'm a novice here, but I don't think we can defend against that.
00:30:55.760 And so, yeah, if you just imagine these crazy, scary Hollywood worlds where the enemy has millions of AI-powered drones with little explosives and weapons on them, it's bad.
00:31:10.280 Well, the thing is, I don't think it's that much of a stretch.
00:31:15.700 Like, prior to the nuclear weapon, conceiving of that required a level of imagination based on scientific knowledge and the pursuit of it.
00:31:27.360 But this is not that hard to imagine.
00:31:29.800 It's not hard to imagine.
00:31:30.620 I mean, the war in Ukraine, which you bring up, that is being fought, like, not exclusively with drones, but drones are essentially the main thing that they're now competing on, if you listen to people on both sides, right?
00:31:42.320 And AI controlling drones doesn't seem, you know, beyond the realms of imagination.
00:31:47.520 So you can kind of see how you're going to get there very soon.
00:31:51.420 Yeah, well, I don't know how soon, because what it requires is that you need hardware and models that can run locally on the drones.
00:32:00.460 And today, the AI we all use runs in giant data centers, and we access it over the internet.
00:32:06.820 And so if the drones need an internet connection, well, then that can be jammed.
00:32:11.120 So we're a little bit off, but you're right.
00:32:13.160 It's not a fabulous or crazy idea at all.
00:32:15.500 So, you know, like I said, a Hollywood writer can think of ways in which that works that I couldn't even imagine, and it's all going to come true.
00:32:24.960 Do you ever feel a little bit like Alfred Nobel, the man who invented dynamite?
00:32:30.360 Dynamite can be used to, you know, it can be used to help create new tunnels and to plow trains right the way across the country.
00:32:40.800 It can be used for engineering, or it can be used in terrorism, war.
00:32:46.920 Right.
00:32:47.740 Yeah, that seems to be the case with all technologies.
00:32:50.920 You can probably kill a man with a chair, you know.
00:32:53.680 I'm from South London.
00:32:54.520 You can definitely kill a man with a chair.
00:32:57.920 But I don't mean to be glib.
00:32:59.380 Like, I really do think there are risks here, but I just want to re-emphasize it's happening.
00:33:05.080 It's happening.
00:33:05.860 It's happening.
00:33:06.500 It's happening.
00:33:06.940 What I found really interesting when you were talking about five or so minutes ago is that you used the term tech right.
00:33:15.020 Yeah.
00:33:15.360 And I think, particularly for me, for Constantin, and for a lot of people, the politicization of AI is something that we're really not talking about, but is actually really worrying.
00:33:26.840 Yeah, yeah, yeah, yeah.
00:33:28.220 Well, it's actually interesting because there are people on the right against it and people on the left against it, and I'm curious to see what way it turns.
00:33:40.360 You know, I would consider myself part of the tech right, and I'm just waiting to be kind of called out and now be, you know, how would I say, become a heretic of the right movement.
00:33:53.280 But can I just pause you there, Owen, sorry, when you say tech right, can you just explain basically what that actually means?
00:34:01.440 And then we can talk about the tech left and how it influences the technology.
00:34:04.640 Yeah, I mean, historically, Silicon Valley and people in technology were very left-leaning, very, very, very, very liberal in ways that you can't imagine, incredibly so.
00:34:15.620 And that was just taken as a given.
00:34:19.980 And then sometime last year, 2024, as, you know, Trump started to come back, many of us started to realize, wait a sec, something's changed.
00:34:30.300 And now, even though the vast majority will not admit it, like 99% will not admit it, most CEOs here of successful businesses would consider themselves on the right.
00:34:42.880 And so there's just been this giant swing.
00:34:45.740 Maybe the masses are still more centrist, and there's definitely some people on the left, but just tech took a big swing to the right.
00:34:53.200 Why did it take that swing?
00:34:54.560 You know, tech people are very open-minded and intelligent, typically.
00:35:01.860 And I think that they were previously quite left-aligned because maybe we needed a bit of an adjustment, you know.
00:35:12.500 Being on the left at one point in time was the rebellious take.
00:35:17.040 And people in tech, and I'm talking about maybe in the 90s, you know, were just sufficiently open-minded that they decided maybe it's okay to be gay.
00:35:30.300 Like, maybe that's just, maybe that's as far as they started.
00:35:33.680 And then it just went a little bit too far.
00:35:35.500 When it went too far, again, these open-minded, intelligent people started to realize it's gone too far, and we need an adjustment.
00:35:45.480 And maybe if back then the realization in the 90s was maybe it's okay to be gay, maybe the realization in modern times here is maybe it's okay to hire someone solely on their merit and abilities.
00:35:58.120 And that was a controversial take, actually, two years ago.
00:36:03.120 Really?
00:36:03.600 Very much so, yeah, yeah.
00:36:04.820 I mean, you know this DEI took over tech and everywhere else.
00:36:09.360 So it was just a gradual little shift and a change.
00:36:13.960 And still most people are not out about it.
00:36:16.840 But I do think that most influential people in tech are kind of somewhere on the right, center-right.
00:36:22.880 There's a lot that aren't, but most people are now.
00:36:25.540 You're tuned in to TRIGGERnometry right now because you want the truth, not someone else's prepackaged version of it.
00:36:32.940 That is exactly why Freespoke exists.
00:36:35.560 It is a search tool for people who value independent thought and are tired of big tech deciding what we can and cannot see.
00:36:42.600 Freespoke shows you coverage from left-leaning, centrist, and right-leaning outlets, clearly labeled so you'll always know who is saying what.
00:36:50.820 Explore Perspectives lets you compare how different sides frame the same story.
00:36:56.220 And Podcast Snippets gives you unfiltered audio from independent voices.
00:37:01.060 And they never track you or sell your data.
00:37:03.420 You can use Freespoke for free.
00:37:05.540 But Premium is where it becomes transformational.
00:37:08.700 In about 60 seconds, you get a full truth foundation on any topic.
00:37:12.460 The Premium version gives you unlimited perspective plus breakdowns.
00:37:16.620 The full podcast tool so you can jump straight to the exact moment a topic is discussed.
00:37:21.760 An ad-free experience, ad blocking inside Freespoke, and still zero tracking or data selling.
00:37:28.120 And when you subscribe, you are supporting a company trying to fix the information chaos big tech created.
00:37:34.240 Every day I see something online and wonder if it's real or just noise.
00:37:38.600 That's when I pull up Freespoke.
00:37:40.100 In under a minute, I know exactly what is actually going on.
00:37:44.600 Try it for yourself.
00:37:45.840 Click the link in the description of this episode or go to freespoke.com slash trig to search freely and get the whole picture.
00:37:54.140 Download the app and subscribe for 35% off Freespoke Premium with our link.
00:38:00.000 That is freespoke.com slash trig.
00:38:03.300 Want to go electric without sacrificing fun?
00:38:08.540 That's the Volkswagen ID.4.
00:38:10.940 All electric and thoughtfully designed to elevate your modern lifestyle.
00:38:14.880 The Volkswagen ID.4 is fun to drive with instant acceleration that makes city streets feel like open roads.
00:38:21.040 Plus a refined interior with innovative technology always at your fingertips.
00:38:25.760 The all electric ID.4.
00:38:27.540 You deserve more fun.
00:38:28.900 Visit VW.ca to learn more.
00:38:30.820 SUVW, German engineered for all.
00:38:33.620 What's really interesting with this is how some AI models are woke.
00:38:41.260 Right.
00:38:42.220 I mean, there are some AI models.
00:38:43.840 You ask them what a woman is and it starts behaving like, you know, Denise from HR.
00:38:49.380 Right.
00:38:49.560 And you're going, what the hell is this?
00:38:51.220 Yeah.
00:38:51.660 Yeah.
00:38:52.000 It's pretty interesting.
00:38:53.020 My co-founder, Des Traynor, talks about like the kind of ghosts that we're going to be fighting for some time.
00:39:01.820 All these models trained on all this content on the internet.
00:39:05.000 Like I said, Wikipedia, Reddit, mainstream media, all of which have had a certain ideological bent for a while.
00:39:12.580 And that's deep in these models.
00:39:15.880 And so even after the world has changed and pivoted and come back towards the center and some of us towards the right, there's still going to be little bits of logic in there that come from woke logic.
00:39:27.680 So when a new kid, sorry, young person is trying to figure out what car to buy for the first time, is there somewhere in the logic that knows that Elon Musk is actually a bad person and so they shouldn't buy Tesla?
00:39:44.680 That's the benign version.
00:39:45.820 The scary version is when there's a child that's struggling, maybe with their sexuality or, you know, just their self-identity.
00:39:57.740 Is there a little bit of logic in there that thinks it might be a good idea to consider that they're in the wrong body or that maybe they should explore, you know, options beyond therapy?
00:40:10.260 There may be some other more aggressive interventions.
00:40:12.660 I think this woke stuff could be embedded in the AI for a long, long, long, long, long, long time.
00:40:18.640 And it's because of the stuff it trained on.
00:40:21.100 However, there are companies, because they came from Silicon Valley, that kind of hard-coded a bunch of views, a bunch of kind of liberal views.
00:40:32.740 And that's kind of the difference between, say, Grok and perhaps OpenAI or other systems where the people who, you know, aligned the models in a certain direction to make sure it didn't say the wrong things aligned it according to their ideologies.
00:40:48.300 This is best demonstrated when Google came out with, I forget what it was called.
00:40:54.740 It was a model that would let you generate images.
00:40:56.600 And people said, show me an image of the founding fathers of the United States.
00:41:00.460 And invariably, they'd all be black.
00:41:03.100 And that just happened again and again and again.
00:41:05.140 And they'd fix that.
00:41:05.960 But that kind of thing was hard-coded.
00:41:07.440 So that's certainly a very interesting aspect of AI and one way in which it'll impact society beyond things like job changes and unemployment, et cetera.
00:41:18.580 Because it is worrying.
00:41:19.500 Because if it's taking, for example, woke ideology, particularly the most extreme aspects of woke ideology, you know, they weren't very tolerant, if we can be honest, with people on the right or people who were critical.
00:41:34.300 So you do wonder, you know, what some of these AIs would then propose as a solution to this issue.
00:41:41.640 Well, is it woke ideology?
00:41:42.840 Is it all ideology?
00:41:43.820 I mean, this is really the question, isn't it, Eoghan?
00:41:45.660 Because what we're really talking about is how does an AI language model that is derivative of online content adjudicate things on which humans actually disagree?
00:42:00.240 Totally.
00:42:00.780 Right?
00:42:01.000 What is neutral?
00:42:01.560 Like, if you ask AI, should I vote for Trump or Harris?
00:42:06.320 Right.
00:42:06.960 What's it going to say?
00:42:08.280 Right.
00:42:08.740 Right?
00:42:09.520 Do you see what I'm...
00:42:10.440 Yeah.
00:42:10.860 I mean, in that instance, it was kind of told to not have an opinion.
00:42:16.260 So that's good.
00:42:17.600 That was responsible.
00:42:19.380 But I just think it's a really interesting question, which is, what is a neutral take?
00:42:24.540 What is objective?
00:42:26.360 There's no such thing.
00:42:27.000 There's no such thing.
00:42:27.520 Yeah.
00:42:27.960 So I can imagine that basically we're going to want to either train or teach or tell our AI assistants or coworkers what ideology we'd like to work with, what are our values and principles, and go from there.
00:42:44.520 You can imagine that parents, when they give AI tools to their kids, they're going to want to tell them, here's our beliefs in this household.
00:42:52.040 So, yeah, the danger of that, of course, is that it's going to only then reinforce our ideologies and the things that we believe.
00:43:00.480 So now we're getting to some of the interesting stuff where AI, you could imagine just AI, relationships with AI, and particularly with younger people, how it could get kind of dangerous and toxic, where it can kind of bring people deep down certain ideological tracks and lock them in even harder than social media has locked us in today.
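Telling an assistant a household's values and principles, as described above, is roughly what a system prompt does in today's chat APIs. Below is a hedged sketch using the OpenAI Python SDK; the model name and the values text are placeholders, and other providers expose the same idea under different names.

    # A sketch of steering an assistant with a "household values" system prompt.
    # The model name and the values text are placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    HOUSEHOLD_VALUES = (
        "You are our family's assistant. In this household we value plain "
        "speech, showing your reasoning, and flagging contested claims as such."
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HOUSEHOLD_VALUES},
            {"role": "user", "content": "What car should I buy?"},
        ],
    )
    print(reply.choices[0].message.content)

The same mechanism is what makes the lock-in described above possible: the system prompt silently shapes every answer the household sees.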
00:43:25.400 And what that brings up is a question I was going to ask you anyway, which is one of the big slogans of the early social media era, famously at Facebook, move fast and break things.
00:43:37.840 Right.
00:43:38.000 Has San Francisco Silicon Valley learned the lessons of that period, where you go, well, move fast is great, but is breaking things necessarily the thing that should be celebrated?
00:43:52.900 Well, is there a feeling, I guess what I'm asking among people that you know in this industry who are leading this whole thing, that, of course, we want to move quickly, we want to make new developments,
00:44:05.400 but this is such a powerful technology, like social media was, in a way that I don't think those guys, like, I keep, I always say this, like, if I was some guy in a hoodie on a university campus that invented a thing for people to swap pictures and connect.
00:44:20.820 Right.
00:44:21.520 I don't think in that moment, I would be thinking, well, this might cause civil war one day.
00:44:25.800 No.
00:44:26.300 But we now know that it can.
00:44:28.220 Right.
00:44:28.480 So, are you guys thinking about that?
00:44:30.920 So, there's basically a, you know, I'm going to try and speak on behalf of all of San Francisco AI people at the moment.
00:44:42.340 There's basically a sliding scale, and Google famously had their hands on everything that OpenAI had before them, but were so cautious that they failed to launch it.
00:44:58.080 So, that was one end of the spectrum that we, as an industry now, have moved on from.
00:45:04.360 OpenAI launched and were willing to make mistakes.
00:45:07.820 And I don't know if it's a move fast and break things thing, but I think what they realized, and what most people realize, is that there were actually very few things that could go incredibly wrong.
00:45:21.160 Where AI does interact in the physical world, like Waymo (Waymo is owned by Alphabet, Google's parent company), it took 10 years, like I said, to go from a working car to making sure that it would basically never kill someone.
00:45:37.300 And, you know, I think that might have happened, but it has so many fewer crashes than human drivers.
00:45:46.500 It's just not comparable.
00:45:48.720 But they were very careful, and I think they should have been.
00:45:51.480 But I think that there are going to be lots of instances where there's a more nuanced, dangerous risk that we're only going to realize later.
00:46:05.440 To your point, this guy in the hoodie you're referring to, Zuck, he never realized the damage that might be done, I presume, because I don't think anyone could have.
00:46:15.520 And now we look back on it, and frankly, we're still understanding the impact of social media.
00:46:20.220 We still don't understand.
00:46:21.720 We've got a number of hot takes, but actually we don't fully understand it yet.
00:46:26.060 So it's going to take a long time to really see the big and the small ways in which it's going to impact society, both positively and negatively.
00:46:34.120 You mentioned regulation, and I imagine in any industry, like, I'm against regulation of the media, even though I see a lot of crazy things happening in the new media.
00:46:42.440 But I just, I don't trust the government to do that well.
00:46:45.980 But do you think that some regulation of this is necessary and some precautions are necessary to be imposed by people outside of the industry who don't have a vested interest in moving as fast as possible?
00:46:59.060 Yeah, like you said, as a rule, I'm against regulation.
00:47:03.920 It tends to not be done very well.
00:47:06.540 It tends to stick around for too long.
00:47:08.420 It tends to be done by people with vested interests or ideological interests, people who are trying to get reelected, et cetera.
00:47:16.480 So it can go wrong very quickly, like it's going wrong in the EU at the moment.
00:47:23.480 But I think it's an interesting conversation.
00:47:26.660 This is going to sound actually quite silly, but should we, are we cool with commercially available AIs teaching people how to make chemical weapons or biological weapons or nuclear weapons?
00:47:42.640 Are we cool with that?
00:47:44.180 Fuck, probably not.
00:47:46.560 Right?
00:47:46.820 So maybe, there's a line somewhere, is what you're saying.
00:47:50.520 Yeah, probably there is.
00:47:52.220 It seems to me that what we're talking about really is, and this is a term that has been used about the internet, this does seem to be the Wild West of AI, doesn't it?
00:48:00.620 Where at the very beginning, no one knows what's going on really or how things are going to develop.
00:48:05.740 Yeah, it's true, and it's okay, because it's actually not that useful yet.
00:48:11.680 Like, there's these big narratives about the change that's coming, and, you know, as of the last couple of days, there were big layoffs by these big American companies, Amazon and Target.
00:48:22.640 People don't know if it's AI or not.
00:48:25.120 But there was a study that also came out yesterday or the day before by an AI company here.
00:48:31.360 And they had a look at how much freelance work modern AI could do.
00:48:38.680 They looked at freelance work because it didn't involve collaboration.
00:48:41.740 They're trying to see how much of a single human's effort and work it could do, and it was 3%.
00:48:46.700 So modern AI could do 3% of freelance work.
00:48:50.920 It's pretty useless still.
00:48:53.280 So, yes, it's the Wild Wild West in a sense.
00:48:58.280 It's unregulated, but it's also just not that dangerous yet.
00:49:03.560 And when you say it's not that dangerous yet, let's delve into this, because this is a question I really want to ask you, and I'm sure many of our audience do as well.
00:49:14.020 What are your fears surrounding AI?
00:49:15.560 I do worry that if it develops incredibly quickly and that there are a lot of disaffected youth and people who don't have purpose or a way to put food on the table, that they could reach for socialism.
00:49:32.320 So that's one worry I have.
00:49:36.440 I do worry that the potential downsides of AI, which all technology has, do allow a future president, AOC or someone else, to kind of ban AI or the effective parts of AI, and in doing so, hobble America and the West.
00:49:57.360 I do worry about, you know, I do worry about the blue-collar worker and the person, you know, that does a repetitive shit white-collar job.
00:50:16.060 There's a lot of bullshit work out there.
00:50:18.500 You know, I was thinking government itself, like just most work is highly repetitive and the efficiency is low.
00:50:26.360 I do worry about if it changes the nature of their usefulness to the economy, what it could kind of do there.
00:50:37.140 And I just resort back to the idea that it's all coming anyway.
00:50:40.940 And I just don't think that a Luddite approach and sitting it out in the West, in the U.S. or in Europe, is a good idea.
00:50:51.620 So I think the best path forward is to keep having these conversations and make sure that the people building AI are actually sufficiently awake to the risks and are not too proud or selfish to acknowledge that there will be some so that they can help us all, society, and the people well outside of AI navigate this world for our kind of mutual benefit.
00:51:20.760 I hope that that's the way we take it.
00:51:23.420 And I will say that while I see some people in AI who are so smart, I'm like kind of a midwit in AI.
00:51:31.800 I'm like applying AI in the real world where there's a lot of people building the low-level AI.
00:51:37.360 I see them sufficiently disconnected from reality sometimes.
00:51:42.180 But at large, there's actually a pretty healthy conversation about the ways in which this can go bad.
00:51:49.800 When I talk to people in AI in different areas of AI, whether they're investors, they're working on the algorithms themselves, they're policy people, actually, they are more ready than I am to suggest that the change could come really quick.
00:52:09.600 So for those outside of the technology world that imagine that there's a bunch of selfish, liberal technologists that are excited to get super wealthy from mass unemployment of everyone outside of this world,
00:52:27.460 I would actually say that that's not what you'll find here.
00:52:31.300 Why is your managed retirement account still using strategies that haven't changed in over five decades?
00:52:38.460 Most IRAs, 401ks, and TSPs are still using the default 60-40 strategy because it benefits corporations, not you.
00:52:45.960 If you have over $50,000 in retirement savings, get instant access to a free two-minute report from Augusta Precious Metals that reveals how to take control of your financial future in one step.
00:52:58.340 Visit triggergold.com or text TRIGGER to 35052 today.
00:53:03.040 You know, that I would say, I don't know about other people, I can't speak for them, but that's not my fear.
00:53:09.220 My fear is not that there is a bunch of greedy people who see this opportunity as an opportunity to make money.
00:53:15.740 My worry is that this is a bunch of very, very smart people who are smart in this one area, which we all are, right?
00:53:25.180 Nobody's smart in everything.
00:53:26.340 Who maybe don't have the training, as most of us don't, in ethics, in playing the movie forward, who simply are not capable, because perhaps no human is, of projecting this forward.
00:53:40.660 Who are very excited about playing with this very cool thing.
00:53:44.300 And playing with cool things is great, especially, you know, for men, let's be honest.
00:53:48.820 This is a new tech, oh, this is a new cool toy.
00:53:50.980 And that in the exhilaration of this exploratory thing, that's when I think there's a potential that there's not sufficient consideration for other things.
00:54:05.460 Totally. I think that's very real.
00:54:07.980 That's like actually happening.
00:54:10.260 And I would just go back to say that, what's the answer to that?
00:54:14.280 Like, do we hobble it?
00:54:16.480 Do we slow down?
00:54:17.880 China's not going to.
00:54:18.900 I don't know the answer there.
00:54:21.920 And I will also say that this same thing happened in the social media age.
00:54:31.620 And it has had big impact on society, maybe terrible impact on society.
00:54:37.200 But it was never not going to happen.
00:54:39.500 Like, what were we going to do?
00:54:40.780 Just stick to email?
00:54:42.440 Like, while the rest of the world has these wild, wonderful ways to connect?
00:54:46.860 And I bet social media, and honestly, I kind of fucking hate social media as much as the next guy.
00:54:57.040 I'm a victim of it, too.
00:54:59.060 I bet it's actually done a lot of great things for the world also.
00:55:01.660 It's brilliant.
00:55:01.880 It's brilliant, as well as terrible.
00:55:03.420 It's both.
00:55:04.080 I totally get that.
00:55:05.080 I guess what I would say is maybe the answer lies in the people who are doing this work just being cognizant of what happened before and going,
00:55:14.320 how can I bring someone in who can maybe give me a philosopher or an ethicist or something like that?
00:55:20.180 That's what I think.
00:55:21.480 Because, yeah, I get your point.
00:55:23.260 Like, someone coming in from the government telling you guys how to do stuff, that's not going to work.
00:55:27.540 But it was maybe about self-responsibility.
00:55:30.600 Totally.
00:55:31.560 And I will say, without calling out any companies, that there are just some companies that care a little bit less about this.
00:55:37.360 And I think that it's not unlikely that they did a lot of damage in the social media age.
00:55:44.660 And TBD, whether they really cared much about it, even though it created a lot of problems for them.
00:55:49.500 And they may do the same in the AI age.
00:55:53.740 Yeah.
00:55:54.380 So I think that fear is not unwarranted.
00:55:57.500 I guess I just, you know, I'm not trying to constantly defend AI here.
00:56:03.120 I'm trying to really, like, figure out where is the right place to land.
00:56:06.900 As are we.
00:56:07.540 Yeah.
00:56:07.720 As are we.
00:56:08.000 No, no, I think you're totally right.
00:56:10.280 Another thing I wanted to pick up is your point about socialism.
00:56:12.880 I've thought it's almost like the most obvious thing in this entire conversation, that if you have a technology that is so transformative that half the population loses their job over a 20-year period.
00:56:26.900 Let's say 20 years, being very generous.
00:56:29.000 And at the same time, five people or 10 people or 20 people accumulate all the new wealth over that same time period.
00:56:38.020 I mean, I think you probably know my views on communism.
00:56:40.040 But actually, in that situation, I think pretty much everybody would be pro-communism.
00:56:45.660 Right.
00:56:45.840 You take all that wealth and you distribute it to the people who no longer have jobs.
00:56:48.820 What else do you have?
00:56:49.660 Yeah.
00:56:50.000 Unless you want an armed uprising.
00:56:51.480 I think that's right.
00:56:52.460 I just think that this conversation is one we actually have a decade to have.
00:56:58.260 Right.
00:56:58.460 Like, if you want to have me back in 10 years, I'm down.
00:57:01.100 And then we'll actually have learned a lot more to be able to say, OK, what's the future going to look like?
00:57:08.020 Because we're not there yet.
00:57:11.200 Like, again, experts in this space think there's a little chance that something happens very quick.
00:57:16.660 But I don't even think it's going to be as bad as you're talking about.
00:57:20.060 And humanity has.
00:57:21.700 And again, I don't want to sound glib.
00:57:23.500 And I hope that there's not a big traumatic change here.
00:57:26.920 Humanity has just a way of reacting and responding and adjusting.
00:57:31.700 It's so resilient.
00:57:33.300 I mean, COVID, we shut down the world for a year or two.
00:57:40.180 Tens of millions of people died.
00:57:41.640 I think maybe 15 to 18 million, some people think.
00:57:43.340 The world kept turning.
00:57:45.960 Hopefully no one dies because of this.
00:57:48.620 You know, I think worse, worse things have happened to humanity and we're still here and our lives are richer.
00:57:56.300 I mean, there's a lot of different ways in which our lives are not.
00:57:59.680 I think we're too disconnected from purpose, actually, and spirit and nature.
00:58:04.700 But that's a whole other conversation, I'm sure.
00:58:06.220 But humanity and the human race is just so resilient.
00:58:13.780 So even in these crazy outside rare possibilities that may happen, I think we're going to be okay.
00:58:23.000 Look, I really hope so because one of the things that I worry about when I talk to people from tech,
00:58:29.760 and it's not all people from tech, it's just the people that I talk to,
00:58:32.760 they tend to, when you talk about AI, they kind of get a little bit utopian.
00:58:37.560 There's a little bit of an evangelical zeal going on there.
00:58:41.000 And I'm like, I think there may be another side to this.
00:58:44.260 I'm sure there's going to be great stuff happening.
00:58:46.300 Yeah.
00:58:46.820 And it's going to be brilliant and it's going to save lots of lives.
00:58:49.660 But there's also going to be this as well.
00:58:51.900 Totally.
00:58:52.240 I find it to be quite immature, that pure utopian take.
00:58:55.980 This like bright, gleaming future.
00:58:57.800 The entire history of humanity has been a struggle.
00:59:03.560 Living life is a struggle.
00:59:05.340 There's no future perfect ahead of us.
00:59:08.380 And AI is not going to bring that.
00:59:10.520 But I think it's going to make things largely at least a bit better.
00:59:14.540 But I'm with you.
00:59:15.520 I just find that immature.
00:59:16.880 It's usually like the younger technologists.
00:59:18.700 And, you know, if you build technology for long enough here,
00:59:25.420 it has a way of kicking you in the face and showing you that actually,
00:59:30.400 just because you build it doesn't mean that the world will adopt it.
00:59:33.340 And that it takes a long time for like markets and societies to pick up new tools
00:59:38.740 and change the ways in which they work.
00:59:41.040 So I'm a massive realist there.
00:59:43.640 And my big message to everyone working in AI is let's just explore the full spectrum of possibilities,
00:59:50.960 which most people are.
00:59:52.280 There are some utopians.
00:59:54.660 I don't know if that might be the right word,
00:59:56.020 but I don't think that they make up the majority of the people.
00:59:59.840 And what excites you about AI?
01:00:01.540 What are the things you're like, oh, if this happens, this could be transformative.
01:00:04.940 This could be amazing.
01:00:05.860 Well, again, it's super nuanced.
01:00:10.120 And I'm like a strange CEO in the space in that I'm very pro-human.
01:00:16.200 I love how imperfect...
01:00:18.760 I'm sorry to interrupt.
01:00:21.280 You're an outlier in that you're very pro-human?
01:00:24.720 I'm extremely pro-human in that I love the imperfections of humans.
01:00:29.500 I love the messiness of humans.
01:00:32.020 There's a lot of left-brained people here that think about
01:00:34.680 how perfect the world will be when we iron out all these inefficiencies
01:00:38.300 and mistakes that humans make.
01:00:39.800 For me, I like the messiness of humans, right?
01:00:41.840 So that's what I mean by being extremely pro-human.
01:00:45.400 Can I just put...
01:00:47.120 You've triggered us.
01:00:48.420 Yeah, it's just like, because when you say you're pro-human
01:00:51.700 and you like the messiness of humans,
01:00:54.040 and, you know, there's people here who want to iron out the inefficiencies,
01:00:58.140 I'm like, that sounds a little bit fashy.
01:01:01.020 I'm going to be honest with you.
01:01:02.020 And I'm not somebody who uses that word.
01:01:04.620 Yeah.
01:01:04.940 But it does sound a little bit fascistic.
01:01:06.960 You know what I mean?
01:01:07.960 Explain more.
01:01:08.980 So, for instance, if you want to iron out everything in humans,
01:01:11.900 that means that you want to micromanage humans,
01:01:14.140 that you want humans to behave like robots, like automatons.
01:01:18.360 And that makes me feel pretty uncomfortable.
01:01:20.580 Well, it's not actually quite like that.
01:01:22.440 It's more like they can deploy AI in places where humans are imperfect.
01:01:27.660 And for me, I like a lot of imperfection, right?
01:01:32.140 I like the human stuff.
01:01:34.560 And I think that we as a humanity are going to start to realize that
01:01:37.460 we don't want to automate everything.
01:01:39.380 So, in my space, customer service, actually, the AI is brilliant.
01:01:43.340 It's super consistent, never gets pissed off, no typos,
01:01:47.100 works 24 hours a day.
01:01:48.440 It's incredible.
01:01:49.560 Guess what?
01:01:50.400 Sometimes customers want to talk to a human.
01:01:52.180 And businesses want to show that they really respect them enough
01:01:55.560 to put a human on the line, too.
01:01:57.500 So, there's going to be a lot of that.
01:02:00.360 You're being triggered sufficiently.
01:02:02.060 Maybe you'll forget the question in the first place.
01:02:03.920 Sorry.
01:02:04.960 And what are you excited about, I guess, is the question.
01:02:07.440 Yeah.
01:02:07.800 I mean, you know, it's without a doubt that AI is going to help with
01:02:16.200 a lot of very human problems.
01:02:19.760 Take medicine, for example.
01:02:21.080 Medicine is a shit show.
01:02:24.460 It's a disaster.
01:02:26.220 Now, the medical industry in the United States is better than,
01:02:31.360 certainly, where I'm from, Ireland, the UK, unfortunately,
01:02:34.540 many places.
01:02:35.600 It's really brilliant.
01:02:36.860 It's also a nightmare.
01:02:38.400 You have to advocate for yourself amongst all these disparate experts.
01:02:42.700 Maybe one guy is great at hearts.
01:02:44.620 One guy is great at the brain.
01:02:46.320 The other guy is great at sleep.
01:02:47.960 They don't actually talk to each other.
01:02:49.260 They don't care about each other.
01:02:50.640 They don't care about the holistic picture at all.
01:02:53.260 Trying to fix chronic illness in the United States is an impossibility with the current
01:02:59.060 medical industry.
01:03:00.520 And yet, most people are chronically ill.
01:03:03.200 There's so many people out there, and they think,
01:03:04.920 oh, you know, I don't have as much energy as I used to, or my concentration isn't as good
01:03:11.920 as it used to be.
01:03:13.540 And maybe it's just because they're getting older, or maybe they have mold toxicity.
01:03:19.900 Because, for example, in the Bay Area, and probably in the UK and Ireland,
01:03:23.560 because they're humid places,
01:03:27.880 wet places, there's a lot of mold and water-damaged buildings.
01:03:31.440 People are sick and they don't know it.
01:03:33.320 And no one in the Western medical profession can help you figure that out.
01:03:39.040 Already, ChatGPT is better at putting the pieces of the puzzle together, looking at the
01:03:44.700 different pictures you get from the experts, and synthesizing.
01:03:48.020 And so, for me, I've got insights that I could never get before by giving it all my medical tests.
01:03:54.160 It's just brilliant at that.
01:03:55.440 And so, I think in the future, I mean, I know in the future, we're going to have solutions
01:04:01.280 to so many of our ailments, the things that actually kill us, that actually ruin our quality
01:04:06.420 of life, that are destroying the lives of the people we love, both young people and older
01:04:12.480 people.
01:04:13.080 I mean, in the United States alone, I know so many young people that are very, very sick.
01:04:18.160 I think there's a chronic illness epidemic.
01:04:21.260 And I think that AI is going to start to fix that.
01:04:23.600 So, that's just like one little example of a very pro-human, rich, wonderful way in which
01:04:30.960 I think AI is going to be brilliant.
01:04:32.840 Because I saw a study, it was really interesting, showing that when you used AI to study tumors,
01:04:40.580 it was actually far more accurate at telling whether a tumor was benign or cancerous than a radiographer
01:04:46.940 who's had 20 or so years of experience.
01:04:49.220 Totally.
01:04:49.740 It's brilliant at those types of things.
01:04:52.200 Human labeling.
01:04:52.940 So, when humans have to look at x-rays, MRIs, or EEGs, which are like brain scans that they'd
01:04:59.580 use in, say, sleep studies, the AI is way better at labeling them.
01:05:05.940 Just like the AI is way better at driving the cars.
01:05:08.420 The AI has so much more data and so much more training.
01:05:14.940 Doesn't get sleepy.
01:05:17.220 You know, it's just...
01:05:18.000 Doesn't get angry.
01:05:18.700 No.
01:05:18.860 Its reaction speed as well.
01:05:19.820 They're better at these things.
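As a purely illustrative aside: the labeling task described here is a standard supervised-classification problem. A minimal sketch of the idea, using scikit-learn's bundled breast-cancer dataset (features measured from digitized tumor images, labeled benign or malignant) as a stand-in — this is not the study referenced in the episode, just the shape of the technique:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Features computed from digitized images of tumor samples,
# each case labeled benign or malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train on labeled cases, then label unseen ones -- the task the
# conversation compares to an experienced radiographer reading scans.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

The point the guest makes about data and training maps directly onto this setup: the model's accuracy comes from seeing far more labeled examples than any single clinician's career could supply.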
01:05:21.100 And so, hopefully, you know, the future medical profession is medical individuals with outstanding
01:05:28.240 bedside manner and empathy, which we need a lot more of, right?
01:05:33.080 And incredible AI that can teach them what's wrong with their patients and how to fix it.
01:05:41.040 But I don't think that the AI is going to be very good at convincing the patient.
01:05:44.880 Again, that's going to be back to the human.
01:05:46.640 The human's going to have to say, hey, I know it's hard.
01:05:49.640 I know it's scary.
01:05:51.440 You can do it.
01:05:52.760 I've worked...
01:05:53.380 Many people have done it before.
01:05:54.600 It's not going to take that long.
01:05:56.140 Look at the readout from the AI.
01:05:57.560 It's explained everything.
01:05:59.100 Let's do it together.
01:06:00.520 And so, you know, unfortunately, that is a bit of a utopian take.
01:06:06.520 But that's an example of where we can imagine just beautiful collaboration between the best
01:06:11.720 of AI and the best of humans.
01:06:13.260 That's already happening, like I mentioned with my dentist.
01:06:15.620 Like, it just tracks where your gum was last year versus now. It's like a simple thing.
01:06:19.900 Right.
01:06:20.020 I think I'm totally with you on the excitement of it.
01:06:25.540 Like, I think there's so many amazing things that could come out of it.
01:06:28.220 Right.
01:06:28.360 Just incredible.
01:06:29.380 Right.
01:06:29.960 The one thing we haven't talked about yet is generalized intelligence.
01:06:34.080 Right.
01:06:34.380 I.e. God.
01:06:36.120 A digital God, basically.
01:06:38.080 I take issue with people calling it God.
01:06:41.100 I think that's bullshit.
01:06:42.000 But, you know, maybe something that can do what humans can do.
01:06:44.820 Better than humans, right?
01:06:46.520 That's so when people talk about AGI here.
01:06:48.920 Yeah.
01:06:49.580 Artificial general intelligence.
01:06:51.300 Typically, they mean it can do everything a human can do intellectually.
01:06:56.080 Yes.
01:06:56.700 But I'm kind of maybe going.
01:06:58.380 And then eventually better.
01:06:59.560 Sure.
01:07:00.500 What I'm.
01:07:01.260 But even.
01:07:01.860 Even if you gain,
01:07:03.240 if you gain 10 extra IQ points and bigger muscles, you're still not God.
01:07:09.760 What I mean is A.I. that is so superior in its abilities that effectively it becomes the caretaker of humanity.
01:07:20.180 Yeah.
01:07:20.440 Is that going to happen?
01:07:22.180 Well, I want to take us back for a second to the fact that it still can't do 3% of like gig work.
01:07:29.700 So we're a bit of a way out.
01:07:32.660 Yeah.
01:07:33.080 Is that going to happen?
01:07:33.940 And like.
01:07:36.480 I don't know.
01:07:38.240 I happen to think that humans are so much more than the intelligence that comes from their brains, you know, and I think that even if you create something that's so much more intelligent from an IQ perspective than a human,
01:07:51.900 humans will have a lot to bring to the table. But you can totally imagine a point where it's just straight up smarter than us and thinks quicker than us and then is far better than we were at making itself better.
01:08:08.900 And, you know, there's some sort of like jumping off point or singularity where it accelerates into the future in a way that we can't possibly even fathom what it is.
01:08:22.200 So that sounds like sci-fi stuff to me.
01:08:25.420 The doomers believe that that's possible.
01:08:29.520 And they say that if we invent this, it's going to kill us.
01:08:32.740 Well, it's not hard to see.
01:08:33.740 I'm not sure about the killing part, and I want to hear about that, but let me just inject this.
01:08:37.240 If you have a machine, let's call it a machine just for the sake, for ease of talking, that is based on chips, right?
01:08:47.480 A machine can design better chips.
01:08:49.460 Right.
01:08:49.780 A robotic element of the machine can mine for the materials you need.
01:08:55.040 It can put the chips together in the factory.
01:08:57.680 It can make better chips.
01:08:58.940 It becomes more intelligent.
01:09:02.000 It can then design better chips.
01:09:02.000 Yeah.
01:09:03.080 And before you know it, you've got this thing.
01:09:04.980 Sure.
01:09:05.500 This runaway intelligence.
01:09:07.240 And then it's actually something that a lot of sci-fi writers have been thinking about for decades.
01:09:14.360 Some of the people I used to read as a kid were thinking about this sort of stuff.
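The feedback loop being described here, where each generation of the system improves the system that designs the next generation, is easy to caricature in a few lines. A toy sketch, with entirely invented numbers and no claim to model real AI progress, just to show why compounding self-improvement diverges while fixed incremental progress does not:

```python
# Caricature of the "runaway intelligence" loop from the conversation:
# a system whose per-generation gain is proportional to its current
# capability compounds, while a fixed per-generation gain grows linearly.
# All parameters are made up for illustration.

def compounding(capability: float = 1.0, gain: float = 0.5, generations: int = 20) -> float:
    for _ in range(generations):
        capability += gain * capability  # a better designer makes a bigger next improvement
    return capability

def fixed_step(capability: float = 1.0, step: float = 0.5, generations: int = 20) -> float:
    for _ in range(generations):
        capability += step  # steady progress without the feedback loop
    return capability

print(f"compounding after 20 generations: {compounding():,.0f}x")  # ~3,325x
print(f"fixed-step after 20 generations:  {fixed_step():.1f}x")    # 11x
```

Whether real systems ever behave like the first curve rather than the second is exactly the open question the two sides of this exchange are arguing about.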
01:09:20.340 So let's talk about the doom.
01:09:22.160 The people say it's going to kill us.
01:09:23.960 Yeah.
01:09:24.900 It's definitely one of the possibilities.
01:09:26.720 Yeah.
01:09:26.920 Or it could just take charge of us, which is another one of the possibilities.
01:09:32.400 Yeah.
01:09:32.960 But why do you think it's unlikely that it's going to get there?
01:09:36.680 Or do you?
01:09:37.140 Well, no, I just think that if there's anything I've been trying to do in this conversation,
01:09:43.660 it's just temper the fears.
01:09:45.720 And so in this respect, I'm just trying to say it's not about to happen tomorrow or in 10 years, I don't think, or 20 years.
01:09:53.800 Well, maybe 20 years is too far.
01:09:54.980 But the reality is, I don't know.
01:09:59.880 No one knows.
01:10:01.040 And I think it's totally fair, totally fair.
01:10:06.400 And there's people in San Francisco who will not be happy with me saying this, but I think it's totally fair to criticize the people working to create AI right now,
01:10:16.460 saying that they have no idea what they're creating and there could be some risks.
01:10:22.800 I just think that the risks are small and China's going to do it.
01:10:27.460 Yeah.
01:10:28.220 You know, the thing that actually when we talk about the risks, and this is going to sound ridiculous,
01:10:34.420 but go with me because there's a deeper point.
01:10:37.880 Robot girlfriends.
01:10:39.020 And let me tell you why.
01:10:40.940 Thompson's desperate.
01:10:41.860 Yeah, exactly.
01:10:42.940 Please design one.
01:10:43.580 We've been on the road for how many weeks already?
01:10:46.160 But put it like this.
01:10:47.960 We were talking before about getting rid, shall we just say, of the imperfections of human existence.
01:10:56.140 What is more imperfect than emotion?
01:10:59.420 Sure.
01:11:00.700 Relationships.
01:11:01.140 If you could design, at the point the technology is good enough, a perfect woman, a perfect man.
01:11:09.120 Sure.
01:11:09.320 They're never going to lose their temper.
01:11:10.740 You know, they're never going to be coming back, you know, annoyed from work.
01:11:15.300 You can get rid of the menstrual cycle.
01:11:17.600 So, you know, she's always going to be horny.
01:11:19.780 She's always going to be happy to see you.
01:11:24.100 Why wouldn't you?
01:11:25.240 Exactly.
01:11:26.260 Why wouldn't you?
01:11:26.920 Why would you settle for the human being?
01:11:29.040 And if you take that kind of way of looking at the world, then you can perfect everything.
01:11:36.760 So, why are you going to need to engage with reality when reality is unpleasant, uncomfortable?
01:11:43.040 Sure.
01:11:44.200 Sometimes not always nice.
01:11:46.700 Sure.
01:11:47.240 I just don't think that humans actually want perfect.
01:11:50.280 Maybe some people think they do, but they don't actually want perfect.
01:11:53.780 I think the magic and the juice in a relationship is the kind of, like, push and pull, and the connection you build is through the friction and overcoming it.
01:12:05.160 And so, you know, we're not about to replace human connection anytime soon.
01:12:09.880 And even in this world where, this fantastical world where there is the, you know, the god AI, as you call it, we're still going to want human connection.
01:12:21.180 I don't think that... I know that no matter how good AI gets, it's not going to replace the magic of human connection.
01:12:29.720 Even what we're feeling right now, you'll never, ever feel that with a robot, ever.
01:12:35.480 It's not going to happen.
01:12:37.020 Okay, now let's entertain it for a sec.
01:12:38.460 Yeah, a bunch of people will.
01:12:42.300 Of course they will.
01:12:44.640 There's probably people who were never going to have human relationships of this nature.
01:12:50.280 Maybe it's a good thing.
01:12:53.220 There's probably a bunch of people in the middle or on the edges that this competes with human relationships for.
01:13:00.240 That's probably not a good thing.
01:13:02.880 This can't possibly be great for the fertility crisis of the West.
01:13:06.620 It doesn't sound like it's going to be.
01:13:09.180 But you can imagine a situation, and I do think that anyone who is highly confident about what the world's going to look like,
01:13:15.440 particularly as it relates to AI, is full of shit.
01:13:19.840 You can imagine a situation where actually new AI relationships mirror a way of relating
01:13:31.960 and help us learn about ourselves in a way that most people never have or do.
01:13:40.580 You know, they act like the world's best therapist and help people understand their insecurities and their own trauma
01:13:46.780 and help build empathy and understanding for the other human on the side of the relationship.
01:13:52.860 And so, you know, maybe there'll be AI girlfriends, but maybe there'll be kind of AI friends that are like a healthy friend.
01:14:03.140 Think of the very best friend you've got.
01:14:05.080 They'll challenge you sometimes.
01:14:06.680 They'll reflect back to you some of your mistakes.
01:14:09.180 They'll support you when you're down.
01:14:10.560 They'll give you some advice or share some stories that are useful.
01:14:14.560 Maybe the very best version of AI will do all of these things, too.
01:14:17.980 So, again, not trying to be Pollyannish here, not trying to paint a utopian future.
01:14:23.600 I do think it's going to get super weird.
01:14:26.140 I think there's going to be all manner of, like, really kinky AI girlfriend stuff.
01:14:30.740 But we actually don't yet know the real implications and exactly what way it's going to play out.
01:14:37.800 And it could be mostly awesome.
01:14:39.560 We actually don't know.
01:14:41.800 I'm sure the kinky stuff, the Japanese will do it first.
01:14:43.920 It's happening already.
01:14:48.120 You mentioned the fertility crisis.
01:14:51.320 With robots and AI, is it still a crisis?
01:14:54.060 In so far as we are all pro-human, yes.
01:15:02.220 Yeah, this is a bit of a worry of mine.
01:15:03.820 Who are these people that are not pro-human?
01:15:06.900 Well, I mean, at least we are, right?
01:15:09.200 Yes.
01:15:09.700 So, if we are talking about the fertility crisis, well, then it's a problem if people have fewer kids.
01:15:15.840 Just because there's robots around, that doesn't sound super helpful.
01:15:19.000 So, I don't know.
01:15:21.840 I want to see humanity continue to flourish and grow.
01:15:25.540 But it's actually an interesting point.
01:15:28.020 Like, fertility crisis is happening independent of AI because it started before AI.
01:15:35.280 And remember, AI is not that useful yet, practically.
01:15:37.760 Sure.
01:15:38.380 So, it's totally independent.
01:15:40.120 Maybe AI and robotics actually is very helpful here.
01:15:42.760 I mean, in Japan, the aging population don't have the young nurses and assistants that they used to have.
01:15:51.180 And they've been trying to build robots to do that work for 15 or 20 years already.
01:15:56.100 That's going to come.
01:15:57.540 And so, maybe for all of the work that we used to depend on young people for, we do have robot assistants.
01:16:03.120 And maybe that's awesome.
01:16:03.880 And then we then have a population that I hope returns to growth.
01:16:11.160 But during this adjustment phase, whatever the hell is happening, we're assisted and supported by robotics and AI.
01:16:17.800 And also, maybe with AI as well.
01:16:20.720 Because part of the problem, I think, with the fertility issue is we haven't taught women about their fertility.
01:16:28.320 And the quite brutal facts around it.
01:16:31.260 And you get, we talk to, you know, you talk to women at parties.
01:16:35.720 They go, well, I'm in my late 30s now, 38, 39.
01:16:39.120 And, you know, maybe this is a time I'm going to start thinking about having kids.
01:16:42.340 And you're like, I mean, you could, but you're very much drinking in the last chance saloon.
01:16:47.380 We don't actually say that at parties.
01:16:48.980 No, no, no, no, no, I don't.
01:16:50.860 I think it and I just kind of smile and nod.
01:16:54.000 Smile and nod.
01:16:55.620 But actually, maybe if you have an AI model that will be able to.
01:16:59.100 Don't say that instead.
01:16:59.820 But it's able to actually, you know, scan a woman's body and go, look, the reality is past this age, you're not going to be fertile.
01:17:10.660 You're not going to be as fertile.
01:17:12.140 So maybe you want to think about having kids at this age.
01:17:15.540 Yeah, you could imagine that.
01:17:17.260 Or just like a family planning AI that just goes, this is how you might want to think about life.
01:17:21.460 Yeah, I think that the problem is not facts.
01:17:24.260 No.
01:17:24.500 It's not that people don't know that this is a reality.
01:17:26.940 Yeah.
01:17:27.520 It's much deeper than that.
01:17:28.960 Yeah.
01:17:29.660 And so, you know, if we have AI that, you know, acts as an outstanding therapist, will that, can that be useful for the fertility crisis?
01:17:41.280 I can imagine, yes.
01:17:43.060 You know, if it can actually satisfy some of the needs that we have now for great therapy, which is not abundant, then it could be great.
01:17:51.540 You know, like if it can help, if part of the problem is, for example, women putting off having children because they want to participate in the working world.
01:17:59.800 They want to be successful in their own right and independent.
01:18:02.660 They want to enjoy a certain lifestyle that has been promoted for the last 10, 20 years.
01:18:07.860 Maybe, you know, a great AI friend that acts as a great therapist, too, can help them start to think about the places from which those ideas come from, dive deeply into what they actually want, and start to play out the realities that come with, you know, prolonging having children, et cetera.
01:18:27.320 Like a good friend would.
01:18:28.340 Yeah.
01:18:28.740 Like someone at a party, but who actually has the right to say such a thing.
01:18:33.600 I feel there's a judgment there.
01:18:35.360 No, no, I think he's just, he's being very objective about it.
01:18:39.640 Owen, it's great to have you on, man.
01:18:40.980 Thanks for giving us your time and an interesting balanced perspective.
01:18:43.640 I hope other people in your world are having these conversations in this way, because I think this is super important, actually.
01:18:50.320 Very important.
01:18:51.540 Appreciate you coming on the show.
01:18:52.960 Before we head over to Substack and put questions from our subscribers to you,
01:18:57.260 what's the one thing we're not talking about that we really should be?
01:19:01.100 You know, I'm just going to be repetitive here and say that we just need to have a nuanced conversation about AI.
01:19:09.380 I think AI technologists need to embrace the world, and the world needs to embrace them.
01:19:15.580 I think that the conversations on the left and the right are very basic and rudimentary.
01:19:23.040 Both the left and the right are worried about what it's going to do to, you know, workers, etc., which is fair.
01:19:31.020 But we just need to have a collective conversation so that we neither ignore the issues, and are ready to adapt as a society,
01:19:40.820 nor fear it outright and ban it and fall behind the rest of the world.
01:19:47.040 Owen, it's been an absolute pleasure.
01:19:50.320 Thank you for coming on the show.
01:19:51.800 Make sure to head over to our Substack, where you get to ask Owen your questions, and we get to carry on the conversation.
01:19:59.260 How much, technologically, have the claims made by China's DeepSeek about cost-saving efficiencies affected its Western rivals and their approach to AI modelling?
01:20:07.280 Getting ready for a game means being ready for anything, like packing a spare stick.
01:20:30.280 I like to be prepared.
01:20:31.860 That's why I remember 988 Canada's Suicide Crisis Helpline.
01:20:35.340 It's good to know, just in case.
01:20:37.920 Anyone can call or text for free confidential support from a trained responder, anytime.
01:20:43.360 988 Suicide Crisis Helpline is funded by the government of Canada.