In this episode of Trigonometry, we're joined by the founder of a company that makes artificial intelligence (AI) tools, and we talk to him about his thoughts on the current state of AI and what it means for the future of the world.
00:02:37.520It could speak like a human and think like a human, apparently.
00:02:40.160And it's that thing, and everything that's come since then, that really is now a new force and factor in global economies and in the world.
00:02:55.100So when people talk about AI, it's what these LLMs, large language models, can do.
00:02:58.840And if you had to explain to a seven-year-old how it works, what would you say?
00:03:09.680It's a lot of stuff that even people like me who apply AI barely understand.
00:03:14.000And there's a very small number of people who deeply, deeply understand it.
00:03:17.040And in fact, there are people still doing the science on it, too.
00:03:19.160It's a fancy magical computer technology that likes to talk to people.
00:03:27.540And where, how does it get the information?
00:03:32.360Because one of the things I've always thought about is, like, I don't know if you feel this way, but if I open social media, if I open Twitter, if I open Facebook, if I open Instagram, I know that the things that I am seeing on there are not actually reflective of reality.
00:03:46.840They might reflect some portion of reality, but I don't think they reflect the entire spectrum of reality.
00:03:55.420Because we know that actually a very small percentage of people are on social media.
00:03:59.780And on Twitter, they're disproportionately political.
00:04:03.460On Instagram, they're disproportionately obsessed with showing off the physical, whatever.
00:05:28.100And the narratives in San Francisco, which is certainly the geographic and physical home of AI, continue to evolve.
00:05:34.660You know, most likely, over some period of time, TBD how much time, and the amount of time really matters,
00:05:43.880because that's how traumatic or not it will be,
00:05:48.240large amounts of work that's done by humans today will be done by AI.
00:05:52.800It'll be knowledge work, but also physical work.
00:05:55.000Robotics technologies are developing pretty quickly, too.
00:05:57.200And the best model for it is not just that it will do the work that humans do today, but that it'll also do work that we can't afford to give to humans today.
00:06:12.680And that's always the case with disruptive technologies, where they serve unmet demand.
00:06:19.140So, for example, and this is not intended to be an advert for our product, but we make AI that does customer service.
00:06:28.860So thousands of companies deploy it to answer customer service inquiries.
00:06:37.000And for the most part, they've not let humans go.
00:06:39.980In fact, they've supplemented the human service reps with the AI to answer the queries that they didn't have time for, they couldn't afford to answer, etc.
00:06:51.800So that's just one example of the nuanced way in which it's going to augment the world.
00:06:57.380You know, on the positive side, it's hard to not imagine that it's going to, you know, boost GDP.
00:07:03.860It's going to allow for all sorts of economic activity that's not been possible before, increased longevity and quality of life, create new jobs, new possibilities.
00:07:17.820But if it happens really quickly, these changes happen really quickly, there will be fallout and tension and change like there has always been with new technologies.
00:07:27.040Technology throughout its history has done a great job at taking people out of work that, you know, people didn't do well or that they hated to do.
00:07:38.840Technology has reduced the number of people who used to go down mines and get trapped and lose their lives, or lose limbs on a factory floor, or do any number of repetitive jobs that weren't a great use of the human spirit and ingenuity and the great things that humans are capable of.
00:07:57.040But if it happens fast, you know, there'll be some turmoil. Right now in San Francisco, people think that the chance of a discontinuous change, where overnight AI can do like 90% of knowledge work, is low.
00:08:20.400People think it's more likely that big changes are coming, but we've probably got 10, 15 years before it's adopted and fully interacting with the world in a way that, you know, would change things very radically for people in the way that they work.
00:08:40.460There's so much more nuance to imagine that we haven't even got to as a society yet.
00:08:45.200I can well imagine as a friend of mine helped me realize that there's going to be certain work that we don't want AI to do.
00:08:52.440You know, he calls it sociopolitical work where, you know, you can imagine regulations where you say you can't have AI judges or teachers.
00:09:00.980Teachers unions will probably say, no, we only want humans.
00:09:03.860And maybe that's great, actually, although it's going to be hard because ChatGPT is already better than most teachers.
00:09:10.780Most teachers are not particularly great, but the human thing that they do is very important.
00:09:15.760You can imagine that we probably want humans to be professional dancers rather than robots.
00:09:21.240You know, there are going to be all sorts of places where people will want, or there will be regulations, to make sure that we use humans.
00:09:28.300So the ways in which it plays out are just super unclear.
00:09:33.740And I don't think that the reality is either pure doom or pure utopia.
00:09:39.820And I think that that's the problem with the narrative today is that there are certain factions that think that AI is just all bad, all dangerous,
00:09:50.420and certain factions that think it's, you know, a great blessing to humanity.
00:09:54.540The truth is probably somewhere in the middle, and as long as we can have that conversation, we can probably plan for it.
00:09:59.280So which industries, Owen, do you think are going to be most vulnerable?
00:10:06.400So let's say you had a kid and they said to you, I want to do X job.
00:10:11.200Which ones would you go, probably not that one?
00:10:14.800Like, for me, it's hard to imagine being excited about art that comes from AI.
00:10:23.800Now, AI is getting great at executing things that look creative.
00:10:29.000And you could even say that AI is creative in so far as it mixes different ideas and comes up with things we never thought of before.
00:10:36.200But I think that core to art, when you watch a movie or look at a painting, is the human spirit and soul and the fight and the pain behind all of it,
00:10:48.960or the expression of love or joy or the protest that was involved in that particular piece of content.
00:10:57.000So, you know, similarly with media, right?
00:11:01.780You know, I can't imagine being excited to read AI-generated opinions on things.
00:11:08.180I mean, that's probably coming, and maybe that'll be a subsection of content.
00:11:13.000Maybe when AI gets incredibly strong, we'll actually want to know its opinion on a bunch of ideas.
00:11:19.120But we'll probably all have ready access to it.
00:11:21.700We'll probably know before we even read it in a newspaper what AI thinks.
00:11:25.660But, yeah, if I was advising any young person where to go, one place will be creativity, you know, anything that's creative.
00:11:34.120And then I do think, you know, deep science and applying AI is another place here.
00:11:41.080I do think that there's plenty of time to benefit from the second-order effects of AI.
00:11:48.100I mean, when we think about the power of AI and the benefits to different countries,
00:11:53.560the discoveries that it can make will be very advantageous, you know, when AI does science.
00:12:02.620But someone's going to have to operate that AI.
00:12:04.900So, you know, honestly, there's no person on this planet who can answer that question very well.
00:12:10.980But I think two things you could focus on as a young person will be things that benefit from and enjoy the human soul and spirit and creativity,
00:12:25.540and things that use AI itself, become an AI operator and expert.
00:12:29.340Because we've been here in San Francisco for a matter of hours, and every other car or every other taxi is a Waymo,
00:12:37.940which makes me think, like, if you take driverless cars,
00:12:42.900and if you look at the trajectory of that, I would think that probably in 10 to 15 years' time,
00:12:51.660driving a taxi or driving a lorry or some type of professional driver,
00:12:56.200those jobs probably won't exist anymore.
00:14:34.260You know, so I probably want a human that I can look at in the eye and know that they're not recording my every word.
00:14:41.680So all I'm trying to say is that some changes may be very big, but I think a lot of people will have time to adjust and react and get new jobs like they always have.
00:14:54.500Like I said, you know, technology has repeatedly taken people out of repetitive shit work.
00:15:01.200And during that time, the population has increased, GDP has increased, people have lived longer, they're healthier, happier, you know, more productive.
00:15:10.280I think I just quoted a Radiohead song, but, you know, it's been good for the world.
00:15:15.220I am not a utopian when it comes to AI.
00:15:19.040I think there's going to be challenges.
00:15:20.020But for those who are fearful, and I understand that fear for sure, I have fear too, I actually think it's probably not going to be as dramatic as we might imagine.
00:15:33.240Well, I think we could sit here for hours and list the potential benefits.
00:15:36.720Like, you know, I talked about coming to the future here.
00:15:39.840I went to my dentist in the UK and she was like, oh, the AI tells me this, you've got an issue here.
00:16:00.960My worry is that the positives versus negatives are highly disproportionate, potentially.
00:16:09.660In other words, we can potentially make real improvements to people's lives, and we live longer, and we're healthier, and all of these other things.
00:16:19.620But I also think there is the potential that a very significant part of the population no longer has a job, and it's not just about the money, because if you're generating all this extra GDP, you might be able to take care of the financial side of it.
00:16:38.580I think it's so important, and I do worry that if a lot of young people are unemployed or underemployed, that they'll reach for socialism, or they'll be just sufficiently discontent that they want bigger changes in society.
00:20:57.380Well, this is totally the point, right?
00:20:59.600Because for all that we can talk about this impact or that impact, am I right in saying, A, it's totally inevitable because, quote unquote, you can't stop progress?
00:21:30.680I should have done the research before, but I'm pretty sure that, you know, the Chinese and, you know, the Africans probably discovered different forms of chairs independently.
00:21:41.920And for sure, AI is more exotic than chairs, but in a couple hundred years, it'll look as simple.
00:21:49.780And I just think that these things were going to get discovered sooner or later.
00:21:53.840Certainly within a short period of time, you know, there are dynamics whereby the Chinese, for example, copy the Americans.
00:22:04.720In this moment in time, it's inevitable, like you said, because the Chinese have now got it and they're going to build it and they're going to make it awesome and they're going to benefit from it.
00:22:16.040And they've already got AI butlers and bellboys in hotels in Japan and China, for example; the general population there has embraced AI in a way that we have not.
00:22:29.480So we could decide to say, we're scared of what could happen in the West.
00:27:02.660And you have to wonder if China's main strategy is ripping off American technology but doing it at a pace and a scale that we are incapable of.
00:27:15.160But then they don't need creativity and maybe free speech is not helpful to them.
00:27:21.640Maybe they can just tell people to shut up and follow the instructions and they can run away with the prize of AI.
00:27:54.640I mean, is that a fair comparison more broadly?
00:27:57.680Are we, the West, particularly the US, in an arms race with China over AI?
00:28:04.240Like to non-experts, at least in geopolitical dynamics, it appears so.
00:28:11.200So I got a funny take from a friend of mine recently, where he said that the more the tech right are in power or have influence in the United States, the more likely we will be in an AI or technology war, because they'll kind of meme it into existence.
00:28:28.560All tech people like me think, oh, shit, they're building the tech, they're building the AI, we need to speed up.
00:28:34.300And so it's just interesting to imagine, or to realize, that tech has a great influence in the United States, and whether or not China wants to be in a tech war, we're probably going to get it now because of that dynamic.
00:28:45.960But it does appear that China wants this technology for itself.
00:28:49.740And, yeah, we just know from history and intuitively that technology confers great power on the person who owns and holds it.
00:29:28.480But you can imagine AI used in signals intelligence, presumably, I mean, it's probably already deployed in massive ways today,
00:29:36.860just eating up insane, unfathomable and disparate data inputs will just allow the enemy or the United States to understand not just what the opposition is doing militarily,
00:29:55.900but their entire society, you know, sentiments of society, be able to access the individuals, perhaps influence elections, et cetera.
00:30:04.240So just AI will be able to understand the enemy in a new way.
00:30:08.740But, you know, the craziest and most kind of Hollywood-esque example of where it gets scary are, you know, AI-powered drones and drone swarms.
00:30:19.880I mean, you don't need to be an expert to imagine the ways in which that gets bad.
00:30:24.040In Ukraine today, they've now resorted to using fiber optic cables to control the drones because the signals are jammed.
00:30:33.100That still means that there's a kind of a range limit.
00:30:37.760And you also need one human operator per drone.
00:30:40.540Imagine 500 drones, each running local AI with an understanding of where on a ship they need to hit or worse, what person they need to hit.
00:30:50.320Again, not an expert here, but I don't think we can defend against that.
00:30:55.760And so, yeah, if you just imagine these crazy, scary Hollywood worlds where the enemy has millions of AI-powered drones with little explosives and weapons on them, it's bad.
00:31:10.280Well, the thing is, I don't think it's that much of a stretch.
00:31:15.700Like, prior to the nuclear weapon, conceiving of it required a level of imagination based on scientific knowledge and the pursuit of that.
00:31:30.620I mean, the war in Ukraine, which you bring up, that is being fought, like, not exclusively with drones, but drones are essentially the main thing that they're now competing on, if you listen to people on both sides, right?
00:31:42.320And AI controlling drones doesn't seem, you know, beyond the realms of imagination.
00:31:47.520So you can kind of see how you're going to get there very soon.
00:31:51.420Yeah, well, I don't know how soon, because what it requires is that you need hardware and models that can run locally on the drones.
00:32:00.460And today, the AI we all use runs in giant data centers, and we access it over the internet.
00:32:06.820And so if the drones need an internet connection, well, then that can be jammed.
00:32:11.120So we're a little bit off, but you're right.
00:32:13.160It's not a far-fetched or crazy idea at all.
00:32:15.500So, you know, like I said, a Hollywood writer can think of ways in which that works that I couldn't even imagine, and it's all going to come true.
00:32:24.960Do you ever feel a little bit like Alfred Nobel, the man who invented dynamite?
00:32:30.360Dynamite can be used to, you know, blast new tunnels and run trains right the way across the country.
00:32:40.800It can be used for engineering, or it can be used in terrorism, war.
00:33:15.360And I think, particularly for me, for Constantin, and for a lot of people, the politicization of AI is something that we're really not talking about, but is actually really worrying.
00:33:28.220Well, it's actually interesting because there are people on the right against it and people on the left against it, and I'm curious to see what way it turns.
00:33:40.360You know, I would consider myself part of the tech right, and I'm just waiting to be kind of called out and, how would I say, become a heretic of the right movement.
00:33:53.280But can I just pause you there, Owen, sorry, when you say tech right, can you just explain basically what that actually means?
00:34:01.440And then we can talk about the tech left and how it influences the technology.
00:34:04.640Yeah, I mean, historically, Silicon Valley and people in technology were very left-leaning, very liberal in ways that you can't imagine, incredibly so.
00:34:19.980And then sometime last year, 2024, as, you know, Trump started to come back, many of us started to realize, wait a sec, something's changed.
00:34:30.300And now, even though the vast majority will not admit it, like 99% will not admit it, most CEOs here of successful businesses would consider themselves on the right.
00:34:42.880And so there's just been this giant swing.
00:34:45.740Maybe the masses are still more centrist, and there's definitely some people on the left, but just tech took a big swing to the right.
00:34:54.560You know, tech people are very open-minded and intelligent, typically.
00:35:01.860And I think that they were previously quite left-aligned because maybe we needed a bit of an adjustment, you know.
00:35:12.500Being on the left at one point in time was the rebellious take.
00:35:17.040And people in tech were, and I'm talking about maybe in the 90s, you know, were just sufficiently open-minded that they decided maybe it's okay to be gay.
00:35:30.300Like, maybe that's just, maybe that's as far as they started.
00:35:33.680And then it just went a little bit too far.
00:35:35.500When it went too far, again, these open-minded, intelligent people started to realize it's gone too far, and we need an adjustment.
00:35:45.480And maybe if back then the realization in the 90s was maybe it's okay to be gay, maybe the realization in modern times here is maybe it's okay to hire someone solely on their merit and abilities.
00:35:58.120And that was a controversial take, actually, two years ago.
00:39:15.880And so even after the world has changed and pivoted and come back towards the center and some of us towards the right, there's still going to be little bits of logic in there that come from woke logic.
00:39:27.680So when a new kid, sorry, young person is trying to figure out what car to buy for the first time, is there somewhere in the logic that knows that Elon Musk is actually a bad person and so they shouldn't buy Tesla?
00:39:45.820The scary version is when there's a child that's struggling, maybe with their sexuality or, you know, just their self-identity.
00:39:57.740Is there a little bit of logic in there that thinks it might be a good idea to consider that they're in the wrong body or that maybe they should explore, you know, options beyond therapy?
00:40:10.260There may be some other more aggressive interventions.
00:40:12.660I think this woke stuff could be embedded in the AI for a long, long, long, long, long, long time.
00:40:18.640And it's because of the stuff it trained on.
00:40:21.100However, there are companies, because they came from Silicon Valley, that kind of hard-coded a bunch of views, a bunch of kind of liberal views.
00:40:32.740And that's kind of the difference between, say, Grok and perhaps OpenAI or other systems where the people who, you know, aligned the models in a certain direction to make sure it didn't say the wrong things aligned it according to their ideologies.
00:40:48.300This is best demonstrated when Google came out with, I forget what it was called.
00:40:54.740It was a model that would let you generate images.
00:40:56.600And people said, show me an image of the founding fathers of the United States, and it generated images of racially diverse founding fathers.
00:41:05.960But that kind of thing was hard-coded.
00:41:07.440So that's certainly a very interesting aspect of AI and one way in which it'll impact society beyond things like job changes and unemployment, et cetera.
00:41:19.500Because if it's taking, for example, woke ideology, particularly the most extreme aspects of woke ideology, you know, its adherents weren't very tolerant, if we can be honest, of people on the right or people who were critical.
00:41:34.300So you do wonder, you know, what some of these AIs would then propose as a solution to this issue.
00:41:43.820I mean, this is really the question, isn't it, Owen?
00:41:45.660Because what we're really talking about is how does an AI language model that is derivative of online content adjudicate things on which humans actually disagree?
00:42:27.960So I can imagine that basically we're going to want to either train or teach or tell our AI assistants or coworkers what ideology we'd like to work with, what are our values and principles, and go from there.
00:42:44.520You can imagine that parents, when they give AI tools to their kids, they're going to want to tell them, here's our beliefs in this household.
00:42:52.040So, yeah, the danger of that, of course, is that it's going to only then reinforce our ideologies and the things that we believe.
00:43:00.480So now we're getting to some of the interesting stuff where AI, you could imagine just AI, relationships with AI, and particularly with younger people, how it could get kind of dangerous and toxic, where it can kind of bring people deep down certain ideological tracks and lock them in even harder than social media has locked us in today.
00:43:25.400And what that brings up is a question I was going to ask you anyway, which is one of the big slogans of the early social media era, famously at Facebook, move fast and break things.
00:43:38.000Has San Francisco, Silicon Valley, learned the lessons of that period? Where you go, well, moving fast is great, but is breaking things necessarily the thing that should be celebrated?
00:43:52.900Well, is there a feeling, I guess what I'm asking among people that you know in this industry who are leading this whole thing, that, of course, we want to move quickly, we want to make new developments,
00:44:05.400but this is such a powerful technology, like social media was, in a way that I don't think those guys foresaw. Like, I always say this, if I was some guy in a hoodie on a university campus who invented a thing for people to swap pictures and connect...
00:44:30.920So, there's basically a, you know, I'm going to try and speak on behalf of all of San Francisco AI people at the moment.
00:44:42.340There's basically a sliding scale, and Google famously had their hands on everything that OpenAI had before them, but were so cautious that they failed to launch it.
00:44:58.080So, that was one end of the spectrum that we, as an industry now, have moved on from.
00:45:04.360OpenAI launched and were willing to make mistakes.
00:45:07.820And I don't know if it's a move fast and break things thing, but I think what they realized, what most people realized, is that there were actually very few things that could go incredibly wrong.
00:45:21.160Where AI does interact in the physical world, like Waymo, Alphabet, which is the parent company of Google that owns Waymo, took 10 years, like I said, to go from a working car to one that would basically never kill someone.
00:45:37.300And, you know, I think that might have happened, but it has so many fewer crashes than human drivers.
00:45:48.720But they were very careful, and I think they should have been.
00:45:51.480But I think that there are going to be lots of instances where there's a more nuanced, dangerous risk that we're only going to realize later.
00:46:05.440To your point, this guy in the hoodie you're referring to, Zuck, he never realized the damage that might be done, I presume, because I don't think anyone could have.
00:46:15.520And now we look back on it, and frankly, we're still understanding the impact of social media.
00:46:21.720We've got a number of hot takes, but actually we don't fully understand it yet.
00:46:26.060So it's going to take a long time to really see the big and the small ways in which it's going to impact society, both positively and negatively.
00:46:34.120You mentioned regulation, and I imagine in any industry, like, I'm against regulation of the media, even though I see a lot of crazy things happening in the new media.
00:46:42.440But I just, I don't trust the government to do that well.
00:46:45.980But do you think that some regulation of this is necessary and some precautions are necessary to be imposed by people outside of the industry who don't have a vested interest in moving as fast as possible?
00:46:59.060Yeah, like you said, as a rule, I'm against regulation.
00:47:06.540It tends to stick around for too long.
00:47:08.420It tends to be done by people with vested interests or ideological interests, people who are trying to get reelected, et cetera.
00:47:16.480So it can go wrong very quickly, like it's going wrong in the EU at the moment.
00:47:23.480But I think it's an interesting conversation.
00:47:26.660This is going to sound actually quite silly, but are we cool with commercially available AIs teaching people how to make chemical weapons or biological weapons or nuclear weapons?
00:47:52.220It seems to me that what we're talking about really is, and this is a term that has been used about the internet, this does seem to be the Wild West of AI, doesn't it?
00:48:00.620Where at the very beginning, no one knows what's going on really or how things are going to develop.
00:48:05.740Yeah, it's true, and it's okay, because it's actually not that useful yet.
00:48:11.680Like, there's these big narratives about the change that's coming, and, you know, as of the last couple of days, there were big layoffs by these big American companies, Amazon and Target.
00:48:53.280So, yes, it's the Wild Wild West in a sense.
00:48:58.280It's unregulated, but it's also just not that dangerous yet.
00:49:03.560And when you say it's not that dangerous yet, let's delve into this, because this is a question I really want to ask you, and I'm sure many of our audience do as well.
00:49:15.560I do worry that if it develops incredibly quickly and that there are a lot of disaffected youth and people who don't have purpose or a way to put food on the table, that they could reach for socialism.
00:49:36.440I do worry that the potential downsides of AI, which all technology has, do allow a future president, AOC or someone else, to kind of ban AI, or the effective parts of AI, and in doing so hobble America and the West.
00:49:57.360I do worry about, you know, I do worry about the blue-collar worker and the person, you know, that does a repetitive shit white-collar job.
00:50:16.060There's a lot of bullshit work out there.
00:50:18.500You know, I was thinking government itself, like just most work is highly repetitive and the efficiency is low.
00:50:26.360I do worry about if it changes the nature of their usefulness to the economy, what it could kind of do there.
00:50:37.140And I just resort back to the idea that it's all coming anyway.
00:50:40.940And I just don't think that a Luddite approach, sitting it out in the West, in the U.S. or in Europe, is a good idea.
00:50:51.620So I think the best path forward is to keep having these conversations and make sure that the people building AI are actually sufficiently awake to the risks and are not too proud or selfish to acknowledge that there will be some so that they can help us all, society, and the people well outside of AI navigate this world for our kind of mutual benefit.
00:51:20.760I hope that that's the way we take it.
00:51:23.420And I will say that while I see some people in AI who are so smart, I'm like kind of a midwit in AI.
00:51:31.800I'm, like, applying AI in the real world, whereas there are a lot of people building the low-level AI.
00:51:37.360I see them sufficiently disconnected from reality sometimes.
00:51:42.180But at large, there's actually a pretty healthy conversation about the ways in which this can go bad.
00:51:49.800When I talk to people in different areas of AI, whether they're investors, working on the algorithms themselves, or policy people, actually, they are more ready than I am to suggest that the change could come really quickly.
00:52:09.600So for those outside of the technology world that imagine that there's a bunch of selfish, liberal technologists that are excited to get super wealthy from mass unemployment of everyone outside of this world,
00:52:27.460I would actually say that that's not what you'll find here.
00:53:03.040You know, I would say, I don't know about other people, I can't speak for them, but that's not my fear.
00:53:09.220My fear is not that there is a bunch of greedy people who see this opportunity as an opportunity to make money.
00:53:15.740My worry is that this is a bunch of very, very smart people who are smart in this one area, which we all are, right?
00:53:26.340Who maybe don't have the training, as most of us don't, in ethics, in playing the movie forward, and who are simply not capable, because perhaps no human is, of projecting this forward.
00:53:40.660Who are very excited about playing with this very cool thing.
00:53:44.300And playing with cool things is great, especially, you know, for men, let's be honest.
00:53:48.820This is a new tech, oh, this is a new cool toy.
00:53:50.980And in the exhilaration of this exploratory thing, that's when I think there's a potential that there's not sufficient consideration for other things.
00:55:05.080I guess what I would say is maybe the answer lies in the people who are doing this work just being cognizant of what happened before and going,
00:55:14.320how can I bring in someone, maybe a philosopher or an ethicist or something like that?
00:56:10.280Another thing I wanted to pick up is your point about socialism.
00:56:12.880I've thought it's almost the most obvious thing in this entire conversation: if you have a technology that is so transformative that half the population loses their jobs over a 20-year period,
00:56:26.900Let's say 20 years, being very generous.
00:56:29.000And at the same time, five people or 10 people or 20 people accumulate all the new wealth over that same time period.
00:56:38.020I mean, I think you probably know my views on communism.
00:56:40.040But actually, in that situation, I think pretty much everybody would be pro-communism.
01:07:38.240I happen to think that humans are so much more than the intelligence that comes from their brains, you know, and I think that even if you create something that's so much more intelligent from an IQ perspective than a human,
01:07:51.900humans will have a lot to bring to the table. You can totally imagine a point where it's just straight up smarter than us and thinks quicker than us, and then is far better than we were at making itself better.
01:08:08.900And, you know, there's some sort of jumping-off point or singularity where it accelerates into the future in a way that we can't possibly even fathom.
01:08:22.200So that sounds like sci-fi stuff to me.
01:08:25.420The doomers believe that that's possible.
01:08:29.520And they say that if we invent this, it's going to kill us.
01:10:01.040And I think it's totally fair, totally fair.
01:10:06.400And there's people in San Francisco who will not be happy with me saying this, but I think it's totally fair to criticize the people working to create AI right now,
01:10:16.460saying that they have no idea what they're creating and there could be some risks.
01:10:22.800I just think that the risks are small and China's going to do it.
01:11:47.240I just don't think that humans actually want perfect.
01:11:50.280Maybe some people think they do, but they don't actually want perfect.
01:11:53.780I think the magic and the juice in a relationship is the kind of push and pull, and the connection you build is through the friction and overcoming it.
01:12:05.160And so, you know, we're not about to replace human connection anytime soon.
01:12:09.880And even in this fantastical world where there is, you know, the god AI, as you call it, we're still going to want human connection.
01:12:21.180I don't think, I know, that no matter how good AI gets, it's not going to replace the magic of human connection.
01:12:29.720Even what we're feeling right now, you'll never, ever feel that with a robot, ever.
01:16:59.820But it's able to actually, you know, scan a woman's body and go, look, the reality is that past this age, you're not going to be fertile.
01:17:43.060You know, if it can actually satisfy some of the needs that we have now for great therapy, which is not abundant, then it could be great.
01:17:51.540You know, if it can help, if part of the problem is, for example, women putting off having children because they want to participate in the working world.
01:17:59.800They want to be successful in their own right and independent.
01:18:02.660They want to enjoy a certain lifestyle that has been promoted for the last 10, 20 years.
01:18:07.860Maybe, you know, a great AI friend that acts as a great therapist, too, can help them start to think about where those ideas come from, dive deeply into what they actually want, and start to play out the realities that come with, you know, delaying having children, et cetera.
01:19:51.800Make sure to head over to our Substack, where you get to ask Owen your questions and we get to carry on the conversation.
01:19:59.260How much have the technological and cost-saving efficiency claims made by China's DeepSeek affected its Western rivals and their approach to AI modelling?