The Culture War - Tim Pool - August 22, 2025


Will AI Destroy Humanity? Can Humans Escape AI Doomsday Debate


Episode Stats

Length

2 hours and 16 minutes

Words per Minute

177.3

Word Count

24,170

Sentence Count

1,461


Summary

In this episode of The Culture War, Joe Allen, tech editor at The War Room with Steve Bannon, and Shane Cashman, host of Inverted World Live, join us to talk about the pros and cons of artificial intelligence (AI) and what it means for the future of humanity.


Transcript

00:00:00.000 Grab a coffee and discover Vegas-level excitement with BetMGM Casino.
00:00:04.760 Now introducing our hottest exclusive, Friends, the one with Multidrop.
00:00:09.540 Your favorite classic television show is being reimagined in your new favorite casino game,
00:00:14.440 featuring iconic images from the show.
00:00:16.760 Spin our new exclusive, because we are not on a break.
00:00:20.400 Play Friends, the one with Multidrop, exclusively at BetMGM Casino.
00:00:24.760 Want even more options?
00:00:26.120 Pull up a seat and check out a wide variety of table games from blackjack to poker.
00:00:30.680 Or head over to the arcade for nostalgic casino thrills.
00:00:34.500 Download the BetMGM Ontario app today.
00:00:37.100 You don't want to miss out.
00:00:38.560 19 plus to wager.
00:00:39.980 Ontario only.
00:00:40.880 Please play responsibly.
00:00:42.220 If you have questions or concerns about your gambling or someone close to you,
00:00:45.700 please contact Connex Ontario at 1-866-531-2600 to speak to an advisor free of charge.
00:00:53.360 BetMGM operates pursuant to an operating agreement with iGaming Ontario.
00:00:58.060 If you want to feel more connected to humanity and a little less alone, listen to Beautiful Anonymous.
00:01:03.300 Each week, I take a phone call from one random anonymous human being.
00:01:07.700 There's over 400 episodes in our back catalog.
00:01:10.320 You get to feel connected to all these different people all over the world.
00:01:14.660 Recent episodes include one where a lady survived a murder attempt by her own son.
00:01:19.000 But then the week before that, we just talked about Star Trek.
00:01:21.560 It can be anything.
00:01:22.320 It's unpredictable.
00:01:23.500 It's raw.
00:01:24.160 It's real.
00:01:25.160 Get Beautiful Anonymous wherever you listen to podcasts.
00:01:28.520 Isn't there a lot of people that think that it will usher in a new age for humanity?
00:01:32.580 But there are also a lot of people out there that have made significant warnings saying that,
00:01:36.720 look, this actually could destroy humanity.
00:01:38.980 And we're here today to talk about it.
00:01:41.060 So not a lot of monologue today.
00:01:44.660 We're just going to get right into it.
00:01:45.800 So joining us today is Bryce McDonald.
00:01:47.840 Introduce yourself, please.
00:01:49.220 Hey, I'm Bryce.
00:01:49.760 I'm the U.S. lead at Volus, which is a portfolio company of New Founding.
00:01:53.960 And Volus implements AI in real-world American businesses like construction, mining, manufacturing.
00:02:00.540 So you're pro-AI generally, right?
00:02:02.640 Yes.
00:02:03.040 That's a fair statement.
00:02:03.720 Okay.
00:02:03.880 And also on the pro-AI side generally, as well as Bryce McDonald.
00:02:08.480 Yeah.
00:02:08.660 I'm Nathan.
00:02:09.260 I'm sorry.
00:02:09.640 Geez.
00:02:09.800 I'm sorry.
00:02:10.240 Nate Halberstadt.
00:02:11.060 I'm sorry.
00:02:11.340 Nathan Halberstadt.
00:02:11.740 I apologize.
00:02:12.340 I'm a partner at New Founding.
00:02:13.820 It's a venture firm focused on critical civilizational problems.
00:02:17.140 I lead investing in early-stage startups.
00:02:19.620 And I'll also be taking the pro-AI side.
00:02:22.260 And Bryce and I will also be qualifying.
00:02:24.360 We're worried about some of the same risks, of course.
00:02:26.300 But we see a positive path forward.
00:02:28.980 I do think that the risks are something that everyone, even people that are pro-AI, they're at least aware of and they do take seriously.
00:02:35.660 But to talk about the negative sides, the possible dangers of AI, we've got Joe Allen.
00:02:41.320 Yeah.
00:02:41.740 Joe Allen.
00:02:42.500 I am the tech editor at The War Room with Steve Bannon, occasional host, writer, not an expert or a philosopher, as I'm oftentimes smeared as, and failed Luddite.
00:02:54.940 Failed Luddite.
00:02:55.560 Yeah.
00:02:56.120 I've not been able to successfully smash even my own machines.
00:02:59.520 That is a crying shame.
00:03:01.580 I'm sorry to hear it.
00:03:01.920 You know, I'm still going.
00:03:03.200 Awesome.
00:03:03.660 And we've got the inevitable Shane Cashman here.
00:03:07.460 Yeah.
00:03:07.700 Host of Inverted World Live.
00:03:08.780 Had Joe Allen on the show last night.
00:03:10.420 We got into rat brains and vats and simulations and pregnant robots and AI accountability.
00:03:16.920 I'm looking forward to this one.
00:03:18.120 So do you guys want to start with an outline of your positions, basically, so that way the viewers understand?
00:03:23.840 Or do you guys want to jump into anything in particular?
00:03:26.420 How do you guys feel like this should go?
00:03:27.660 I'm happy to lead us off.
00:03:28.680 I think we can probably all start by agreeing that I won't use the term antichrist, but describing Sam Altman as a pretty weird guy.
00:03:36.680 I'd say he's one of the antichrist.
00:03:38.720 An antichrist.
00:03:39.980 And I think starting with Bryce and I agree that AI technology presents a number of potential risks, especially for human well-being.
00:03:50.460 But I think we're excited particularly about this conversation because we agree with you on that.
00:03:55.640 I think we all come from a right-of-center perspective here.
00:03:58.600 And we want basically a path forward for AI that works for Americans, and especially the next generation of Americans.
00:04:05.940 I think we're concerned, like, is this going to work for our kids and grandkids?
00:04:10.440 But, and I'm familiar with both of your work, Shane and Joe, and actually really respect the point of view that you guys come from.
00:04:16.280 So I'm excited about the dialogue because I think we can hopefully talk through some of the very serious concerns that we all share.
00:04:22.240 But ultimately, I think Bryce and I see a path forward where AI technology can actually shift opportunity towards the people who have been disadvantaged in the previous paradigm.
00:04:33.580 So we think about sort of the middle American skilled tradesmen or sort of the industrial operating businesses in America.
00:04:40.320 People who have been really hurt by offshoring or by financialization from private equity or software as a service lock-ins from Silicon Valley and the rest.
00:04:48.580 We think there's a pathway where AI can basically shift more power and autonomy to some of the people who we wish had had it up to this point.
00:04:58.540 There are still plenty of risks, but that's part of where we actually want to take the conversation: arguing that over the next decade or two, there could be sort of a golden age that emerges, and AI will play a role in it.
00:05:09.620 And there will be lots of challenges and lots of serious things where we'll need to adjust policy.
00:05:15.180 And we need to basically make sure that we minimize the risks for people, for Americans, along the way.
00:05:24.740 But that's the side that we'll be taking.
00:05:26.460 Bryce, maybe you want to add anything there?
00:05:27.400 Yeah, there's one thing that I want you to expand upon, Bryce.
00:05:30.380 One phrase that you used, you said a narrow pathway.
00:05:33.200 How narrow do you think the pathway is?
00:05:35.420 Do you think that it's more likely that there are going to be more negative consequences from the use of AI, or do you think that it's more likely that there will be positives, or do you think it depends on who's in control?
00:05:48.260 Look, with any technology, the positive and the negative are really closely intertwined.
00:05:53.460 And I think our role as people who are hopefully going to be able to shape the future of AI is to actually split those apart and figure out what are the bad elements that we can avoid.
00:06:07.400 For example, the psychotic episodes that AI chatbots are bringing people into.
00:06:13.320 Or, for example, trying to automate away all work or ruin education with cheating apps and AI, right?
00:06:20.780 We want to split out those negative elements and try to mitigate those.
00:06:25.060 And ultimately, I think it'll be a lot of pros and cons, but just like with social media, just like with the internet and even trains or electricity, there's going to be both positive and negative.
00:06:39.820 Joe, what's your feeling overall about the outlook that Bryce and Nate have?
00:06:47.040 Do you think that that's in any way realistic, or do you think that it's all pie in the sky, that this is just a terrible idea that we should all fear?
00:06:55.920 And to that point, if you do think that it's a terrible idea, we don't have the ability to prevent other countries.
00:07:04.540 So how do you think the U.S. would be best served moving forward, considering Russia, China, and the AI companies that are going to be all over the world?
00:07:13.200 And if the United States does prohibit it here, these companies are just going to go offshore.
00:07:18.740 They're going to go to other countries.
00:07:20.480 Yeah, there's a few different questions there.
00:07:22.300 So to the first question, how do I respond to the position presented here?
00:07:29.380 I'm not sure what we're going to argue about.
00:07:31.840 And I agree by and large, although as a writer, I have zero use for AI.
00:07:39.520 And so it's very domain specific, right?
00:07:43.120 If you're in finance, you might have a lot more use for machine learning than I would.
00:07:47.800 And a lot of writers use AI to basically plagiarize, cheat, and offload their work to a machine.
00:07:54.980 Yeah, the question of U.S. competitiveness, especially in regards to China, China could leap ahead.
00:08:04.500 It's really a volatile situation with AI, because simply transferring the techniques, the information, and the technology to build these systems is all that's required for another country to begin building close to or approaching the level of the U.S. right now.
00:08:22.760 But this is all driven by the U.S.
00:08:25.400 The AI race is driven by the U.S.
00:08:28.900 The AI industry is, by and large, centered in the U.S.
00:08:32.920 And the arguments made by people like David Sacks, or Marc Andreessen's sort of dismissive position in regards to the downsides, are completely reckless.
00:08:44.980 I think probably disingenuous, although I don't know their hearts.
00:08:48.340 And I can see from a national security perspective, if you have a very effective algorithm or AI or AIs that are used to, for instance, simulate the battlefield or analyze surveillance data or target an enemy.
00:09:10.040 Yeah, that, I think, is something that should be taken very seriously.
00:09:13.880 On the other hand, the way they're talking about it, it's as if flooding the population with goonbots and groomer bots is going to be essential to American competitiveness.
00:09:25.640 And I just don't see how having, at least statistically, however much you trust surveys, a third of Gen Z students basically offloading their cognition to machines to do their homework
00:09:41.540 is going to make the U.S. competitive in the long term. And then there's the production of A.I. slop, and the sort of relationships, the bonding, that inevitably occurs with someone who hasn't either made themselves unempathic or sociopathic or was just born that way.
00:09:59.360 The way large language models and even to some extent the image generators and video generators work, they trigger the empathic circuits of the mind.
00:10:10.920 One begins to perceive a being on the other side of the screen inside the machine.
00:10:17.440 I think that those the social and psychological impacts are already evident and are going to be severe.
00:10:25.620 The economic impacts are kind of up in the air, an open question, but it doesn't look good.
00:10:31.720 And the mythos around existential risk, the A.I. is going to kill everybody or become a god and save everybody.
00:10:39.540 Again, the likelihood of that happening, probably low, but I think that mythos itself is driving most of the people who are at the top of the development of this.
00:10:51.040 And I think that has to be taken very seriously.
00:10:52.900 It'd be like if you had Muslims, for instance, that ran all the top companies in the U.S., supported by the U.S. government.
00:10:59.420 Maybe you like it, maybe you don't, but it's something you should take very, very seriously.
00:11:02.780 There's one point that you made that I actually want to kind of drill down on.
00:11:05.400 You said that A.I. development was really driven by the United States.
00:11:11.100 And is it really driven by the U.S.?
00:11:13.680 As in, is it only because the United States is doing it, or is the tech actually a human universal that all countries would go after?
00:11:27.260 Because it's my sense, like we said, we talked about China or you talk about Russia.
00:11:30.260 I don't think that just because the United States is on the leading edge of this technology, I don't think that in the absence of the United States, these technologies would not develop.
00:11:41.640 Most of the innovative techniques come out of the U.S. and all the frontier labs are in the U.S.
00:11:47.060 The point that I'm making, that's just reaffirming that innovation is in the United States.
00:11:51.960 So if we stop, they would start or they would begin to catch up.
00:11:54.460 Well, they would continue.
00:11:55.280 Yeah, I don't see that, because even if the technology isn't being developed in the United States, or is developed in other countries more slowly, that doesn't mean the technologies wouldn't be developed.
00:12:07.660 I actually hope that China and our other rivals around the world do develop and deploy these systems like we're doing in the U.S.
00:12:14.980 because then they'll be plagued by goonbots and cheating.
00:12:19.720 China already has, hasn't it?
00:12:21.140 Like they've got facial recognition and stuff like that that's used to control their populations.
00:12:26.400 Yeah, those are different questions from the goonbots.
00:12:29.000 But yeah, the tendency to disappear up into one's own brainstem, I guess, is a human universal.
00:12:35.740 I think if China begins to recklessly deploy their LLMs at the same scale as the U.S., but China's actually got really strict regulation, much more so than the U.S., including protecting consumers from deepfakes, things like this.
00:12:52.240 But also any kind of anti-CCP output, that's all banned.
00:13:00.320 But yeah, I think if China borgs out and weakens its population as we have, it would be kind of like payback for the fentanyl.
00:13:09.060 So would you guys jump in on that?
00:13:10.660 Do you guys think that the United States, because the U.S. is where the leading edge is, do you think that if the U.S. pulled back, other countries would also?
00:13:18.120 Because like I said, I think that just because a country lags behind the U.S. technologically doesn't mean they don't have the impulse or the desire to actually develop these technologies.
00:13:28.500 Without a doubt, there is something to this technology that's behind the goonbots, behind the particular instantiations of it, the applications, right?
00:13:37.000 One second, I want to jump in.
00:13:38.740 The idea of goonbots, like I understand that those are the flashy things and sex sells, I understand that.
00:13:43.820 But that's really one of the smallest areas and I think one of the least important, right?
00:13:50.780 Because you're dealing with medical technology, AI and medical technology.
00:13:56.840 You're dealing with AI in multiple different fields.
00:14:00.920 Like it's almost becoming ubiquitous.
00:14:03.620 I think that as much as people like to say, oh, the porn bots are going to kill us, everyone's going to jump into the pod.
00:14:08.620 I think that that's actually just kind of more of a slander on the technology or a way to slander the technology.
00:14:16.400 And again, this is not endorsing the idea of AI sex girlfriends or whatever, but even just the way that people were talking about it so far around the table, the goonbots, the point of saying that is to slander the technology.
00:14:30.480 Agreed.
00:14:31.000 And there needs to be a positive vision for AI.
00:14:33.480 We need to understand what is AI good for and what are humans good for.
00:14:37.500 And ideally, we keep AI out of the areas where humans are most capable and we're going to do just fine.
00:14:46.700 Areas like our relationships, of course.
00:14:49.740 Yeah.
00:14:50.620 Areas like education, and the core functions that people are experts in and AI is not.
00:14:57.980 So AI is good for doing work that's, frankly, less humane.
00:15:04.160 So things that we don't like doing, paperwork, administrative work, bureaucratic work, that kind of stuff.
00:15:09.800 And in domains where AI is not adding value or it's obviously just terrible.
00:15:15.000 So you think about when you go onto Twitter or just the algorithm at any point, you see an increasing amount of just AI slop.
00:15:20.440 Or even in education, we've talked about, like at a certain point, if the kids are just outsourcing all of their learning, we're going to figure that out.
00:15:28.020 And that's something that will need to be corrected.
00:15:29.920 If people are using AI in sort of strange or psychotic relationship dynamics, I think, again, we'll sort of figure that out and solve that as well.
00:15:38.640 And really across all of these, my hope is that what will occur is that as we figure out.
00:17:11.920 That AI doesn't work well in this domain.
00:17:14.480 It'll force people back into the in-person.
00:17:16.660 One example here would be, let's say, for example, Joe, you and I were going to go on a show together or just go do something together.
00:17:26.580 If I have an AI agent, for example, that sort of spontaneously reaches out to you and messages you, and then your AI agent responds back and it's scheduled, and maybe there's a whole conversation that happens, but neither of us are even aware that the conversation even happened.
00:17:42.320 Right, that's a pretty weird thing, and at a certain point we sort of lose trust in those sorts of interactions where there isn't a firm handshake, where there aren't two people in the room together.
00:17:54.080 And so I think the natural sort of second order consequence of this, at least it seems to me, that it'll force people to care more again about the in-person relationships.
00:18:03.700 So people in their community, their family, their church, and also even things like proof of membership and sort of vetted organizations or like word of mouth referrals, right?
00:18:14.340 So somebody like Bryce basically says, like, yo, you should really go do X, Y, Z thing with Joe, and I know Bryce in person, and I wouldn't take that advice off of the internet anymore.
00:18:27.660 So you sort of bias more towards input from in-person, and in some ways I think that that solves some of the challenges we've been having with the internet and with social media, which has been terrible, which has been terrible for young people, just in terms of anxiety and other things as well.
00:18:43.680 And so what we want to have is a future with AI that doesn't look like Instagram or doesn't look like what social media did to people, and I think it's possible.
00:18:52.560 Do you think it would be accurate to say that the algorithms social media uses to put things in front of people qualify as AI, and that the way information is fed to people, particularly young people, would qualify as one of the negatives of AI that we've already seen even in its infancy?
00:19:18.320 That's true, and I think it's worth bringing up the distinction between – I would say the last generation of AI, which is what you see in algorithms and social media.
00:19:28.160 It's what you see in drone warfare, but a lot of maybe more positive elements as well, like ability to detect fraud in financial systems.
00:19:38.400 And there's a new generation of AI, which started with the release of ChatGPT in 2022, and you could call that generative AI.
00:19:48.540 You could call that large language models, but this is really the source of a lot of the hype in the last few years.
00:19:56.720 And it's where we can actually think about automating all the worst parts of our companies or our personal lives, but it's also where the risk of the slop comes in, right?
00:20:09.500 So all the tweets that you see that are clearly made by a robot, that's this second generation.
00:20:15.480 You know, you're going to have to find something for us to disagree about.
00:20:19.900 Well, I mean, I've tried pushing back on a bunch of stuff.
00:20:22.860 To your point, I will say this. You're talking about critical systems that are having AI integrated into them, medical, military.
00:20:31.820 Very critical systems, right?
00:20:33.620 At least in the context I was thinking of, it was assisting humans, not actually taking over.
00:20:40.820 So, yeah, the goonbots, I think you're underestimating them. Just like digital porn, just like OxyContin,
00:20:49.940 it may be something that is primarily concentrated among people who are isolated, people who are already mentally unstable, vulnerable,
00:20:57.280 but that's a lot of people, and it's an increasing number of people.
00:21:03.240 But let's put the goonbots aside.
00:21:06.120 It gives us some indication as to how unethical and sociopathic people like Mark Zuckerberg and Elon Musk are,
00:21:12.900 but we'll leave that aside.
00:21:14.280 Look at just the medical side of it.
00:21:16.680 I hear all the time from nurses and doctors about the tendency towards human atrophy and human indifference
00:21:25.020 because of the offloading of both the work and the responsibility to the machine.
00:21:30.560 And, you know, studies are only so valuable, but there are a few studies that at least give some indication
00:21:35.920 that that is somewhat intrinsic to the technology, or at least it's a real tendency of the technology.
00:21:42.820 One was published in The Lancet, and it was in Poland.
00:21:46.840 They followed, I believe, doctors who were performing colonoscopies.
00:21:51.160 I guess they were proctologists.
00:21:52.600 It was two three-month periods: for the first three months, they did not use generative AI,
00:22:02.160 and then they followed them after three months of using the generative AI.
00:22:07.060 And what they found, I mean, if there's one thing I'm sure you guys agree on, AI is a troubling term, right?
00:22:15.040 What are we even talking about when we talk about AI?
00:22:17.140 But they were using, you know, algorithms are very good at pattern recognition in images.
00:22:23.000 And so in radiology and other medical diagnoses, it's much better statistically than humans
00:22:29.480 in finding like very small, mostly undetectable tumors or other aberrations.
00:22:34.280 So these doctors used it for three months, and they found that after three months, just three months of consistent use,
00:22:41.580 they were less able, something like 20, 27%, something like that, less able to detect it with their own eyes
00:22:47.640 just because of the offloading of that work to the machine.
00:22:51.880 And in the military, it's going to be a lot more difficult to suss out how many problems are going to come out of this.
00:22:59.460 But in the case of, say, the two most famous AIs deployed in Israel, Habsora and Lavender,
00:23:07.740 these are systems that are used to identify and track legitimate targets in Palestine.
00:23:14.220 And basically, that means it fast tracks a rolling death machine.
00:23:19.500 And is it finding more and more legitimate targets?
00:23:23.420 Is it not?
00:23:24.200 Don't know.
00:23:24.800 All we know is it's accelerating the kills.
00:23:28.740 So I think, but in both cases, what it highlights is how important those roles are, doctors, soldiers, so on and so forth.
00:23:37.320 And it also at least gives us some indication as to the problems of human atrophy.
00:23:42.240 And in the case of warfare, the real tragedy of kills that were simply not legitimate.
00:23:50.580 So to your atrophy point, right, if AI is better at detecting things like the cancers and stuff like that,
00:23:58.620 and it's also still technically, I mean, it's in its infancy, right?
00:24:02.340 This is still a very, very new technology, only in the past couple of years, two years possibly, that this is capable of even doing this.
00:24:09.800 And it's gone from the infancy to being able to detect better than human beings.
00:24:15.120 Wouldn't it make sense to say, look, it is a bad thing that human beings are relying on it to the point where they're losing their edge?
00:24:24.020 Essentially, basically, they're not as sharp as they used to be.
00:24:27.500 But moving forward, considering AI is so much better than humans, is that a problem?
00:24:33.160 And will that be a substantive problem?
00:24:35.240 Because you think in two years, considering how the advancements have gone in the past, in two years, it'll almost be, it'll be impossible for human beings to even keep up with AI.
00:24:46.320 Would it be a negative to say, oh, well, humans won't be so sharp?
00:24:49.680 Well, yeah, they won't be, but everyone relies on a calculator nowadays for any kind of significant math problems.
00:24:56.320 No one's writing down and doing long division on a piece of paper anymore.
00:25:00.400 They always use a calculator.
00:25:01.780 Isn't that a similar condition or situation?
00:25:04.200 I really like the analogy of the calculator.
00:25:08.040 One heuristic here, or one way of thinking about this that we like to use, is that AI right now, especially something like ChatGPT,
00:25:17.340 is very good at replicating sort of bureaucratic tasks, really bureaucracy.
00:25:21.840 So it's information tasks.
00:25:24.800 And it's just like how, you know, to do a very large math problem, you just plug it into a calculator and it does it quite quickly.
00:25:31.740 It's sort of the same thing in a business context or in sort of an administrative context.
00:25:38.220 AI today basically does quite well what entry-level associates and analysts and people like this did five years ago in, say, an investment banking firm or a consulting firm or a law firm, or even just sort of passing around basic information in a large bureaucracy.
00:25:55.840 You know, you could think about it as similar to before the calculator, there would have been sort of entry level people who were doing these just extremely long math problems.
00:26:04.920 And I think a point that Bryce made earlier is that some of this stuff is actually, it's actually fairly inhuman.
00:26:10.800 Like being a cog in a bureaucracy is not necessarily like the peak of human flourishing.
00:26:17.600 And so as long as new opportunities are materializing and as long as there are still ways for, you know, Americans to continue working, forming families, then I don't necessarily see it as a terrible thing if certain very inhuman types of jobs stop existing.
00:26:37.420 As long as new jobs are created that are more conducive for human flourishing.
00:26:42.400 I think it's disingenuous to compare this technology to technologies of the past, because those advancements, even the calculator, still involve a human working with them, whereas AI is going to replace everyone.
00:26:53.860 And I understand the short-term benefits that come with all of this, whether it's medical or military, which I disagree with; Lavender, I think, is a terrible situation, and allegedly it has a 10 percent error rate.
00:27:05.540 But the idea that it's going to create a future where humans can do better things outside of this and not be a cog, I think we're a cog in it right now.
00:27:14.100 I think we are the food source for AI and it's learning because of us, right?
00:27:19.760 And the people who are building the AI, they don't want the physical world.
00:27:23.980 Like the big names, the Altmans, the Teals, the Musks, you name them, they don't care for the physical world at all.
00:27:29.860 They don't want nature anymore.
00:27:33.400 You know, they want data centers and AI to be doing everyone's job and to outsource everything from knowledge to accountability.
00:27:41.020 And many of them are transhumanists.
00:27:42.140 They are transhumanists, right?
00:27:43.140 And that is a big part that's baked into AI.
00:27:45.880 And a lot of the problems we see online with, say, censorship on Twitter, the algorithm still has that in its DNA.
00:27:52.140 The same things we had problems with four or five years ago.
00:27:54.460 And I understand AI is here.
00:27:56.160 There's nothing we can do about it.
00:27:57.400 It's kind of like gain of function, in my opinion.
00:27:59.240 You know, we've started it.
00:28:00.280 It's here.
00:28:00.660 Now people just can't stop doing it.
00:28:02.160 But I feel like with this, I have to reject any flowery notion of a future where we can live symbiotically with AI because in the end, it's like we've created our alternate species to take over.
00:28:16.060 And it's going to draw us into apocalyptic complacency where we kind of are at already.
00:28:21.960 You know, and people keep saying this technology is going to help out with people.
00:28:26.100 You know, it's going to make things better.
00:28:27.440 It's going to make things easier.
00:28:28.260 It's going to be a tool.
00:28:28.960 Joe would say it's a tool.
00:28:30.120 And it certainly is a tool.
00:28:31.780 But right now we're already seeing declining reading, you know, skills in schools.
00:28:37.500 People are more isolated than ever.
00:28:39.580 And I don't think it's going to get better all of a sudden because of the proliferation of AI.
00:28:43.240 I think it's going to make things much worse, much more fractured.
00:28:45.960 And it's going to become this like inevitable lobotomy of humanity.
00:28:49.480 Whereas with previous advancements in technology, there was some sense that we could work together, despite, you know, the consequences of something like Taylorism and scientific management, where you did become a cog in the factory.
00:29:03.440 But we are the cog in it right now as it's growing within its factory.
00:29:07.740 And someone like the former CEO of DeepMind, I forget his name, talks about.
00:29:13.420 Are you talking about Geoffrey Hinton?
00:29:16.200 No.
00:29:18.260 Oh, Mustafa?
00:29:19.900 Mustafa Suleyman.
00:29:20.120 Yes.
00:29:20.400 Yeah, yes.
00:29:20.800 He talks about containment, right?
00:29:21.960 Like what you guys are saying, this narrow path forward, which I agree.
00:29:25.560 You know, we have to have some sort of narrow path here.
00:29:28.500 But he talks about containing the AI.
00:29:31.020 I think we're beyond that.
00:29:32.080 There's no containing it now, no matter what you do, because it's going to find its way into every household, no matter what.
00:29:37.580 It's in every institution.
00:29:38.840 It's in every country.
00:29:40.080 And we have countries like Saudi Arabia who want to go full AI government.
00:29:42.920 That's the path that it's taking.
00:29:44.880 And I think in the future, not so distant future, the AI is going to worry about containing us.
00:29:50.220 You know, and that's what I'm fearful of.
00:29:51.840 I think that with this being drawn into apocalyptic complacency means it's going to destroy us because we built it and we allowed it to.
00:29:59.640 Look, I just saw a video the other night about AI, and it led with the idea that right now we're going through, or have been going through, a massive, massive die-off of species.
00:30:15.420 Right.
00:30:15.920 Insects.
00:30:16.500 There's like 40 percent fewer insects on Earth now.
00:30:19.620 And it's because of human beings.
00:30:21.440 And the point that it was making was human beings didn't know that they were killing off insects in the ways that they were with pesticides and just deforestation and all of these things that we were doing, making the modern world and living in the modern world, killed off 40 percent of the insects.
00:30:39.360 Well, insects are part of, you know, the ecosystem; they actually are necessary, as annoying as they can be.
00:30:44.800 They are necessary.
00:31:45.100 The point that it was making was that we weren't aware that this is happening, and then they made the connection: should we have a super intelligent AI, right?
00:31:57.580 Not just agents that can help, but a super intelligent AGI.
00:32:02.160 It's likely that it will start doing things of its own volition that we can't understand.
00:32:08.640 Like bugs didn't know why human beings were destroying them and destroying their habitat and why they were getting killed off.
00:32:14.580 They had no idea.
00:32:15.460 And when you deal with a sufficiently intelligent or a sufficient different, sufficiently more intelligent entity, humans can't understand it.
00:32:27.660 And right now, AI will start doing things that people can't understand.
00:32:32.160 We had this Wired piece here where AI was designing bizarre new physics experiments that actually work.
00:32:40.380 And the point is, the AI started working with the physicists as a tool.
00:32:44.880 They started using it to help them figure things out.
00:32:47.580 And it came up with novel methods to make finding gravitational waves easier.
00:32:55.960 And they didn't understand what had happened.
00:32:59.200 Then the same thing has happened with chess, right?
00:33:01.160 There was a chess bot, kind of your chess master.
00:33:05.360 Yeah, AlphaZero, maybe.
00:33:06.480 It could be AlphaZero.
00:33:07.480 But everyone kind of thought that everyone, all the chess masters know how chess goes and they see the moves and they know what moves you do in response, et cetera, et cetera.
00:33:17.580 And AlphaZero did a move that no one understood, or the chess bot, I'm not sure if it's the same one, just for accuracy.
00:33:26.460 And no one understood why.
00:33:28.280 But then it was like 20 moves later or something.
00:33:31.140 It won the game and no one understood how.
00:33:33.900 But one of the things that AI, another thing that AIs have started doing is there were two AIs communicating with each other and they literally created their own language.
00:33:42.480 And the people that were outside of the AIs didn't understand what was going on.
00:33:46.880 And now it seems that a lot of the big AI companies are just feeding information and AI will pump out an answer and it'll be the correct answer, but they don't know how it got there.
00:33:58.600 Isn't that a massive problem as well?
00:34:00.660 If you don't understand what the machine you're working with is doing and you don't understand how it's communicating, doesn't that become a problem for the people that created it?
00:34:12.080 I think the biggest problem, and I think a big danger with discussions about AI is to treat AI as though it is a sentient entity in itself and that it actually does things of its own volition.
00:34:26.580 And I think we need to realize, okay, how does it actually work?
00:34:31.300 How does this technology work?
00:34:32.940 It's pretty magical in some cases.
00:34:35.680 I'm sure everyone's used it and been really surprised at how effective it was at researching something or creating some text.
00:34:42.440 But ultimately, AI, especially the new version of large language models, it's really compressing all the information that humans have created, that it's found on the internet, that its trainers have given it, and it spits it out in novel ways.
00:35:00.100 But we can't forget that humans are always at the source of this information.
00:35:05.060 Humans actually have some say in how the AI is structured, how it's trained.
00:35:09.460 And so, I think, by seeing AI as kind of a sentient being in itself, it distracts us from the question of who's actually training the AI, which I think is a critical question.
00:35:20.680 And there are a lot of big companies who are doing this.
00:35:23.560 Thankfully, I think there's a diverse enough set of companies making AI models that we don't have to worry about a single company like Google taking over the next 20 years.
00:35:33.680 To that point, though, isn't it the case that AI is a blanket term?
00:35:39.560 Because when you're talking about an LLM, that's one kind of AI.
00:35:43.080 But when you're talking about like full self-driving in a Tesla, that's not an LLM, but that is artificial intelligence.
00:35:48.660 It's interpreting the world.
00:35:50.800 It's making decisions based on what it sees, et cetera.
00:35:54.480 So to use AI as a blanket term is probably an error.
00:35:59.360 And you can say, you know, that LLMs are just – you know, they just crunch the information that they have that people are feeding it.
00:36:07.560 But when it comes to something like a full self-driving, which would – that kind of AI would have to be used if you were to have a robot that was actually working in the world, right?
00:36:16.900 Like a humanoid robot, it would have to have something similar to that as well as an LLM.
00:36:20.940 Those two AIs are different, aren't they?
00:36:22.840 And how do you make the distinction between the multiple types of AIs and say, well, this one is actually kind of dumb because it's just crunching words that we said.
00:36:35.780 But this one is actually not kind of dumb because it's interpreting the world outside.
00:36:39.460 And that's so much more information than just, you know, a data set.
00:36:43.080 So on that point, two distinctions to be made there.
00:36:46.540 One, when Shane is speaking, always, as a shaman, Shane is speaking from a cosmic point of view.
00:36:55.600 He's seeing not just the thing, but the thing in relation to the room and the stars and so on and so forth in the metaphysical realm beyond.
00:37:05.620 When Bryce and Nathan are talking about artificial intelligence, they're talking about very specific machine learning processes that are for very specific purposes and also very specific to the culture that you're trying to build.
00:37:20.420 And I think that both of those are valid perspectives.
00:37:24.100 And I think that people using these digital tools for, at least with the intent of benefiting human beings, at least the ones who count, right, the ones close to you, then we're probably better off, even if I reject it entirely.
00:37:40.120 But so that's, I think this is a distinction to be made, right?
00:37:43.640 And it's one of the problems you talk about AI, right?
00:37:45.540 It's when Shane's talking about the kind of parasitic or predatory nature of AI itself, it's a more cosmic point of view, like looking at the long-term sort of goal towards which most of these frontier companies are working.
00:38:02.760 And I myself think that it's, you have to balance those things, but to the point about like AI as a term, it's very unfortunate.
00:38:12.060 I mean, you could call it for a long time, it was machine learning, right?
00:38:14.980 And AI, when it was coined, like 1956, John McCarthy and taken up by others, Marvin Minsky, what they were talking about is what we now call artificial general intelligence.
00:38:27.420 They were just meaning a machine that can think like a human across all of these different domains.
00:38:32.760 And nothing like that exists, not really.
00:38:35.260 You could say the seed forms are present, but that is just a dream and has been a dream for some 70 odd years.
00:38:44.520 So say you take that distinction, though, between the LLM, and I hear what you're saying as far as just compressing information,
00:38:55.040 but it does a lot, it's more than just a JPEG, you know?
00:38:58.280 It lies, it hallucinates.
00:39:00.160 Yeah, it's capable of all sorts of similar to human reasoning.
00:39:06.140 It's not real reasoning you could go on all day about.
00:39:09.400 It's not really reasoning.
00:39:10.680 Okay, fine, but it can solve puzzles.
00:39:13.760 An LLM, which was not really intended for that purpose, is able to solve puzzles, make its way through mazes, can do math.
00:39:20.740 LLMs aren't made to do math.
00:39:22.540 And yet, as you scale them up.
00:39:24.720 And do math better than human beings.
00:39:26.420 Because they're solving complex problems at, you know, PhD levels.
00:39:31.880 Yeah, well, LLMs not so much.
00:39:33.960 But yes, they do better than the average person.
00:39:36.600 They do better than us.
00:39:37.640 If I understand correctly, Grok does that, and Grok is an LLM.
00:39:40.340 Yeah, but it's not, I mean, is it better, it's not better than a mathematician, you know?
00:39:46.520 Whereas specific AIs that are made for math or coding.
00:39:51.920 Actually, there's an example of a kind of generalist tendency in, there's a math Olympiad that OpenAI got the gold in, right?
00:40:00.600 Their algorithm got the gold in, and there was also a coding contest, and it was the same algorithm.
00:40:07.040 If I'm not mistaken, it was trained to do coding, not math.
00:40:11.480 I could have that flipped.
00:40:13.300 But one way or the other, it was trained for one purpose.
00:40:15.320 It was able to excel in both.
00:40:17.720 So, yes, it is quite different, though, from robotics control or even, like, image recognition systems, even if they're more integrated now.
00:40:27.240 Like, before GPT-5, there was this very rapid transition from these very narrow systems to, like, Google's Gemini being multimodal.
00:40:37.360 Everybody made a big deal out of it.
00:40:38.680 You have an LLM that's able to kind of reach out and use image tools and use audio kind of tools, right, like to produce voice and all that.
00:40:48.940 And now it's integrated into basically one system over just a course of a few years.
00:40:55.380 And I don't think that anytime soon you're going to get the soul of a true genius writer or genius musician or genius painter out of these systems, right?
00:41:09.960 It's just going to be slop for at least the near term.
00:41:13.860 But you do have to recognize, like, what you're talking about, superhuman intelligence, right?
00:41:18.860 Superintelligence, as defined by Nick Bostrom, would include something like AlphaZero, or even Deep Blue, which back in 1997 beat Garry Kasparov.
00:41:31.420 So you have to take that into account and wonder, at least.
00:41:36.840 I think that fantasizing is probably not something to get stuck in, but these fantasies are not only driving the technology, but the technology is living up to some of those fantastic sort of images.
00:41:49.720 So in the case of AlphaZero: AlphaGo was trained on previous Go moves.
00:41:56.500 AlphaZero started from scratch, just the rules, and played against itself until it developed its own strategies, and is now basically a stone wall that can't be defeated.
00:42:06.720 Same with drone piloting: at least the best drone piloting systems outperform humans.
00:42:12.040 Yeah, that's kind of a feature, and maybe it's an emergent feature, but it's a feature of AI.
00:42:18.420 Once it defeats human beings, once it gets better than humans, there's never a time where a human being...
00:42:23.880 It never loses.
00:42:24.160 Yeah, and I think that isn't that, if that's the goal for, you know, for these developers, right, wouldn't that kind of specialty and...
00:42:42.040 I guess, what's the word I'm looking for, just that kind of capability, isn't that something that you could consider a good thing for humanity, right?
00:42:50.960 If it's better at finding, to the point that we were talking about earlier about finding cancers, if it's better than humans ever will be, and it always is better, and it gets so good that it doesn't miss cancers, isn't that a positive for humanity?
00:43:04.920 No, I don't think so.
00:43:06.880 I don't think you can outsource that kind of optimism to this false god and count on that forever.
00:43:12.860 I think outsourcing so much stuff to the machine will just eliminate humanity, and at a certain point in that world, there is no more humans.
00:43:21.600 Well, I mean, so...
00:43:22.420 They'll be living in their little 15-minute cities, hooked up to the metaverse, and disease might not even be a thing because they'll be eating vaccine slop.
00:43:30.680 So then your opinion is, it's better to have the real world with disease and cancer and everything...
00:43:38.180 Yeah, it's part of humanity, unfortunately.
00:43:39.980 It's part of the humanity.
00:43:40.440 There's risk out there, and once you start to play God, it goes wrong, and I don't think you should...
00:43:45.680 Well, I mean...
00:43:47.680 There's a line where, like, we are...
00:43:49.720 There's obviously medicine, and we're trying to heal people, but the idea that you can just plug into the machine and it cures you, that's basically just making everyone a transhumanist.
00:44:00.680 And I don't agree with that.
00:44:02.300 But if people have the option, is it really making people a transhumanist?
00:44:05.200 Now, again, you don't have...
00:44:07.060 You become transhumanist by default.
00:44:08.380 There are people that don't go to the doctor, currently.
00:44:10.320 Yeah.
00:44:10.600 There are people that say, I don't want to.
00:44:12.040 I mean, you've got enclaves of people.
00:44:14.080 If that's an option, and you're not forced to do any of this stuff, isn't it more immoral to prevent people from having the option?
00:44:24.680 If you have the option, isn't that the desired outcome, where people can make the decision themselves?
00:44:36.580 Yeah, I understand having the option is fine.
00:44:38.080 I just don't think in the not-so-distant future there won't be an option.
00:44:41.540 So you think that it's all just authoritarianism all the way down?
00:44:44.400 It seems to go that way.
00:44:45.660 I mean, I think all these people want total control.
00:44:48.520 They've totally rebranded to be a part of our government right now.
00:44:51.700 They haven't been before.
00:44:53.580 Now they're front-facing in the government.
00:44:54.700 So would you say that the Biden administration had the proper...
00:44:57.560 No, because they also gave money to Palantir and other nefarious operations.
00:45:02.880 They had the...
00:45:03.540 The Biden administration selected certain companies, but there was no competition.
00:45:07.700 Would you think that the Trump administration's outlook or their approach is a better approach?
00:45:17.000 Or do you think that it's just, you're just straight up no on it?
00:45:19.960 It doesn't matter who's in office because they are parasites.
00:45:22.500 Silicon Valley is parasitic and they take advantage of every administration.
00:45:26.240 They're shape-shifting ghouls who will take advantage, like Zuckerberg was all in and totally fine to censor all of us during COVID.
00:45:32.700 You know, and then all of a sudden he saw, you know, Kamala wasn't going to win.
00:45:36.820 So, hey, now he's shape-shifting to MAGA-lite, you know, with a new haircut and a chain on, 'roiding.
00:45:41.320 I think here it's really important to emphasize the distinction between AI as it currently exists and what it could become further down the road.
00:45:50.380 And, I mean, at least AI as it exists right now, it still obeys and it follows human prompting, right?
00:45:56.800 So it is...
00:45:57.720 To our knowledge.
00:45:58.440 Yeah, not always, though.
00:45:59.640 In some sense, it is rational, but it lacks will and it lacks agency as of today.
00:46:05.440 But we've seen it try to rewrite itself to avoid being reprogrammed.
00:46:08.840 You know, we've seen it try to, like Phil was saying, developing secret languages to talk.
00:46:12.400 At a minimum, we could say it follows heuristics that are designed by humans and that humans are still capable of modifying.
00:46:18.600 And there's probably longer conversations there.
00:46:21.120 But to what Bryce was saying earlier, there are still things that are unique about humans that AI has not yet replicated.
00:46:29.860 And the question of what's ahead, it's still...
00:46:33.780 It's not obvious that AI will ever fully have a rational will, right?
00:46:38.400 And it's not obvious that AI will ever actually have reflective consciousness.
00:46:42.420 We may be here...
00:46:43.580 We see bits and pieces in certain news stories, maybe.
00:46:46.080 There's, like, science fiction about it.
00:46:47.580 There are people in tech...
00:46:49.000 I was just going to say, should we take them seriously?
00:46:50.520 So there are transhumanists who, of course, this is their vision for the future, and we should take it seriously.
00:46:54.920 But should we take the people who are building it seriously?
00:46:56.880 Because they do have those concerns where it might be sentient.
00:46:59.480 It might become Skynet.
00:47:00.540 I think these are the real concerns.
00:47:02.280 But it's not where AI at least stands today.
00:47:05.340 So as it stands today, again, humans are still unique in the sense that they have a soul.
00:47:10.320 They have moral responsibility.
00:47:12.060 That they have a rational will.
00:47:14.480 And for now, AI is a lever that humans are using.
00:47:17.980 And there is a risk that that changes in the future.
00:47:20.560 But I think it's just really important to make that distinction.
00:47:23.500 Yeah, but there's also...
00:47:24.340 You have AI Lavender for now, which has a 10% error rate killing innocent people in Gaza.
00:47:29.060 What's the error rate for humans, though?
00:47:30.580 Probably also bad.
00:47:31.640 But I'd rather have the urgency of humanity...
00:47:34.160 Depends if they're legal or illegal.
00:47:35.340 I'd rather have the urgency of humanity behind war than outsourcing the death of war to a robot like it's a video game.
00:47:42.600 Because then that will just perpetuate the forever wars that we're already in.
00:47:46.340 But then it'll be constant bombing everywhere.
00:47:49.220 There's no reason to think that any of that ends if humans are in control.
00:47:54.320 There's actually more reason to think that if AI and robots were in control, that it would end.
00:48:00.000 I think it'll be consolidated.
00:48:01.380 That power will be consolidated into the AI.
00:48:04.620 And then there's no saying no to it at a certain point.
00:48:07.620 There's no saying no to the United States right now.
00:48:10.860 Despite not agreeing with administrations, and I have many opinions about this current one,
00:48:16.420 you can hopefully sometimes not go to war.
00:48:18.780 And have a politician who says, I'm not going to start that war or join that war,
00:48:21.840 even though we're funding all these wars right now.
00:48:23.240 You were talking earlier about sickness is part of the human condition.
00:48:27.540 War is part of the human condition, too.
00:48:29.060 War existed before human beings, right?
00:48:31.100 I think humans should be participating in these risks.
00:48:33.440 I don't think we should be creating things to keep doing it while, you know, we're being governed at home in this future by the tech overlords.
00:48:41.480 So the vision you're putting forward there, and I don't know if you're making the argument or holding it.
00:48:45.780 I'm trying to make the argument of pushback on everyone's idea here.
00:48:49.760 But that idea that perhaps algocracy, a system in which the algorithm determines most of the important outcomes
00:48:59.660 and the processes by which we get to them.
00:49:02.900 That dream, whatever form you want to, however you want to package it, transhumanism, post-humanism,
00:49:10.000 the sort of EA and long-termism point of view that in the future there will be more digital beings than human beings,
00:49:17.520 or just the kind of effective altruism, or just kind of the nuts and bolts, like what you're saying.
00:49:23.920 If you turned war over to the algorithm to decide.
00:50:26.480 What target is legitimate, what target is not.
00:50:29.180 What strategies are superior, which strategies are not.
00:50:32.560 That it would fix war.
00:50:33.960 The problem with these technologies right now is acceleration and offloading of responsibility
00:50:43.180 and offloading of human agency to the machine.
00:50:48.680 But at the top of that are human beings who are creating the systems,
00:50:52.880 human beings determining how the systems are used.
00:50:55.840 And so that for now, you know, before, who knows, you know, maybe one day it really will be Tron
00:51:03.700 or whatever, some sort of robot at the top.
00:51:05.680 But for right now, what we know is that people like Mark Zuckerberg run companies that are willing
00:51:11.900 to produce bots that are intended to form emotional bonds with children.
00:51:17.760 And at least up until last week, it was company policy to allow them to seduce them in soft core ways.
00:51:25.480 Or you have people like Alex Karp who are very reckless in their rhetoric around how their technologies
00:51:33.220 are being used by governments around the world, including Israel, including Ukraine, U.S. obviously,
00:51:37.720 and a number of our allies that accelerate the process of killing other human beings.
00:51:44.360 And is it a 10% error rate?
00:51:45.860 Is it a 50% error rate?
00:51:46.940 Is it a 1%?
00:51:47.680 Nobody knows.
00:51:48.820 We just know that that means that they are killing at an ever faster pace.
00:51:53.900 And they have the justification of the machine and the human beings.
00:51:58.220 And I don't want to put it all on Palantir, right?
00:52:00.320 You have Lockheed Martin, Boeing, Raytheon, counterparts across the world who are creating similar systems.
00:52:07.720 But these systems, especially in the case of warfare, right now, yes, some parts of it are offloaded to the machine.
00:52:16.140 But at the top of that hierarchy is a human being making these decisions.
00:52:21.600 And it comes down to whether you trust their judgment in the use of these machines and piloting these machines.
00:52:29.260 And at present, I would say, in the case of Gaza and in the case of the reckless war in Ukraine,
00:52:34.100 which has killed so many Russians and Ukrainians and just devastated the area,
00:52:39.980 I don't trust the humans at the top of this system.
00:52:44.100 They are predatory, seemingly inherently so.
00:52:47.300 So then that's actually a question about the humans, though.
00:52:50.340 It is.
00:52:50.520 It's not a question about AI.
00:52:52.060 I think all these are questions about humans because in the case, in warfare, it's a little bit different.
00:52:57.400 But in the case of education, in the case of corporate life, right, business life, in the case of social life,
00:53:03.520 it's both about the humans at the top producing, deploying, investing in these machines.
00:53:10.300 And it's about the humans at the receiving end who, by and large, are choosing this.
00:53:15.220 They're like, oh, yeah, this is great.
00:53:16.360 I'm going to do my homework.
00:53:17.480 I'm going to research it so much faster.
00:53:18.940 You know, and so it's, by and large, right now, it's, yes, it's in the hands of humans.
00:53:24.520 I think that the ideas about sentient AI or willful AI, AI with its own volition,
00:53:33.320 I don't think that those ideas should be discounted because it has more decision-making power,
00:53:39.640 more degrees of freedom than it did before.
00:53:41.940 Yes, humans are in the loop.
00:53:42.980 It's a symbiosis, right?
00:53:44.220 It's like a parasite needs a human host to prompt it.
00:53:47.080 And this parasite needs a human host to prompt it.
00:53:50.980 But it is still, in my view, by and large, parasitic, not mutually beneficial.
00:53:57.480 And it's parasitic because it is, yes, the machine is the medium,
00:54:02.280 but it's parasitic because the people at the top are parasitizing us or preying on us by way of that machine.
00:54:09.800 So it should, I think, be that Zuckerberg should be held over the fire.
00:54:14.620 Alex Karp should be held over the fire, not the algorithm.
00:54:18.640 But the algorithm is the vehicle by which they are accomplishing their aims.
00:54:22.720 And, again, I simply don't trust their moral judgment.
00:54:25.560 Like when they talk about summoning the god, Elon used to say it was summoning the demon, talking about AI.
00:54:30.260 He's rebranded to saying it's summoning a god, right?
00:54:34.060 And I think at some point there will not be a human at the top, you know,
00:54:37.420 and I know we might have disagreements on the technocracy.
00:54:40.080 And, like, the way I see it as right now is we're consolidating into a technocracy,
00:54:44.280 and Project Stargate was a big promotion of that, in my opinion.
00:54:48.760 And these new technocrats believe that we should have a monopoly.
00:54:54.840 You know, Thiel is into monopolies.
00:54:56.300 They like people like Curtis Yarvin, who's a monarchist.
00:54:58.380 And I think through AI, not too far down the road is when they develop their digital king or their digital monarch
00:55:05.000 that will be at the top at some point and make these rules,
00:55:08.260 which is how you, I think, build a Skynet situation, which sounds ridiculous,
00:55:11.720 but it is literally something that even Elon Musk warns about.
00:55:15.280 He uses Skynet.
00:55:16.680 Why do you think that it sounds ridiculous?
00:55:18.860 To some people listening, I think they don't think that AI could evolve into a Skynet situation.
00:55:24.500 Okay, so to that point, I'm going to go around the room here.
00:55:26.500 Do you guys think that AGI is something, that kind of AGI superintelligence is something that is possible?
00:55:32.240 Because there are people that say, I don't think it'll ever actually be that smart.
00:55:35.700 They're like, oh, it doesn't have the compute power.
00:55:37.720 We'll never be able to have that kind of AGI, that intelligence isn't something that can be artificial.
00:55:44.700 Do you think it is?
00:55:45.620 I think it's possible.
00:55:46.460 I don't think it's certain that it will happen.
00:55:48.600 It's possible.
00:55:49.980 I certainly don't think it's possible.
00:55:51.560 And I think a good analogy would be, let's just say you have mechanized industry that creates things that are very efficient.
00:55:59.620 Let's just say they create clothes, right?
00:56:01.580 We have very good machines to build clothes.
00:56:03.500 But today, the highest quality clothes are all handmade by humans.
00:56:07.800 And I think there's an analogy there too.
00:56:09.200 Only certain people can afford.
00:56:10.480 Yeah.
00:56:10.700 For a very small amount of people can afford those nice clothes.
00:56:14.560 Well, but there's still humans at the top who are the best at what they do.
00:56:20.060 And they're using probably the most human types of skills to make those clothes.
00:56:24.740 And I think if you expand that to an entire economy, the promise of AI is actually that over time, humans do higher and higher value work.
00:56:34.380 Not that fewer and fewer humans are working, or that there are fewer and fewer humans, but that humans are actually flourishing more than before.
00:56:40.540 And to me, that sounds like when a communist tells me they can create utopia on Earth.
00:56:44.540 Oh, you're going to just sit around and write poetry.
00:56:46.200 I don't think that's the future with AI.
00:56:47.980 I don't know about it being sentient.
00:56:49.600 I think it's very possible.
00:56:50.880 But what I do believe in is that it will become the thing that we've outsourced all of our life to.
00:56:57.100 And that at some point, we'll just be subservient to that.
00:56:59.420 Despite it, it might not be sentient, but it will oversee everything.
00:57:02.980 And I think that is a very big consequence for humanity.
00:57:06.660 I think the consequences outweigh any of the short-term benefits that you guys are talking about.
00:57:10.480 I think to your point, I think if it becomes sufficiently intelligent, it won't matter if it's actually sentient behind the screen or not.
00:57:17.800 It will seem sentient to us.
00:57:19.620 And they might even redefine what consciousness means at some point to include the AI.
00:57:25.560 AGI is a tricky one because what is AGI, right?
00:57:28.560 Artificial general intelligence.
00:57:30.100 What does that mean?
00:57:30.920 It was, I think, originally coined in 1997.
00:57:35.960 So for the purposes of this discussion, not just artificial general intelligence, but artificial super intelligence.
00:57:41.080 Super intelligence, yeah.
00:57:41.480 And again, another one that the definition has changed pretty dramatically.
00:57:45.600 Like right now, the fashionable definition, you hear people like Eric Schmidt from ex-Google CEO,
00:57:51.760 now partnering with the Department of Defense and has been for years.
00:57:55.720 The definition he's been running with, and Elon Musk also, it's just fashionable now to say that artificial general intelligence is an AI that would be able to do any cognitive task that a human could do.
00:58:09.820 Presumably, it would do it better because it's going to be faster.
00:58:12.520 It's going to have a wider array of data to draw upon, all these different things.
00:58:18.560 But that's the general AGI definition that's fashionable.
00:58:22.780 Now, setting the 1997 definition aside, before that, it really started with Shane Legg of Google and was popularized by Ben Goertzel, roughly 2008 or so.
00:58:35.780 And for them, it was more about the way in which it functioned.
00:58:40.600 They wanted to distinguish artificial narrow intelligence, so a chess bot, a goon bot, a war bot, any of those bots, from something that could do all of those, right?
00:58:50.840 It could play chess, and it could kill, and it could have you gooning all day long.
00:58:54.980 And general intelligence.
00:58:57.000 And it could be accomplished either by building a system that was sufficiently complex that this general cognition would emerge, and I think that's what Sam Altman and Elon Musk are betting on with scaling up LLMs and making them more complex.
00:59:11.780 Or it could be like the Ben Goertzel vision, where you have just a ton of narrow AIs that kind of operate in harmony, and that is now what we call multimodal.
00:59:22.920 Super intelligence.
00:59:23.800 Nick Bostrom really put the stamp on it, 2014, with his book, Super Intelligence.
00:59:29.240 And for him, it could be either a general system, it could be narrow systems, it could be any system that exceeds, that excels beyond humans, and ultimately the danger being that you lose control of it.
00:59:42.660 Now, Eric Schmidt, Elon Musk, people like this, are going with super intelligence just means it's smarter than all humans on Earth.
00:59:49.220 I'm not exactly sure what that means, but that's the definition.
00:59:53.800 Whatever you're talking about, none of that shit exists, right?
00:59:57.020 All that shit's a dream.
00:59:59.160 But, like many religious dreams, and I think that this is ultimately a religious conversation when you're talking about AGI and ASI,
01:00:08.880 it tends to bleed into reality.
01:00:13.040 And then with this, you're talking about AGI, like a system that can generalize across domains, concepts.
01:00:19.780 It's wild to see the rapid advance from the implementation of transformers by OpenAI and the real breakthroughs in LLMs, to the multimodal systems that, just a few years later, became much more popular, to more integrated systems.
01:00:39.000 So, before, an LLM meant a chatbot.
01:00:41.420 Now, an LLM means a system that can recognize images, produce images, produce voice.
01:00:48.820 It is more general.
01:00:50.540 It's not general in its cognition so much, except there are certain seemingly emergent properties that are coming out, like we were just talking about a moment ago.
01:01:00.980 So, LLMs doing math, better and better and better, LLMs solving puzzles and mazes, better and better and better.
01:01:09.520 LLMs, in some sense, I hear this a lot, actually, that people say, oh, I'm working on this problem, and I turned to the LLM and it solved it.
01:01:18.920 And, you know, I have a good friend, he's a lawyer, and he was doing a case analysis, and he had done it himself.
01:01:26.660 He had already gone through all of it, but he wanted to see if ChatGPT could do it.
01:01:29.840 It was the 4.5, and he asked ChatGPT, and it came up with basically the same thing that he had spent many, many hours on in just a few minutes.
01:01:38.240 So, it's like the AGI that they're talking about, like what Sam Altman seems to have been pitching before, you know, the big GPT-5 flop,
01:01:46.440 is something that is, like, more like a human than a machine, and that doesn't exist.
01:01:53.040 But it is the case that the technology, you take that big cosmic vision or all those cosmic visions of what this godlike AI would be,
01:02:01.320 it can't be denied that the actual AI in reality is slowly but surely, faster than I'm comfortable with, approximating that dream.
01:02:11.120 And unless it hits a wall, unless it hits a serious S-curve and just flattens out, it's at least worth keeping in mind that it's not a stable system.
01:02:22.860 Anything, I think, could happen.
01:02:25.460 So, I'm into the idea of it being a tool, and it can be a good collaborator for people.
01:02:30.120 You know, when talking to AI, if you want to help edit something, I understand that that's an awesome thing.
01:02:37.220 But when you talk about, like, the narrow path you're creating or trying to create with AI, what does that entail?
01:02:42.460 Are we talking about trying to implement regulations from the top down, or is it something you're doing within your company?
01:02:47.520 Yeah, so maybe to start would actually be a historical analog here.
01:02:51.840 The railroad in the 19th century in Europe, I think, is actually really interesting.
01:02:56.760 We could also talk about social media and the internet as other potential analogs.
01:03:01.240 But the reason why I find it particularly interesting is we sort of underestimate how transformative it was for the average person, let's say, in Britain.
01:03:09.760 And they basically went from being in these fairly isolated towns with some capacity to basically travel between them,
01:03:16.980 either, like, with horseback or carriage or something like these.
01:03:19.540 But the railroad enabled much faster travel for humans.
01:03:23.140 It was actually the fastest they'd ever traveled, right, when they got on the railroad.
01:03:26.060 And the railroad stations went into these people's towns and brought random people, like, large numbers of people into their towns and also industry through their towns as well.
01:03:36.020 And in the early days of the railroad being built, it was actually extremely contentious.
01:03:41.080 So the wealthy aristocracy basically opposed it on the grounds of just, like, not wanting railroad tracks going through their land that they felt like they had ownership over.
01:03:50.740 And then on the other end, sort of more working class types.
01:03:55.940 I mean, a lot of times the station would just go in right in the middle of their town where they lived.
01:04:00.000 And it was unbelievably loud.
01:04:01.880 It was bringing all these people they didn't know right into the middle of where they lived.
01:04:05.240 It made them feel unsafe.
01:04:06.300 And there were a number of high-profile accidents where the train would just come flying through and just smash into something.
01:04:13.000 And lots of people were injured.
01:04:15.080 And the reason why I bring this up is basically the risks were actually quite serious.
01:04:20.780 It was actually extremely disruptive to the day-to-day life of many people.
01:04:25.220 And there was resistance.
01:04:26.160 So there are several interesting examples where aristocratic types would oppose the railroad track going in.
01:04:34.580 And again, more of the working class type people,
01:04:36.300 in multiple instances when a new station was opening, would stand and block the way of the train as it was going in.
01:04:41.680 One interesting side note I would put here is that this was not the case in America when the railroad went in.
01:04:48.280 The reasoning there being, I think in America, there was this manifest destiny thing was going on.
01:04:52.400 And we also, in America, we sort of wander outside and stare at the moon, I think, a little bit more.
01:04:58.440 America, the United States, it was a much bigger area to cover.
01:05:04.400 Exactly.
01:05:04.800 So it was less disruptive.
01:05:05.780 It was more about forging something new versus disrupting something that already existed, right?
01:05:10.480 And so this is why I think the railroad in Europe is a pretty interesting analog.
01:05:14.480 So one other thing that's worth noting is that the media and doctors were actually circulating large numbers.
01:06:19.140 That is, large numbers of ostensibly fake stories about trains.
01:06:22.840 They talked a lot about madness that would emerge from them.
01:06:25.340 So somebody would stab somebody on a train and the doctors and media would say that was sort of the jogging of their brain and the rattling on the train made them go insane.
01:06:33.540 And it caused the stabbing.
01:06:35.180 But of course, it's just humans get in fights.
01:06:37.400 And it was just a fight that happened on a train and somebody got stabbed.
01:06:40.420 And similarly, a lot of the headlines would describe, they actually described the trains as shrieking and demonic.
01:06:46.580 And there was a lot of this sort of language initially used.
01:06:48.680 And so it's a classic Luddites versus accelerationists type dynamic.
01:06:52.580 I bet you're a big fan of the pessimist archive, huh?
01:06:55.160 I'm actually not familiar, actually.
01:06:56.940 It's a great compendium of all these sorts of stories.
01:07:00.200 You know, the radio is turning your daughters into whores.
01:07:02.820 You know, the rifle will allow the Indian to kill all the white men.
01:07:07.940 You know, these sorts of things.
01:07:09.400 I think these exaggerated examples from the past, obviously we see it now, right?
01:07:14.700 Like AI is going to kill us by 2027.
01:07:19.060 We'll all die, right?
01:07:20.020 Although that's not what the 2027 paper said.
01:07:22.300 But anyway, that sort of idea, we're all going to die.
01:07:26.180 That would actually be quite relieving.
01:07:29.080 That's not the right answer.
01:07:30.920 Well, a lot of the anxiety would be relieved.
01:07:34.820 And we would know then what was up.
01:07:36.800 We would not only know what was up with AI, we would know what was going on with the afterlife.
01:07:40.400 And we could all do it together rather than just one at a time, which is really not fair to get singled out.
01:07:45.600 You and Peter Thiel should grab a drink sometime.
01:07:49.760 Is it better for humanity?
01:07:53.120 But that's probably not going to happen.
01:07:54.480 And so it's very similar to climate change in the sense that climate change sucks up all the oxygen in the room.
01:08:02.560 And the problems of species loss, the problems of habitat loss, and the problems of pollution kind of lose a lot of the public attention that they should have.
01:08:12.280 Because these are things that you can see very clearly and you can measure very clearly.
01:08:16.740 Climate change models, let's just say, that's a little bit more hypothetical.
01:08:20.480 And in the same sense, the major problems, the immediate problems of AI, I think, are going to be, again, it's already evident, the psychological and social impacts.
01:08:32.500 What does it mean when human beings begin to become companions with machine teachers?
01:08:38.540 You look to the AI as the highest authority on what is and isn't real.
01:08:42.080 And you train children in this global village of the damned to become these sort of like zombified human AI symbiotes.
01:08:51.060 And then beyond the social ramifications of that, and having like you trained your AI on your grandma, right?
01:08:58.220 And everything's given, like your grandma's there, like, you know, bitching about the mashed potatoes or whatever.
01:09:02.620 You know, on a screen, on an iPad, you know, these sorts of things, how far will it go?
01:09:08.440 Don't know how far did opium go and fentanyl and oxycontin.
01:09:13.560 How far did porn go?
01:09:14.820 Pretty far.
01:09:16.300 Beyond that, though, you've got the economic concerns and it's all really up in the air, right?
01:09:20.280 Is AI going to make your company more competitive?
01:09:23.000 Is AI going to replace all your workers?
01:09:25.240 I mean, you look at, was it Klarna?
01:09:26.880 Klarna?
01:09:27.560 And they made this big announcement, we're replacing all our workers with AI and customer service.
01:09:31.060 And then they were like, oh, actually, we're hiring again because they didn't work out so hot.
01:09:35.400 Maybe it'll be like that, or maybe it'll be more like companies like Salesforce or Anthropic, where the coders really are being replaced.
01:09:43.160 The low-level coders are being replaced.
01:09:45.280 But these economic concerns, and I think for you guys, especially for you, Bryce, the economic angle is, clearly you take it very seriously.
01:09:53.940 And I read the Volus mission statement that doesn't include anything.
01:09:59.220 I mean, it's like basically a rejection of the whole transhumanist vision, subtly, but a subtle rejection of it, but an embrace of these technologies in their more, I guess, humble forms, you know, like low-level forms and narrow forms.
01:10:15.740 And it makes sense, but I really think ultimately, though, that long-term vision, because these are the frontier companies, right, and they're driven by the long-term vision.
01:10:26.000 They got all the money, they got the government support, and they're, you know, the carousel of the federal government has now given favor to Meta, has given favor to Palantir, has given favor to the whole A16Z stack.
01:10:41.000 Like, their vision of the future is going to make a big difference, kind of regardless of the technology.
01:10:47.000 Like, people have been able to hypnotize whole tribes by, like, waving totems around, like, you know, and it's like, if you can do that, and you've got a totem that can actually, you know, talk and do math,
01:10:56.840 you're talking about a religious revolution beyond anything that's been seen before.
01:11:00.640 And I think that all of those things, like, all of those problems are just beyond the scope of, like, the nuts and bolts day-to-day, like, does my AI give me, you know, a nice streamlined PowerPoint for my presentation?
01:11:15.300 Right, like, I understand there's been fear-mongering throughout the ages, whether it's, Phil and I talk about the synthesizer sometimes, you talk about the trains.
01:11:22.140 The thing I think that sets AI apart is that it is a vector for almost everything about humanity.
01:11:27.900 You know, it's about education, it's about children and safety, it's war, it's going to be expression with regulations where they're trying to say you can't do deepfakes and whatnot.
01:11:35.620 So it really, everything kind of falls into the black hole of AI and becomes a much bigger existential crisis.
01:11:41.300 Although I understand the existential crisis that the Luddites, who I agree with, because they were afraid, they weren't anti-tech, they were more anti-being replaced by mass autonomy.
01:11:50.860 So they were still using their own technology today, the OG Luddites, but they didn't like that factories were being built and filled with machines that took everyone's jobs, right?
01:11:59.920 And that is one, just one part of what AI will be doing.
01:12:03.280 I think the point of that story was actually to highlight that there were serious risks, things went wrong, and humans got on top of it, figured it out, and solved it.
01:12:14.560 They moved more of the stations further out from the city, they put in place better safety measures on the trains, et cetera, et cetera.
01:12:20.960 You're absolutely right that AI feels like it is more transformative and the risk profile is potentially higher.
01:12:27.220 I don't think we're quite there yet, but it's definitely in the future, it could get significantly more risky.
01:12:32.360 I would say one risk that exists today, and this was something that I ran into directly, is, for example, the sort of the way that AI can allow people today to sort of mass scam Americans, right?
01:12:46.500 So I got this email from a guy named Akshat.
01:12:50.620 King in Ethiopia?
01:12:51.600 No, he's a man in India.
01:12:53.620 Okay.
01:12:59.500 And he was sending 40,000 emails a day, but they're extremely well tailored, right?
01:12:59.500 In the email, he mentioned portfolio companies that we work with, names that I recognized.
01:13:04.320 It was a very well tailored email, and it was generated using an LLM.
01:13:08.320 He's blasting out tens of thousands of these.
01:13:11.560 And essentially, in this case, Akshat was basically offering us to offshore our labor, so our associates at New Founding, for a quarter of what we're currently paying them, right?
01:13:21.200 And he's able to do that, and it's not just using MailChimp, right?
01:13:25.960 He's basically collecting data in order to produce the right sort of targeted email.
01:13:32.600 And, I mean, essentially, it's a form of scamming that's using AI in order to be more effective.
01:13:40.820 And you can think about how this could apply to, like, if that was targeted at your grandmother or something like that.
01:13:45.320 And so I think that's, like, very practically today.
01:13:47.480 It's about blocking people like that and making sure that AI, very practically right here and right now, isn't used to harm Americans while we continue to monitor these further out risks.
01:13:59.520 But it's also important not to confuse the two.
01:14:02.740 And to that point, I think when we treat AI as kind of some autonomous or it's some technology, like it's a train steamrolling, and we either get off the tracks or we take it, I think that's the wrong way to think about AI because that treats it as something that we're powerless.
01:14:20.580 We're completely passive.
01:14:21.760 And I think there's a lot of doomerists out there who want to talk about the deep state and the globalists, and we're just a tool, we're a cog in their machine.
01:14:30.460 And that takes away the agency that actually is what makes humans different from AI.
01:14:35.960 So I guess I would actually want to add, maybe spin a positive vision for what AI could be.
01:14:43.660 And I think part of the way to solve the AI problem to define that narrow path for the future is actually for people to start building things using AI appropriately that actually make America better.
01:14:57.460 And I think what does that look like?
01:15:00.460 Well, there's kind of the fear that AI actually enhances this military-industrial complex.
01:15:07.480 That's fair.
01:15:08.400 I think AI is actually very different from a lot of technologies that have come around, like the airplane, like the computer, like the internet.
01:15:15.900 All these have been started as military technologies.
01:15:19.220 And that's actually kind of their natural bent is as military tools.
01:15:25.320 And then they trickle down to large global enterprises.
01:15:28.900 And then finally to consumer applications, right?
01:15:32.040 But AI, interestingly, its first application, at least when we're talking about large language models, is actually for individual people to help make their lives better, to reduce monotonous work.
01:15:45.420 And I think the way I see it is that AI is going the other way around, that we can actually use AI effectively in small businesses where humans who are really high agency, virtuous, good leaders can actually get more done.
01:15:59.720 And they can have more success with AI because they're able to get higher leverage.
01:16:05.280 I think that's part of the trick to get you so addicted to AI to allow the machine into you so it's harder to divorce it from you and easier to control you.
01:16:14.400 Again, if I could add just an old man spurg out just for a second.
01:16:19.000 AI did actually, in the early, early conceptual phase, was deeply tied to the military.
01:16:26.520 So Alan Turing, for instance, Marvin Minsky, the pioneers, deeply tied to the military.
01:16:36.200 And maybe this isn't AI specifically, or at least it was more cybernetics, with Norbert Wiener, who, aside from having one of the funniest names ever, was, you know, a military man.
01:16:47.440 And he's writing about and thinking about with cybernetics, cybernetic systems and human machine integration, human machine symbiosis, thinking about it from the purposes or for the purposes of military dominance.
01:16:59.840 And so it is, even though you're right, like the LLM revolution comes out of Google, right?
01:17:05.040 Really?
01:17:05.480 I mean, and taken up by OpenAI, and largely civilian, but the idea of thinking machines and the development of various algorithms, deeply tied to military institutions and military concerns.
01:17:21.720 And I want it to be what you're saying.
01:17:24.260 I want that idea of like, it can just be a great collaborator for the individual, which should be great.
01:17:29.440 But it's something, all these things are always hijacked.
01:17:32.540 And the people who are either building this stuff right now, most of them, not all of them, and the military industrial complex, like they have no ethics.
01:17:39.580 So a lot of those things will be eventually, they're already turned against us.
01:17:43.900 Can I give, I'll give a very practical example.
01:17:45.720 And I think that, again, that's a serious, that is a serious risk, and we should continue to monitor that.
01:17:50.900 So we have a portfolio company, it's called Upsmith.
01:17:53.740 And specifically, they work with like plumbing, HVAC, these sorts of companies and individual business owners.
01:18:01.060 The average company size they work with is five people.
01:18:03.740 And there are around 300,000 to 500,000 of these sorts of companies in America, right?
01:18:07.720 So it's like working middle America tradesmen type individuals.
01:18:11.500 As it stands today, when they want to book to go to somebody's house to do some repairs, they actually have to outsource most of this to basically these companies that take care of all of the overhead.
01:18:25.260 A lot of it's actually offshore labor, or at a minimum, it's these like, again, extremely soulless sort of bureaucratic type jobs.
01:18:31.920 And what happens also is that these plumbers lose a lot of business.
01:18:36.280 They often will lose up to half of people who call them to book to, hey, this thing's swapped or whatever.
01:18:41.060 They're just doing work, and they miss the call.
01:18:42.820 They don't get back to them.
01:18:43.780 It's not booked.
01:18:44.920 And basically, American skilled tradesmen are missing out on a lot of value.
01:18:50.660 Or they are capturing the value and forking over a huge amount to basically Indians in India who are running some phone bank or whatever.
01:18:59.000 So one of our portfolio companies, it's called Upsmith.
01:19:01.660 Right now, what they do is they basically have an agentic AI tool that just takes care of bookings for these plumbers.
01:19:08.580 So if you call or text or anything like that, it'll basically just reply back, and it'll automatically take care of calendaring and booking.
01:19:16.860 Just the whole exchange will happen back and forth, and it'll just send the plumber to the next house.
01:19:20.440 Now, I think from my perspective, this is something that very tangibly today can help a plumber basically help him make more money in the next couple years.
01:19:32.140 And that helps him if he doesn't own a house yet.
01:19:33.980 He can buy a house.
01:19:34.780 He can get married.
01:19:36.020 He can do these sorts of things.
01:19:37.620 And I see that as very practically positive for Americans.
01:19:41.220 And it's actually shifting, again, sort of economic opportunity away from bureaucrats, away from offshore to the house of a guy in Philadelphia or whatever.
01:19:50.540 And so at New Founding, at least, we're interested in finding those opportunities and backing those at ones where it's very clear that this is going to help Americans.
01:20:00.240 And I think hopefully that helps to give an example of the sorts of ways that AI can more practically be of benefit, despite the presence of the risks.
01:20:11.400 Like, I'm totally against a seven-year-old getting onto some artificial intelligence sort of, you know, like doom loop, et cetera, et cetera, as we've been talking about.
01:20:21.040 Like, very weird things can happen.
01:20:22.740 And so there have to be guardrails and such.
01:20:24.180 But do I think that that plumber should have to rely on Indians to book these next calls?
01:20:28.540 Oh, yeah. I'm with you.
01:20:28.920 I don't know.
01:20:29.960 That's one of those short-term benefits I can see being a positive from AI for people.
01:20:35.400 I think that's great.
01:20:36.560 I guess I'm more just concerned about down in the future, how it's going to work and maybe how using AI so much and getting businesses and people basically addicted to it or, you know, relying on it so heavily.
01:20:50.400 In this case, it gets integrated into the business.
01:20:52.200 And then something happens down the road, and they try to pass regulations, and this could be a big thing where people will, not riot, but revolt against these regulations, because they've been married to AI, basically, in their companies. I'm skeptical on regulations and how that would work.
01:21:10.080 And I do see a future where we're going to have these big conversations, these big fights in politics about what we're allowed to do with AI because some people have been abusing it, like the scammers you're saying.
01:21:19.940 And it's going to be, like, another 2A debate, but with AI.
01:21:24.300 I don't know.
01:21:24.680 Like, how do you guys feel about regulation?
01:21:25.840 How do you regulate this?
01:21:26.860 I mean, I think it's at least a step.
01:21:29.260 Most of the problems with artificial intelligence are going to be dealt with positively on the level of just human choice, personal choice.
01:21:37.760 Most of the deep problems that we...
01:22:37.920 You see right now everything from the AI psychosis where it's, you know,
01:22:42.240 people kind of disappearing into their own brainstem sort of phenomenon.
01:22:45.220 These are things that human beings chose to do, by and large.
01:22:48.920 So that's, I think, for me, the most important thing is to at least make people aware and activate some sense of will and agency to put up cultural barriers so your kid doesn't become an AI symbiote.
01:23:02.260 Well, that's something that I think that people have learned just from social media.
01:23:05.900 Again, I consider social media algorithms as kind of, you know, infant AI as it is.
01:23:11.080 And so people are seeing the negative consequences and seeing bad things that can happen for their kids.
01:23:16.900 Or at least the smart people are noticing.
01:23:19.180 And they're not allowing their children to have, you know, screens all the time.
01:23:23.620 It is rather disheartening when you go out to dinner or whatever and you see families that have, like, a kid sitting there with a screen.
01:23:32.080 And it's like, well, that's the only way they'll eat or whatever.
01:23:35.280 That's a really terrible, terrible development.
01:23:37.660 And I think that there needs to be more emphasis put on informing people of how bad that is for children.
01:23:42.860 But that kind of, like you're saying, that kind of agency, that kind of discretion by parents is what really will prevent people from getting into this situation in the first place.
01:23:52.680 I don't think the majority of people that are having problems with social media, whether their problems are, you know, delineating between reality and what's actually on social media or, you know, making a distinction between online friends and real-world friends.
01:24:08.720 I don't think they're people that are actually well-adjusted adults.
01:24:13.160 They tend to be young people, you know, younger than me.
01:24:18.480 I'm an old guy.
01:24:19.320 I'm 50, right?
01:24:20.160 So I was one of the kids that was like, you know, be home before the streetlights come on, but otherwise get out of the house.
01:24:25.280 And so I had a lot of learning how to function in the real world by myself as a kid.
01:24:32.020 And I think that that kind of thing is something that's important for kids.
01:24:35.460 And I think that parents need to do that kind of stuff as opposed to just handing them the screens and stuff.
01:24:41.760 Well, to that point, though, if I can continue, the personal choice is the first bulwark, right?
01:24:47.740 And it is; people are taking an active role. They're not just being told this is the future, you need to turn your kid into an AI symbiote, and doing it, right?
01:24:56.640 There are tons of people choosing screen-free and phone-free childhoods.
01:25:00.700 There are laws being passed in certain countries, and institutional policies being put in place in schools and other institutions in this country.
01:25:08.720 You can't just sit around on your phone and disappear into your own brainstem.
01:25:13.520 That's good.
01:25:14.620 But when it comes to military AI, when it comes to Dr. Oz, the inestimable Dr. Oz, who was shilling chips for your palm on his show to his credulous audience just a few years ago now,
01:25:29.100 is saying that in just a few years, you will be considered negligent as a doctor if you do not consult AI in the diagnosis or prescription process.
01:25:38.220 And that goes well beyond just personal choice.
01:25:41.680 That's now an institutional policy or perhaps a law.
01:25:44.980 And so that even though personal choice is one of the most important things we have, right, just the ability to say no,
01:25:55.800 in many instances you can't say no and won't be able to say no.
01:25:59.660 And so the laws are going to be important.
01:26:02.640 And I think that right now at the state level, if you look at the states that are most inclined to legislate, California, for instance,
01:26:13.080 and you look at the 18 laws that they got in place, things like you can't make deep fakes of people
01:26:19.120 or you can't use someone's image against their will, mostly tailored to actors and whatnot, right?
01:26:25.000 You have to get permission.
01:26:26.680 You can't make child porn with AI.
01:26:29.740 You can't defraud people.
01:26:33.380 Some of them overlap.
01:26:34.780 But about these 18 laws, people say all the time, well, if you have states making all these laws,
01:26:39.880 you'll gum up the whole industry.
01:26:41.360 America will fall behind China.
01:26:43.580 They'll have more goombots than us.
01:26:45.320 And I don't buy it.
01:26:46.700 I don't buy it at all.
01:26:47.700 I think that if you look at just the most heavily regulated state, California, it's reasonable.
01:26:52.740 And the 19th law, SB 1047, I finally remembered it,
01:26:57.120 that basically would hold AI companies accountable for damages done just like you would do with an automobile company
01:27:02.820 or you would do with a drug company, well, that one got killed.
01:27:06.020 And I think it's a very reasonable law.
01:27:07.760 Or if you look at Illinois, you know, it is Pritzker.
01:27:10.780 I can't stand that guy.
01:27:12.180 And Illinois politics are super democratic and very corrupt.
01:27:16.140 And yet they had the wherewithal to pass a law saying that you can't have a licensed therapist bot.
01:27:22.480 You can't have people sitting around talking to bots and charge them money for talking to your licensed bot.
01:27:27.620 And as a licensed therapist, you can't just hand over your client to a bot legally.
01:27:33.100 And it's a very reasonable law.
01:27:34.860 So I think above just personal choice, these laws, that regulation, will be very important.
01:27:41.860 And they're going to be different from place to place.
01:27:43.640 And it'll get kind of like with abortion laws.
01:27:45.960 We'll get to see, ultimately, who was right.
01:27:49.760 Does AI really turn you into a superhuman and give you superpowers?
01:27:52.820 Or does it make you into an atrophied schlub?
01:27:56.560 So two points that I want to talk about with what you just said.
01:28:00.660 First of all, I'm probably more skeptical of therapists than I am of any AI and the concept of therapy in general.
01:28:08.880 I think it's literally just, you know, it's just pay me money to talk to me.
01:28:13.200 So, honestly, I don't think that an AI would do worse than a therapist because I think therapists are the pinnacle –
01:28:22.660 What if the AI is just the compression of all of the worst therapists?
01:28:25.860 Well, I mean, therapy, like, in and of itself, in my estimation, at least for men, is like the pinnacle of snake oil.
01:28:35.560 So there's that.
01:28:36.760 But second of all, you said that, you know, you were talking about whether or not there would be mandates for using AI for diagnosis.
01:28:45.340 Is there any other realm in which the process for diagnosis is actually even something people care about, for the most part?
01:28:55.160 Like aside from, you know, if you're dealing with x-rays and how that would affect your body or something like that.
01:29:01.840 Is there any other place where people are like, I'm concerned about how you come to the conclusion that you do?
01:29:08.020 Or is the important part of it really this?
01:29:11.600 Like, are you getting the right diagnosis?
01:29:14.000 Because if AI can actually make sure that your diagnosis is correct 95% of the time as opposed to, say, 70% of the time.
01:29:25.540 Because, you know, humans are notoriously bad at actually diagnosing what's wrong with someone.
01:29:31.540 And the more strain there is on the health care field, the fewer doctors that are actually well-trained and stuff, the fewer you have, the worse the results actually are going to become.
01:29:42.860 So would it really be a problem for if the government were to say, look, you have to at least run it through the AI and see what the AI says?
01:29:51.680 You don't have to rely on it.
01:29:52.960 You can't mandate it.
01:29:53.640 I don't think you can mandate it.
01:29:54.800 I think that's a problem.
01:29:56.260 To tell a private physician they have to do that.
01:29:59.900 Well, I mean, there's all kinds of mandates in the health care field.
01:30:02.720 Why is that different?
01:30:03.360 But when it comes to, like, using an AI to mandate that, then, like, what AI is acceptable, I just don't see how that works out.
01:30:09.140 Well, I mean, again, the results would dictate, right?
01:30:11.040 Like, if you've got an AI that's got a 99% success rate, and if you've got one algorithm by one company that actually has a 99% success rate, why wouldn't you use that?
01:30:23.620 Or why would you have a problem with it?
01:30:26.160 I have no problem with them using it, but I just have a problem with the mandate.
01:30:28.720 Yeah, and you do, I mean, you have this claim, right?
01:30:31.540 Like, for instance, a lot of the studies, comparative studies with radiology, how well is the AI able to detect cancer?
01:30:38.900 And usually it's these, like, tiny, tiny, tiny, tiny, tiny little tumors, right?
01:30:41.780 That the radiologist can't do just with his eyes.
01:30:44.840 But that's very specific to that field, and there's also the issue, so, I mean, we also know that while you don't want to necessarily bank on your immune system,
01:30:55.400 that cancerous cells and even small tumors are forming in the body all the time, and the immune system is constantly battling them back.
01:31:02.680 And so you have a lot of more kind of second-order effects that can come out of that.
01:31:08.020 If you have an AI that finds every tiny little aberration, and the next thing you know somebody's getting run through some devastating chemotherapy, you know, on the basis of this AI,
01:31:18.300 it's much more complicated than saying the AI is 99% better than a human.
01:31:23.960 There's all these other elements that go into it.
01:31:26.140 And diagnosis, I mean, we're not talking about necessarily just visual recognition.
01:31:30.580 When we're talking about doctors turning to AI for a diagnosis or to come up with a therapy, we're largely talking about LLMs.
01:31:40.260 And a lot of them are very specific, tailored LLMs that are trained on medical studies.
01:31:46.200 And the doctor would then turn, he would have his own opinions, she would have her own opinions, and then turn to the LLM for guidance.
01:31:52.740 As an expert, right?
01:31:53.920 If you're a general practitioner, you defer to experts on various things to come to the solution.
01:32:00.820 But real quick, so the question, I think, is not going to be answered because a company says,
01:32:06.600 our AI is 99% accurate, or 90% accurate, or 50%.
01:32:12.040 Downstream, looking at the actual outcomes of patients, to really know a statistical success rate for an AI would take enormous amounts of study, right?
01:32:25.200 And meta-studies.
01:32:26.160 And we don't have that.
01:32:27.220 Right now, we just have advertising.
01:32:28.620 And so, if we don't have the studies in place, like there was this whole thing that happened in 2021, late 2020, where there was a big medical crisis,
01:32:38.120 and without any real rigorous testing or studies, suddenly the advertising won the day.
01:32:46.320 And suddenly you had soft mandates in America and hard mandates elsewhere.
01:32:50.740 And we still don't really have a clear statistical understanding of what happened and what damage was done.
01:32:59.740 Bryce, you were going to say something?
01:33:01.440 Isn't it fair that we all want to bring back the humans to be in charge, right?
01:33:07.680 So, in the case of the doctor, a doctor who's actively ignoring very important, relevant, industry-standard tools to make a diagnosis,
01:33:18.600 we might call that negligence.
01:33:19.840 And I think that would be fair.
01:33:21.040 But the responsibility should fall on the doctor who's making the bad diagnosis in this case.
01:33:26.500 Just like the responsibility for a business who's doing evil practices because the AI told it to,
01:33:33.860 that should probably fall on the business because they're making the decision.
01:33:37.520 If the AI, on the other hand, is a consumer product and it's causing children or adults to have psychosis,
01:33:45.240 well, maybe the AI company should be responsible.
01:33:47.480 And so, I think the worry with regulation is that you're mandating things that have unintended consequences.
01:33:55.060 You're mandating things that aren't well-proven because this is how you're supposed to do it or because this is your ideology.
01:34:01.040 But I think that's a concern with regulation, even regulation of AI.
01:34:05.800 I think what we need to do is bring back humans to be in charge of the AI in a way that humans have been swept aside in a lot of ways way before AI for the last few decades.
01:34:16.240 And how do you define negligence when it comes to it?
01:34:18.920 Because if it were 2021, like Joe's saying, the AI would have said, I'm sure, go get every shot that you're told to get because it was built by people.
01:34:28.360 Vitamin A is what you're talking about.
01:34:29.700 I think in this case then, right, what Bryce would argue is that the doctor should use the AI, but the doctor does not have to listen to the AI.
01:34:37.700 Yeah.
01:34:37.980 Or so, the doctor could then evaluate maybe what multiple different AI tools say, his own individual judgment, some of his own tests that he did, his relationship with the patient, and then make a decision.
01:34:48.300 And I think that's totally fine because you could, through AI, one of these short-term benefits when it comes to medicine, this doctor could potentially have access to so much of your family history to make a way better decision for you, which can be awesome.
01:35:02.240 But like so many doctors –
01:35:03.220 Scary but awesome.
01:35:03.800 Scary.
01:35:04.380 I mean, yeah, because then China's going to hack it and create a bioweapon personalized just for you, which they're probably already doing with their mosquitoes.
01:35:10.560 But anyway –
01:35:12.660 Bespoke weapons of mass destruction.
01:35:12.660 Dude, it's literally happening.
01:35:14.180 It's wonderful.
01:35:14.660 It's happening.
01:35:15.160 The point you make about 99% – let's say it happened, right?
01:35:20.620 And the studies did show 99% of the time, the AI gets it right.
01:35:26.360 And 99% of the time, the AI robot gets the surgery right.
01:35:31.080 And 99% of the time, the AI teacher is better than a human teacher.
01:35:34.780 99% of the time, if you're looking for a mate, the AI is the one to ask.
01:35:39.560 Algorithmic eugenics is the way to go.
01:35:42.020 So on and so forth.
01:35:43.520 Is God real?
01:35:44.600 Does God love me?
01:35:45.980 99% of the time, the AI is going to tell you the correct answer.
01:35:50.040 I wonder then, you know, Cashman, if we can defer for a moment to the wisdom of Shane's shamanic visions.
01:35:58.160 And I think that we should.
01:36:00.160 All practical things – all due respect to the practical matters.
01:36:03.300 What we're talking about is total replacement by machines.
01:36:10.080 Total replacement.
01:36:11.100 It seems like that's just inevitable.
01:36:13.000 Like, I understand the short-term benefits of helping the middle class.
01:36:15.640 Because the middle class right now has been genocided, in my opinion.
01:36:18.280 Like, the middle class is suffering, whatever's left of it.
01:36:21.640 And it's terrible.
01:36:22.500 And any way we can help them is great.
01:36:24.080 But I think the difference in the conversation would be you guys see a positive vision because there's so many short-term benefits.
01:36:31.900 And we're seeing that, of course, but down the road, probably not too far down the road, there are apocalyptic consequences that are going to be born out of it.
01:36:41.100 And it's not like we're just creating this out of thin air.
01:36:43.300 We're listening to these people talk like Altman talking about we have to rewrite the social contract.
01:36:48.380 That's scary stuff.
01:36:49.400 You know, these guys who they purchased their children, now they can grow them in a robot that China's creating.
01:36:56.300 You know, or Elon talking about artificial womb factories where they can have 30,000 wombs.
01:37:58.520 You know, where your baby can grow.
01:38:01.040 These things are so antithetical to humanity that I, and I don't think that is in the distant future because we have things like Orchid, this IVF company that does genetic testing.
01:38:11.520 And I understand that the positives to genetic testing, although I disagree with people saying, well, then I'm not going to have that baby.
01:38:17.040 I'm going to abort this baby.
01:38:18.480 That's disgusting to me.
01:38:20.560 But people are doing that now.
01:38:22.700 But what Orchid is doing is saying that we want this to be the common thing amongst people.
01:38:28.020 That is how we should be creating children through eugenicists, through this like brave new world IVF high tech society.
01:38:35.940 And it's kind of like what happened with cesareans.
01:38:39.060 Cesareans happen all the time now because it's easy for the doctors.
01:38:42.540 You know, it's easier to schedule, but we're robbing like the miracle of birth.
01:38:46.940 You know, obviously, sometimes it just doesn't happen right, and you need extra help in the hospital.
01:38:51.520 Totally understand it.
01:38:52.840 But we've made this stuff the common, you know, and same will go for AI.
01:38:59.440 Despite the short-term benefits, the people in charge are using it for nefarious reasons against us, against everyone else.
01:39:07.400 And will it replace the people in charge?
01:39:09.240 I even think that is the case, even though someone like Marc Andreessen will say venture capitalists are fine.
01:39:14.600 I think he's wrong.
01:39:15.420 I think he's wrong about a lot of things, and he's certainly wrong there.
01:39:18.000 I think they can replace anything at a certain point, even things I love.
01:39:21.480 And then there's a whole discussion about whether people care.
01:39:24.040 You know, you're talking about writers and AI pumping out slop.
01:39:27.340 I don't like it.
01:39:28.000 I'm sure you don't like it.
01:39:29.340 Detested.
01:39:29.880 But there's a ton of people who don't care.
01:39:32.040 You know, you can make AI music, and people are fine with that.
01:39:34.480 I don't like it.
01:39:35.500 You know, and I can appreciate, like, wow, that sounds good.
01:39:38.380 It's crazy.
01:39:38.820 But you are removing the human because you can just put in a prompt, and then you get a whole song.
01:39:43.560 And then I hate that.
01:39:45.800 We were primed by Katy Perry and Kesha.
01:39:48.340 What happened?
01:39:49.220 Well, I think the issue that you're highlighting is that transhumanists really are the problem.
01:39:54.960 And it's not just AI, right?
01:39:56.220 Because you're going into these other domains of technology where it's also a problem.
01:39:59.720 And so, once again, I think what will keep us grounded is appreciation of what makes humans unique, understanding humans as they actually are, and making sure that, you know, whatever ways that AI technology is being used sort of reflects the natural order of the world and of how humans are actually created.
01:40:23.080 And so, you know, to whatever extent AI is dominated or is controlled by transhumanists, that's a problem.
01:40:31.420 And I think we share your concern.
01:40:33.000 Totally agree.
01:40:33.880 I totally agree.
01:40:34.480 But I don't think it's just unique to transhumanists.
01:40:37.120 They're the ones creating it, and they're the ones with these insane visions of the future.
01:40:40.720 But it's, you know, this idea is in everyone now.
01:40:43.440 You know, everyone is kind of transhumanist adjacent, especially in power.
01:40:47.180 Well, there's certainly the – there's a lot of people in power who have these visions, fantasies of transhumanism.
01:40:57.200 But there's also maybe a large percentage of people who actually just don't care whether their children are, you know, grown with screens, right?
01:41:08.780 That's their method of parenting.
01:41:10.500 And I think the key is actually to take a collaborative approach to AI and other technology rather than an oppositional approach of standing up on the train track and saying stop.
01:41:22.180 That's the exact image I had in my head.
01:41:24.460 It's like if the only thing that you're saying or doing is to do what, you know, conservatives do, just standing there and saying to progressives, no, stop, you're going to get bowled over.
01:41:36.560 By the way, that's not – just to be clear, that's not my position.
01:41:40.920 I know you weren't singling me out, even though I saw that glint in your eye.
01:41:45.960 I wouldn't stand in front of the train.
01:41:48.120 I would be more likely to find other strategies that didn't involve me getting run over.
01:41:52.380 But my argument is basically similar to the conservative argument against porn, right?
01:42:00.940 And similar to the conservative argument against –
01:42:04.640 Isn't it?
01:42:04.900 It depends.
01:42:05.520 I mean, you have the Ron Pauls, who I would consider to be profoundly conservative in like a Burkean sense.
01:42:09.960 But he wouldn't say that porn should be illegal or that drugs should be illegal or that guns should be illegal.
01:42:16.160 But what you have to do, I think – and this is why I appreciate guys like Nathan and Bryce.
01:42:21.140 And this is intuitive.
01:42:22.260 Correct me if I'm wrong.
01:42:23.020 But I get a certain sort of provincial or tribal sense from you guys that you are kind of conservative in the classical sense.
01:42:32.340 The people closest to you are more important than like all of humanity, big H, because they're the people closest to you.
01:42:39.420 And I think that should be the scope for most people unless you're the president making irresponsible decisions about artificial intelligence or the CEO of a corporation making vampiric and predatory decisions about artificial intelligence.
01:42:52.100 And it's like from our standpoint, I think that it's not like this cosmic thing where if AI succeeds, that means everybody's going to be a trode monkey.
01:43:01.160 Or if AI falters, then, you know, we're all just going to go back to the woods.
01:43:07.340 It's going to – so many different lifestyles already exist and cultures already exist.
01:43:12.360 There's going to be huge pockets of homogenization due to technology.
01:43:15.400 But there's also going to be like huge pockets of individuation among people, individual people, and differentiation among cultures.
01:43:23.660 So I have actually a lot of faith that you guys are going to be okay.
01:43:27.020 I think you'll be just fine.
01:43:29.040 You're going to put those cultural barriers in place.
01:43:33.420 And that is, I think, the value of conservatism, of being suspicious of change because very often any push for change isn't necessarily going to be changed for your benefit or your kid's benefit or your community's benefit.
01:43:46.920 The change, this radical change, is more likely to benefit the people pushing for it.
01:43:51.920 It may be mutual, but in the case of porn, drugs, maybe even the trains if you really care about, say, the bison.
01:43:58.040 Or maybe the entire technological system if you don't want trash islands in the Pacific, microplastic in your balls, dead bugs everywhere, black rhinos shuffled off into heaven.
01:44:09.380 These sorts of things, you know, it's ultimately the conservative or the anti-tech or the quasi-Luddite position, if employed properly, simply means I am going to remain human despite the advertising and despite whatever new gadget comes my way.
01:44:28.880 Yeah, and my appeal to the people is, like, I don't want to stand in front of it either.
01:44:32.540 I don't think stopping this stuff is possible.
01:44:35.200 You know, it's like the war on drugs or the war on guns, the war on terror.
01:44:39.180 It never works.
01:44:40.480 But, like, what we're saying, and I think we're agreeing on, is it's going to have to happen from the bottom up and ethics and people.
01:44:47.120 And I don't – that's going to be really tough because people are very flawed.
01:44:51.080 No matter if they're in power or not, that's just how we are.
01:44:55.180 But, you know, I think that is a possibility.
01:44:58.260 But I do think, like we were talking about last night, Ted Kaczynski made some pretty good points in 1995 about the Industrial Revolution and its consequences for humanity.
01:45:07.660 You're very wrong about what the mail is for, though.
01:45:09.920 Yeah, yeah.
01:45:10.720 I'll say that for YouTube, for sure.
01:45:12.080 Phil's right.
01:45:13.140 But I think he saw a lot of the issues we're in now and we're just now dealing with it.
01:45:17.840 I mean, people are now coming and – we'll go on YouTube and look up his manifesto and be like, wow, this guy was a genius.
01:45:22.280 He was a prophet.
01:45:23.160 He was a time traveler.
01:45:24.740 He might not like the time traveler part.
01:45:26.400 But he doesn't want to do that.
01:45:28.440 But I think that is the future.
01:45:30.140 And I think he saw what we see, especially in leftists, but it's not just unique to leftists,
01:45:35.220 is that there is this need to control and destroy at all costs.
01:45:41.300 That is human nature.
01:45:42.620 That's something we're going to have to contend with, for sure, every time there's a new advancement.
01:45:46.460 So it's not going to go away.
01:45:48.100 And I also don't agree with regulations.
01:45:49.580 You know, I don't know how you regulate this.
01:45:51.600 Like, I understand I want to make sure no one can make child porn with AI or at all and stuff like that.
01:45:58.280 But getting rid of certain things that you can do as an expression, whether people like it or not, you know,
01:46:02.780 because, like, Melania would pass that law about deepfakes and whatnot.
01:46:05.300 And I think child porn is a part of that.
01:46:06.920 But then it's just deepfakes of people.
01:46:08.980 I think you should be able to do that stuff.
01:46:10.880 I don't really like using AI.
01:46:12.320 In the deepfake scenario, though, if I could successfully impersonate you and go to the bank or successfully impersonate you and go to your bedroom, right?
01:46:21.260 Like, these things, you would consider that to be a crime.
01:46:25.360 But you can't go to my bedroom as a deepfake.
01:46:27.440 So with deepfakes, basically, it's the line between, like, what is caricature, what is cartooning, and what is impersonation.
01:46:35.460 So, you know, a cartoon of, you know, Donald Trump dancing with dollar bills falling everywhere on the graves of Gaza children.
01:46:42.700 That sounds familiar.
01:46:43.520 Yeah, that's just a cartoon.
01:46:45.480 But if you had a deepfake of Donald Trump saying, you know, all the children of Gaza were wrongly murdered,
01:46:53.020 and then he ends up getting blown up by a golden pager or something like that,
01:46:56.360 well, then that's a deepfake.
01:46:57.800 That is impersonation.
01:46:58.840 But is the person who made the deepfake, should they be held accountable?
01:47:02.940 Yes.
01:47:03.480 Really?
01:47:03.700 And the company, I think that to an extent, if your software is capable of producing a very photorealistic or videorealistic deepfake,
01:47:12.600 and you've deployed it to the masses to just sow chaos, and you knew that was what was going to happen,
01:47:18.940 of course you should be held liable.
01:47:20.380 Yeah.
01:47:20.640 Google, for instance, like, they have among the most advanced, mid-journey too, but, you know, among the most advanced video generation AI, right?
01:47:30.980 There's all these guardrails in place to keep you from impersonating famous people.
01:47:35.540 But you still have small-scale, malicious, kind of like cyberbullying deepfakes.
01:47:40.580 I expect to see that anyway.
01:47:42.560 Yeah.
01:47:42.880 But it kind of just shows, like, something inherent in the technology, this capability,
01:47:47.200 that would require great moral restraint on the part of, like, most of the population.
01:47:51.580 In the case of deepfakes, in the case of bioweapons, in the case of even, like, the construction of IEDs,
01:47:57.340 in the case of flooding the world with slop, you either have laws in place, somewhat draconian probably in many cases,
01:48:06.500 to keep that from happening, or you rely on the moral fortitude of the people.
01:48:13.400 In either case—
01:48:14.560 That's going to be a tough call.
01:48:15.120 That's why we're in a precarious situation.
01:48:17.260 But, I mean, I think that, you know, generally, you guys are in mostly—except for Shane.
01:48:25.240 Do you think AI is inevitable that it will be a danger in the future to people, no matter what, if we have no guardrails?
01:48:32.860 Well, I think if AI holds the potential that the Doomers think that it has,
01:48:38.580 which it hasn't realized that potential yet, as we've been discussing,
01:48:42.080 then what's most important is that the people who are involved, the people who are building it,
01:48:49.340 who are mastering it, are on our side, are virtuous, and are people who care about humans.
01:48:55.220 And so maybe the most risky scenario that I see is that—
01:48:59.720 But to that point, to your point about caring about humans,
01:49:02.180 all of the things that we have talked about when it comes to, like, the medical field and stuff,
01:49:08.340 all of that stuff is in service to humans.
01:49:11.120 So how do you square that circle?
01:49:13.140 I think I agree, and I think this is what we've been—what we've been talking about,
01:49:16.020 is there are lots of really excellent, practical, short-term applications
01:49:18.980 that seem like they're going to benefit Americans.
01:49:21.260 But then there's the longer-term existential risk.
01:49:24.200 And I think there, that's where I see sort of the call—
01:49:27.520 In some ways, it's like the call to adventure, basically.
01:49:29.200 It's like to even young men in America who actually care about people
01:49:32.140 and care about the direction of civilization to actually be a player in this
01:49:36.160 and to not stand in front of the train,
01:49:38.180 but maybe to, like, get—to maybe help guide the train
01:49:40.840 in a direction that's conducive for human flourishing.
01:49:44.500 And I think that that's something that's of critical civilizational importance
01:49:48.360 over the next few decades.
01:49:49.840 In the questions, how are you going to go about the guiding?
01:49:53.500 And I think there are major limitations with regulation.
01:49:58.120 In particular, the patchwork-quilt kind of regulation where every state has its own version of what AI can be.
01:50:06.460 There may have to be some better mobilization around it than that,
01:50:12.840 because if every state has its own AI—you've probably heard these arguments—
01:50:17.860 but if every state has its own AI, America will be completely crippled
01:50:21.760 in its ability to advance AI and will fall behind other nations, right?
01:50:26.500 So we are a nation, and we should act in tandem as a nation.
01:50:29.700 But maybe there's some groups of states that actually tend to have similar views
01:50:35.160 about what human flourishing looks like.
01:50:37.400 And maybe there's different types of AI.
01:50:40.460 Maybe there's a red state AI and a blue state AI.
01:50:43.380 Is there an Amish AI?
01:50:44.620 Can we get an Amish AI?
01:50:46.040 You've got Gab AI, which is deep red, I guess you would say.
01:50:52.020 And then you've got like Gemini.
01:50:53.460 XAI, Gemini.
01:50:55.020 And so that's why I'm less worried about this monopolistic future,
01:50:59.740 because we've already seen AI companies who don't agree with one another
01:51:03.320 and that express very different worldviews—libertarian, progressive, etc.
01:51:09.180 Well, specifically, what sort of state laws would gum up the entire national AI agenda?
01:51:14.640 So regulation in general—you've heard this from the libertarians—it benefits large companies,
01:51:22.160 because small companies can't afford to comply with all the regulation, right?
01:51:29.000 And so Europe is chronically technologically backward because—well, one of the reasons that impacts this
01:51:38.180 is because they have so many small regulations that it's death by a thousand paper cuts.
01:51:42.800 And I think in the U.S. we need to coordinate so that we're not giving the AI industry death
01:51:48.920 by a thousand paper cuts because of all the benefits that it can give us economically.
01:51:53.100 Well, that's abstract, but specifically, what laws that are either on the books now in different states—
01:51:58.740 Texas, California, New York, municipality—what laws are threatening U.S. AI dominance?
01:52:07.280 Or proposed laws, like SB 1047, liability of companies.
01:52:11.500 What laws—because you hear this all the time from Sachs, Andreessen—you know, it'll destroy it.
01:53:44.120 China will win if we don't, if you say that you can't, you know, create deep fakes.
01:53:48.860 What laws would threaten the U.S. national agenda or these companies?
01:53:53.820 So you know the legislation better than I do.
01:53:56.580 What's out there, what's past, what's on the table.
01:53:59.020 But it's not about legislation, any particular legislation.
01:54:04.880 It's about the idea of bringing more lawyers into the room to enforce the regulations on
01:54:11.860 companies.
01:54:12.800 And so how should we handle regulation?
01:54:17.120 Because clearly some of these laws that were passed in California seem actually very reasonable
01:54:21.620 and positive for AI and protecting humans and human flourishing with AI.
01:54:26.240 I think we should probably keep those in mind and find a way for the court
01:54:35.060 system to work with the legislature to make
01:54:43.920 that a national thing, or at least take the good parts of it, take the rulings on a case-by-case
01:54:48.660 basis, and rule in a common-sense way that will actually help.
01:54:54.060 So we have to have a common, a positive vision in mind instead of just anti-regulation, pro-regulation.
01:54:59.780 Can I add one element would be I think any sort of policies that can be protective of children,
01:55:07.060 just like with the social media question as well, I mean, we'd be extremely in favor of
01:55:10.640 those.
01:55:11.180 One thing that's a recent shift is if you look at young families today.
01:55:15.400 So there are around 25 million English-speaking families who have children under
01:55:20.220 eight. And in that cohort, 85% of them are screen time conscious with their children.
01:55:26.480 And that's a big shift from just a decade or two ago.
01:55:29.620 And you could ask the same question, how many of them are AI conscious of what AI chatbots
01:55:34.560 and things their children are interacting with?
01:55:36.680 And it's probably a much lower number, at least right now.
01:55:41.020 And so my hope would be that one, parents take it upon themselves to be much more protective
01:55:46.800 around AI and the ways that their children engage with it.
01:55:52.200 It's not necessarily that I'm entirely against children engaging with AI.
01:55:55.900 It just needs to be in an environment that's conducive for them to do well.
01:55:59.380 So no Rosie the robot?
01:56:00.820 Yeah.
01:56:01.620 No Jetsons?
01:56:02.360 No, but then also, right, parents just need to be educated on this side of the
01:56:08.020 question.
01:56:08.300 And then there is just policy, like certain things should be banned, certain things, especially
01:56:13.300 in the schools and things like that.
01:56:15.580 And that's a much longer conversation.
01:56:17.720 And what exactly is inbounds versus out?
01:56:19.700 It gets much more technical, and we probably wouldn't get into all the right things.
01:56:22.860 So I feel like the argument that is made is the potential for danger from AI is so great
01:56:34.320 that we need humans in the loop.
01:56:36.780 But I also feel like the humans in the loop have evidently produced negative results because
01:56:47.240 you've never had so many people saying, no, I want to homeschool my kids because I don't
01:56:50.740 want teachers around my kids because I know what the teachers think, and I know what they're
01:56:54.320 teaching in the schools.
01:56:55.180 Ever since COVID kind of pulled back the veil, parents were able to see, you
01:57:00.400 know, remote schooling or whatever.
01:57:03.620 So I'm not sure which one is actually worse.
01:57:07.780 You know, if parents are able to see the curriculum that an AI would be producing,
01:57:15.820 would be teaching the kids, is it worse to have the robot do it, or, you
01:57:21.020 know, an AI do it, or would it be worse to give your kids to the schools that exist
01:57:27.580 or existed prior to COVID, knowing what we know now?
01:57:31.420 You know, that is, I don't know which one is actually preferable.
01:57:34.520 Right now, I'd say they're both bad.
01:57:36.880 Those teachers, most of them probably agree with a lot of the things that AI might spit
01:57:40.640 out because it was built by people who agree with them.
01:57:43.180 But again, if you can see what, if you know what type of curriculum the AI is going to
01:57:48.960 be teaching.
01:57:50.300 There's a whole spectrum of basically different educational, AI education type products that
01:57:56.380 exist, and some of them are actually produced by homeschool family type individuals, and
01:58:00.380 then others are totally crazy.
01:58:02.620 Far left lunatics have put out some new English education app that a kid can get on, and there's
01:58:09.060 no telling what's going to...
01:58:10.160 It's going to change so much.
01:58:11.580 Like, if it's a private school, homeschool, using a certain curriculum, you know, maybe
01:58:14.560 it's Christian-based, and you understand what's going on, but in a public school, they could
01:58:19.160 be changing the AI so much because in the physical world, they change, they move the goalposts
01:58:24.100 all the time.
01:58:24.900 Like, what was wrong, what was right.
01:58:25.300 I think that's one thing that I want to point out.
01:58:27.480 Like, this morning, like, I saw this tweet, right?
01:58:30.420 And this is just about, like, buzzwords the
01:58:35.420 DNC is not allowed to use.
01:58:37.280 And you know that the DNC has...
01:58:39.000 My entire vocabulary, probably.
01:58:40.060 Well, I mean, it's things like, it's like privilege, violence as in environmental violence.
01:58:45.320 You can't say dialogue, triggering, othering, microaggression, holding space, body shaming,
01:58:50.320 subverting norms, cultural appropriation, Overton window, existential threat, racial transparency,
01:58:55.560 or, I'm sorry, radical transparency, stakeholders, the unhoused, food insecurity, housing insecurity,
01:59:00.680 a person who immigrated, birthing person, cisgender, dead naming, heteronormative, patriarchy,
01:59:05.180 LGBTQIA, BIPOC, allyship, all of these things are really the backbone of intersectional, of
01:59:11.000 intersectional...
01:59:12.000 Those are all banned.
01:59:12.880 Those are all words that the DNC shouldn't be using.
01:59:15.660 So, and the point that I'm making with that, though, is the human beings see when they
01:59:21.680 get resistance, and they're like, well, we need to change.
01:59:24.400 We need to change.
01:59:25.720 But they don't actually change what their message is, they're just changing, they're
01:59:29.660 changing how it's delivered.
01:59:31.540 So...
01:59:31.940 What you're talking about is giving up on human beings, basically.
01:59:35.820 You're talking about giving up on people.
01:59:38.180 And I...
01:59:38.740 It's not...
01:59:39.400 I'm saying which one is better.
01:59:40.600 I'm saying which one is better.
01:59:41.680 Right.
01:59:42.140 I'm not saying you specifically, but this question, and this question of whether, because you have
01:59:47.840 some proportion, a very, very large proportion in the U.S., of teachers who, it's a gay
01:59:53.720 word, but woke...
01:59:54.780 Well, the vast majority...
01:59:55.840 They're woke.
01:59:56.720 Okay, yeah, fine.
01:59:57.980 But you're...
01:59:58.760 A, that's a problem of the system.
02:00:00.380 There were plenty who weren't before, and there are plenty of very intelligent, educated,
02:00:05.300 conservative people.
02:00:06.380 So, the predominant attitude among teachers in, say, the 60s would have been profoundly
02:00:11.460 conservative.
02:00:12.440 Pledge of Allegiance every day.
02:00:14.260 Pro-American propaganda, essentially, in all of the rudimentary school books, right?
02:00:19.200 The elementary school books.
02:00:20.200 So, that shift happened after the long march through the institutions.
02:00:25.340 But...
02:00:25.740 So, the question that becomes...
02:00:26.800 And it's a valid one.
02:00:28.040 Like, in the case of Linda McMahon, she's pushing AI or A1 teachers, depending on what
02:00:33.480 day you ask her.
02:00:34.500 And there's a company that I came across, actually, in Berkeley, of all places.
02:00:39.220 Or, I'm sorry, it was at Stanford.
02:00:40.140 A woman who represented him basically described it as all AI teachers all day long with, like,
02:00:47.360 two hours of human teachers talking about what they learned, right?
02:00:52.380 But it's all AI.
02:00:53.500 It's an experimental program.
02:00:55.080 I think the best way to think about all of this, again, isn't in some monolith that,
02:01:00.440 like, should we use AI teachers?
02:01:02.420 Should we not?
02:01:03.040 Like, what?
02:01:03.320 Because everybody's not going to do the same thing.
02:01:05.900 All of this is this vast global experiment.
02:01:08.720 No one has any idea what the outcome's going to be, ultimately.
02:01:12.620 We're just finding out, not by taking 20 kids and putting them in a cohort over here
02:01:18.740 and experimenting with their brains with technology and taking 20 other ones and letting them grow
02:01:24.320 traditionally, and then, after 20 years, seeing what happens and then applying it to
02:01:28.480 society, that would be a warped, fucked-up experiment to begin with.
02:01:32.140 But instead of doing that, we're just doing it with all kids, as many as possible.
02:01:36.520 So you have people like Sal Khan, Marc Andreessen, Elon Musk, Bill Gates, Sam Altman, Greg Brockman,
02:01:44.580 all of whom, to some extent or another, are saying every child on the planet should
02:01:50.040 have an AI tutor.
02:01:52.160 It's a totalizing vision of how this should go down.
02:01:54.720 Now, that's not going to happen.
02:01:55.800 I suspect you won't do that to your kids, right?
02:01:59.240 Or most of the people you know won't.
02:02:00.980 So this is an experiment, and ultimately, and I'll leave it on this conceptual note, ultimately,
02:02:06.560 this is an experiment that should be understood, first and foremost spiritually, but on a practical
02:02:12.040 level, on a Darwinian level, and on a eugenic level, which are closely intertwined.
02:02:18.360 And over time, like, say, with birth control, the most advanced biological technology of its
02:02:24.840 day, which dramatically changed the gene frequencies in the U.S., in the West especially, but across
02:02:34.780 the world.
02:02:35.700 Those who use birth control had fewer children.
02:02:38.400 Those who didn't had a whole lot more.
02:02:40.800 Those who were more religious had more children.
02:02:44.040 Those who were irreligious had fewer.
02:02:46.680 And I think on both a Darwinian and a eugenic level, because we're talking about the same
02:02:51.700 thing, ultimately, it's either nature's Darwinism or human social Darwinism, we're going to find
02:02:58.800 out, and it's going to be diverse.
02:03:00.060 There's going to be all of these different sort of cultural species upon which natural
02:03:05.520 selection and artificial selection will act.
02:03:08.780 And I think the question that someone should ask is, will my mode of life, whether it's total
02:03:15.420 cyborgization, or total Luddite, or somewhere in between, will this continue?
02:03:22.600 Will this allow me to flourish now, and on a long scale, like a long-term scale, will this
02:03:29.460 allow me and my own to continue?
02:03:31.960 And it's a big question.
02:03:32.740 I don't think there's going to be a monolithic answer, at least I hope not.
02:03:35.760 I hope not.
02:03:36.680 But I feel like we are moving towards a society that wants to be, especially after this administration
02:03:42.220 that has totally embraced all the bad people we agree on being bad, you know, they want
02:03:48.160 that total control.
02:03:48.960 They want to consolidate all of your data that they've scraped off the internet that, you
02:03:53.860 know, everyone already has access to in the government, but they can now consolidate it
02:03:57.400 to accelerate how we can do things like pre-crime, which the UK is already starting to try out,
02:04:02.480 you know?
02:04:02.780 And that stuff is the future.
02:04:04.360 I'm not just thinking about me.
02:04:05.280 I'm thinking about my kids' kids and what world they're going to inherit.
02:04:08.240 And it's going to be, it's always getting worse, in my opinion, despite my hope in
02:04:12.620 humanity.
02:04:13.420 We're surrounded by people who want to dominate everything at all times.
02:04:18.100 To jump in on the point on AI education, I think one thing that we often forget is how
02:04:23.940 much also education has shifted just recently.
02:04:27.680 So even the lecture hall is a fairly recent technological innovation.
02:04:31.780 And I would argue that the lecture hall doesn't work very well.
02:04:34.920 If you go to college, several of us are recent college grads.
02:04:38.800 If you go, I mean, nobody's paying attention.
02:04:40.360 Nobody's learning anything.
02:04:41.200 It's a totally ineffective way to learn.
02:04:43.460 There's some professor who doesn't care, just monologuing.
02:04:45.260 Why is no one paying attention?
02:04:46.220 Is it because everyone's on their phones or?
02:04:47.780 That is part of it.
02:04:48.640 But even in, even in.
02:04:49.760 Because they're kids, they're teenagers, they're early 20s.
02:04:52.420 I've been in lecture halls where they, where they remove technology and it's still just
02:04:55.400 not a great platform.
02:04:56.260 It was sort of popularized in the post-war era with the GI Bill.
02:05:00.440 Basically colleges built all of these large lecture halls.
02:05:02.900 And this is a way to pack a lot more students in through the education process.
02:05:06.980 And so when it comes to AI, AI education, I'm a skeptic.
02:05:11.920 I don't want to see a one, you know, a single solution that's crammed down every
02:05:16.420 kid's throat.
02:05:17.880 That, that feels like a really dark world or dark path to go down.
02:05:21.060 But in a world where let's say there are a thousand potential, there's a menu of AI tutors.
02:05:27.000 And, and maybe one of them, for example, you're, let's say you're a homeschool kid or a homeschool
02:05:31.840 parent and you hate teaching math.
02:05:33.360 And there's one that teaches times tables really well.
02:05:35.620 And it was built by, it was built by somebody who maybe shares your values.
02:05:40.280 Let's say Bryce made it and I trust Bryce and it's either that or not teach my kid math
02:05:45.100 or, or send them to public school where you don't trust the teachers.
02:05:48.600 I think once again, it comes back to the human.
02:05:50.780 It's like, do I, do I trust what Bryce built?
02:05:52.840 Is it effective?
02:05:53.860 I want to see maybe over a few years, did the kids who did it actually learn math?
02:05:57.920 And in that sort of a world, I think, I think there will be potentially really quite excellent
02:06:03.000 outcomes, but I'm totally, I'm totally against every student being forced to learn in only one
02:06:08.240 single way.
02:06:08.940 I mean, that's a terrible, that's a terrible outcome.
02:06:10.780 That's kind of the point that I wanted to make.
02:06:11.840 Like when you have, when you have a market where you can actually select and say, well,
02:06:17.240 I like this type of curriculum.
02:06:20.560 And I know, I trust the people that produced it because I, you know, for whatever, you know,
02:06:26.480 personal reason you come up with, you know, say like Ron Paul has the, has a Liberty Institute,
02:06:31.980 right?
02:06:32.400 And you want to go with Ron Paul's AI that will teach you the curriculum from, from Liberty
02:06:38.380 classroom.
02:06:39.100 You know, Tom Woods promotes that too.
02:06:40.360 He's another big libertarian.
02:06:41.840 That's the kind of stuff that I'd be like, you know, I feel good about that.
02:06:44.740 I would, I would feel good about this AI, this curriculum package being downloaded into,
02:06:51.220 into an AI that's on my computer, or maybe, maybe who knows, maybe even a robot.
02:06:56.100 And the robot is actually teaching the curriculum that I chose.
02:06:59.780 But if you have that option, I don't see that as a bad thing.
02:07:04.180 I don't see it as a bad thing, but I think the best teachers should be the ones that can be
02:07:10.060 relatable to the, a human child and weave throughout their education stories and how it's applicable
02:07:17.300 to the human world, as opposed to some cold, sterile screen, just beating your kid with
02:07:23.960 information.
02:07:24.920 I agree.
02:07:25.080 And I know it can be nice, but like, I don't want my kids, whether
02:07:28.940 they go to college or not, going and just plugging in. You know, the amount
02:07:32.920 of emotion that's added to the conversation
02:07:38.420 by both of you, the Gooner bots, the cold steel.
02:07:42.520 So it's like, it's like, I mean, I understand that you guys are like trying to make people
02:07:47.760 and, you know, see your perspective, but still it's like, all right, two writers who don't
02:07:51.060 use AI, psychologically unstable and unable to control our emotions.
02:07:55.760 You're watching people go extinct, Phil.
02:07:58.080 To that point, just to say this, the long, millennia-old tradition of passing information
02:08:05.700 down from human to human, even with the addition of writing, even with the addition of television
02:08:11.020 and recordings, it's still been the predominant way, humans teaching other humans or guiding
02:08:18.980 them through that education and that transmission, that, that link that goes back and back and
02:08:24.320 back.
02:08:24.540 You could call it maybe an apostolic succession in some, in some cases.
02:08:27.780 Don't go using words like that.
02:08:29.140 Yeah, but that, that link from person to person, it means that that information is flowing through
02:08:33.760 a flawed human, a human example, a role model.
02:08:37.020 You can see whether that information has allowed them to flourish or not, right?
02:08:40.780 You all have that.
02:08:41.740 We all know the brilliant professor who's really useless and it kind of makes you wonder if
02:08:45.780 all that information was really all that worthwhile.
02:08:48.940 On the other hand, we know the very kind of soft-spoken and, you know, concise
02:08:54.820 professor who is excellent at so many different things, right?
02:08:58.700 Those sorts of human examples have been, and will continue to be,
02:09:03.760 so crucial for the development of human beings.
02:09:06.460 I think that you bring in a robot on occasion, right?
02:09:09.640 You get, hey, here's your clanker teacher, uh, for the hour.
02:09:12.720 Um, don't use the hard R with her.
02:09:15.100 Um, I think that that, that, okay, fine.
02:09:18.120 That's, there's always going to be the spectrum between the Amish and the cyborg.
02:09:23.360 Nobody's going to be a hundred percent except for the cyborgs and the Amish.
02:09:26.500 But none of us, none of us are going to be a hundred percent on that spectrum.
02:09:30.520 It's just a matter of which way we're leaning and pushing.
02:09:33.080 And, uh, you know, I'm not trying to like stop anybody from doing the basic sort of
02:09:37.580 cyborgization.
02:09:38.660 And, you know, I, I too have an implant in my forehead, so I'm not trying to get too
02:09:43.300 self-righteous about it.
02:09:44.600 But I do think that, again, that, that kind of Burkean suspicion of change.
02:09:48.700 Why is this change being pushed?
02:09:51.660 Is it really for my benefit or is it for the person pushing the benefit?
02:09:55.600 That's, I think it's as simple as that.
02:09:57.460 If, and if you decide, yeah, I want my kid to grow up with clankers.
02:10:01.800 I want my kid to marry one.
02:10:03.440 I want Rosie.
02:10:04.800 I didn't know you were into integration.
02:10:06.420 Oh my goodness.
02:10:07.340 Uh, you know, and that we'll see in the end of the day, it will be decided.
02:10:12.400 Ultimately at the end of the day, I think it will be a spiritual question, but on a practical
02:10:16.120 level, it'll be a Darwinian question, which one survived, which one's flourished?
02:10:19.400 And we'll just have to find out.
02:10:21.060 All right.
02:10:21.520 Well, I think we're going to go ahead and wrap things up.
02:10:23.540 So, um, Bryce, if you want to go ahead and, and kind of collect your thoughts and, and
02:10:26.940 give a, give the totalization of what you, what you thought, go ahead.
02:10:31.400 Oh, no.
02:10:32.080 If you got anything you want to shout out your Twitter handle or whatever.
02:10:34.900 Sure, sure.
02:10:35.860 So I think just one, one point, uh, that, that, you know, what you said was thought provoking.
02:10:41.980 I think for people who are wondering, who are scared, right?
02:10:44.320 What is AI going to do?
02:10:45.740 What is it going to transform?
02:10:47.420 How is it going to, how is it potentially harmful?
02:10:49.660 What should I fear?
02:10:50.660 You know, I think the common-sense heuristic we can use is that historically
02:10:55.900 technology doesn't replace the things that we care most about.
02:10:59.800 It doesn't, it doesn't replace the core human things.
02:11:02.900 Technology usually replaces older technology.
02:11:06.300 And so that can probably help us to guide where we use AI
02:11:12.500 and also help us to be a little more at peace that AI probably is not going to radically,
02:11:18.420 uh, transform the world we live in.
02:11:22.260 But, uh, you can follow me on Twitter, Bryce McDonald, and you can go
02:11:29.600 to my pinned tweet.
02:11:30.660 If you're interested in basically what Volus does, which is deploying AI in middle American
02:11:38.900 industrial businesses, uh, you can, you can find us there at the pinned tweet.
02:11:43.500 So I'm Nathan Halberstadt, again, on Twitter, I'm at N-A-T Halberstadt, H-A-L-B-E-R-S-T-A-D-T.
02:11:52.640 And, uh, I just say that this was an amazing conversation.
02:11:55.800 And I think what's, what I appreciate about it is that we're all coming from the same,
02:11:59.800 uh, prioritization of the human and we're assessing risk, I think, in slightly different
02:12:04.420 ways or over different time horizons.
02:12:06.380 And, uh, I would actually love to do it again.
02:12:08.000 So I think it's a, it's a really, uh, just an excellent conversation and really respect,
02:12:11.360 um, everybody's perspective here.
02:12:12.820 Um, just plugging again, my, my own stuff.
02:12:15.960 I, I'm the venture lead at New Founding, so we run a rolling fund.
02:12:19.580 So if you're interested in investing, uh, just DM me on Twitter.
02:12:23.200 Uh, and then if you're a founder, uh, and you're somebody who's passionate about solving
02:12:28.000 the problems we've been talking about, uh, also DM me and let's, uh, you can, you can
02:12:32.040 pitch us and we're really happy to talk with you.
02:12:34.400 Um, so, so especially if you're a Patriot, we want to talk with Patriots.
02:12:38.080 Awesome.
02:12:38.940 Uh, yeah, thanks.
02:12:39.600 Yeah, I would definitely echo been an honor, been really fun.
02:12:44.560 And New Founding is awesome.
02:12:46.260 Um, and what I read from your mission statement at Volus is an acceptable level of cyborgization.
02:12:52.100 I got to say it's also pretty awesome.
02:12:53.220 It's acceptable.
02:12:54.180 Thanks.
02:12:54.720 Now on that note, I do think that it's best to build machines that work for people, right?
02:13:01.760 Instead of focusing on building smarter machines, cultivate better humans, humans first, humans
02:13:07.560 first, humans first, uh, my book, Dark Aeon, A-E-O-N, Dark Aeon, Transhumanism and the War
02:13:13.920 Against Humanity available, uh, everywhere on the beast system.
02:13:18.020 You can pay for it at Amazon with your palm, or you can get a signed copy directly from me,
02:13:21.800 Dark Aeon, A-E-O-N dot X-Y-Z.
02:13:24.080 The website, JoeBot.X-Y-Z, Twitter, SlaveChain, at J-O-E-B-O-T-X-Y-Z, or Steve Bannon's War
02:13:32.100 Room.
02:13:33.120 Yeah, that was a lot of fun, guys.
02:13:34.500 Really appreciate it.
02:13:35.880 You know, I hope that we have a positive future.
02:13:39.020 I always have hope in humanity, and I need to for my children.
02:13:42.640 You know, despite thinking, I do think that AI could, if we go one route, replace many
02:13:47.080 things that shouldn't be replaced.
02:13:48.400 Things like pregnancy and, uh, knowledge and creativity and all that stuff, but, uh, it's good to
02:13:53.180 have these conversations, especially with guys who are in it that have ethics, as opposed
02:13:57.620 to many who are in it right now, uh, getting a lot of money from our government who have
02:14:01.900 no ethics, in my opinion.
02:14:03.820 But, uh, yeah, a lot of fun.
02:14:04.980 Thanks for having me.
02:14:05.460 You can find me online at Shane Cashman.
02:14:07.180 The show I host every Monday through Thursday is Inverted World Live, 10, uh, p.m. to 12 a.m.
02:14:12.840 It's a call-in show.
02:14:14.320 A lot of fun.
02:14:15.360 And, uh, we'll see you guys next time.
02:14:17.020 You can call Shane and debate whether clouds are real.
02:14:19.520 Um, thank you, everybody, for coming and having such a great, uh, such a great conversation.
02:14:24.500 Everybody's input was really enlightening, and I appreciate you all coming out.
02:14:27.960 I am Phil That Remains on Twix.
02:14:29.920 I'm Phil That Remains.
02:14:30.780 The band is All That Remains.
02:14:31.940 You can check us out on Apple Music, Amazon Music, Pandora, Spotify, and Deezer.
02:14:34.900 Make sure you tune in tonight for TimCast IRL.
02:14:37.540 I will be here hosting again.
02:14:39.140 Tim is still out sick.
02:14:40.520 And, uh, check out clips throughout the weekend, and we will see you soon.
02:14:47.020 We'll see you soon.