In this episode, Joe Allen of The War Room with Steve Bannon and the hosts of Inverted World Live join us to talk about the pros and cons of artificial intelligence (AI) and what it means for the future of humanity.
00:02:42.500I am the tech editor at The War Room with Steve Bannon, occasional host, writer, not an expert or a philosopher, as I'm oftentimes smeared as, and failed Luddite.
00:03:39.980And I think, to start, Bryce and I agree that AI technology presents a number of potential risks, especially for human well-being.
00:03:50.460But I think we're excited particularly about this conversation because we agree with you on that.
00:03:55.640I think we all come from a right-of-center perspective here.
00:03:58.600And we want basically a path forward for AI that works for Americans, and especially the next generation of Americans.
00:04:05.940I think we're concerned, like, is this going to work for our kids and grandkids?
00:04:10.440And I'm familiar with both of your work, Shane and Joe, and actually really respect the point of view that you guys come from.
00:04:16.280So I'm excited about the dialogue because I think we can hopefully talk through some of the very serious concerns that we all share.
00:04:22.240But ultimately, I think Bryce and I see a path forward where AI technology can actually shift opportunity towards the people who have been disadvantaged in the previous paradigm.
00:04:33.580So we think about sort of the middle American skilled tradesmen or sort of the industrial operating businesses in America.
00:04:40.320People who have been really hurt by offshoring or by financialization from private equity or software as a service lock-ins from Silicon Valley and the rest.
00:04:48.580We think there's a pathway where AI can basically shift more power and autonomy to some of the people who we wish had had it up to this point.
00:04:58.540There are still plenty of risks, but that's part of where we actually want to take the conversation: arguing that over the next decade or two, there could be a sort of golden age that emerges, and AI will play a role in it.
00:05:09.620And there will be lots of challenges and lots of serious things where we'll need to adjust policy.
00:05:15.180And we need to basically make sure that we minimize the risks for Americans along the way.
00:05:24.740But that's the side that we'll be taking.
00:05:26.460Bryce, maybe you want to add anything there?
00:05:27.400Yeah, there's one thing that I'd like you to expand upon, Bryce, if you could.
00:05:30.380One phrase that you used, you said a narrow pathway.
00:05:33.200How narrow do you think the pathway is?
00:05:35.420Do you think that it's more likely that there are going to be more negative consequences from the use of AI, or do you think that it's more likely that there will be positives, or do you think it depends on who's in control?
00:05:48.260Look, with any technology, the positive and the negative are really closely intertwined.
00:05:53.460And I think our role as people who are hopefully going to be able to shape the future of AI is to actually split those apart and figure out what are the bad elements that we can avoid.
00:06:07.400For example, the psychotic episodes that AI chatbots are bringing people into.
00:06:13.320Or, for example, trying to automate away all work or ruin education with cheating apps and AI, right?
00:06:20.780We want to split out those negative elements and try to mitigate those.
00:06:25.060And ultimately, I think it'll be a lot of pros and cons, but just like with social media, just like with the internet and even trains or electricity, there's going to be both positive and negative.
00:06:39.820Joe, what's your feeling overall about the outlook that Bryce and Nathan have?
00:06:47.040Do you think that that's in any way realistic, or do you think that it's all pie in the sky, that this is just a terrible idea that we should all fear?
00:06:55.920And to that point, if you do think that it's a terrible idea, we don't have the ability to prevent other countries from pursuing it.
00:07:04.540So how do you think the U.S. would be best served moving forward, considering Russia and China, and that there are going to be AI companies all over the world?
00:07:13.200And if the United States does prohibit it here, these companies are just going to go offshore.
00:07:18.740They're going to go to other countries.
00:07:20.480Yeah, there's a few different questions there.
00:07:22.300So to the first question, how do I respond to the position presented here?
00:07:29.380I'm not sure what we're going to argue about.
00:07:31.840And I agree by and large, although as a writer, I have zero use for AI.
00:07:39.520And so it's very domain specific, right?
00:07:43.120If you're in finance, you might have a lot more use for machine learning than I would.
00:07:47.800And a lot of writers use AI to basically plagiarize, cheat, and offload their work to a machine.
00:07:54.980Yeah, on the question of U.S. competitiveness, especially in regards to China: China could leap ahead.
00:08:04.500It's really a volatile situation with AI, because simply transferring the techniques, the information, and the technology to build these systems is all that's required for another country to begin building close to, or approaching, the level of the U.S. right now.
00:08:28.900The AI industry is, by and large, centered in the U.S.
00:08:32.920And the dismissive arguments made by people like David Sacks or Marc Andreessen in regards to the downsides are completely reckless.
00:08:44.980I think probably disingenuous, although I don't know their hearts.
00:08:48.340And I can see it from a national security perspective, if you have a very effective algorithm, or AI or AIs, that are used to, for instance, simulate the battlefield or analyze surveillance data or target an enemy.
00:09:10.040Yeah, that, I think, is something that should be taken very seriously.
00:09:13.880On the other hand, the way they're talking about it, it's as if flooding the population with goombots and groomer bots is going to be essential to American competitiveness.
00:09:25.640And I just don't see how having, at least statistically, however much you trust surveys, a third of Gen Z students basically offloading their cognition to machines to do their homework,
00:09:41.540I don't see how that's going to make the U.S. competitive in the long term. And then there's the production of AI slop, and the sort of relationships, the bonding, that inevitably occurs with anyone who either hasn't made themselves unempathic or sociopathic or wasn't just born that way.
00:09:59.360The way large language models and even to some extent the image generators and video generators work, they trigger the empathic circuits of the mind.
00:10:10.920One begins to perceive a being on the other side of the screen inside the machine.
00:10:17.440I think that the social and psychological impacts are already evident and are going to be severe.
00:10:25.620The economic impacts are kind of up in the air; it's an open question, but it doesn't look good.
00:10:31.720And then there's the mythos around existential risk: the AI is going to kill everybody, or become a god and save everybody.
00:10:39.540Again, the likelihood of that happening, probably low, but I think that mythos itself is driving most of the people who are at the top of the development of this.
00:10:51.040And I think that has to be taken very seriously.
00:10:52.900It'd be like if you had Muslims, for instance, running all the top companies in the U.S., supported by the U.S. government.
00:10:59.420Maybe you like it, maybe you don't, but it's something you should take very, very seriously.
00:11:02.780There's one point that you made that I actually want to kind of drill down on.
00:11:05.400You said that AI development was really driven by the United States.
00:11:11.100And is it really driven by the U.S.?
00:11:13.680As in, is it only happening because the United States is doing it, or is the tech actually a human universal that all countries would go after?
00:11:27.260Because it's my sense, like we said, we talked about China or you talk about Russia.
00:11:30.260I don't think that just because the United States is on the leading edge of this technology, these technologies would not develop in its absence.
00:11:41.640Most of the innovative techniques come out of the U.S. and all the frontier labs are in the U.S.
00:11:47.060The point that I'm making, that's just reaffirming that innovation is in the United States.
00:11:51.960So if we stop, they would start or they would begin to catch up.
00:11:55.280Yeah, I don't see that, because of the way the technology develops: even if it's not being developed in the United States, or is developed in other countries more slowly, that doesn't mean the technologies wouldn't be developed.
00:12:07.660I actually hope that China and our other rivals across the world do develop and deploy these systems like we're doing in the U.S.
00:12:14.980because then they'll be plagued by goombots and cheating.
00:12:21.140Like, they've got facial recognition and things like that that are used to control their populations.
00:12:26.400Yeah, those are different questions from the goombots.
00:12:29.000But yeah, the tendency to disappear up into one's own brainstem, I guess, is a human universal.
00:12:35.740I think the question is whether China begins to recklessly deploy their LLMs at the same scale as the U.S. But China's actually got really strict regulation, including protections for consumers, much more so than the U.S., and against deepfakes, things like this.
00:12:52.240But also, like, any kind of anti-CCP output, that's all banned.
00:13:00.320But yeah, I think if China borgs out and weakens its population as we have, it would be kind of like payback for the fentanyl.
00:13:10.660Do you guys think that the United States, because the U.S. is where the leading edge is, do you think that if the U.S. pulled back, other countries would also?
00:13:18.120Because like I said, I think that just because a country lags behind the U.S. technologically doesn't mean they don't have the impulse or the desire to actually develop these technologies.
00:13:28.500Without a doubt, there is something to this technology that's behind the goombots, behind the particular instantiations of it, the applications, right?
00:14:03.620I think that as much as people like to say, oh, the porn bots are going to kill us, everyone's going to jump into the pod.
00:14:08.620I think that that's actually just kind of more of a slander on the technology or a way to slander the technology.
00:14:16.400And again, this is not endorsing the idea of AI sex girlfriends or whatever, but even just the way that people were talking about it so far around the table, the goombots, the point of saying that is to slander the technology.
00:14:50.620Areas like education, and the core functions that people are experts in and AI is not.
00:14:57.980So AI is good for doing work that's, frankly, less humane.
00:15:04.160So things that we don't like doing, paperwork, administrative work, bureaucratic work, that kind of stuff.
00:15:09.800And in domains where AI is not adding value or it's obviously just terrible.
00:15:15.000So you think about when you go onto Twitter or just the algorithm at any point, you see an increasing amount of just AI slop.
00:15:20.440Or even in education, we've talked about, like at a certain point, if the kids are just outsourcing all of their learning, we're going to figure that out.
00:15:28.020And that's something that will need to be corrected.
00:15:29.920If people are using AI in sort of strange or psychotic relationship dynamics, I think, again, we'll sort of figure that out and solve that as well.
00:15:38.640And really, across all of these, my hope is that what will occur is that as we figure out...
00:17:11.920That AI doesn't work well in this domain.
00:17:14.480It'll force people back into the in-person.
00:17:16.660One example here would be, let's say, for example, Joe, you and I were going to go on a show together or just go do something together.
00:17:26.580If I have an AI agent, for example, that sort of spontaneously reaches out to you and messages you, and then your AI agent responds back and it's scheduled, and maybe there's a whole conversation that happens, but neither of us are even aware that the conversation even happened.
00:17:42.320Right, that's a pretty weird thing, and at a certain point we sort of lose trust in those sorts of interactions where there isn't a firm handshake, where there aren't two people in the room together.
00:17:54.080And so I think the natural sort of second order consequence of this, at least it seems to me, that it'll force people to care more again about the in-person relationships.
00:18:03.700So people in their community, their family, their church, and also even things like proof of membership and sort of vetted organizations or like word of mouth referrals, right?
00:18:14.340So somebody like Bryce basically says, like, yo, you should really go do X, Y, Z thing with Joe, and I know Bryce in person, and I wouldn't take that advice off of the internet anymore.
00:18:27.660So you bias more towards input from in person, and in some ways I think that solves some of the challenges we've been having with the internet and with social media, which has been terrible for young people, just in terms of anxiety and other things as well.
00:18:43.680And so what we want to have is a future with AI that doesn't look like Instagram or doesn't look like what social media did to people, and I think it's possible.
00:18:52.560Do you think it would be accurate to say that the algorithms social media uses to put things in front of people qualify as AI, and that the way information is fed to people, particularly young people, would qualify as one of the negatives of AI that we've already seen, even in its infancy?
00:19:18.320That's true, and I think it's worth bringing up the distinction. There's what I would call the last generation of AI, which is what you see in algorithms and social media.
00:19:28.160It's what you see in drone warfare, but a lot of maybe more positive elements as well, like ability to detect fraud in financial systems.
00:19:38.400And there's a new generation of AI, which started with the release of ChatGPT in 2022, and you could call that generative AI.
00:19:48.540You could call that large language models, but this is really the source of a lot of the hype in the last few years.
00:19:56.720And it's a source of where we can actually think about automating all the worst parts of our companies or our personal lives, but it's also the risk where the slop comes in, right?
00:20:09.500So all the tweets that you see that are clearly made by a robot, that's this second generation.
00:20:15.480You know, you're going to have to find something for us to disagree about.
00:20:19.900Well, I mean, I've tried pushing back on a bunch of stuff.
00:20:22.860To your point, I will say this. You're talking about critical systems that are having AI integrated into them, medical, military.
00:20:33.620At least in the context of what I was thinking of was as assisting humans, not actually taking over.
00:20:40.820So, yeah, the goombots: I think you're underestimating them. Just like digital porn, just like OxyContin,
00:20:49.940it may be something that is primarily concentrated among people who are isolated, people who are already mentally unstable, vulnerable,
00:20:57.280but that's a lot of people, and it's an increasing number of people.
00:21:03.240But let's put the goombots aside.
00:21:06.120It gives us some indication as to how unethical and sociopathic people like Mark Zuckerberg and Elon Musk are,
00:23:24.800All we know is it's accelerating the kills.
00:23:28.740So I think, but in both cases, what it highlights is how important those roles are, doctors, soldiers, so on and so forth.
00:23:37.320And it also at least gives us some indication as to the problems of human atrophy.
00:23:42.240And in the case of warfare, the real tragedy of kills that were simply not legitimate.
00:23:50.580So to your atrophy point, right, if AI is better at detecting things like the cancers and stuff like that,
00:23:58.620and it's also still technically in its infancy, right?
00:24:02.340This is still a very, very new technology; it's only in the past couple of years, two years possibly, that it's been capable of even doing this.
00:24:09.800And it's gone from the infancy to being able to detect better than human beings.
00:24:15.120Wouldn't it make sense to say, look, it is a bad thing that human beings are relying on it to the point where they're losing their edge?
00:24:24.020Basically, they're not as sharp as they used to be.
00:24:27.500But moving forward, considering AI is so much better than humans, is that a problem?
00:24:33.160And will that be a substantive problem?
00:24:35.240Because you think, considering how the advancements have gone in the past, in two years it'll be almost impossible for human beings to even keep up with AI.
00:24:46.320Would it be a negative to say, oh, well, humans won't be so sharp?
00:24:49.680Well, yeah, they won't be, but everyone relies on a calculator nowadays for any kind of significant math problems.
00:24:56.320No one's writing down and doing long division on a piece of paper anymore.
00:25:01.780Isn't that a similar condition or situation?
00:25:04.200I really like the analogy of the calculator.
00:25:08.040One heuristic here, or one way of thinking about this that we like to use, is that AI right now, and especially ChatGPT,
00:25:17.340is very good at replicating bureaucratic tasks, really bureaucracy.
00:25:24.800And just like in the same way that as, you know, to do a very large math problem, you just plug it into a calculator and it does it quite quickly.
00:25:31.740It's sort of the same thing in a business context or in an administrative context.
00:25:38.220Like, AI today does quite well what entry-level associates and analysts and people like that did five years ago in, say, an investment banking firm or a consulting firm or a law firm, or even just passing around basic information in a large bureaucracy.
00:25:55.840You know, you could think of it as similar to before the calculator, when there would have been entry-level people doing these extremely long math problems.
00:26:04.920And I think a point that Bryce made earlier is that some of this stuff is actually, it's actually fairly inhuman.
00:26:10.800Like being a cog in a bureaucracy is not necessarily like the peak of human flourishing.
00:26:17.600And so as long as new opportunities are materializing, and as long as there are still ways for Americans to continue working and forming families, then I don't necessarily see it as a terrible thing if certain very inhuman types of jobs stop existing.
00:26:37.420As long as new jobs are created that are more conducive for human flourishing.
00:26:42.400I think it's disingenuous to compare this technology to technologies of the past, because those technologies, even the calculator, still involve a human working with them, whereas AI is going to replace everyone.
00:26:53.860And I understand the short-term benefits that come with all of this, whether it's medical or military, which I disagree with; Lavender, I think, is a terrible situation, and allegedly it has a 10 percent error rate.
00:27:05.540But the idea that it's going to create a future where humans can do better things outside of this and not be a cog, I think we're a cog in it right now.
00:27:14.100I think we are the food source for AI and it's learning because of us, right?
00:27:19.760And the people who are building the AI, they don't want the physical world.
00:27:23.980Like, the big names, the Altmans, the Thiels, the Musks, you name them, they don't care for the physical world at all.
00:28:02.160But I feel like with this, I have to reject any flowery notion of a future where we can live symbiotically with AI because in the end, it's like we've created our alternate species to take over.
00:28:16.060And it's going to draw us into apocalyptic complacency where we kind of are at already.
00:28:21.960You know, and people keep saying this technology is going to help out with people.
00:28:26.100You know, it's going to make things better.
00:28:39.580And I don't think it's going to get better all of a sudden because of the proliferation of AI.
00:28:43.240I think it's going to make things much worse, much more fractured.
00:28:45.960And it's going to become this like inevitable lobotomy of humanity.
00:28:49.480Whereas with previous advancements in technology, there was some way we could work together, despite the consequences of something like Taylorism and scientific management, where you did become a cog in the factory.
00:29:03.440But we are the cog in it right now as it's growing within its factory.
00:29:07.740And someone like the former CEO of DeepMind, I forget his name, talks about.
00:29:44.880And I think in the future, not so distant future, the AI is going to worry about containing us.
00:29:50.220You know, and that's what I'm fearful of.
00:29:51.840I think that with this being drawn into apocalyptic complacency means it's going to destroy us because we built it and we allowed it to.
00:29:59.640Look, I just saw a video the other night about AI, and it led with the idea that right now we're going through, or have been going through, a massive, massive die-off of species.
00:30:21.440And the point that it was making was that human beings didn't know they were killing off insects the way they were, with pesticides and deforestation and all of these things we were doing; making the modern world and living in it killed off 40 percent of the insects.
00:30:39.360Well, insects are part of, you know, the ecosystem; they're actually necessary, as annoying as they can be.
00:30:44.800They are necessary.
00:31:45.100The point that it was making was that we weren't aware this was happening, and then it made the connection: what if we have a superintelligent AI, right?
00:31:57.580Not just agents that can help, but a superintelligent AGI.
00:32:02.160It's likely that it will start doing things of its own volition that we can't understand.
00:32:08.640Like bugs didn't know why human beings were destroying them and destroying their habitat and why they were getting killed off.
00:32:15.460And when you deal with a sufficiently different, sufficiently more intelligent entity, humans can't understand it.
00:32:27.660And right now, AI will start doing things that people can't understand.
00:32:32.160We had this Wired piece here, where AI was designing bizarre new physics experiments that actually work.
00:32:40.380And the point is, the AI started working with the physicists as a tool.
00:32:44.880They started using it to help them figure things out.
00:32:47.580And it came up with novel methods to make finding gravitational waves easier.
00:32:55.960And they didn't understand what had happened.
00:32:59.200Then the same thing has happened with chess, right?
00:33:01.160There was a chess bot up against, kind of, your chess masters.
00:33:07.480But everyone kind of thought that all the chess masters know how chess goes: they see the moves, and they know what moves you make in response, et cetera, et cetera.
00:33:17.580And AlphaZero, or the chess bot, I'm not sure if it was the same one, just for accuracy, did a move that no one understood.
00:33:28.280But then it was like 20 moves later or something.
00:33:31.140It won the game and no one understood how.
00:33:33.900Another thing AIs have started doing: there were two AIs communicating with each other, and they literally created their own language.
00:33:42.480And the people that were outside of the AIs didn't understand what was going on.
00:33:46.880And now it seems that a lot of the big AI companies are just feeding in information, and the AI will pump out an answer, and it'll be the correct answer, but they don't know how it got there.
00:34:00.660If you don't understand what the machine you're working with is doing and you don't understand how it's communicating, doesn't that become a problem for the people that created it?
00:34:12.080I think the biggest problem, and a big danger with discussions about AI, is to treat AI as though it is a sentient entity in itself that actually does things of its own volition.
00:34:26.580And I think we need to realize, okay, how does it actually work?
00:34:35.680I'm sure everyone's used it and been really surprised at how effective it was at researching something or creating some text.
00:34:42.440But ultimately, AI, especially the new version of large language models, it's really compressing all the information that humans have created, that it's found on the internet, that its trainers have given it, and it spits it out in novel ways.
00:35:00.100But we can't forget that humans are always at the source of this information.
00:35:05.060Humans actually have some say in how the AI is structured, how it's trained.
00:35:09.460And so I think that seeing AI as kind of a sentient being in itself distracts us from the question of who's actually training the AI, which I think is critical.
00:35:20.680And there are a lot of big companies who are doing this.
00:35:23.560Thankfully, I think there's a diverse enough set of companies making AI models that we don't have to worry about a single company like Google taking over the next 20 years.
00:35:33.680To that point, though, isn't it the case that AI is a blanket term?
00:35:39.560Because when you're talking about an LLM, that's one kind of AI.
00:35:43.080But when you're talking about like full self-driving in a Tesla, that's not an LLM, but that is artificial intelligence.
00:35:50.800It's making decisions based on what it sees, et cetera.
00:35:54.480So to use AI as a blanket term is probably an error.
00:35:59.360And you can say, you know, that LLMs just crunch the information that people are feeding them.
00:36:07.560But when it comes to something like full self-driving, that kind of AI would have to be used if you were to have a robot that was actually working in the world, right?
00:36:16.900Like a humanoid robot, it would have to have something similar to that as well as an LLM.
00:36:20.940Those two AIs are different, aren't they?
00:36:22.840And how do you make the distinction between the multiple types of AIs and say, well, this one is actually kind of dumb because it's just crunching words that we said.
00:36:35.780But this one is actually not kind of dumb because it's interpreting the world outside.
00:36:39.460And that's so much more information than just, you know, a data set.
00:36:43.080So on that point, two distinctions to be made there.
00:36:46.540One, when Shane is speaking, he is always, as a shaman, speaking from a cosmic point of view.
00:36:55.600He's seeing not just the thing, but the thing in relation to the room and the stars and so on and so forth in the metaphysical realm beyond.
00:37:05.620When Bryce and Nathan are talking about artificial intelligence, they're talking about very specific machine learning processes that are for very specific purposes and also very specific to the culture that you're trying to build.
00:37:20.420And I think that both of those are valid perspectives.
00:37:24.100And I think that if people are using these digital tools at least with the intent of benefiting human beings, at least the ones who count, right, the ones close to you, then we're probably better off, even if I reject it entirely.
00:37:40.120But so that's, I think this is a distinction to be made, right?
00:37:43.640And it's one of the problems when you talk about AI, right?
00:37:45.540When Shane's talking about the kind of parasitic or predatory nature of AI itself, it's a more cosmic point of view, looking at the long-term goal towards which most of these frontier companies are working.
00:38:02.760And I myself think you have to balance those things, but to the point about AI as a term, it's very unfortunate.
00:38:12.060I mean, for a long time it was called machine learning, right?
00:38:14.980And AI, when the term was coined, around 1956, by John McCarthy and taken up by others like Marvin Minsky, what they were talking about is what we now call artificial general intelligence.
00:38:27.420They were just meaning a machine that can think like a human across all of these different domains.
00:38:32.760And nothing like that exists, not really.
00:38:35.260You could say the seed forms are present, but that is just a dream and has been a dream for some 70 odd years.
00:38:44.520So say you take that distinction, though, between the LLM, and I hear what you're saying as far as just compressing information,
00:38:55.040but it does a lot, it's more than just a JPEG, you know?
00:40:17.720So, yes, it is quite different, though, from robotics control or even, like, image recognition systems, even if they're more integrated now.
00:40:27.240Like, before GPT-5, it was this very rapid transition from these very narrow systems to multimodal ones, like Google's Gemini.
00:40:38.680You have an LLM that's able to kind of reach out and use image tools and audio tools, right, like to produce voice and all that.
00:40:48.940And now it's integrated into basically one system over just a course of a few years.
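The "reach out and use tools" pattern being described is, mechanically, just a dispatch loop: the model emits either a plain answer or a named tool call, and a thin router invokes the matching narrow tool. A minimal sketch, with a stand-in function in place of a real LLM and made-up tool names:

```python
# Minimal sketch of the tool-dispatch pattern described here: the model emits
# either a named tool call or a plain answer, and a thin router invokes the
# tool. All names here are illustrative; a real system would call an actual LLM.
TOOLS = {
    "image": lambda prompt: f"[image generated for: {prompt}]",
    "audio": lambda prompt: f"[audio generated for: {prompt}]",
}

def fake_model(message):
    # Stand-in for the LLM: decide whether a tool is needed for this message.
    for name in TOOLS:
        if name in message.lower():
            return {"tool": name, "arg": message}
    return {"answer": f"text reply: {message}"}

def run(message):
    step = fake_model(message)
    if "tool" in step:
        return TOOLS[step["tool"]](step["arg"])  # route to the narrow tool
    return step["answer"]
```

The point of the sketch is that "integration" here is plumbing, not new cognition: the language model sits at the center and the narrow systems hang off it.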
00:40:55.380And I don't think that anytime soon you're going to get the soul of a true genius writer or genius musician or genius painter out of these systems, right?
00:41:09.960It's just going to be slop for at least the near term.
00:41:13.860But you do have to recognize, like, what you're talking about, superhuman intelligence, right?
00:41:18.860Superintelligence, as defined by Nick Bostrom, would include something like AlphaZero or even Deep Blue, which beat Garry Kasparov back in 1997.
00:41:31.420So you have to take that into account and wonder, at least.
00:41:36.840I think that fantasizing is probably not something to get stuck in, but these fantasies are not only driving the technology, but the technology is living up to some of those fantastic sort of images.
00:41:49.720So in the case of AlphaZero: AlphaGo was trained on previous human Go games.
00:41:56.500AlphaZero started from scratch, with just the rules, and played against itself until it developed its own strategies, and it's now basically a stone wall that can't be defeated.
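The self-play setup being described can be sketched with a toy game. Here is a minimal tabular version on single-pile Nim; this is an illustrative analog only, with none of AlphaZero's neural network or tree search, and the game, hyperparameters, and update rule are all chosen for the sketch. The agent starts from nothing but the rules and plays against itself until a strategy emerges:

```python
import random

def train_selfplay(episodes=20000, max_pile=10, alpha=0.5, eps=0.2, seed=0):
    # Single-pile Nim: take 1-3 stones; whoever takes the last stone wins.
    # Tabular self-play value learning -- a toy analog of the AlphaZero setup
    # described above (no neural net, no tree search; illustrative only).
    rng = random.Random(seed)
    Q = {}  # (stones_left, stones_taken) -> value from the mover's perspective

    def actions(s):
        return range(1, min(3, s) + 1)

    def best(s):
        # Greedy move for the player to act when s stones remain.
        return max(actions(s), key=lambda a: Q.get((s, a), 0.0))

    for _ in range(episodes):
        s = rng.randint(1, max_pile)
        while s > 0:
            # Epsilon-greedy; both "players" share the same table (self-play).
            a = rng.choice(list(actions(s))) if rng.random() < eps else best(s)
            s2 = s - a
            if s2 == 0:
                target = 1.0  # this move takes the last stone: a win
            else:
                # The opponent moves next; their best result is our worst.
                target = -max(Q.get((s2, b), 0.0) for b in actions(s2))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (target - old)
            s = s2
    return Q, best
```

After training, the learned policy rediscovers the known optimal Nim strategy of leaving the opponent a multiple of four stones: from 5 it takes 1, from 7 it takes 3.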
00:42:06.720It's the same with drone piloting, where at least the best AI piloting systems now outperform humans.
00:42:12.040Yeah, that's kind of a feature, and maybe it's an emergent feature, but it's a feature of AI.
00:42:18.420Once it defeats human beings, once it gets better than humans, there's never a time where a human being...
00:42:24.160Yeah, and I think that isn't that, if that's the goal for, you know, for these developers, right, wouldn't that kind of specialty and...
00:42:42.040I guess, what's the word I'm looking for, just that kind of capability, isn't that something that you could consider a good thing for humanity, right?
00:42:50.960If it's better at finding, to the point that we were talking about earlier about finding cancers, if it's better than humans ever will be, and it always is better, and it gets so good that it doesn't miss cancers, isn't that a positive for humanity?
00:43:06.880I don't think you can outsource that kind of optimism to this false god and count on that forever.
00:43:12.860I think outsourcing so much stuff to the machine will just eliminate humanity, and at a certain point in that world, there are no more humans.
00:43:22.420They'll be living in their little 15-minute cities, hooked up to the metaverse, and disease might not even be a thing because they'll be eating vaccine slop.
00:43:30.680So then your opinion is, it's better to have the real world with disease and cancer and everything...
00:43:38.180Yeah, it's part of humanity, unfortunately.
00:43:49.720There's obviously medicine, and we're trying to heal people, but the idea that you can just plug into the machine and it cures you, that's basically just making everyone a transhumanist.
00:44:10.600There are people that say, I don't want to.
00:44:12.040I mean, you've got enclaves of people.
00:44:14.080If that's an option, and you're not forced to do any of this stuff, isn't it more immoral to prevent people from having the option
00:44:24.680than to say that everyone, like, that everyone, you know, if you have the option, isn't that the desired outcome where people can make the decision themselves?
00:44:36.580Yeah, I understand having the option is fine.
00:44:38.080I just think that in the not-so-distant future there won't be an option.
00:44:41.540So you think that it's all just authoritarianism all the way down?
00:45:03.540The Biden administration selected certain companies, but there was no competition.
00:45:07.700Would you think that the Trump administration's outlook or their approach is a better approach?
00:45:17.000Or do you think that it's just, you're just straight up no on it?
00:45:19.960It doesn't matter who's in office because they are parasites.
00:45:22.500Silicon Valley is parasitic and they take advantage of every administration.
00:45:26.240They're shape-shifting ghouls who will take advantage. Like, Zuckerberg was all in and totally fine with censoring all of us during COVID.
00:45:32.700You know, and then all of a sudden he saw, you know, Kamala wasn't going to win.
00:45:36.820So, hey, now he's shape-shifting to MAGA-lite, you know, with a new haircut and a chain on.
00:45:41.320I think here it's really important to emphasize the distinction between AI as it currently exists and what it could become further down the road.
00:45:50.380And, I mean, at least AI as it exists right now, it still obeys and it follows human prompting, right?
00:48:01.380That power will be consolidated into the AI.
00:48:04.620And then there's no saying no to it at a certain point.
00:48:07.620There's no saying no to the United States right now.
00:48:10.860Despite not agreeing with administrations, and I have many opinions about this current one,
00:48:16.420you can hopefully sometimes not go to war.
00:48:18.780And have a politician who says, I'm not going to start that war or join that war,
00:48:21.840even though we're funding all these wars right now.
00:48:23.240You were talking earlier about sickness is part of the human condition.
00:48:27.540War is part of the human condition, too.
00:48:29.060War existed before human beings, right?
00:48:31.100I think humans should be participating in these risks.
00:48:33.440I don't think we should be creating things to keep doing it while, you know, we're being governed at home in this future by the tech overlords.
00:48:41.480So the vision you're putting forward there, and I don't know if you're making the argument or holding it.
00:48:45.780I'm trying to make the argument of pushback on everyone's idea here.
00:48:49.760But that idea that perhaps algocracy, a system in which the algorithm determines most of the important outcomes
00:48:59.660and the processes by which we get to them.
00:49:02.900That dream, whatever form you want to, however you want to package it, transhumanism, post-humanism,
00:49:10.000the sort of EA and long-termism point of view that in the future there will be more digital beings than human beings,
00:49:17.520or just the kind of effective altruism, or just kind of the nuts and bolts, like what you're saying.
00:49:23.920If you turned war over to the algorithm to decide.
00:56:10.700Only a very small number of people can afford those nice clothes.
00:56:14.560Well, but there's still humans at the top who are the best at what they do.
00:56:20.060And they're using probably the most human types of skills to make those clothes.
00:56:24.740And I think if you expand that to an entire economy, the promise of AI is actually that over time, humans are doing higher and higher value work.
00:56:34.380Not that fewer and fewer humans are working, or that there are fewer and fewer humans, but that humans are actually flourishing more than before.
00:56:40.540And to me, that sounds like when a communist tells me they can create utopia on Earth.
00:56:44.540Oh, you're going to just sit around and write poetry.
00:56:46.200I don't think that's the future with AI.
00:57:41.480And again, another one that the definition has changed pretty dramatically.
00:57:45.600Like right now, the fashionable definition, you hear it from people like Eric Schmidt, the ex-Google CEO,
00:57:51.760now partnering with the Department of Defense, as he has been for years.
00:57:55.720The definition he's been running with, and Elon Musk also, it's just fashionable now to say that artificial general intelligence is an AI that would be able to do any cognitive task that a human could do.
00:58:09.820Presumably, it would do it better because it's going to be faster.
00:58:12.520It's going to have a wider array of data to draw upon, all these different things.
00:58:18.560But that's the general AGI definition that's fashionable.
00:58:22.780Now, before, you know, set the 1997 definition aside, before, it really started with Shane Legg of Google and was popularized by Ben Goertzel, roughly 2008 or so.
00:58:35.780And for them, it was more about the way in which it functioned.
00:58:40.600They wanted to distinguish artificial narrow intelligence, so a chess bot, a goon bot, a war bot, any of those bots, from something that could do all of those, right?
00:58:50.840It could play chess, and it could kill, and it could have you gooning all day long.
00:58:57.000And it could be accomplished either by building a system that was sufficiently complex that this general cognition would emerge, and I think that's what Sam Altman and Elon Musk are betting on with scaling up LLMs and making them more complex.
00:59:11.780Or it could be like the Ben Goertzel vision, where you have just a ton of narrow AIs that kind of operate in harmony, and that is now what we call multimodal.
00:59:23.800Nick Bostrom really put the stamp on it, 2014, with his book, Super Intelligence.
00:59:29.240And for him, it could be either a general system, it could be narrow systems, it could be any system that excels beyond humans, with the ultimate danger being that you lose control of it.
00:59:42.660Now, Eric Schmidt, Elon Musk, people like this, are going with super intelligence just means it's smarter than all humans on Earth.
00:59:49.220I'm not exactly sure what that means, but that's the definition.
00:59:53.800Whatever you're talking about, none of that shit exists, right?
01:00:13.040And then with this, you're talking about AGI, like a system that can generalize across domains, concepts.
01:00:19.780It's wild to see the rapid advance from the implementation of Transformers in 2017, and OpenAI's real breakthroughs in LLMs, to the multimodal systems that, just a few years later, became much more popular, to more integrated systems.
01:00:50.540It's not general in its cognition so much, except there are certain seemingly emergent properties that are coming out, like we were just talking about a moment ago.
01:01:00.980So, LLMs doing math, better and better and better, LLMs solving puzzles and mazes, better and better and better.
01:01:09.520LLMs, in some sense, I hear this a lot, actually, that people say, oh, I'm working on this problem, and I turned to the LLM and it solved it.
01:01:18.920And, you know, I have a good friend, he's a lawyer, and he was doing a case analysis, and he had done it himself.
01:01:26.660He had already gone through all of it, but he wanted to see if ChatGPT could do it.
01:01:29.840It was the 4.5, and he asked ChatGPT, and it came up with basically the same thing that he had spent many, many hours on in just a few minutes.
01:01:38.240So, it's like the AGI that they're talking about, like what Sam Altman seems to have been pitching before, you know, the big GPT-5 flop,
01:01:46.440is something that is, like, more like a human than a machine, and that doesn't exist.
01:01:53.040But it is the case that the technology, you take that big cosmic vision or all those cosmic visions of what this godlike AI would be,
01:02:01.320it can't be denied that the actual AI in reality is slowly but surely, faster than I'm comfortable with, approximating that dream.
01:02:11.120And unless it hits a wall, unless it hits a serious S-curve and just flattens out, it's at least worth keeping in mind that it's not a stable system.
01:02:25.460So, I'm into the idea of it being a tool, and it can be a good collaborator for people.
01:02:30.120You know, when talking to AI, if you want to help edit something, I understand that that's an awesome thing.
01:02:37.220But when you talk about, like, the narrow path you're creating or trying to create with AI, what does that entail?
01:02:42.460Are we talking about trying to implement regulations from the top down, or is it something you're doing within your company?
01:02:47.520Yeah, so maybe to start would actually be a historical analog here.
01:02:51.840The railroad in the 19th century in Europe, I think, is actually really interesting.
01:02:56.760We could also talk about social media and the internet as other potential analogs.
01:03:01.240But the reason why I find it particularly interesting is we sort of underestimate how transformative it was for the average person, let's say, in Britain.
01:03:09.760And they basically went from being in these fairly isolated towns with some capacity to basically travel between them,
01:03:16.980either, like, with horseback or carriage or something like these.
01:03:19.540But the railroad enabled much faster travel for humans.
01:03:23.140It was actually the fastest they'd ever traveled, right, when they got on the railroad.
01:03:26.060And the railroad stations went into these people's towns and brought random people, like, large numbers of people into their towns and also industry through their towns as well.
01:03:36.020And in the early days of the railroad being built, it was actually extremely contentious.
01:03:41.080So the wealthy aristocracy basically opposed it on the grounds of just, like, not wanting railroad tracks going through their land that they felt like they had ownership over.
01:03:50.740And then on the other end, sort of more working class types.
01:03:55.940I mean, a lot of times the station would just go in right in the middle of their town where they lived.
01:06:19.140There were numbers of ostensibly fake stories about trains.
01:06:22.840They talked a lot about madness that would emerge from them.
01:06:25.340So somebody would stab somebody on a train, and the doctors and the media would say it was sort of the jogging of their brain and the rattling of the train that made them go insane.
01:07:53.120But that's probably not going to happen.
01:07:54.480And so it's very similar to climate change in the sense that climate change sucks up all the oxygen in the room.
01:08:02.560And the problems of species loss, the problems of habitat loss, and the problems of pollution kind of lose a lot of the public attention that they should have.
01:08:12.280Because these are things that you can see very clearly and you can measure very clearly.
01:08:16.740Climate change models, let's just say, that's a little bit more hypothetical.
01:08:20.480And in the same sense, the major problems, the immediate problems of AI, I think, are going to be, again, it's already evident, the psychological and social impacts.
01:08:32.500What does it mean when human beings begin to become companions with machine teachers?
01:08:38.540You look to the AI as the highest authority on what is and isn't real.
01:08:42.080And you train children in this global village of the damned to become these sort of like zombified human AI symbiotes.
01:08:51.060And then beyond the social ramifications of that, and having like you trained your AI on your grandma, right?
01:08:58.220And everything's given, like your grandma's there, like, you know, bitching about the mashed potatoes or whatever.
01:09:02.620You know, on a screen, on an iPad, you know, these sorts of things, how far will it go?
01:09:08.440I don't know, how far did opium go, and fentanyl, and OxyContin?
01:09:27.560And they made this big announcement, we're replacing all our workers with AI and customer service.
01:09:31.060And then they were like, oh, actually, we're hiring again because they didn't work out so hot.
01:09:35.400Maybe it'll be like that, or maybe it'll be more like companies like Salesforce or Anthropic, where the coders really are being replaced.
01:09:43.160The low-level coders are being replaced.
01:09:45.280But these economic concerns, and I think for you guys, especially for you, Bryce, the economic angle is, clearly you take it very seriously.
01:09:53.940And I read the Volus mission statement, and it doesn't include any of that.
01:09:59.220I mean, it's like basically a rejection of the whole transhumanist vision, subtly, but a subtle rejection of it, but an embrace of these technologies in their more, I guess, humble forms, you know, like low-level forms and narrow forms.
01:10:15.740And it makes sense, but I really think ultimately, though, that long-term vision, because these are the frontier companies, right, and they're driven by the long-term vision.
01:10:26.000They got all the money, they got the government support, and they're, you know, the carousel of the federal government has now given favor to Meta, has given favor to Palantir, has given favor to the whole A16Z stack.
01:10:41.000Like, their vision of the future is going to make a big difference, kind of regardless of the technology.
01:10:47.000Like, people have been able to hypnotize whole tribes by, like, waving totems around, like, you know, and it's like, if you can do that, and you've got a totem that can actually, you know, talk and do math,
01:10:56.840you're talking about a religious revolution beyond anything that's been seen before.
01:11:00.640And I think that all of those things, like, all of those problems are just beyond the scope of, like, the nuts and bolts day-to-day, like, does my AI give me, you know, a nice streamlined PowerPoint for my presentation?
01:11:15.300Right, like, I understand there's been fear-mongering throughout the ages, whether it's, Phil and I talk about the synthesizer sometimes, you talk about the trains.
01:11:22.140The thing I think that sets AI apart is that it is a vector for almost everything about humanity.
01:11:27.900You know, it's about education, it's about children and safety, it's war, it's going to be expression with regulations where they're trying to say you can't do deepfakes and whatnot.
01:11:35.620So it really, everything kind of falls into the black hole of AI and becomes a much bigger existential crisis.
01:11:41.300Although I understand the existential crisis of the Luddites, who I agree with. They weren't anti-tech; they were more anti being replaced by mass automation.
01:11:50.860So they were still using technology in their day, the OG Luddites, but they didn't like that factories were being built and filled with machines that took everyone's jobs, right?
01:11:59.920And that is one, just one part of what AI will be doing.
01:12:03.280I think the point of that story was actually to highlight that there were serious risks, and things went wrong, and humans got on top of it and figured it out and solved it.
01:12:14.560They moved more of the stations further out from the city, they put in place better safety measures on the trains, et cetera, et cetera.
01:12:20.960You're absolutely right that AI feels like it is more transformative and the risk profile is potentially higher.
01:12:27.220I don't think we're quite there yet, but it's definitely in the future, it could get significantly more risky.
01:12:32.360I would say one risk that exists today, and this was something that I ran into directly, is, for example, the sort of the way that AI can allow people today to sort of mass scam Americans, right?
01:12:46.500So I got this email from a guy named Akshat.
01:12:54.100And he was sending 40,000 emails today, but they're extremely well tailored, right?
01:12:59.500In the email, he mentioned portfolio companies that we work with, names that I recognized.
01:13:04.320It was a very well tailored email, and it was generated using an LLM.
01:13:08.320He's blasting out tens of thousands of these.
01:13:11.560And essentially, in this case, Akshat was basically offering us to offshore our labor, so our associates at New Founding, for a quarter of what we're currently paying them, right?
01:13:21.200And he's able to do that, and it's not just using MailChimp, right?
01:13:25.960He's using, he's basically collecting data in order to produce the right sort of targeted email.
01:13:32.600And, I mean, essentially, it's a form of scamming that's using AI in order to be more effective.
01:13:40.820And you can think about how this could apply to, like, if that was targeted at your grandmother or something like that.
01:13:45.320And so I think that's, like, very practically today.
01:13:47.480It's about blocking people like that and making sure that AI, very practically right here and right now, isn't used to harm Americans while we continue to monitor these further out risks.
01:13:59.520But it's also important not to confuse the two.
01:14:02.740And to that point, I think when we treat AI as some autonomous technology, like a train steamrolling ahead where we either get off the tracks or take it, I think that's the wrong way to think about AI, because it treats AI as something we're powerless against.
01:14:21.760And I think there's a lot of doomerists out there who want to talk about the deep state and the globalists, and we're just a tool, we're a cog in their machine.
01:14:30.460And that takes away the agency that actually is what makes humans different from AI.
01:14:35.960So I guess I would actually want to add, maybe spin a positive vision for what AI could be.
01:14:43.660And I think part of the way to solve the AI problem to define that narrow path for the future is actually for people to start building things using AI appropriately that actually make America better.
01:15:08.400I think AI is actually very different from a lot of technologies that have come around, like the airplane, like the computer, like the internet.
01:15:15.900All these have been started as military technologies.
01:15:19.220And that's actually kind of their natural bent is as military tools.
01:15:25.320And then they trickle down to large global enterprises.
01:15:28.900And then finally to consumer applications, right?
01:15:32.040But AI, interestingly, its first application, at least when we're talking about large language models, is actually for individual people to help make their lives better, to reduce monotonous work.
01:15:45.420And I think the way I see it is that AI is going the other way around, that we can actually use AI effectively in small businesses where humans who are really high agency, virtuous, good leaders can actually get more done.
01:15:59.720And they can have more success with AI because they're able to get higher leverage.
01:16:05.280I think that's part of the trick to get you so addicted to AI to allow the machine into you so it's harder to divorce it from you and easier to control you.
01:16:14.400Again, if I could add just an old man spurg out just for a second.
01:16:19.000AI actually was, in the early, early conceptual phase, deeply tied to the military.
01:16:26.520So Alan Turing, for instance, Marvin Minsky, the pioneers, deeply tied to the military.
01:16:36.200And maybe this isn't AI specifically, or at least it was kind of cybernetics, with Norbert Wiener, who, aside from having one of the funniest names ever, was, you know, a military man.
01:16:47.440And he's writing about and thinking about with cybernetics, cybernetic systems and human machine integration, human machine symbiosis, thinking about it from the purposes or for the purposes of military dominance.
01:16:59.840And so it is, even though you're right, like the LLM revolution comes out of Google, right?
01:17:05.480I mean, it was taken up by OpenAI and is largely civilian, but the idea of thinking machines and the development of various algorithms were deeply tied to military institutions and military concerns.
01:17:21.720And I want it to be what you're saying.
01:17:24.260I want that idea of like, it can just be a great collaborator for the individual, which should be great.
01:17:29.440But it's something, all these things are always hijacked.
01:17:32.540And the people who are either building this stuff right now, most of them, not all of them, and the military industrial complex, like they have no ethics.
01:17:39.580So a lot of those things will be eventually, they're already turned against us.
01:17:43.900Can I give, I'll give a very practical example.
01:17:45.720And I think that, again, that's a serious, that is a serious risk, and we should continue to monitor that.
01:17:50.900So we have a portfolio company, it's called Upsmith.
01:17:53.740And specifically, they work with like plumbing, HVAC, these sorts of companies and individual business owners.
01:18:01.060The average company they work with has around five people.
01:18:03.740And there are around 300,000 to 500,000 of these sorts of companies in America, right?
01:18:07.720So it's like working middle America tradesmen type individuals.
01:18:11.500As it stands today, when they want to book to go to somebody's house to do some repairs, they actually have to outsource most of this to basically these companies that take care of all of the overhead.
01:18:25.260A lot of it's actually offshore labor, or at a minimum, it's these like, again, extremely soulless sort of bureaucratic type jobs.
01:18:31.920And what happens also is that these plumbers lose a lot of business.
01:18:36.280They often will lose up to half of the people who call them to book, you know, hey, this thing's busted or whatever.
01:18:41.060They're just doing work, and they miss the call.
01:18:44.920And basically, American skilled tradesmen are missing out on a lot of value.
01:18:50.660Or they are capturing the value and forking over a huge amount to basically Indians in India who are running some phone bank or whatever.
01:18:59.000So one of our portfolio companies, it's called Upsmith.
01:19:01.660Right now, what they do is they basically have an agentic AI tool that just takes care of bookings for these plumbers.
01:19:08.580So if you call or text or anything like that, it'll basically just reply back, and it'll automatically take care of calendaring and booking.
01:19:16.860Just the whole exchange will happen back and forth, and it'll just send the plumber to the next house.
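Stripped to its core, the booking flow being described reduces to: answer the message, find the next free slot, write it to the calendar, confirm. The following is hypothetical stand-in logic, not Upsmith's actual product; a real agentic tool would have an LLM handle the back-and-forth conversation:

```python
from datetime import datetime, timedelta

class BookingAgent:
    # Hypothetical sketch of the booking flow described above; names and
    # scheduling rules are illustrative, not any vendor's actual product.
    def __init__(self, day_start=9, day_end=17):
        self.day_start, self.day_end = day_start, day_end
        self.booked = []  # datetimes of one-hour slots already taken

    def next_free_slot(self, after):
        # Step forward hour by hour to the next open business-hours slot.
        t = after.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
        while t.hour < self.day_start or t.hour >= self.day_end or t in self.booked:
            t += timedelta(hours=1)
        return t

    def handle_message(self, text, now):
        # Reply to a missed call or text and book the job automatically.
        slot = self.next_free_slot(now)
        self.booked.append(slot)
        return f"Got it -- '{text}'. Next available visit: {slot:%A %I:%M %p}. You're booked."
```

The value claimed in the transcript lives entirely in `handle_message` firing while the tradesman is under a sink: the call that used to be missed becomes a calendar entry.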
01:19:20.440Now, I think from my perspective, this is something that very tangibly today can help a plumber basically help him make more money in the next couple years.
01:19:32.140And that helps him if he doesn't own a house yet.
01:19:37.620And I see that as very practically positive for Americans.
01:19:41.220And it's actually shifting, again, sort of economic opportunity away from bureaucrats, away from offshore to the house of a guy in Philadelphia or whatever.
01:19:50.540And so at New Founding, at least, we're interested in finding those opportunities and backing those at ones where it's very clear that this is going to help Americans.
01:20:00.240And I think hopefully that helps to give an example of the sorts of ways that AI can more practically be of benefit, despite the presence of the rest.
01:20:11.400Like, I'm totally against a seven-year-old getting onto some artificial intelligence sort of, you know, like doom loop, et cetera, et cetera, as we've been talking about.
01:20:36.560I guess I'm more just concerned about down in the future, how it's going to work and maybe how using AI so much and getting businesses and people basically addicted to it or, you know, relying on it so heavily.
01:20:50.400In this case, it gets integrated into the business.
01:20:52.200And then something happens down the road, and they try to pass regulations, and this can be a big thing where people perhaps will riot, not riot, but, like, revolt against these regulations, which I'm skeptical of, how that works, because they've been married to AI, basically, in their companies.
01:21:10.080And I do see a future where we're going to have these big conversations, these big fights in politics about what we're allowed to do with AI because some people have been abusing it, like the scammers you're saying.
01:21:19.940And it's going to be, like, another two-way debate, but with AI.
01:21:29.260Most of the problems with artificial intelligence are going to be dealt with positively on the level of just human choice, personal choice.
01:22:37.920You see right now everything from the AI psychosis where it's, you know,
01:22:42.240people kind of disappearing into their own brainstem sort of phenomenon.
01:22:45.220These are things that human beings chose to do, by and large.
01:22:48.920So that's, I think, for me, the most important thing is to at least make people aware and activate some sense of will and agency to put up cultural barriers so your kid doesn't become an AI symbiote.
01:23:02.260Well, that's something that I think that people have learned just from social media.
01:23:05.900Again, I consider social media algorithms as kind of, you know, infant AI as it is.
01:23:11.080And so people are seeing the negative consequences and seeing bad things that can happen for their kids.
01:23:16.900Or at least the smart people are noticing.
01:23:19.180And they're not allowing their children to have, you know, screens all the time.
01:23:23.620It is rather disheartening when you go out to dinner or whatever and you see families that have, like, a kid sitting there with a screen.
01:23:32.080And it's like, well, that's the only way they'll eat or whatever.
01:23:35.280That's a really terrible, terrible development.
01:23:37.660And I think that there needs to be more emphasis put on informing people of how bad that is for children.
01:23:42.860But that kind of, like you're saying, that kind of agency, that kind of discretion by parents is what really will prevent people from getting into this situation in the first place.
01:23:52.680I don't think the majority of people that are having problems with social media, whether they're problems, you know, delineating between reality and what's actually social media or, you know, making a distinction between online friends and real world friends.
01:24:08.720I don't think they're people that are actually well-adjusted adults.
01:24:13.160They tend to be young people, you know, younger than me.
01:24:20.160So I was one of the kids that was like, you know, be home before the streetlights come on, but otherwise get out of the house.
01:24:25.280And so I had a lot of learning how to function in the real world by myself as a kid.
01:24:32.020And I think that that kind of thing is something that's important for kids.
01:24:35.460And I think that parents need to do that kind of stuff as opposed to just handing them the screens and stuff.
01:24:41.760Well, to that point, though, if I can continue, the personal choice is the first bulwark, right?
01:24:47.740And it is, people are taking an active role in whether or not, they're not just being told this is the future, you need to turn your kid into an AI symbiote and just doing it, right?
01:24:56.640There are tons of people, screen-free and phone-free childhoods.
01:25:00.700There are laws being passed in certain countries and institutional policies being put in place in schools and other institutions in the country.
01:25:08.720You can't just sit around on your phone and disappear into your own brainstem.
01:25:14.620But when it comes to military AI, when it comes to Dr. Oz, the inestimable Dr. Oz, who was shilling chips for your palm on his show to his credulous audience just a few years ago, and who now
01:25:29.100is saying that in just a few years, you will be considered negligent as a doctor if you do not consult AI in the diagnosis or prescription process.
01:25:38.220And that goes well beyond just personal choice.
01:25:41.680That's now an institutional policy or perhaps a law.
01:25:44.980And so that even though personal choice is one of the most important things we have, right, just the ability to say no,
01:25:55.800in many instances you can't say no and won't be able to say no.
01:25:59.660And so the laws are going to be important.
01:26:02.640And I think that right now at the state level, if you look at the states that are most inclined to legislate, California, for instance,
01:26:13.080and you look at the 18 laws that they've got in place, things like you can't make deepfakes of people
01:26:19.120or you can't use someone's image against their will, mostly tailored to actors and whatnot, right?
01:28:36.760But second of all, you said that, you know, you were talking about whether or not there would be mandates for using AI for diagnosis.
01:28:45.340Is there any other realm in which the process for diagnosis is actually something people even care about, for the most part?
01:28:55.160Like aside from, you know, if you're dealing with x-rays and how that would affect your body or something like that.
01:29:01.840Is there any other place where people are like, I'm concerned about how you come to the conclusion that you do?
01:29:08.020Or is that really the important part of it?
01:29:11.600Like, are you getting the right diagnosis?
01:29:14.000Because what if AI can actually make sure that your diagnosis is correct 95% of the time, as opposed to, say, 70% of the time?
01:29:25.540Because, you know, humans are notoriously bad at actually diagnosing what's wrong with someone.
01:29:31.540And the more strain there is on the health care field, the fewer well-trained doctors you have, the worse the results are actually going to become.
01:29:42.860So would it really be a problem for if the government were to say, look, you have to at least run it through the AI and see what the AI says?
01:30:03.360But when it comes to, like, using an AI to mandate that, then, like, what AI is acceptable, I just don't see how that works out.
01:30:09.140Well, I mean, again, the results would dictate, right?
01:30:11.040Like, if you've got one algorithm by one company that actually has a 99% success rate, why wouldn't you use that?
01:30:23.620Or why would you have a problem with it?
01:30:26.160I have no problem with them using it, but I just have a problem with the mandate.
01:30:28.720Yeah, and you do, I mean, you have this claim, right?
01:30:31.540Like, for instance, a lot of the studies, comparative studies with radiology, how well is the AI able to detect cancer?
01:30:38.900And usually it's these, like, tiny, tiny little tumors, right?
01:30:41.780That the radiologist can't do just with his eyes.
01:30:44.840But that's very specific to that field, and there's also another issue. So, I mean, we also know that, while you don't necessarily want to bank on your immune system,
01:30:55.400cancerous cells and even small tumors are forming in the body all the time, and the immune system is constantly battling them back.
01:31:02.680And so you have a lot more of these kind of second-order effects that can come out of that.
01:31:08.020If you have an AI that finds every tiny little aberration, and the next thing you know somebody's getting run through some devastating chemotherapy, you know, on the basis of this AI,
01:31:18.300it's much more complicated than saying the AI is 99% better than a human.
01:31:23.960There's all these other elements that go into it.
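The "all these other elements" point can be made concrete with a simple base-rate calculation. This is an illustrative sketch with hypothetical numbers, not figures from the conversation: even a screening AI with 99% sensitivity and 99% specificity will produce mostly false positives when the condition it screens for is rare.

```python
# Illustrative only: base-rate calculation showing why a "99% accurate"
# diagnostic AI can still yield mostly false positives for a rare condition.
# All numbers are hypothetical.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that catches 99% of cases and is 99% specific, for a condition
# affecting 1 in 1,000 people:
ppv = positive_predictive_value(0.99, 0.99, 0.001)
print(f"{ppv:.1%}")  # prints 9.0% -- roughly nine out of ten positives are false alarms
```

So a raw accuracy figure by itself says little about how many patients would be sent on to chemotherapy unnecessarily; that depends on how common the condition is in the screened population.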
01:31:26.140And diagnosis, I mean, we're not talking about necessarily just visual recognition.
01:31:30.580When we're talking about doctors turning to AI for a diagnosis or to come up with a therapy, we're largely talking about LLMs.
01:31:40.260And a lot of them are very specific, tailored LLMs that are trained on medical studies.
01:31:46.200And the doctor would then turn, he would have his own opinions, she would have her own opinions, and then turn to the LLM for guidance.
01:31:53.920If you're a general practitioner, you defer to experts on various things to come to the solution.
01:32:00.820But real quick, so the question, I think, is not going to be answered because a company says,
01:32:06.600our AI is 99% accurate, or 90% accurate, or 50%.
01:32:12.040Downstream, looking at the actual outcomes of patients, to really know a statistical success rate for an AI would take enormous amounts of study, right?
01:32:28.620And so, if we don't have the studies in place, like there was this whole thing that happened in 2021, late 2020, where there was a big medical crisis,
01:32:38.120and without any real rigorous testing or studies, suddenly the advertising won the day.
01:32:46.320And suddenly you had soft mandates in America and hard mandates elsewhere.
01:32:50.740And we still don't really have a clear statistical understanding of what happened and what damage was done.
01:32:59.740Bryce, you were going to say something?
01:33:01.440Isn't it fair to say we all want to bring the humans back, to have the humans be in charge, right?
01:33:07.680So, in the case of the doctor, take a doctor who's actively ignoring very important, relevant, industry-standard tools to make a diagnosis.
01:33:21.040The responsibility should fall on the doctor who's making the bad diagnosis in this case.
01:33:26.500Just like the responsibility for a business that's doing evil practices because the AI told it to,
01:33:33.860that should probably fall on the business because they're making the decision.
01:33:37.520If the AI, on the other hand, is a consumer product and it's causing children or adults to have psychosis,
01:33:45.240well, maybe the AI company should be responsible.
01:33:47.480And so, I think the worry with regulation is that you're mandating things that have unintended consequences.
01:33:55.060You're mandating things that aren't well-proven because this is how you're supposed to do it or because this is your ideology.
01:34:01.040But I think that's a concern with regulation, even regulation of AI.
01:34:05.800I think what we need to do is bring back humans to be in charge of the AI in a way that humans have been swept aside in a lot of ways way before AI for the last few decades.
01:34:16.240And how do you define negligence when it comes to it?
01:34:18.920Because if it were 2021, like Joe's saying, the AI would have said, I'm sure, go get every shot that you're told to get because it was built by people.
01:34:28.360Vitamin A is what you're talking about.
01:34:29.700I think in this case then, right, what Bryce would argue is that the doctor should use the AI, but the doctor does not have to listen to the AI.
01:34:37.980Or so, the doctor could then evaluate maybe what multiple different AI tools say, his own individual judgment, some of his own tests that he did, his relationship with the patient, and then make a decision.
01:34:48.300And I think that's totally fine because you could, through AI, one of these short-term benefits when it comes to medicine, this doctor could potentially have access to so much of your family history to make a way better decision for you, which can be awesome.
01:35:04.380I mean, yeah, because then China's going to hack it and create a bioweapon personalized just for you, which they're probably already doing with their mosquitoes.
01:36:22.500And any way we can help them is great.
01:36:24.080But I think the difference in the conversation would be you guys see a positive vision because there's so many short-term benefits.
01:36:31.900And we're seeing, of course, that down the road, probably not too far down the road, there are apocalyptic consequences that are going to be born out of it.
01:36:41.100And it's not like we're just creating this out of thin air.
01:36:43.300We're listening to these people talk like Altman talking about we have to rewrite the social contract.
01:38:01.040These things are so antithetical to humanity. And I don't think that is in the distant future, because we have things like Orchid, this IVF company that does genetic testing.
01:38:11.520And I understand the positives of genetic testing, although I disagree with people saying, well, then I'm not going to have that baby.
01:39:56.220Because you're going into these other domains of technology where it's also a problem.
01:39:59.720And so, once again, I think what will keep us grounded is appreciation of what makes humans unique, understanding humans as they actually are, and making sure that, you know, whatever ways that AI technology is being used sort of reflects the natural order of the world and of how humans are actually created.
01:40:23.080And so, you know, to whatever extent AI is dominated or is controlled by transhumanists, that's a problem.
01:40:34.480But I don't think it's just unique to transhumanists.
01:40:37.120They're the ones creating it, and they're the ones with these insane visions of the future.
01:40:40.720But it's, you know, this idea is in everyone now.
01:40:43.440You know, everyone is kind of transhumanist adjacent, especially in power.
01:40:47.180Well, there are certainly a lot of people in power who have these visions, these fantasies of transhumanism.
01:40:57.200But there's also maybe a large percentage of people who actually just don't care whether their children are, you know, raised on screens, right?
01:41:10.500And I think the key is actually to take a collaborative approach to AI and other technology rather than an oppositional approach of standing up on the train track and saying stop.
01:41:22.180That's the exact image I had in my head.
01:41:24.460It's like if the only thing that you're saying or doing is to do what, you know, conservatives do, just standing there and saying to progressives, no, stop, you're going to get bowled over.
01:41:36.560By the way, that's not – just to be clear, that's not my position.
01:41:40.920I know you weren't singling me out, even though I saw that glint in your eye.
01:41:45.960I wouldn't stand in front of the train.
01:41:48.120I would be more likely to find other strategies that didn't involve me getting run over.
01:41:52.380But my argument is basically similar to the conservative argument against porn, right?
01:42:00.940And similar to the conservative argument against –
01:42:23.020But I get a certain sort of provincial or tribal sense from you guys that you are kind of conservative in the classical sense.
01:42:32.340The people closest to you are more important than like all of humanity, big H, because they're the people closest to you.
01:42:39.420And I think that should be the scope for most people unless you're the president making irresponsible decisions about artificial intelligence or the CEO of a corporation making vampiric and predatory decisions about artificial intelligence.
01:42:52.100And it's like from our standpoint, I think that it's not like this cosmic thing where if AI succeeds, that means everybody's going to be a trode monkey.
01:43:01.160Or if AI falters, then, you know, we're all just going to go back to the woods.
01:43:07.340So many different lifestyles already exist, and cultures already exist.
01:43:12.360There's going to be huge pockets of homogenization due to technology.
01:43:15.400But there's also going to be like huge pockets of individuation among people, individual people, and differentiation among cultures.
01:43:23.660So I have actually a lot of faith that you guys are going to be okay.
01:43:29.040You're going to put those cultural barriers in place.
01:43:33.420And that is, I think, the value of conservatism, of being suspicious of change, because very often any push for change isn't necessarily going to be change for your benefit or your kid's benefit or your community's benefit.
01:43:46.920The change, this radical change, is more likely to benefit the people pushing for it.
01:43:51.920It may be mutual, but in the case of porn, drugs, maybe even the trains if you really care about, say, the bison.
01:43:58.040Or maybe the entire technological system if you don't want trash islands in the Pacific, microplastic in your balls, dead bugs everywhere, black rhinos shuffled off into heaven.
01:44:09.380These sorts of things, you know, it's ultimately the conservative or the anti-tech or the quasi-Luddite position, if employed properly, simply means I am going to remain human despite the advertising and despite whatever new gadget comes my way.
01:44:28.880Yeah, and my appeal to the people is, like, I don't want to stand in front of it either.
01:44:32.540I don't think stopping this stuff is possible.
01:44:35.200You know, it's like the war on drugs or the war on guns, the war on terror.
01:44:40.480But, like, what we're saying, and I think we're agreeing on, is it's going to have to happen from the bottom up and ethics and people.
01:44:47.120And I don't – that's going to be really tough because people are very flawed.
01:44:51.080No matter if they're in power or not, that's just how we are.
01:44:55.180But, you know, I think that is a possibility.
01:44:58.260But I do think, like we were talking about last night, Ted Kaczynski made some pretty good points in 1995 about the Industrial Revolution and its consequences for humanity.
01:45:07.660You're very wrong about what the mail is for, though.
01:46:12.320In the deepfake scenario, though, if I could successfully impersonate you and go to the bank or successfully impersonate you and go to your bedroom, right?
01:46:21.260Like, these things, you would consider that to be a crime.
01:46:25.360But you can't go to my bedroom as a deepfake.
01:46:27.440So with deepfakes, basically, it's the line between, like, what is caricature, what is cartooning, and what is impersonation.
01:46:35.460So, you know, a cartoon of, you know, Donald Trump dancing with dollar bills falling everywhere on the graves of Gaza children.
01:47:20.640Google, for instance, like, they have among the most advanced video generation AI, Midjourney too, right?
01:47:30.980There's all these guardrails in place to keep you from impersonating famous people.
01:47:35.540Then you have small-scale, malicious, kind of like cyberbullying deepfakes.