00:00:16.000Hey everybody, how's it going? Thanks for joining me this afternoon. I am Auron MacIntyre.
00:00:21.060Before we get started today, I just want to remind you that one of the ways we keep the lights on around here is, of course, subscriptions to Blaze TV.
00:00:27.500So if you want to support the show and you also want to get access to all the great behind-the-scenes footage from your favorite BlazeTV hosts, you need to head to blazetv.com and use the promo code AURON to get $20 off your subscription today.
00:00:40.960That's blazetv.com slash AURON to get $20 off your subscription today.
00:00:46.400Hey guys, if you have been following my channel for a while, if you're familiar with my work at
00:00:53.060all, you know that managerial theory, elite theory, is something that I find very interesting. It's a
00:00:59.960realm of politics that I think is criminally understudied, but one that I am obsessed with.
00:01:05.420So the nature of how managerial systems work together, how they impact our politics, and how
00:01:11.460they lead us to particular conclusions in our political life. These are all things that I
00:01:17.100really obsess over because I think that while this has already been a very dominant thing that we
00:01:22.860have needed to understand since basically FDR and the New Deal, it continues to grow in importance
00:01:29.820and it's shifting in very important ways. And one of the really important ways that is shifting is
00:01:35.840AI. AI has a lot of possibilities to shake up pretty much every aspect of human civilization
00:01:42.500from art to government to finance to the way that we work with each other, the way we even
00:01:48.180understand our own existence. I think people who undersell the impact that AI is going to have on
00:01:53.760the entire civilization are really missing the boat on this. Now, I get that there are a lot of
00:01:57.940technological advances that get oversold in their importance, but I really do believe that
00:02:02.340artificial intelligence is going to be a watershed moment in a lot of areas. And one of those areas
00:02:07.520is the way we organize our governments. AI is going to revolutionize the way that we understand
00:02:13.460our interactions with the state, the way that the state can attempt to manage us, which is something
00:02:19.400I think a lot of people are uncomfortable with. I'm certainly uncomfortable with, but something
00:02:23.720we have to acknowledge that the state is always trying to do and will get better at doing as AI
00:02:28.780steps into the frame. So I want to start by playing you a clip from the CEO of Citadel talking about
00:02:34.860what they think AI is going to be able to do. The area of recklessness is the spending
00:02:46.060of governments around the world who are all with little exception all spending well beyond their
00:02:51.820means. That's the recklessness of this moment in history. This is not a parallel to the 1920s
00:02:58.780in terms of the recklessness of the private capital markets. It's a story of the recklessness
00:03:04.680of government spending. Within the private sector, there's a huge question as to where AI will take
00:03:10.600us. And I was carefully taking notes and listening to what Larry has to say or to what Madame Lagarde
00:03:16.660has to say, because this is one of the big issues of our moment: will AI create
00:03:24.040the productivity acceleration that is, honestly, so hoped for in Washington
00:03:30.780and in the halls of government around the world as a way to overcome the
00:03:38.020profligate spending that we're currently engaged in? Like, the world
00:03:42.220needs a savior, and the hope is that AI is the savior that we need for productivity. And the
00:03:49.980challenge with this is, it may or may not be. We just don't know yet. So you'll see there that
00:03:57.860he used the word savior. Now, interestingly, a lot of Silicon Valley guys, when they're being honest
00:04:05.660about it, will acknowledge that ultimately they see their construction of AI as possibly a
00:04:12.340construction of a new deity, of creating their own god. And this is something that, of course, is deeply
00:04:18.820ingrained in human existence. We only have to go back to, you know, the creation of the golden
00:04:24.120calf by the Hebrews as an example of how deeply it is ingrained in our human understanding to want
00:04:32.820to create some form of God for ourselves, even when we know where the real thing is,
00:04:38.720even if you've been following the real thing through the desert, ultimately it's very easy
00:04:43.920for people to fall away and want to create their own God for many different reasons.
00:04:48.240Because, of course, a God you create is also a God at some level that you think you understand
00:04:53.340or you think that you can control or have influence over.
00:04:56.940And this is obviously something that we want to tell ourselves as limited human beings.
00:05:02.820And there's also a deep strain inside of, I think, particularly the Anglo understanding of the world to want to kind of replace God with a logical system, understanding that there is ultimately something that undergirds our reality, but we want it to be something that we control.
00:05:22.280We understand that we can kind of grasp in a very physical sense.
00:05:26.480And so in this way, an AI God could stand in for the divine and would be something we're far more comfortable with, even as we think of all the horrible implications that that could also carry.
00:05:38.860Many of the same people who will talk about creating an AI God will also acknowledge that a lack of control over AI could have very disastrous results for the human race.
00:05:49.200And so we have this weird pull towards creating something we know is possibly going to destroy us,
00:05:58.300or at the very least will radically change the way that we live, but we still feel compelled to bring
00:06:03.980this thing into existence. It's as if it's bringing itself into existence through us. This is a process
00:06:10.240that the philosopher Nick Land calls hyperstition. And if you're familiar with my channel, you also
00:06:15.700know I'm going to use a good amount of Nick Land as I discuss this issue. However, it's very
00:06:20.480interesting in this case that the Citadel CEO talked about AI not just as a god but as a savior.
00:06:27.260And he says that it's the spending of these governments that is ultimately out of control,
00:06:32.820and perhaps AI can produce the level of productivity necessary to help us escape this scenario. Now,
00:06:39.580that's a very interesting admission, because one of the things that I've speculated on routinely
00:06:44.640when it comes to artificial intelligence is the possibility of it stepping in for the
00:06:51.160current function of the managerial elite. So quick refresher, I've done many different videos and
00:06:57.220written an entire book called The Total State on the nature of the managerial elite and the world
00:07:02.460that they have created. But we're going to need some of those basics to have this discussion. So
00:07:07.320I'll give you the cliff notes. But if you want a lot more, there are plenty of videos. In fact,
00:07:11.660There's entire playlists of which this video will be part that lays out kind of the thesis about the managerial elite, who they are, what they do, how they came into being, and the implications for our social organization.
00:07:24.280But as I've said before, scale is something that is incredibly difficult to achieve in human endeavors.
00:07:31.760One of the things that we've seen over time is that there seem to be natural boundaries to human organization.
00:07:37.500At first, it was something like the tribe, and eventually it evolved into something like the city-state. We saw empires of regional context and then perhaps continent-spanning empires and eventually global empires.
00:07:51.680And at each stage of this increase in complexity, we've had to find new ways to administer our societies.
00:07:59.860And the ways that we do this can vary quite greatly, especially as technological innovations change the way that we do things like communicate with each other.
00:08:12.180Just fundamental ways that our humanity can change as we scale up our civilization.
00:08:17.980And each one of these steps comes very often from an increase in efficiency: more efficient processes allow for greater output, which tends to allow you to scale higher and higher.
00:08:30.340However, to achieve this maximization of industrial capacity, economic capacity, social organization, everything else, we've had to create rather artificial structures on which to kind of lay our social skeleton.
00:08:49.200So one of the things that we've done is created these large bureaucratic institutions.
00:08:55.240Now, bureaucratic institutions have existed on some level throughout history.
00:08:58.800In fact, one of the core points of elite theory is that you tend to oscillate between more feudal societies and more bureaucratic societies.
00:09:07.480And so we obviously had bureaucracies before we entered the modern age.
00:09:11.640But the thing that makes the modern bureaucracy rather different is the level of technology that goes along with it.
00:09:18.520The ability to mass communicate, to instantaneously conduct a meeting or distribute orders or move
00:09:26.560dollars around, to manipulate the economy at scale. These things create a very different environment
00:09:33.000in which the velocity of information and dollars and everything else flows. And so it becomes
00:09:39.260really important for us to have this social scaffolding that allows the government to
00:09:45.000interact with its population and frankly manage its population in order to produce the efficiency
00:09:50.680that generates the outcomes it needs to scale. So for instance, we've invested a whole lot in
00:09:56.940large government bureaucracies, educational bureaucracies, media, economic institutions,
00:10:03.660all of these things help us operate our societies at a much higher level than we would have if we
00:10:10.740did not have them. It allows us to have nations that span continents, or empires that even
00:10:17.640span the world. We have lots of global interactivity because of this managerial structure. However, the
00:10:25.480managerial structure does have limitations, for a couple of reasons. One, the managerial structure
00:10:32.340requires us to become inhuman at some level. The managerial structure is looking to strip out the
00:10:39.540differences that different cultures, peoples, religions, and folkways contain. Because the more
00:10:48.220you can standardize the process, the more you can reliably produce the result. And really,
00:10:53.880efficiency is very often about reliably producing the same result over and over again. If we know
00:10:59.620what we're getting, even if what we're getting is not the best, then we can plan for it. And if we
00:11:04.360can plan for it, we can manage it. And if we can manage it, we can make sure to increase the level
00:11:08.660of efficiency at which those processes operate. And so it's really important to managerial structures
00:11:14.160that we don't have a lot of these issues like, oh, you have to take off a certain amount of time
00:11:20.180because of a religious observance or because you have children or you're unwilling to go through
00:11:25.320a particular process that makes the government more money or more powerful because you have
00:11:30.580certain expectations about the way your community should interact. Well, these are all big
00:11:35.020hindrances to managerial processes and they need to be stripped away. So turning people into less
00:11:41.960human forms is a critical aspect of managerial theory. However, ironically, another limitation
00:11:49.120on managerial theory is it ultimately is still human. There is still a human actor involved and
00:11:55.560there's only so much efficiency you can derive from the human being itself. The human needs sleep,
00:12:01.800The human needs rest. The human needs relaxation. They need to be paid. They need to have some kind of family interaction. As hard as our managerial elites are working to strip out these religious aspects and these familial bonds, eventually they're just degrading the quality of the very people they're attempting to manage.
00:12:22.400Because when you strip away people's connections to their community, to their family, when you take away their purpose, their religion, when you take away all these other codes and all these other relations that truly embed us in a human society in order to scale us up into these managerial constructs, you're ultimately robbing us of what makes us happy.
00:12:43.060And that's going to work for a while. But as you can probably see, as you look around you from the plummeting rates of happiness across pretty much all these metrics, you recognize that, no, there is a significant cost to this. Yes, we are scaling up. We are creating more efficiency, often in areas that don't really matter.
00:13:01.520You know, we're really good at flat screen TV making, very cheap flat screen TV making, but we're miserable. And so like, ultimately, yes, you are creating these higher levels of economic output and trade and organization, but it's costing these populations something.
00:13:17.380And ultimately, a lot of these CEOs realize that the burden of maintaining these human costs while trying to increase efficiency is a real problem.
00:13:31.760So it's been a push for a long time to try to figure out how you can transcend even the limits of the managerial class, right?
00:13:42.140Like we've scaled things up as high as we can get.
00:13:44.800We've produced the most efficiency that we can out of these human systems. We've made them as inhuman
00:13:50.320as we can while still operating them with humans, right? And this is, if you watch anything, if you go
00:13:56.480back and watch, like, Terry Gilliam's movie Brazil, or you look at other kind of dystopian future
00:14:03.480understandings this is an aspect of technology that is recognized over and over again that it
00:14:08.940locks humans into these deeply inhuman processes.
00:14:13.720Even guys like Ted Kaczynski, who had a very wrong idea about how to solve this problem,
00:14:19.760wrote extensively about how it was a very real thing that was constantly impacting the well-being of humans
00:14:27.560and had to change at some fundamental level.
00:14:30.520However, the plan, it seems, as you're listening to a guy like the CEO of Citadel there,
00:14:36.940is we're going to replace even the managerial class with AI.
00:14:42.780And that AI is going to be far more efficient at what it's doing
00:14:47.000because it doesn't need to take breaks and it doesn't have a family
00:14:50.420and it doesn't need to go to church and it doesn't feel bad about not having that stuff
00:16:34.880In fact, in many ways, this was the core promise of neo-reaction.
00:16:40.020Neo-reaction is a wide-ranging collection of different political outlooks, but broadly
00:16:45.340falls under this understanding that there's something critical that needs to shift in
00:16:50.380the core of the modern nation state in order to allow us to live in a better way. And one of the
00:16:56.660ways it was often thought that could occur, in NRx, was technological innovation that ultimately
00:17:02.840made smaller states possible. So what we've seen for a very long time, really for thousands of
00:17:09.600years at this point, is that increasing complexity has had mostly advantages and very few
00:17:16.280disadvantages. Now, over time, we've seen inefficient bureaucracies and sprawling empires
00:17:23.180come apart. They reach kind of the edge of the efficiency that they could produce through
00:17:28.320their, like, proto-managerial regimes, their kind of oligarchic or bureaucratic systems,
00:17:34.300you know, the Byzantine systems that tend to crop up in late empire. So there were downsides to
00:17:40.680maximizing, you know, the scale of your empire, but mainly you wanted to get up as far as you
00:17:46.940could to the edge of that without going over, right? Like, that was pretty much the goal, and not everybody
00:17:52.200succeeded in that. In fact, literally everyone failed. That's kind of why there is this natural
00:17:56.420life cycle of empires. But what we've seen over time is that with every iteration of this, the impact of
00:18:03.740scale got more and more powerful, especially after we hit the Age of Discovery, after we hit kind of the
00:18:08.860global capitalist network. Once we start having the ability to trade across borders and have
00:18:13.620these economic advantages, once we see science start to accelerate in its, you know, discoveries
00:18:20.460and its productions and interact with capital, we see that this whole process speeds up quite rapidly.
00:18:26.500And the hope was, like, ultimately, yes, scale had been the answer for a long time, but if we get
00:18:33.340enough technology to overcome the advantages of just raw human beings, raw numbers of human beings,
00:18:40.380then maybe that means we can scale society back down. And this was what was often called patchwork
00:18:46.080in neo-reaction: the idea that you would have these little patches, these mini-states, which you would
00:18:51.860probably think of in a more classic sense as a city-state, a place like Singapore. You know, the old joke
00:18:57.100is, let there be a thousand Liechtensteins. This idea that these smaller sovereign areas could
00:19:03.500start to emerge. And you can start to see how that might be possible in our modern age. Of course,
00:19:09.680the big advantages of scale right now have been the ability to just move mass amounts of people,
00:19:17.820mass amounts of economic force to produce a large amount of military force to leverage your mass
00:19:26.000production and mass consumption as an overall advantage. This has been why we keep seeing the
00:19:32.080rise of larger and larger states. That's why a place like Europe considered turning into the
00:19:36.600European Union instead of just having smaller states, because they figured out they had to start
00:19:40.860competing with places like Russia and China and the United States, and they could not do it in
00:19:46.100their current form. You can go back to Machiavelli and the desire to unify Italy as an understanding
00:19:52.420of why scale continued to be so much more powerful. However, we're starting to see scale hit its limits,
00:19:59.460right? That's kind of the whole point. That's why they're looking for AI as a new way to do
00:20:04.420things, because scale has hit its limits. And if AI introduces a way in which we can replicate the
00:20:10.240advantages of a managerial bureaucracy but without the number of people or the number of resources
00:20:16.900necessary to operate one, well, then we have a real discussion on whether or not we can scale
00:20:22.760down our societies. Because if we're seeing the negative impacts of scale, we're seeing
00:20:28.540the social dissolution, the loss of national identity, the loss of social fabric, the spiritual
00:20:35.080degradation that occurs, the fact that your elites seem to separate and hold in contempt
00:20:41.780the homeland from which your empire sprung. And if we're seeing all these negative impacts, maybe
00:20:47.540we can avoid so many of those by scaling back down into more reasonably sized human communities
00:20:54.020that can leverage the advantages of things like AI to still operate at a very high level. We also see,
00:21:00.380of course, increases in kind of these wildcat technologies that would allow a city-state to
00:21:09.840protect itself. So things like drone warfare make it very clear that all of a sudden, maybe you don't
00:21:15.820need an army of, you know, millions of people to be a significant threat to those that want to
00:21:22.400invade you. Perhaps simply having a well-placed set of tactical drones, an understanding of how
00:21:29.620to use certain technologies, asymmetrical warfare, could make you a dangerous enough target to where
00:21:35.420most people, even those of larger sizes, are not going to want to attack your city state.
00:21:41.040And so this creates a moment where we can start to consider shrinking back down into these much
00:21:48.620smaller human arrangements. And that's important because in addition to solving all these other
00:21:53.640problems of social scale, it also provides the possibility that we could have sovereignty at
00:21:59.300this smaller unit of governance, which, again, just radically changes the way that humans can
00:22:05.360understand a healthy order, a healthy social order. They don't have to always make the trade
00:22:11.180of scale. There might be other options. Of course, there are problems with trying to scale down,
00:22:20.060right? Like it doesn't mean that AI just has to be used to scale things down. In fact,
00:22:26.600it's very clear from guys like the Citadel CEO, they're much more interested in using AI to
00:22:31.560continue to scale up. AI could create these city states. It could create these smaller
00:22:38.040civilizational zones, but it could also be used by the same people who are currently operating
00:22:43.940these massive networks, these attempts to globalize human existence to achieve that
00:22:50.320without costing them as much, right? Like they can ultimately just replace the managerial elite and
00:22:56.300all of the negative consequences of maintaining these massive human bureaucracies. You can even
00:23:01.240transcend the limitations of human organization, functionally, to scale up to the maximalist extent.
00:23:07.920And so, while many people like myself, if we're bullish on AI at all, it's as this way
00:23:16.540to replace the managerial elite and allow for smaller civilizations. And that's something
00:23:20.160I hope AI does. Like, I have many, many skepticisms of AI. Again, if you watch this channel, you know
00:23:26.900that's true. But the one upside, the one continuous upside, that I've been sold, that I would like to
00:23:32.580believe in, that I think would be a true value to humanity, is the removal of the need for mass
00:23:38.460managerial systems in order to operate humanity at scale, to maintain these massively scaled
00:23:44.180civilizations. But there's no guarantee that that's what AI gets used for, even if it can produce that
00:23:50.160outcome, and I think it can. It doesn't mean that that's the only way it gets deployed.
00:23:56.440If AI enables the advantages of scale just as efficiently, or at least at the same level of efficiency, as it shrinks those advantages of scale down to smaller nations, well, then ultimately these large civilizations that have both the advantage of scale and efficiency through AI will probably still win out, right?
00:24:18.040So it does the opposite of solving our problem here. And so we run into this very serious problem,
00:24:25.560because it becomes very clear that even as we're hoping AI brings about this revolution that
00:24:32.000frees us from the tyranny of massive empire, of globalistic understandings of civilization, it
00:24:38.540could also do exactly the opposite. And we have to be very careful about how this gets done.
00:24:45.260Because if we are looking to transcend the limits of human organization, we are by definition looking to become less human, right?
00:24:59.240Like that is one of the things that is critical about humanity.
00:25:03.220If you think of somebody like Martin Heidegger, he talks about Dasein, the nature of human beings, like the actual state of being.
00:25:12.640And one of the things he says that really makes Dasein what it is, what makes human existence what it is, is its recognition of its own limitations.
00:25:21.860It's turned towards its own mortality.
00:25:26.040And because I will not live forever, my life has certain meanings.
00:25:31.160Everything in my life has a very particular context that I cannot escape.
00:25:35.060And unlike a rabbit or something else that does have mortality but is not conscious of that mortality, my consciousness of that ending, that limitation, really does radically alter the way that I behave.
00:25:49.900And we see this again. You can look at, you know, tales like Interview with the Vampire or Highlander or Tuck Everlasting, if you remember that book from elementary school.
00:26:02.800And these might have been, like, the first time that you really contemplated what it would mean to live without limits, right?
00:26:10.180What would it mean to live 100 lifetimes or 10 lifetimes or 1000 lifetimes?
00:26:15.580How would that change your interaction with the world?
00:26:18.340It wouldn't just be a small thing. Like the fact that you now have an eternity, that you now have a completely limitless existence from at least a temporal perspective is a radical change. Again, you can look at Lord of the Rings or you can look at even Warhammer 40k, you know, and these are fantasy properties or sci-fi properties in a lot of ways.
00:26:43.440And of course, there's a silly aspect to this. But if you look at the elves in those stories, they also kind of endure this change in understanding, right? Like, they live very different lives than the limited creatures around them, because they don't expect to have a natural death, or at least not one for a very long time.
00:27:03.280And so their existence is radically changed. Like, their ontological understanding is radically different, because they do not have that natural limitation. And the temporal example is one that I think is most relatable, because it's one that we've seen explored on a regular basis when it comes to our fiction.
00:27:24.800But one of the things that we haven't seen explored as much and really kind of, I think, popped into our consciousness later on, especially with something like the explosion of Baudrillard and the Matrix, you know, popularizing kind of his understanding is what happens if we transcend certain aspects of our humanity in a very real sense, not just our temporal humanity.
00:27:46.860But what if we, you know, transcended our physical limitations, our limitations when it came to organizing our society?
00:27:55.320What if what if those things change in an incredibly radical way?
00:28:01.720You know, a lot of people will point, very correctly, to C.S. Lewis's fantastic book, That Hideous Strength, which was based off of his famous essay where he talks about men without chests.
00:28:14.560You know, this is probably if you've heard C.S. Lewis quoted, you've probably seen something from these works.
00:28:21.180And, you know, this is itself an exploration of many of these themes.
00:28:26.080So it's something that I think has entered into the modern consciousness. You know, Mary Shelley's Frankenstein is often kind of held up as the first sci-fi work.
00:28:37.500But ever since we've had science fiction, it has at some level addressed this idea that we could transcend not just our mortality, but other aspects of our human existence and how that would warp and change us in very serious ways.
00:28:51.840And I think we have to look at the same thing when it comes to how we organize societies. If we transcend all the classical limitations of the ancient city on levels of scale, on familial relations, religion, everything else, we fundamentally change the nature of humans in a way that we might not be able to get back.
00:29:14.200That's kind of C.S. Lewis's point in The Abolition of Man: once you have created the last generation that knew those limitations, knew human nature, knew what it was to be human at your core, then it's possible that you could lose everything going forward, because the next generation would simply have no understanding of humanity in a very real sense.
00:29:37.380You would have abolished man because you would have taken all the limitations, all the things that are attached to what people will cynically call our meat suit, right?
00:29:48.440And this is often, again, a dream that is nested in many classic desires.
00:29:55.420Gnosticism is often pointed to as this desire to transcend the physical in a real way, the human limitation.
00:30:01.980Many spiritual practices are about pushing the body to its limitations to kind of reach
00:30:07.980the ability to kind of touch the divine, the noumenal, the thing-in-itself, in a way that
00:30:16.220you could not do otherwise. And I think this is ultimately where many people end up, including guys like
00:30:21.860Nick Land, who, again, I respect his thinking quite a bit, but whose objectives I often disagree
00:30:28.380with. One of the things that he often looks for is this Nietzschean transcendence of human
00:30:35.040nature as such to achieve a higher existence beyond our physical bodies, like things like
00:30:41.520space travel, things like exploring the galaxy, achieving certain scientific understandings,
00:30:49.040and I think in his case, understandings of the world would require a disembodiment of human
00:30:54.600intelligence. And that's kind of what we're trying to do with AI here. And that's, I think, what many of
00:31:01.280these people are ultimately giving birth to, whether they recognize it or not. You know, when I interviewed
00:31:06.400Nick Land, he said that the managerial elite were kind of this human security system against us
00:31:12.900finally escaping into something different, making this, you know, Nietzschean transcendent
00:31:18.900move. And there's probably some truth to that. Like, there is something innately human
00:31:25.380that is still wrapped up in the managerial elite, even if they have made us less human
00:31:31.740in their journey towards kind of our current society. But I think that he might be wrong
00:31:38.280about the role that the managerial elite are ultimately playing. Perhaps he would see the
00:31:44.540destruction of the managerial elite through AI as just kind of the final, you know, destruction
00:31:50.560of the human security system and the final leap to transcend. But I don't think you would
00:31:54.900even get to that point if it wasn't for these managers. Because one of the things you have to
00:31:59.000remember about AI is, while you think of it as something that just exists in your pocket when
00:32:04.120you click on, you know, Grok on Twitter or, you know, Gemini on Google or something else, ultimately
00:32:10.780what you're really touching is a large amount of, like, actual physical resources that are being
00:32:17.040expended. We've already heard about the explosion of AI and how that's impacting different areas:
00:32:23.400AI companies buying up large tracts of land, you know, drawing huge amounts, vast amounts, of
00:32:29.860power from the grid, polluting water as they kind of create these massive plants. In
00:32:35.900addition to creating these massive AI plants and the environmental and human impact that they're
00:32:41.000causing, they also create, you know, shortages in things like chips for your computer, RAM and
00:32:48.700video processing chips, and these are essential for people's day-to-day lives. So as you get further
00:32:55.020and further down this AI path, it's becoming harder and harder to have, like, a personal PC experience,
00:33:01.020to have those things, but they are all being put into like these AI farms. And so you're being
00:33:06.380removed from even that computing process. It's becoming more and more abstract to you as a human
00:33:11.760being. You interact less and less with the physical components of computing and the digitization of
00:33:17.820the world. And that becomes more and more this self-contained sealed off process that is occurring
00:33:22.980away from human input, which is, again, what Nick Land predicted in his theory of accelerationism.
00:33:30.140Just so you understand, when we're talking about techno-capital acceleration, we are
00:33:35.720not talking about political acceleration. That's a totally different thing: the idea that you just
00:33:40.600make things worse because that allows you to build your utopia after the collapse of society.
00:33:46.520That is a completely different thing. Unfortunately, that's become the popular understanding of the
00:33:50.940phrase accelerationism, so I just want to make clear that's not what I'm talking about at
00:33:54.580all; that's a stupid ideology. What I'm talking about is this techno-capital acceleration that
00:34:00.800brings us to the possibility of the singularity, where a non-human intelligence takes
00:34:08.020that which was human and brings it out somewhere else, takes off to the stars.
00:34:13.660And this is really Nick Land's understanding: that humanity is valuable in the sense that it
00:34:20.080produces intelligence, and that intelligence itself, intelligence for intelligence's sake, is the goal.
00:34:26.060It's not intelligence for the betterment of humanity or the continuation of your people, but
00:34:32.120the creation of intelligence for intelligence's sake. Now, Land prefers certain cultures for this.
00:34:38.520He says very specifically that the Anglo peoples have an incredibly high rate of production of
00:34:44.560intelligence and have a unique ability to create this Faustian transcendence. He also
00:34:52.000has spoken about East Asian and Jewish intelligence as being impressive and
00:34:58.260possibly able to be melded into this project. But his whole point is that ultimately
00:35:02.640what we're looking for is this disconnection from the material. We want this creation of
00:35:10.440intelligence that can do things that no human could do, and that as long as intelligence is bound
00:35:16.260up with human concerns, it will never reach its full potential. He calls
00:35:21.800this monkey business: as long as our intelligence is still trying to keep our monkey bodies
00:35:27.580alive, in his words, then it will never create this kind of utopian understanding, or at least
00:35:33.280this transcendent understanding. Utopian is the wrong word for Land; he recognizes, I think,
00:35:38.000many of the horrors that could come out of this but ultimately embraces them in the search for this
00:35:42.160transcendence. So I think it's a very interesting moment, because while a lot of
00:35:50.940these tech companies are familiar with Land's work, I'm not sure they always understand the
00:35:56.660implications of what they're doing. I certainly don't think that the managers who are attempting
00:36:01.780to swap out their current cadre for artificial intelligence really understand how
00:36:08.980irrelevant they could possibly be making themselves. I don't think they would be
00:36:12.060so eager to engage in this if they thought it would come to that.
00:36:16.700However, they really can't help themselves, because a lot of the strategy that the managerial elites
00:36:26.420have deployed has made it impossible for them to continue to govern in any other way.
00:36:31.300You know, the managerial elite have created these vast scales that they obviously can't ultimately keep up with. As he said, the efficiency simply does not exist. And so they're really out of road when it comes to increasing the control of the managerial bureaucracy and dehumanizing their populations.
00:36:51.600But also because of some of the decisions made by the managerial elite to create this process, they've made their own governments more and more difficult to manage while increasing their scale.
00:37:01.460So, for instance, one guy, Ed Dutton, has made the assertion, and I think this is largely correct, that Christianity was kind of the ultimate technology of scale previously, like before the modern world.
00:37:17.540Because before Christianity, religion was largely very tribal. It was really tied to your ethnos, your people in a very definitive way. Usually, in many cases, you had household gods that couldn't even be worshipped by anyone outside of your household.
00:37:34.120If you read The Ancient City by Fustel de Coulanges, you'll find him talking about how the Roman Empire basically had to break the idea of household gods and get everyone to worship more central gods before it could really create this larger civilization.
00:37:53.100You couldn't have people having household-only gods or even city-only gods.
00:37:58.200You need people to be centralized under these larger religious umbrellas.
00:38:02.020That's why the cult of Diana and others became more popular: because they allowed for larger
00:38:07.080levels of social organization, because people simply cannot interact with those they
00:38:12.180do not agree with religiously. This is why multiculturalism is a lie, and why we're seeing
00:38:17.000so many problems in our world right now, because all politics is fundamentally theological and
00:38:22.980you need to share a particular theology to ultimately have a similar goal. So one of the
00:38:28.780things that christianity lets you do is maximally scale up uh civilization under the banner of
00:38:35.820christendom even though you had these other tribes and ethnic identities that continue to exist
00:38:41.740christianity had a nice balance of maintaining ethnic specificity and particularity without uh
00:38:48.180you know without it didn't crush that but it also allowed you to cooperate with other christians
00:38:53.140now that wasn't always the case of course there are many christian civilizations that fought each
00:38:56.700other. But we do understand there was kind of this somewhat pan-European identity that existed
00:39:03.420under what we call Christendom, right? And what tied that together, what allowed so many peoples
00:39:10.340to kind of exist under the same banner at different times, was this idea of Christianity.
00:39:16.820Now, what our managerial elites have done is turn our new theology into kind of a secular humanism.
00:39:21.820That's become the theological practice that ties our current empires and even our globalist
00:39:29.680understandings together. However, as you bring in people from other dynamic cultures, you bring in
00:39:37.220Muslims, you bring in Hindus, you bring in people who do not believe in Christianity and also don't
00:39:43.780really subscribe to this secular humanism, then you start to see the clash of the theologies play
00:39:50.160out, right? You recognize these can't coexist. Now, of course, the idea from the globalists is,
00:39:55.140well, we'll just kind of globo-homo Islam before they caliphate us, right?
00:40:00.760And they've been winning that bet for a while, but it's starting to seem clear in places like
00:40:05.480Europe that it might lose that bet, that actually it can't really do that anymore. And this is a
00:40:09.840huge problem for the managerial elite, because they don't know how to handle truly religious
00:40:14.700people. They've been busy wearing away the Western European instinct
00:40:21.440toward religion, the Christian instinct for religion, for a long time. But the others
00:40:25.720that haven't been in that constant conflict, like Islam, have the ability to come in
00:40:31.300and cause serious damage. And so the idea is ultimately that AI could solve this problem,
00:40:37.860because then I don't have to worry about the compatibility of Islam with Hinduism and
00:40:43.740with Christianity, because we've functionally made a God. We've functionally made a savior,
00:40:48.700as the CEO said in that video, that exists at a top layer above everything else. It has
00:40:54.900this ultimate sovereignty that allows it to dictate down to all these other religions.
00:40:59.940And you can see this because they try to push this idea that our society is religiously objective.
00:41:06.860Of course, that's not the case, but they sell us this idea. And it's largely predicated on the idea
00:41:12.200that this secular humanism will take the place and become our political theology in a very real
00:41:19.240sense. But we have more problems because AI is not itself neutral, right? Like AI is deployed
00:41:27.060by someone. It's been programmed by someone. You'll notice that right before the war in Iran,
00:41:34.160the Trump administration had a real knock-down, drag-out fight with AI companies, because the worry was
00:41:40.600that they were building restrictions into what the AI would do when it was in the service of the
00:41:47.460United States government. What will it target? Who will it kill? What information will it keep?
00:41:52.880What will it collect? And that's a huge problem for a nation state, for a country that wants its
00:42:00.260own sovereignty, because if there is some level of restriction built into your military technology
00:42:06.240that is dictated by some outside force, you are no longer sovereign, right? If the US is not making
00:42:11.960decisions on what it targets, who it targets, who it kills during a war, is it really a sovereign
00:42:18.020nation? Or is there another layer of sovereignty that exists over top of it? If an AI company gets
00:42:23.720to decide where your military goes and how it deploys and what it targets and all these things,
00:42:29.900well, then you don't control your military; the AI company does. And this is an incredibly dangerous
00:42:36.580scenario. And then, however you feel about the Trump administration and the current
00:42:41.320war in Iran, you can recognize the huge problem that's baked into this: because the people who
00:42:46.960design the software, the people who create the AI, are building their own presuppositions
00:42:52.860into it, right? And so the AI is mimicking their understanding of the world. It might
00:42:58.280pervert it through different processes or its own reasoning, but ultimately they are still feeding in
00:43:04.740the core text, the core understanding, and therefore the core theology that your AI has. And
00:43:11.160so a real question we might have to ask ourselves very soon is: what religion is my AI?
00:43:17.260That's a really, really scary question, but it might be one that becomes very important very
00:43:24.580quickly. Because if we're turning over these decision-making patterns to AI, then that means
00:43:29.980it will deploy the theology of those that programmed it to ultimately make massive decisions
00:43:36.280about our lives. And if it's operating at a global scale for a managerial elite that is looking to
00:43:41.180maximize its reach, that means that we will probably have the most inhuman religion you
00:43:46.100can imagine informing the morality of those that are making the most important decisions
00:43:51.340for a civilization. Now, one of the underlying assertions of accelerationism is that changes in
00:43:59.340AI are functionally inevitable. You can't stop them. And in fact, they're happening so quickly
00:44:04.280that you can't even control them. And so much of this will take on a life of its own because we
00:44:09.860simply as humans are seeing the decision space for ourselves collapse. We don't have time to
00:44:14.980deliberate over where AI is going before it already gets there. And so it could be that all
00:44:19.820these questions are largely irrelevant because the AI production will continue apace. But I will say
00:44:25.020this, we still do not have this independent AI in any kind of energy-efficient way. And so there's
00:44:32.800also a real battle going on because AI requires this massive infrastructure, this incredible drain
00:44:39.260on power, this destruction of the environment, which stands in real opposition to a lot of
00:44:45.400people's well-being, and especially the emerging concerns of the New Right, which finds itself
00:44:50.760more and more environmentalist by the day. Am I going to allow AI to destroy my homeland so that it
00:44:57.000can scale up civilization for globalists? How long can a civilization, or especially a globe-spanning
00:45:03.500empire based on AI, operate if people start targeting AI centers and saying, "I don't want
00:45:10.700this around anymore, I don't want this controlling me, and I know how to stop it"? These are very
00:45:15.440fragile systems; this is really a glass cannon that is being constructed. It might be able to
00:45:21.420scale up and create incredible amounts of efficiency if it's operating, but we've already
00:45:25.880seen with things like air travel and just-in-time delivery that our civilization is already losing
00:45:31.440the ability to maintain these systems. And so the real question is: can AI develop
00:45:39.140quickly enough to maintain itself, to a point where it doesn't need humans to do this
00:45:45.180anymore and doesn't have to take into account the concerns of humans? Because otherwise there's
00:45:50.240always the human option of simply destroying AI in its crib and resetting
00:45:56.980things, which would itself have disastrous results. So I don't have any immediate solutions to this,
00:46:02.520and I know this isn't a super happy episode, but these are just things that I think
00:46:08.440about, things that I'm trying to understand, things that I think are critical to the future
00:46:12.100of our country, and I think our humanity as a whole, and things we are not thinking enough
00:46:17.600about, in my opinion. So I just want to start the conversation with my own theories. If you have
00:46:22.560some of yours, you have some input, by all means, please put it down in the comments below. I'd love
00:46:26.460to see what you're thinking about this. But that said, let's go to the questions of the people.
00:46:33.680Manahute says: "The concept of progress acts as a protective mechanism to shield us from the
00:46:37.980terrors of the future." Frank Herbert, Dune. Yep, fantastic. And of course, Frank Herbert thought a
00:46:43.960lot about this before anyone else did. He also has the famous line from Dune about
00:46:49.120men turning their thinking over to machines but then being ruled by other men who
00:46:53.300understood how to use those machines. All of this is very relevant; Dune is a very relevant book
00:46:58.360today. It's not just Star Wars slop; Dune is actually a pretty important book.
00:47:02.740Jackson Day says: "Don't worry, if it does create a productivity boost, governments will just spin
00:47:10.040that too." Yeah, they will, right? Which is kind of its own problem. The machine continues to
00:47:15.120feed itself; it continues to need to expand; there's never enough for it. But that actually
00:47:20.740makes me more concerned, not less, because that means that even once we do achieve
00:47:25.580this maximalist efficiency from AI, the desire will simply be to make things less human,
00:47:31.200to produce more AI centers, to convert more and more of our civilization to this other efficient form
00:47:37.100of production. We could be seeing the move from mass managerial structures to AI structures
00:47:43.740as the same level of civilizational change as the Industrial Revolution:
00:47:49.340in the same way it changed humanity and how it understands itself, we might be seeing that with AI.
00:47:54.580David Doden says: "Scaling down administrators is a fool's errand. Many of them are invented
00:48:03.620to create ethnic patronage networks. AI affirmative action admins, say urban airport TSA, is no better."
00:48:10.980Yeah, and again, that's entirely possible. But this is true about AI in general, right? One
00:48:15.920of the problems with AI is that it destroys employment across the board. It'll destroy patronage
00:48:21.000networks just as easily as it destroys all these other employment opportunities
00:48:26.440it's going to take from human beings. So the automation of ethnic employment networks creates
00:48:31.020its own problems that they'll have to tackle, but I really don't know how that will
00:48:35.880ultimately be spun out. He also says: "Their concern is the national debt. How can Fink and buds deal
00:48:42.060with a global government debt default? America's remittance scheme is still required to keep their
00:48:49.720usury schemes from blowing up. If AI replaces the usury managers in a healthy way,
00:48:56.520then that is a win." Again, possible, entirely possible. Like I said, this is one of
00:49:04.440the few positive aspects of AI that I've really had explained to me. How will AI benefit
00:49:11.000humanity? This would be one. Though honestly, I'm still pretty skeptical this is ultimately going
00:49:16.900to be the way it is finally deployed. All right, guys, well, we're going to go ahead and wrap things
00:49:22.740up, but I want to thank everybody so much for watching. It's been fantastic to speak with you.
00:49:26.760If it's your first time on this channel, you need to go ahead and click subscribe on YouTube, the bell
00:49:31.540notification, all that stuff, so you know when we go live. If you want to get these broadcasts as
00:49:35.960podcasts, you need to subscribe to The Auron MacIntyre Show on your favorite podcast platform. And when
00:49:40.060you do, leave a rating or review; it really helps with the algorithm magic, which I know is a little
00:49:45.100ironic after this speech. Thank you, everybody, for watching. And as always, I'll talk to you next time.