In this episode of War Room Battleground, host Joe Allen talks to Sam Hammond, chief economist at the Foundation for American Innovation (FAI) and author of AI and Leviathan, about what it means to be a technology entrepreneur in the 21st century.
00:00:55.000I'm Joe Allen with War Room Battleground.
00:00:58.000If you followed my travels over the last few months, you know that I have crisscrossed the country multiple times, even at one point ending up in Switzerland interviewing robots who, as you might imagine, turned out to be racist.
00:01:11.000I mainly tried to keep my company confined to transhumanists, Luddites, the occasional normie, and, of course, the War Room Posse.
00:01:24.000In recent weeks, I found myself in some very strange situations.
00:01:29.000One of the more unique was a Latin Mass in a city I won't disclose, in which all the women, of course, had their heads covered.
00:01:39.000There were children everywhere, beautiful iconography and the strict social order you could see in the health of the family.
00:01:52.000As I met them, you could see how that social order produces a certain type of human being, a human being who is devoted not only to family but to God, a human being who is infinitely capable, I believe, of responding to the future in a way that does not destroy the past.
00:02:12.000Now, after the mass, I found myself sitting outside of a coffee shop.
00:02:17.000I was speaking to some of the young people. Some of them had their spouses with them.
00:02:21.000Most all of them will soon have children, and they talked about what the future would look like for their children.
00:02:30.000How would they protect their children from the predations of tech corporations in a future in which tech corporations basically rule the world?
00:02:40.000What was really interesting about that was sitting right across from us was an anarchist.
00:02:46.000He was sitting and working on a pair of boots. He's a cobbler by trade.
00:02:53.000The anarchist was diametrically opposed to the Catholic way of thinking.
00:02:58.000He clearly was not religious in any dogmatic way, very much obsessed with esoteric traditions, otherwise known as the occult.
00:03:08.000Certainly, he was not into the kinds of rigid social order that a Catholic or any other kind of deeply religious community would produce.
00:03:17.000And yet, he agreed completely with the assessment that tech corporations are a primary threat to human life.
00:03:25.000Now, a few weeks later, Thanksgiving, I found myself in another strange kind of juxtaposition of social schemes.
00:03:35.000In the afternoon, I joined a group of families who were deeply Christian.
00:03:43.000In regard to technology, they are as suspicious as I am.
00:03:47.000In regard to religion, let's just say that if I'm going to heaven, it'll be after them.
00:03:53.000But you could see in these families that same deep devotion in the way in which it expressed itself outwardly in their marriages, in their children, in their homes, in the way they order their lives.
00:04:06.000Later on, I found myself in another and perhaps more exciting milieu.
00:04:12.000I was invited by one Sam Hammond to a Thanksgiving dinner comprised mainly of what we would now call tech accelerationists, although in the old days you would call them transhumanists.
00:04:24.000The turkey was delicious. The stuffing was as well. The wine, to the extent I sipped it, was not bad.
00:04:32.000And I'm not sure, but there were bottles of Bryan Johnson's snake oil lining the counter, and it may be that I myself imbibed some of this bizarre immortalist elixir.
00:04:46.000Now, in the same spirit of consilience that anyone would approach a Thanksgiving dinner, we all discussed our differences civilly.
00:04:54.000And in the desire to continue that conversation, I would like to welcome the War Room audience to encounter the mind of the futurist and chief economist at the Foundation for American Innovation, Sam Hammond.
00:05:10.000Sam, thank you very much for joining me.
00:05:14.000It's got to be the Bryan Johnson snake oil.
00:05:17.000So, Sam, as we've discussed, your imagination is monstrous.
00:05:23.000You basically have a brain full of Shoggoths, their tentacles creeping out of your eyes, creeping out of your ears, and in the bizarre words that you speak, casting the future in terms of a singularity.
00:05:37.000Your book or pamphlet, AI and Leviathan, is absolutely brilliant, however nightmarish this future that you paint may be.
00:05:46.000If you would, just give the War Room audience a sense of what you're trying to communicate with AI and Leviathan.
00:05:57.000Yeah, so the inspiration comes from an essay by Tyler Cowen called The Paradox of Libertarianism, where he noted that, you know, libertarians have been fighting for small government, limited government for years.
00:06:08.000We also value markets, creative destruction, the wealth-producing propensity of capitalism.
00:06:13.000But maybe these things were a package deal.
00:06:16.000You know, we got the welfare state, we got the administrative state, the managerial state as a byproduct of capitalism and the success of the Industrial Revolution.
00:06:23.000And so that, you know, sort of shook the foundation of my worldview.
00:06:27.000I was raised and grew up pretty libertarian.
00:06:30.000But coming to understand that these sort of enemies that we fight, like the Leviathan, the state, got stronger as a byproduct, as a bundled package deal with prosperity.
00:06:42.000So looking ahead, are there other package deals with AI and the AI transformation?
00:06:47.000And so the way I sort of see it is that we are sort of sitting on a knife edge.
00:06:51.000You know, the powers of AI are incredible for everything from, you know, healthcare, biomedicine, education.
00:06:58.000But they also are incredibly powerful tools for surveillance, for censorship, for social control.
00:07:03.000And there's a kind of race dynamic going on between the state and the rest of society.
00:07:09.000Will we end up in a digital panopticon a la China where, you know, there's a group in Shanghai that runs, you know, the surveillance systems?
00:07:25.000Or will we sort of start to fragment as the capabilities that are now today only possessed by the CIA or Mossad or by state agencies become sort of democratized and we can all rebundle our organizations around smaller communities.
00:07:39.000And I sort of want to walk a middle ground where we can have our cake and eat it too.
00:07:44.000And I think it's going to be a very narrow path.
00:07:47.000And, you know, one of my roles or purposes is to try to communicate that this is a package deal.
00:07:53.000That you have these sort of naive techno-optimists that think we're going to just plow forward into the brave new world and everything is going to be hunky-dory.
00:08:02.000You have people who think we're just straight up doomed.
00:08:05.000I think we're somewhere in between where we're going to have to make very hard trade-offs.
00:08:09.000And at the very least we should be communicating those trade-offs to the public.
00:08:12.000You know, one thing I really appreciate about your perspective, Sam, is that you don't pull punches and you've never been shy about stating things that might be shocking to normal people.
00:08:22.000Now, I'm not a normal person, so I'm not shocked.
00:08:29.000But your vision of the future is deeply informed by your education as an economist, your political education.
00:08:36.000You had described kind of a progression from a naive anarchist to one who is much more open to the possibilities of the uses of the state and maybe even the inevitability of a large degree of centralization.
00:08:51.000And the pairing that I read here in AI and Leviathan is particularly interesting because on the technological level you basically take as axiomatic the claims of Ray Kurzweil and other futurists that we are indeed heading towards a technological singularity.
00:09:12.000And you're looking at how the state will respond and, you know, spoiler alert, but you say that most likely it will stabilize into something Chinese-style:
00:09:23.000one-world or one-state control, one sort of centralized leader using AI to control the populace.
00:09:32.000On the other side of that, you see something more akin to liberal democracy as it evolves into a high-tech society where corporations basically take up that role, and in between, anarchic states.
00:09:44.000My first question regarding the technological singularity, why do you believe that in fact these technologies will keep increasing at an exponential pace?
00:09:56.000And do you really believe that by say 2045, we'll see something like Ray Kurzweil's view where we have AIs that are millions of times smarter than human beings, most human beings locked into those systems with trodes, being regularly genetically updated to keep up with the machine?
00:10:14.000Is that really the generalized future you see us going towards and why?
00:10:18.000Yeah, I don't know if I can put specifics on it. Predicting the future is hard. Right.
00:10:25.000But some things are more easily predicted. So, maybe setting aside the bogus climate science,
00:10:32.000it's easier in principle to predict, you know, one degree of warming over a century than what the weather will be in a month. Right.
00:10:38.000Because the weather in a month is a random dynamic system, and similar with societies.
00:10:42.000So, you know, I can say with confidence that the world of 2045 will be at least as different seeming to us as the world of 1950 was to the world of, say, 1650, just radical transformation.
00:10:54.000And what that looks like in practice will in part be up to us.
00:10:58.000But, yes, I mean, you know, one thing I think with some of the more naive techno optimists where they sort of get their overconfidence is the fact that we've lived through a period of rough relative stagnation over the last 40, 50 years as we've shifted from building actually new technologies to offshoring and globalization and these sort of substitutes to true innovation.
00:11:21.000And as we exit that era of stagnation and re-encounter history, like history has restarted, we have to be prepared for a very tumultuous transition.
00:11:30.000You know, I'm someone who thinks that the mind is essentially computation.
00:11:36.000And I think that is a prior that maybe makes me more open to the idea that we're going to recreate mind in a digital substrate.
00:11:44.000Right. And I don't just take this as axiomatic. I think we're partly seeing it play out. Right.
00:11:49.000We're seeing similarities between these vision models and their internals and the way our visual cortex works.
00:11:54.000We're learning a little bit about how human language works by studying these language models.
00:11:59.000And so it's less that we're building this totally alien technology, although it is alien in some respects,
00:12:05.000but we're really building a simulation or emulation of ourselves, but in a format that is potentially unbounded. Right.
00:12:14.000When you say the mind is computation, are you using computation as a metaphor for what happens inside the human brain or, as I would say, in the human soul?
00:12:24.000Or is it something more fundamental that the human mind and the computer truly do share the same sorts of patterns and the same sorts of processes that lead to what are what we call mind?
00:12:38.000Yeah. And I think to say the mind is computational is not to say that it's a classical computer. Those are two different things. Right.
00:12:45.000So in some sense, the computational aspect of our mind gives credence to this notion of an immaterial soul. Right.
00:12:53.000Because what is software? Software is not the bits. It's not the transistors. Software is this immaterial pattern that sits on top of those things.
00:13:02.000And the core insight of Alan Turing and the other founders of computer science was that these things are independent of the substrate.
00:13:08.000You can build a computer out of pneumatic tubes. You can build a computer out of hydraulic locks and gates.
00:13:14.000And similar with the mind: we live in a reality constructed by our brain, which is running on a particular kind of wet hardware.
00:13:23.000But there's nothing in principle that prevents us from putting that into a different form of hardware.
00:13:27.000And as we've seen, the progress of A.I. over the last, say, 15 years has come not so much from deep insights into, you know, how do proteins fold. Right.
00:13:38.000You know, protein folding was solved because essentially the computers caught up; that problem came to exist within the envelope of the kinds of supercomputers we had to model protein folding.
00:13:50.000And as computation continues to double or quadruple in its performance and as our algorithms get more efficient, the human mind will fall within that envelope, too.
00:14:01.000And then, you know, what Kurzweil foresees is that by 2045 the biggest supercomputers will have more computation than not just a single brain, but all the brains put together.
00:14:10.000And we don't really know, you know, what transpires or what comes out of that.
00:14:15.000Will it be something that we wield as some single unified entity?
00:14:19.000Will we distribute that compute in a way that gives everyone a stake or will it be monopolized?
00:14:24.000These are the sort of questions ahead of us.
00:14:28.000And I think the biggest risks come from folks who think that this is just a normal technology and that the world of 2045 will just look like the world of today, only more so.
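The compute-envelope argument above can be made concrete with a quick back-of-the-envelope sketch. Every constant below is an illustrative assumption, not a figure from this conversation: the per-brain FLOP/s estimate, the population count, the current supercomputer scale, and the doubling period are placeholders the reader can vary.

```python
import math

# Toy sketch of the "compute envelope" argument. All constants are
# illustrative assumptions, not figures taken from the interview.
BRAIN_FLOPS = 1e16     # assumed FLOP/s equivalent of one human brain
POPULATION = 8e9       # roughly eight billion brains
CURRENT_FLOPS = 1e18   # order of magnitude of an exascale supercomputer
DOUBLING_YEARS = 1.5   # assumed doubling period for peak machine compute

# "Not just a single brain, but all the brains put together."
target = BRAIN_FLOPS * POPULATION

# How many doublings until the largest machines cross that target?
doublings = math.ceil(math.log2(target / CURRENT_FLOPS))
years = doublings * DOUBLING_YEARS

print(f"{doublings} doublings, roughly {years:.0f} years at this rate")
```

Under these particular placeholders the crossover takes a few dozen doublings; the substantive point is that sustained exponential growth closes even an eight-orders-of-magnitude gap in decades, not centuries.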
00:14:38.000Yeah, the idea that this is a normal technology, I think, has been blown away by,
00:14:42.000what is it, over half of young people saying that they've used chatbots as companions or view them as companions?
00:14:50.000They talk to them as if they were actual people.
00:14:53.000I think that goes well beyond anything we saw with video gaming or even social media.
00:14:58.000And we're only three years into the chat GPT era.
00:15:01.000On the other hand, I am praying for either a solar flare or at least an S curve.
00:15:07.000But setting that aside, the political approach that you have here, I think, is really interesting.
00:15:14.000And it's something that gets lost on a lot of people, especially on the right without naming names.
00:15:20.000There are a lot of people that associate transhumanism with globalism, with leftism.
00:15:26.000They associate techno optimism and accelerationism with the same.
00:15:30.000If they haven't been snapped out of that hypnosis by the Trump administration yet, I think they really should be.
00:15:36.000But, you know, we've covered accelerationism and its newest form, effective accelerationism.
00:15:43.000And I think that whether you would take that label on, you certainly sympathize with that camp.
00:15:50.000Yeah, I would say I'm an accelerationist everywhere but, you know, superintelligence.
00:15:55.000I think it's appropriate if we are going to build this thing to go in it with a degree of humility and trepidation.
00:16:02.000The real point I mean to get at, though, is that you, by and large, would be categorized as on the right, at least everywhere
00:16:12.000it counts, except for technology, I would say.
00:16:14.000And maybe you would say that techno futurism is a form of right wing expression.
00:16:21.000But I think it's on a practical level really interesting how your interests align with someone like Steve Bannon in regard to something like chip exports to China.
00:16:31.000You're very much a hawk on that, correct?
00:16:34.000You wrote a fantastic article for, I believe, the American conservative a few weeks back.
00:16:38.000Why, if you believe acceleration and technological progress are, in fact, the ways forward, why would you want to preference America or privilege America and shut out the Chinese on this?
00:16:52.000Don't they need more advanced noodles, too?
00:16:55.000I think there's, like, two big reasons.
00:16:58.000One is if we are on this knife edge between different, you know, forking paths the way the world could go, and if AI is potentially a monopolizing technology, as it stands to be, that hegemonic power of AI could be kind of fractal.
00:17:13.000It could lead to runaway growth of one company, you know, a Google, but also, at the world stage, runaway power of a single country, especially through, you know, not just autonomous weaponry, surveillance, automated cyber attacks, but just the flywheel that will kick off as we begin to automate our industrial factories and so on and so forth.
00:17:30.000So I'd much rather the US be in that lead in part because then we can bake in, at least, gives us some hope of baking in values like civil liberties, respect for privacy, sort of, you know, building a constitution into the AI.
00:17:45.000So, you know, I think the world wars were kind of inseparable from industrialization.
00:17:51.000We moved from a world of, you know, agrarian craft economies to ones where we were racing to build the biggest rail networks because if we had more trains, then we could move more tanks.
00:18:01.000And it was the fact that France and Germany, Russia, these countries in the lead-up to World War I were in this all-out race to build the biggest rail networks.
00:18:11.000And it was the fact that it was close that I think led to the conflict.
00:18:15.000You see, you get more war and more conflict when you have two great powers that are kind of neck and neck.
00:18:22.000And to the extent that we can, you know, make the US lead in AI and the core infrastructure sort of uncatchable, I think it reduces the threat of there being an all-out war.
00:18:33.000Do you think the Chinese are actually in a position to catch up?
00:18:37.000Do you think that Huawei, for instance, could actually meet the demand for data centers for computation in any way comparable to where the US is at right now?
00:18:47.000They could if we had a totally laissez-faire free market here, right?
00:18:51.000So, you know, the manufacturing equipment that goes into building these chips is among the most complex pieces of engineering ever produced by mankind.
00:18:58.000The components that go in involve thousands, tens of thousands of suppliers.
00:19:03.000There are dozens within the supply chain, dozens of, you know, mini-monopolists.
00:19:07.000You know, ASML in the Netherlands builds the lithography machines.
00:19:26.000Now, I think this is important because, you know, China has other advantages.
00:19:29.000They are massively out-producing us on energy.
00:19:31.000You know, they're adding, you know, 400 gigawatts to their grid every year.
00:19:35.000The stat I read is that they add the equivalent of the United States roughly every seven years to their energy grid.
00:19:40.000Whereas our energy grid has been flatlined.
00:19:42.000And when we look at what goes into building these AI models, you know, other than having the talent and the engineers,
00:19:46.000It's really the data centers and the energy to power them.
00:19:49.000And we're already running up against, you know, hard constraints there, which is why we're making all these deals with the UAE and Saudi Arabia and so on where there's actually abundant energy.
00:20:03.000And, you know, we see their comparative advantage is these massive infrastructure projects; just like they can, you know, build giant highways,
00:20:12.000they could build the AGI cluster if we let them.
00:20:16.000Something that's at the center of A.I. and Leviathan, which, again, I recommend the War Room Posse check out.
00:20:25.000It's a little heady in places, but you can get through it in an afternoon very, very easily.
00:20:31.000But you open up with a theme, a kind of metaphor or symbol: the X-ray goggles, the X-ray specs, comparing A.I. to that.
00:20:41.000It gives people the power to see beyond what they could otherwise see.
00:20:45.000And someone like me, I mean, I see the development of these technologies, especially in regard to surveillance, and it's very off putting.
00:20:54.000You don't want to be seen in that way.
00:20:56.000You don't want someone to have that power over you.
00:20:59.000And so my first instinct is to reject it, lambast it, do anything I can to push it away.
00:21:04.000You, on the other hand, are approaching it much more from the perspective of how can this be used?
00:21:10.000And how can you use these technologies to protect yourself from surveillance or any other kind of predation?
00:21:18.000Can you go into that a little bit and break down the X-ray spec metaphor?
00:21:22.000Yeah, and part of it is recognizing that technology is often more discovered than invented, right?
00:21:28.000So if one day we woke up and there were X-ray specs that could be built with sort of off-the-shelf technology that no one could ever control, what would happen?
00:21:37.000Suddenly, I could see through your clothes. I could see through walls. I could break into banks. I could cheat at poker at the casino.
00:21:44.000There are all these systems in our society that would just suddenly break because of this new capability.
00:21:48.000There's kind of three canonical ways society could respond.
00:21:52.000We could sort of change our culture or change our norms.
00:21:55.000We could become nudists and embrace post-privacy norms.
00:22:04.000We could adapt or do mitigation so we could retrofit our homes with copper wire or anything that blocks the X-ray penetration.
00:22:14.000And the third option is we have an X-ray Leviathan, the all-seeing state that orders all the X-ray glasses to be handed over to the feds
00:22:24.000and then they use their monopoly on X-ray glasses to scan our bodies to make sure we don't have them.
00:22:28.000But the core point is the fourth option of nothing happening or some stable equilibrium is not tenable, right?
00:22:36.000Because it's fundamentally a kind of collective action problem.
00:22:39.000I want the glasses but I don't want you to have the glasses.
00:22:41.000But you have the exact same incentive and so very quickly we move into a new world where we all have the glasses and we have to do something about it.
00:22:47.000And AI is very similar. It's hardly even a metaphor.
00:22:50.000You know, we even have some of these like Meta Ray-Ban glasses that, you know, could you imagine...
00:22:55.000You could imagine downloading a machine learning model for...
00:22:57.000You know, there are such models for detecting people through walls using Wi-Fi signal displacement.
00:23:04.000I'm sure that won't be on the Apple App Store, but people will jailbreak these things.
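The collective-action problem Hammond describes (I want the glasses but don't want you to have them, and you face the identical incentive) has the structure of a prisoner's dilemma. A minimal sketch, with invented payoff numbers whose ordering is all that matters:

```python
# A toy game-theory sketch of the X-ray-specs dilemma.
# Payoff numbers are made up for illustration; only their ordering matters.
PAYOFFS = {  # (my_choice, your_choice) -> my payoff
    ("abstain", "abstain"): 3,   # old equilibrium: privacy for everyone
    ("acquire", "abstain"): 4,   # I see through your walls, you can't see mine
    ("abstain", "acquire"): 0,   # worst case: I'm exposed and blind
    ("acquire", "acquire"): 1,   # new world: everyone has the glasses
}

def best_response(your_choice: str) -> str:
    """My payoff-maximizing reply to your choice."""
    return max(("abstain", "acquire"),
               key=lambda mine: PAYOFFS[(mine, your_choice)])

# "Acquire" is my best reply whatever you do (a dominant strategy),
# so the do-nothing fourth option unravels even though mutual abstention
# pays both players more than mutual acquisition.
print(best_response("abstain"), best_response("acquire"))
```

Because "acquire" dominates, mutual acquisition is the only equilibrium even though both players prefer mutual abstention, which is exactly why the stable do-nothing outcome isn't tenable.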
00:23:09.000We're going to go to break shortly, but in our remaining moments before, can you just tee up the idea that one of the more dramatic predictions you make is that the democratization of AI,
00:23:22.000AI diffusing across the population, whether it be America or any other country, is going to inevitably lead to regime change.
00:23:30.000Why? Why is this your core argument before moving on to the distant future?
00:23:37.000Yeah, it's not that I'm a technological determinist, but I do see the way in which, you know, our institutions or governments or organizations are technologically contingent.
00:23:46.000Right. So, you know, the growth of the administrative state, for instance, was presaged by and partly driven by the telegraph and early rail networks that let Washington, D.C.
00:23:57.000have agents of the state be in far away parts of the country and be able to still communicate and get back and forth.
00:24:03.000And so whenever you have a big technology shock to the core sort of inputs to organizations, the ability to monitor, to broker contracts, to enforce contracts, principal-agent costs, the ability to trust that if I give you a job you're going to execute on that job.
00:24:18.000When those costs come down radically, you get new kinds of institutions. Right. We saw that in micro with Uber and Lyft.
00:24:23.000Right. That was a regime change. You know, these were public taxi commissions that, you know, were quasi-governmental.
00:24:30.000And for the people involved, for the taxi drivers involved, it was incredibly violent and dramatic.
00:24:36.000You know, you saw protesters in Paris throwing rocks off of bridges and so on.
00:24:39.000You know, I think for the rest of us, it was a massive improvement.
00:24:43.000But that was a shift that happened quite dramatically within a span of less than five years.
00:24:47.000You know, the ridership completely flipped.
00:24:49.000And that was because of mobile and Internet and these new technologies leading to new kinds of organizational forms.
00:24:55.000Well, if there's any one thing that people need to keep in mind as this transition unfolds,
00:25:02.000it's that you're going to need some kind of economic hedge against total economic disruption.
00:28:31.000Imagine having the world's most connected financial insider feeding you vital information.
00:28:36.000The kind of information only a handful of people have access to.
00:28:41.000And that could create a fortune for those who know what to do with it.
00:28:46.000That's exactly what you get when you join our frequent guest and contributor, Jim Rickards, in his elite research service, Strategic Intelligence.
00:28:56.000Inside Strategic Intelligence, you'll hear directly from Jim and receive critical updates on major financial and political events before they hit the mainstream news.
00:29:06.000He'll put you in front of the story and tell you exactly what moves to make for your best chance to profit.
00:29:13.000As a proud American, you do not want to be caught off guard.
00:29:17.000Sign up for Strategic Intelligence right now at our exclusive website.
00:31:50.000We are back with Sam Hammond, Chief Economist at the Foundation for American Innovation.
00:31:56.000He is the author of AI and Leviathan, a very slim tract, packed full of nightmarish futures, but also tips on how to survive them.
00:32:08.000Okay, Sam, if we could just return briefly to the concept of regime change, the breakdown of the current order under the pressure of AI and other downstream technologies.
00:32:19.000You don't necessarily present this as something that's ideal or even something that's desired, but you do present it as a cultural and political landscape that people will have to deal with.
00:32:31.000So if we could just return really quickly to the mechanisms by which democratized AI will, in fact, erode current institutions and how you think people should find that narrow corridor as you describe it in the book.
00:33:13.000And I think even if we had the sort of competence of the 1950s Eisenhower administration or something like that, the fact is a lot of this talent, a lot of the know-how is embodied in these private corporations, right?
00:33:25.000So we're using Palantir as our spy agency.
00:33:28.000We're using SpaceX as our launch capability.
00:33:32.000And I just see running that forward becoming more and more true.
00:33:37.000And especially when you look at the roadmaps of what these AI companies are saying they want to build, right?
00:34:40.000You know, Elon Musk has been fighting for shareholder control over Tesla for this very reason, right?
00:34:46.000Because he said, you know, it's less about the money.
00:34:48.000You know, this trillion-dollar package he's gotten is more about the control because I'm going to use Tesla to build a humanoid robot army.
00:34:55.000You know, they're planning to ramp to 50,000, 100,000, and by next decade millions of these humanoid robots coming off the factory line.
00:35:01.000Yeah, building basically an Optimus Gigafactory right now in Texas, correct?
00:35:07.000And so that's a lot of power under one person, but it's also a new kind of organization.
00:35:14.000You know, we complain about the DMV or whatever, but a lot of the jobs that governments do are already extremely exposed to current AI technology.
00:35:26.000These things are going to fall this decade.
00:35:28.000And if there's going to be sort of a balance between the private sector and the public sector, if we can have our state capacity, the minimum viable government we need to enforce contracts and make sure we maintain rule of law, we need to keep up, right?
00:35:42.000If we don't, and I think this is sort of a safe default scenario that the government doesn't adapt quickly enough, then it will just be displaced in the same way we're already seeing.
00:35:51.000And when you say we, you mean the United States?
00:35:54.000I think broadly speaking, most Western democracies are pretty exposed because of our slow procedural orientation where we take our time and the technology doesn't wait, right?
00:36:09.000And so it calls for, I think, this balancing act where we want to be pushing AI into government, but also taking that as an opportunity to set standards.
00:36:18.000Because, you know, another worry is not just the private concentration of power, but, you know, you can imagine some tin pot dictator.
00:36:25.000You know, what is the thing that keeps them from having total power?
00:36:28.000Well, it's the fact that, you know, the military could do a coup or, you know, their generals will defy an order.
00:36:34.000But if all those become sort of automated, if the whole machinery of government becomes AI, then it's a matter of just changing the prompt and you change your government.
00:36:43.000And so we need to build in some levels of privacy, civil liberties, engineering into the tech stack itself so that we don't have this sort of lock-in effect that could arise.
00:36:55.000But, you know, this goes to my point that I think a lot of this technology is at this point, you know, the Pandora's box has been opened.
00:37:05.000We can try to resist the technology, but really I think a better path is to try to steer the technology, to master the technology, not let it master us.
00:37:12.000So my own perspective, you know, I can appreciate your view as a futurist and as an economist and seeing these trends going forward and seeing them as being quasi inevitable.
00:37:24.000What do you do about it on a practical level, on an economic level?
00:37:27.000But as a humanist, as a flea-bitten monkey person, I am much more concerned about, well, then what do people do?
00:37:46.000Do we vote for the new tech accelerationist party?
00:37:49.000Like, what would you, in your view, what sorts of futures would a blue collar working man be dealing with or a small business owner be dealing with?
00:38:02.000Well, I mean, over the next, say, five years, I think a lot of blue-collar work is still relatively safe.
00:38:12.000I think there's a lot of ways that AI could be empowering the small business owners and entrepreneurs.
00:38:16.000You know, the fact that you can now do your own marketing and graphic design and things that would normally require big teams or get legal counsel essentially for free.
00:38:25.000Now, you know, in the long run, if we want to ask, you know, what is sort of my vision for the best possible outcome?
00:38:32.000And again, this is not necessarily a forecast.
00:38:35.000This is now me telling you what I would want to happen.
00:38:38.000Is, you know, I see there are potential opportunities for AI to be a corrective to a lot of the problems of modernity.
00:38:45.000Right. And, you know, I think a lot of conservative and right-wing thought is in part a reaction to modernity and the trade-offs that came with it.
00:38:53.000Yes, we want to have these large scale systems because they're more efficient and they produce standards.
00:38:59.000And by modernity, you mean to include managerialism, bureaucracy, egalitarianism, the post-Enlightenment era, where, you know, we lost something with that, too.
00:39:32.000We can try to recreate community, but it was a real trade off.
00:39:35.000And is there a way in which A.I. could, you know, insofar as it does start to dissolve some of these state functions and these forces of homogenization, enable a new kind of, you know, high-tech communitarianism?
00:39:48.000You know, I want to go back to the, you know, one-room schoolhouse that was down the road before it all consolidated into these big schools.
00:39:55.000Well, you could, you know, you could have the A.I. tutor in the morning and the jujitsu class in the evening.
00:40:00.000Right. And you're definitely going to want to keep up with their physical prowess.
00:40:03.000Well, I think I think that's actually true.
00:40:05.000I think, you know, if the early part of the 21st century was good for the nerds,
00:40:12.000I think the latter half will be good for the jocks.
00:40:15.000OK, I was never much of a jock, but I do appreciate the sentiment.
00:40:21.000At least it's a monkey person sentiment.
00:40:24.000But you see where I'm sort of going with this, and this is also what animates a lot of the more conservative pro-AI folks.
00:40:32.000As they see the power of A.I. to, you know, dissolve Hollywood, to, at least in the short run, deflate the sort of managerial professional-class economy.
00:40:43.000You know, these laptop workers that rule over us.
00:40:46.000You know, when that becomes plentiful, then it's the electrician or the plumber that is actually in high demand.
00:40:52.000Now, I just think that will be a relatively short transitional window where, you know, at some point we'll also have robotics that do that as well.
00:40:58.000And so to your point, to your question, then what then what?
00:41:35.000So I guess they are, they're Czech robots.
00:41:37.000So I think there is a world where we end up in a kind of, you know, rentier state.
00:41:41.000And I think this is one of the things we have to balance.
00:41:44.000You know, the reason Saudi Arabia has a big sovereign wealth fund is because if they don't, they suffer a resource curse.
00:41:49.000And so, you know, I think the Trump administration has been quite thoughtful and has shown a lot of foresight in the fact that Trump wants a U.S. sovereign wealth fund.
00:42:01.000But that leads up to the question of what do we do on a daily basis?
00:42:04.000And, you know, we look back in history.
00:42:06.000What did, you know, people in the 1600s do on a daily basis?
00:42:10.000Well, they, yes, they sowed the land or whatever, but they also, you know, went to church.
00:42:19.000They took part in rituals and attended services.
00:42:21.000And I think there's a world where we can get back to something that is potentially more human than what we have today because it, because AI has this potential for this radical relocalization of human society.
00:42:33.000So taking this line of thought, by the way, before we go, I will say one more prayer for a solar flare.
00:42:41.000And if failing that, just give us an S curve, but a long flat S curve.
00:42:48.000I'm intrigued by the roots of your thought in what was once called transhumanism and is now just called science and technology.
00:42:57.000We were both reading Ray Kurzweil's The Age of Spiritual Machines around the same time, 2001 or so.
00:43:05.000And it made a deep impression on me, the totalizing vision of technology, the idea of superhuman,
00:43:11.000superhuman AI, all human beings attached to it through nanobots or whatever, the indistinguishable nature of physical and virtual reality, all that.
00:43:20.000But when I read it, it just sounded like a nightmare world.
00:43:23.000You know, I'd read Ted Kaczynski a couple of years before and oftentimes joke that, you know, on one shoulder is Ray Kurzweil and on the other is Ted Kaczynski, sort of like a devil and a fallen angel on each shoulder.
00:43:36.000For you, my sense is that Kurzweil had a different impact.
00:43:43.000I think the primary impact it had was just looking back at, you know, how much he got right through relatively simple methods.
00:43:50.000Right. So, you know, people will nitpick that his timing is off here and there.
00:43:55.000But Age of Spiritual Machines came out in 1999 and he predicted that, you know, we'd have human level AI, AGI by 2029, which if you look at the betting markets and the other forecasting sites is roughly where things are converging.
00:44:10.000You know, he may have had a slightly different path, how to get there.
00:44:14.000Right. He talked about whole brain emulation.
00:44:16.000Yeah, that we'd scan the brain. And in a way, we did that indirectly. Right.
00:44:20.000These large language models are trained on human generated data.
00:44:24.000And in the limit, they are learning the thing that generated that data, not the data itself.
00:44:29.000And the thing that generated that data is a mind, which is why these companies are actually even starting to talk about, you know, the welfare of the AI.
00:44:38.000So what I got from Kurzweil was just, first of all, that history hasn't ended, that we should not limit our imagination.
00:44:47.000And if you talk to most people, they're relatively linear thinkers, whereas Kurzweil always trusted that there are these exponential trends and we've got to take them very seriously.
00:44:54.000And then secondly, that you can do a lot and go a long way with these very simple forecasting methods of, you know, what will be the biggest supercomputer?
00:45:03.000What is the computational power of our brain, and when will those two lines intersect?
00:45:08.000Yeah, you go into a bit, you give a hat tip to the early extropians, Max More and others in that genre.
00:45:17.000You know, Max More is the reason we're saying transhumanism; he pivoted to that as a term.
00:45:22.000And you also give a hat tip to the effective accelerationists.
00:45:27.000I get the sense that you also see it as being part of the same trend.
00:45:31.000I want to pivot, if we can, to your vision of the future, your timeline.
00:45:38.000I mean, if there's one thing that Ray Kurzweil can be given credit for, it's that he had the guts to say, this is what I believe is going to happen.
00:45:51.000And if you would just give the audience a sense, you broke it down into three basic periods.
00:45:57.000The immediate future; then a period about six, seven years from now, beginning in 2036; and then ending, of course, in the 2040s as we approach the singularity.
00:46:08.000What are the different elements that people should expect to see as we move forward towards this imagined, I would say, singularity?
00:46:17.000So let's start with where we are today.
00:46:19.000Today we are in a place where Google, OpenAI, Anthropic, and X are sort of in a neck-and-neck race to release the best general-purpose language model.
00:46:30.000The big breakthrough last year was reasoning models, thinking models, models that can actually do tasks.
00:46:35.000And now the application of reinforcement learning, which is an AI training technique that basically gives these models goals and goal-directed behavior.
00:46:43.000Now, there's an organization called METR, M-E-T-R, that tracks the level of autonomy in these systems.
00:46:49.000By autonomy, I mean what's the longest task that they can do before they sort of become discombobulated and fall off track and start to drift.
00:46:58.000That is now doubling every seven months or so.
00:47:27.000And then suddenly, you know, very quickly, we have systems that are doing things autonomously that would normally take humans or teams of humans weeks or months.
00:47:35.000Those are going to be incredibly powerful.
00:47:37.000They're going to be incredibly useful economically because this is when you move from AI being a tool to being a direct substitute for all kinds of, at least at first, white collar work.
00:47:54.000And then, you know, but it's also very dual use, right?
00:47:57.000So the autonomy of these systems can be used to automate your Excel job, but it could also be used to execute cyber espionage campaigns, as Anthropic just revealed.
00:48:06.000They disrupted a Chinese effort using their models and their servers running autonomously to spy on U.S. corporations and government agencies.
00:48:15.000So I think that's really the next, say, two or three year period.
00:48:19.000I think we could see, you know, a major run up in these cyber attacks and, you know, potentially in ways in which the Internet becomes somewhat unusable.
00:48:29.000Or at least we need to build new Internet rails, both because it'll be hard to know what's real and what's not, the proliferation of deep fakes.
00:48:36.000But also, you know, the cyber, the level of cyber threats, the vulnerabilities in our cyber infrastructure are very severe.
00:48:43.000And if we don't fix them fast enough, we may have to just build alternatives.
00:48:48.000And you see the appropriate response or at least the most effective response is people basically moving into gated communities, both in reality, physical reality and virtually, right?
00:48:59.280Yeah, you see this in privatization.
00:49:01.680Yeah, you see this with, you know, online communities, right?
00:49:04.360So if you go on Facebook and look at, you know, the comments on, you know, some fake image of like an African child who built a Jesus statue out of shrimp shells, you know, you see all the people commenting and be like, oh, man, you know, praise the Lord.
00:49:21.040And it's like, well, you know, we're not going to make it.
00:49:23.980Those people are not going to make it.
00:49:27.200Well, you can either have some identification system where we all scan our iris like WorldCoin wants to do, or you end up moving into these more gated communities where you are very selective about who gets in.
00:49:38.160And I think that that's already happening in the digital realm.
00:49:40.000I think it will increasingly happen in the real world, too.
00:49:42.620And ultimately leading to the singularity.
00:49:45.020If I may, should I read the last passage of the book?
00:50:02.800And you say the city is a home to a fusion-powered supercluster with billions of times more computational power than every human brain combined.
00:50:10.420It just completed its first big training run, and the new model is ready to be tested.
00:50:14.440The engineers have read the sequences and know the danger, but their pride, curiosity, and Benthamite expected-value calculations all scream, turn it on.