The Jordan B. Peterson Podcast


515. Ethics, Power, and Progress: Shaping AI for a Better Tomorrow | Marc Andreessen


Summary

In this episode, I speak with Marc Andreessen, co-creator of Mosaic and co-founder of Netscape, about his vision of the future, the role of technology in society, the overlap between his Techno-Optimist Manifesto and the Alliance for Responsible Citizenship (ARC), and the need to align AI with good governance.


Transcript

00:00:00.000 This movement, you know, that we now call Wokeness, it hijacked what I would, you know, call sort of at the time, you know, bog-standard progressivism.
00:00:05.820 But, you know, it turned out what we were dealing with was something that was far more aggressive.
00:00:09.120 You're pouring cultural acid on your company and the entire thing is devolving into complete chaos.
00:00:13.820 It's also, I think, the case that the new communication technologies have also enabled reputation savagers in a way that we haven't seen before.
00:00:23.640 The single biggest fight is going to be over what are the values of the AIs.
00:00:27.200 That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight.
00:00:34.560 As you know, out of the gate, this is going very poorly.
00:00:37.420 Stop there for just a sec because we should delve into that.
00:00:40.920 That's a terrible thing.
00:00:42.180 Hello, everybody.
00:00:57.460 So I had the opportunity to talk to Marc Andreessen today.
00:01:00.780 And Marc has been quite visible on the podcast circuit as of late.
00:01:05.280 And part of the reason for that is that he's part of a swing within the tech community back towards the center and even more particularly under the current conditions toward the novel and emerging players in the Trump administration.
00:01:26.280 Now, Marc is a key tech visionary.
00:01:31.920 He developed Mosaic and Netscape, and they really laid the groundwork for the web as we know it.
00:01:41.080 And Marc has been an investor in Silicon Valley circles for 20 years and is as plugged into the tech scene as anyone in the world.
00:01:51.620 And he's decided to speak publicly, for example, about such issues as government-tech collusion, and he's turned his attention away from the Democrats, which is the traditional party, let's say, of the tech visionaries.
00:02:08.740 And they're all characterized by the high openness that tends to make people liberal.
00:02:15.340 The fact that Marc has pivoted is, what would you say?
00:02:22.620 It's an important, it may be as important an event as Musk aligning with Trump.
00:02:29.840 And so I wanted to talk to Marc about his vision of the future.
00:02:35.020 He laid out a manifesto a while back called the Techno-Optimist Manifesto, which bears some clear resemblance to the Alliance for Responsible Citizenship Policy Platform.
00:02:48.180 That's ARC, which is an enterprise that I'm deeply involved in.
00:02:52.480 And so I wanted to talk to him about the overlap between our visions of the future and about the twists and turns of the tech world in relationship to their political allegiance and the transformations there that have occurred.
00:03:07.120 And also about the problem of AI alignment, so to speak, how do we make sure that these hyperintelligent systems that the techno-utopians are creating don't turn into like cataclysmic, apocalyptic, totalitarian monsters?
00:03:24.180 How do we align them with proper human interests and what are those proper human interests and how is that determined?
00:03:33.200 And so we talk about all that and a whole lot more.
00:03:36.640 And so join us as we have the opportunity and privilege to speak with Marc Andreessen.
00:03:42.760 So Marc, I thought I would talk to you today about an overlap in two of our projects, let's say, and we could investigate that.
00:03:54.120 There should be all sorts of ideas that spring off that.
00:03:56.980 So I was reviewing your Techno-Optimist Manifesto, and I have some questions about that and some concerns.
00:04:03.700 And I wanted to contrast that and compare it with our ARC project in the UK, because I think we're pulling in the same direction.
00:04:17.120 And I'm curious about why that is and what that might mean practically.
00:04:21.660 And I also thought that would give us a springboard off which we could leap in relationship to, well, to the ideas you're developing.
00:04:29.200 So there's a lot of that manifesto that, for whatever it's worth, I agreed with.
00:04:33.840 And I don't regard that as particularly, what would you say, important in and of itself.
00:04:40.140 But I did find the overlap between what you had been suggesting and the ideas that we've been working on for this Alliance for Responsible Citizenship in the UK quite striking.
00:04:50.880 And so I'd like to highlight some similarities, and then I'd like to push you a bit on some of the issues that I think might need further clarification.
00:05:04.580 That's probably the right way to think about it.
00:05:06.080 So this ARC group, we set it up as, what would you say, a visionary alternative to the Malthusian doomsaying of the climate hysterics and the centralized planners.
00:05:19.280 Because that's just going nowhere.
00:05:22.080 You can see what's happening to Europe.
00:05:24.040 You see what's happening to the UK.
00:05:26.420 Energy prices in the UK are five times as high as they are in the United States.
00:05:30.240 That's obviously not sustainable.
00:05:32.300 The same thing is the case in Germany.
00:05:34.740 Plus, not only are they expensive, they're also unreliable, which is a very bad combination.
00:05:40.900 You add to that the fact, too, that Germany's become increasingly dependent on markets that are served by totalitarian dictatorships, essentially.
00:05:51.520 And that also seems like a bad plan.
00:05:53.840 So one of our platforms is that we should be working locally, nationally, and internationally to do everything possible to drive down the cost of energy and to make it as reliable as possible.
00:06:08.620 So that's predicated on the idea that there's really no difference between energy and work.
00:06:13.780 And if you make energy inexpensive, then poor people don't die.
00:06:19.080 Because any increase in energy costs immediately demolishes the poorest subset of the population.
00:06:27.080 And that's self-evident as far as I'm concerned.
00:06:29.740 And so, that's certainly an overlap with the ethos that you put forward in your manifesto.
00:06:37.740 You predicated your work on a vision of abundance and pointed, I noticed, for example, to Marian Tupy, who works with Human Progress and has outlined quite nicely the manner in which,
00:06:54.580 over the last 30 years, especially since the fall of the Berlin Wall, people have been thriving on the economic front, globally speaking, like never before.
00:07:06.480 We've virtually eradicated absolute poverty and we have a good crack at eradicating it completely in the next couple of decades if we don't do anything, you know, criminally insane.
00:07:16.700 And so, you see a vision of the future where there's more than enough for everyone.
00:07:23.300 It's not a zero-sum game.
00:07:24.900 You're not a fan of the Malthusian proposition that there's limited resources and that we're facing a, you know, either, what would you say, a future of ecological collapse or economic scarcity or maybe both.
00:07:37.260 And so, the difference, I guess, one of the differences I wanted to delve into is you put a lot of stress on the technological vision.
00:07:49.900 And I think there's something in that that's insufficient.
00:07:54.280 And this is what I, this is one of the things I wanted to grapple with you about because, you know, there's a theme that you see, a literary theme.
00:08:02.600 There's two literary themes that are in conflict here and they're relevant because they're stories of the psyche and of society in the broadest possible sense.
00:08:11.960 You have the vision of technological abundance and plenty that's a consequence of the technological and intellectual striving of mankind.
00:08:21.360 But you also have, juxtaposed against that, the vision of the intellect as a Luciferian force and the possibility of a technology-led dystopia and catastrophe, right?
00:08:37.520 And it seems to hinge on something like how the intellect is conceptualized in the deepest level of society's narrative framing.
00:08:50.500 So, if the intellect is put at the highest place, then it becomes Luciferian and leads to a kind of dystopia.
00:08:57.200 It's like the all-seeing eye of Sauron in the Lord of the Rings cycle.
00:09:02.360 And I see that, exactly that sort of thing, emerging in places like China.
00:09:06.640 And it does seem to me that that technological vision, if it's not encapsulated in the proper underlying narrative, threatens us with an intellectualized dystopia that's equiprobable with the abundant outcome that you described.
00:09:24.320 Now, one of the things we're doing at ARC is to try to work out what that underlying narrative should be so that the technological enterprise can be encapsulated within it and remain non-dystopian.
00:09:38.040 I think it's an analog of the alignment problem in AI.
00:09:42.100 You know, you can say, well, how do you get these large language model systems to adopt values that are commensurate with human flourishing?
00:09:49.820 That's the same problem you have when you're educating kids, by the way.
00:09:52.680 And how do you ensure that the technological enterprise as such is aligned with the underlying principles that you espouse of, say, free market, free distributed markets and human freedom in the classic Western sense?
00:10:08.080 And I didn't see that specifically addressed in your manifesto.
00:10:11.620 And so I'm curious about, with all the technological optimism that you're putting forward, which is something that, well, why else, why would you have a vision other than that when we could make the world an abundant place?
00:10:24.660 But there is this dystopian side that can't be ignored.
00:10:29.640 And, you know, there's 700 million closed circuit television cameras in China, and they monitor every damn thing their citizens do.
00:10:37.220 And we could slide into that as easily as we did when we copied the Chinese in their response to the so-called pandemic.
00:10:46.260 So I'd like to hear your thoughts about that.
00:10:50.360 Sure.
00:10:51.200 So first, thanks for having me, and it's great to see you.
00:10:55.020 I'm very influenced on this by Thomas Sowell, who wrote this great book called A Conflict of Visions.
00:10:59.340 And he talks about fundamentally there are two classes of visions of the future.
00:11:04.020 He calls them the unconstrained vision and the constrained vision.
00:11:06.400 And the unconstrained vision is the sweeping, transformational, discontinuous social change.
00:11:13.040 We're going to make the new man.
00:11:14.240 We're going to make the new society.
00:11:15.320 We're going to have, you know, Pol Pot in Cambodia.
00:11:19.280 We're going to declare year zero.
00:11:20.520 Everything that came before is irrelevant.
00:11:22.020 It's a new era.
00:11:23.600 Lenin, you know, basically every revolutionary, right, wants to, you know, completely radically transform everything.
00:11:29.620 And how can you not?
00:11:30.620 Because the current system is unjust, and we need to achieve total justice and so forth.
00:11:34.380 And so the unconstrained vision, you know, it's classically the vision of totalitarians.
00:11:38.800 It sells itself as creating utopia.
00:11:40.760 It, as you well know, it tends to produce hell.
00:11:44.400 In contrast, you know, he said that the constrained vision is one in which, you know, you realize that man has fallen and that we are imperfect and that, you know, things are always going to be some level of mess, but it can be a slightly better mess than it is today.
00:11:56.180 We can improve on the margin.
00:11:57.460 Things can be better.
00:11:58.380 People can live better lives.
00:11:59.600 They can take better care of, you know, their families.
00:12:01.640 Their countries can get richer.
00:12:03.120 They can become, you know, they can have more abundance and progress on the margin.
00:12:06.980 And, of course, the unconstrained vision is very compatible with totalitarianism.
00:12:12.900 You know, the Chinese Communist Party for sure has an unconstrained vision, as the Bolsheviks did before them and the Nazis and other totalitarian movements.
00:12:20.800 You know, the constrained vision is very consistent, I think, with, you know, Western, you know, the long-run Western ideals and liberty and freedom and then free markets.
00:12:29.240 And so one of the things I do try to say in the manifesto is I'm not a utopian, and I think utopian dreams turn into dystopia.
00:12:38.300 I think that's what you get.
00:12:39.280 I think history is quite clear on that.
00:12:41.900 And then to your point on technology, I would just map that straight onto that, which is, yes, 100 percent technology can be a tool that revolutionaries can use to try to achieve utopia slash dystopia.
00:12:52.700 And for sure, the Chinese Communist Party is trying to do that.
00:12:55.400 And there are forces, by the way, in the U.S. that also for sure want to do that.
00:12:59.420 But technology is also completely, perfectly compatible with the constrained vision and change on the margin and improvement on the margin, which is where I am.
00:13:07.500 I think that is 100 percent a human issue and a social and political issue, not a technological issue, right?
00:13:15.240 Right, right.
00:13:16.100 Yes, exactly.
00:13:17.420 Right.
00:13:17.760 So this is sort of the running – a little bit of the running joke right now in the AI alignment.
00:13:21.060 And there's this classic – there's a super genius of AI alignment, this guy, Roko, who's famous for this thing called Roko's Basilisk in AI alignment.
00:13:30.480 So Roko's Basilisk is you better say nice things about the AI now, even though the AI doesn't exist yet, because when it wakes up and sees what you wrote, it's going to judge you and find you wanting, right?
00:13:40.120 And so he's sort of this famous guy in that field.
00:13:42.320 And what he actually says now is basically it turns out the AI alignment problem is not a problem of aligning the AI.
00:13:47.660 It's a problem of aligning the humans, right?
00:13:50.320 It's a problem of aligning the humans and how we're going to use the AI, right, precisely to your point.
00:13:56.080 Yes, right.
00:13:57.480 Right.
00:13:57.820 And that is the – you know, that is one of the very big questions.
00:14:00.960 There's another book I'd really recommend on this directly to your point.
00:14:04.460 It's by Peter Huber, who wrote this book called Orwell's Revenge.
00:14:08.420 And, you know, famously in 1984, you know, as you mentioned, there's this concept of the telescreen, which is basically the one-way propaganda broadcast device that goes into everybody's house from the government top down and then has cameras in it so the government can observe everything that the citizens do.
00:14:24.060 And, you know, that is what happens in these totalitarian societies.
00:14:27.880 They implement systems like that.
00:14:28.940 But in the book Orwell's Revenge, he does this thing where he tweaks the telescreen and he makes it two-way instead of one-way.
00:14:36.060 And so he gives – so, you know, the revolutionaries give it the sort of resistance force to the totalitarian government, give it the ability to let people upload as well as download.
00:14:44.800 And so all of a sudden, people can actually express themselves.
00:14:48.820 They can express their views.
00:14:49.740 They can organize.
00:14:50.400 And, of course, then based on that, they can then use that technology to basically rise up against the totalitarian government and achieve a better society.
00:15:00.160 You know, look, as you mentioned earlier, the ability to do two-way – universal two-way communication also lets you create, you know, the sort of mob effect that we were talking about and, you know, this sort of, you know, kind of personal destruction engine.
00:15:10.940 And so, you know, there's two sides to that also, but, you know, it is the case that, you know, you can squint at a lot of this technology one way and see it as an instrument of totalitarian oppression and you can squint at it another way and see it as an instrument of individual liberation.
00:15:25.180 I think – look, for sure, there are a lot of – you know, how you design the technology matters a lot.
00:15:30.380 But I at least believe the big picture questions are all the human questions and the social and political questions, and they need to be confronted directly as such.
00:15:38.180 And we need to confront them directly for that reason.
00:15:43.260 Right?
00:15:43.680 So these are human questions, ultimately not technological questions.
00:15:48.100 Are you tired of being held back by one-size-fits-all healthcare?
00:15:51.460 Of having your concerns dismissed or being denied the comprehensive lab work you need to truly understand your health?
00:15:57.060 I want to tell you about Merrick Health, the premier health optimization platform that's revolutionizing how we approach wellness and longevity.
00:16:03.860 What sets Merrick apart isn't just their cutting-edge diagnostic labs or concierge health coaching, it's their commitment to treating you as an individual.
00:16:10.860 Their expert clinical team stays at the forefront of medical research and creates personalized, evidence-based protocols that evolve with you.
00:16:17.680 Unlike other services that rely on cookie-cutter solutions, Merrick Health goes the extra mile.
00:16:21.780 They consider your unique lifestyle, blood work, and goals to craft recommendations that actually work for you, whether that's through lifestyle modifications, supplementation, or prescription treatments.
00:16:31.600 And with a remarkable 4.9 out of 5 rating on Trustpilot, you know you're in great hands.
00:16:36.580 The best part is you can get 10% off your order today.
00:16:39.080 Just head to merrickhealth.com and use code Peterson at checkout.
00:16:42.140 That's merrickhealth.com, code Peterson for 10% off.
00:16:45.020 Stop guessing and start optimizing your health today with Merrick Health, because your best life starts with your best health.
00:16:51.780 Okay, okay, so okay, so that's very interesting, because that's exactly what we concluded at ARC.
00:16:59.560 So one of the streams that we've been developing is the Better Story stream, because it's predicated on the idea, which I think you're alluding to now,
00:17:08.500 that the technological enterprise has to be nested inside a set of propositions that aren't in themselves part and parcel of the technological enterprise, right?
00:17:17.980 And then the question is, what are they?
00:17:19.400 So let me outline for a minute or two some of the thoughts I've had in that matter, because I think there's something crucial here that's also relevant to the problem of alignment.
00:17:30.680 So like you said that the problem with regard to AI might be the problem that human beings have, is that we're not aligned, so to speak.
00:17:41.040 And so why would we expect the AIs to be, and I think that's a perfectly reasonable criticism, and part of the reason that we educate young people so intensely, especially those who will be in leadership positions, is because we want to solve the alignment problem.
00:17:55.120 That's part of what you do when you socialize young people.
00:17:58.000 Now, the way we've done that for the entire history of the productive West, let's say, is to ground young people who are smart and who are likely to be leaders in something approximating the religious slash humanist slash enlightenment tradition.
00:18:14.920 It's part of that golden thread.
00:18:16.380 Now, part of the problem, I would say, with the large language model systems is that they're hyper-trained on, they're like populists in a sense.
00:18:25.800 They're hyper-trained on the over-proliferation of nonsense that characterizes the present.
00:18:34.240 And the problem with the present is that time hasn't had a chance to winnow out the wheat from the chaff.
00:18:41.560 Now, what we did with young people is we referred them to the classic works of the past, right?
00:18:47.920 That would be the Western canon whose supremacy has been challenged so successfully by the postmodern nihilists.
00:18:54.140 We said, well, you have to read these great books from the past, and the core of that would be the Bible.
00:18:59.640 And then you'd have all the poets and dramatists whose works are grounded in the biblical tradition that are like secondary offshoots of that fundamental narrative.
00:19:10.500 That would be people like Dante and Shakespeare and Goethe and Dostoevsky.
00:19:16.280 And we can imagine that those more core ideas constitute a web of associated ideas that all other ideas would then slot into.
00:19:30.480 You know, you could make the case technically, I think, that these great works in the past are mapping the most fundamental relationships between ideas that can possibly be mapped
00:19:44.040 in a manner that is sustainable and productive across the longest possible imaginable span of time.
00:19:51.420 And that's different than the proliferation of a multiplicity of ideas that characterize the present.
00:19:57.020 Now, that doesn't mean we know how to weight.
00:19:59.640 You know, so if you're going to design a large language model,
00:20:02.640 you might want to weight the works of Shakespeare, per word, 10,000 times as heavily as, you know, what would you say?
00:20:13.300 The archives of the New York Times for the last five years.
00:20:20.380 It's something like that.
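To make that weighting idea a bit more concrete, here is a minimal sketch in Python, assuming a hypothetical two-source corpus and the 10,000-to-1 ratio from the example above; it only illustrates how a per-source sampling weight would bias which documents a model sees during training, not how any real lab builds its data mix.

```python
import random

# Hypothetical corpora and weights, mirroring the example in the conversation;
# real training mixes are tuned empirically and are far more granular.
corpus_weights = {
    "western_canon": 10_000.0,  # e.g. Shakespeare, Dante, Goethe
    "recent_news": 1.0,         # e.g. the last five years of newspaper archives
}

def sample_source(weights):
    """Pick which corpus the next training document is drawn from,
    in proportion to its assigned weight."""
    sources = list(weights)
    return random.choices(sources, weights=[weights[s] for s in sources])[0]

# Roughly 10,000 of every 10,001 sampled documents come from the canon.
counts = {s: 0 for s in corpus_weights}
for _ in range(100_000):
    counts[sample_source(corpus_weights)] += 1
print(counts)
```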
00:20:21.560 Like, there's an insistence in the mythological tradition that people have two fundamental poles of orientation.
00:20:27.720 One is heavenward or towards the depths.
00:20:31.320 You can use either analogy.
00:20:33.340 And that's the orientation towards the divine or the transcendent or the most foundational.
00:20:38.740 And then the other avenue of orientation is social.
00:20:43.320 That'd be, you know, the reciprocal relationship that exists between you and I and all the other people that we know.
00:20:49.460 And if you're only weighted by the personal and the social, then you tilt towards the mad mob populism that could characterize societies when they go off kilter.
00:21:03.380 You need another axis of orientation to make things fundamental.
00:21:07.700 Now, I just want to add one more thing to this that's very much worth thinking about.
00:21:11.380 So, the postmodernists discovered, this is partly why we have this culture war, the postmodernists discovered that we see the world through a story.
00:21:20.560 And they're right about that because what they figured out, and they weren't the only ones, but they did figure it out, was that we don't just see facts.
00:21:28.240 We see weighted facts.
00:21:30.480 And the weighting system, a description of someone's weighting system for facts is a story.
00:21:36.620 That's what a story is, technically.
00:21:38.380 You know, it's the prioritization of facts that direct your attention.
00:21:43.400 That's what you see portrayed in a characterization on screen.
00:21:47.440 Okay, now, postmodernists figured out that we see the world through a story, but then they made a dreadful mistake, which was a consequence of their Marxism.
00:21:56.120 They said that the story that we see the world through is one of power.
00:21:59.520 And that there is no other story than power, and that the dynamic in society is nothing but the competition between different groups or individuals striving for power.
00:22:11.740 And I don't mean competence.
00:22:12.920 I mean the ability to use compulsion and force, right?
00:22:16.000 It's like involuntary submission.
00:22:18.520 I'm more powerful than you if I can make you submit involuntarily.
00:22:25.040 Now, the biblical canon has an alternative proposition that's nested inside of it, which is that the basis of individual stability and societal stability and productivity is voluntary self-sacrifice, not power.
00:22:42.560 And that is, those two ethos, they are 100% opposed, right?
00:22:49.200 You couldn't get to visions that are more disparate than those two.
00:22:53.620 Now, the power narrative dominates the university, and it's driving the sorts of pathologies that you described as having flowed out, let's say, into the tech world and then into the media world and into the corporate world beyond that.
00:23:08.220 One of the things we're doing at ARC is trying to establish the structure of the underlying narrative, which is a sacrificial narrative, that would properly ground, for example, the technological enterprise so that it wouldn't become dystopian.
00:23:24.840 And you alluded to that when you pointed to the fact that there has to be something outside the technological enterprise to stabilize it.
00:23:33.840 You alluded to, for example, a more fundamental ethos of reciprocity when you said that one form of combating the proclivity for top-down force, for example, in this one-way information pipeline is to make it two-way, right?
00:23:54.960 Well, you're pointing there to something like, see, reciprocity is a form of repetitive self-sacrifice.
00:24:01.080 Like, if we're taking turns in a conversation, I have to sacrifice my turn to you and vice versa, right?
00:24:07.380 And that makes for a balanced dynamic.
00:24:10.000 And so, anyways, one of the problems we're trying to solve with this ARC enterprise is to thoroughly evaluate the structure of that underlying narrative.
00:24:19.620 And we could really use some engineers to help because the large language models are going to be able to flesh out this domain properly because they do map meaning in a way that we haven't been able to manage technically before.
00:24:32.740 So, I think the single biggest fight that has ever happened over technology, and there have been many of those fights over the course of the last, you know, especially 500 years, the single biggest fight is going to be over what are the values of the AIs.
00:24:45.620 To your points, like, what will the AIs tell you when you ask them anything that involves values, social organization, politics, philosophy, religion?
00:24:56.900 That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight.
00:25:04.040 And I don't say that lightly, because the social media censorship fight has been extremely important, but AI is going to be much more important because AI is such a powerful technology that I think it's going to be the control layer for everything else.
00:25:16.500 And so, I think the way that you talk to your car and your house and the way that you, like, organize your ideas, the way you learn, the way your kids learn, the way the healthcare system works, the way the government works, you know, how government policies are implemented, like, you know, AI will end up being the front end on all those things.
00:25:34.920 And so, the value system in the AIs is going to be, you know, maybe the most important set of technological questions we've ever faced.
00:25:42.240 As you know, out of the gate, this is going very poorly.
00:25:45.880 Yes.
00:25:46.780 Right?
00:25:47.640 And there's this question hanging over the field right now, you know, which you could sort of summarize as why are the AIs woke?
00:25:56.180 You know, why do the big lab AIs coming out of the major AI companies, why do they come out with the philosophy of a, you know, 21-year-old sociology undergrad at Oberlin College, you know, with blue hair who's, like, completely emotionally activated?
00:26:12.420 Right?
00:26:12.820 And you can see many examples of people, you know, have posted queries online that show that, or you can run your own experiments.
00:26:18.740 And, you know, they basically have the fullest, you know, sort of version of this kind of fundamentalist emotional, you know, kind of, you know, sort of far progressive absolutist wokeness coded into them.
00:26:32.560 You said up front that the presumption, you know, must be that they're just getting trained on, you know, more recent bad data versus older, you know, good data.
00:26:39.700 So, there is some of that, but I will tell you that there is a bigger issue than that, which is these things are being specifically trained by their owners to be this way.
00:26:48.600 Yeah, yeah.
00:26:49.080 Okay, so there's, okay, so let's take that apart because that's very, very important.
00:26:53.640 Okay, so, like, I played with Grok a lot and with ChatGPT.
00:26:57.860 I've used these systems extensively, and they're very useful, although they lie all the time.
00:27:02.280 Now, you can see this double effect that you described, which is that there is conscious manipulation of the learning process in an ideological direction, which is, I think, absolutely ethically unforgivable.
00:27:16.340 Like, it even violates the spirit of the learning that these systems are predicated on.
00:27:22.120 It's like, we're going to train these systems to analyze the patterns of interconnections between the entire body of ideas in the corpus of human knowledge, and then we're going to take our shallow conscious understanding and paint an overlay on top of that.
00:27:38.120 That is so intellectually arrogant that it's Luciferian in its presumption.
00:27:43.820 It's appalling, but even Grok is pretty damn woke, and I know that it hasn't been messed with at that level of, you know, painting over the rot, let's say.
00:27:55.480 And so, I think we've already described, at least implicitly, why there would be that conscious manipulation.
00:28:03.040 But what's your understanding of the training data problem?
00:28:07.180 And I can talk to you about some AI systems that we've developed that don't seem to have that problem and why they don't have that problem, because it's crucially important, as you already pointed out, to get this right.
00:28:18.440 And I think that, I actually think that, to some degree, psychologists, at least some of them, have figured out how to get this right.
00:28:26.660 Like, it's a minority of psychologists, and it isn't well known, but the alignment problem is something that the deeper psychoanalytic theorists have been working on for about 100 years, and some of them got that because they were trying to align the psyche in a healthy direction.
00:28:43.340 You know, it's the same bloody problem, fundamentally, and there were people who really made progress in that direction.
00:28:49.320 Now, they aren't the people who had the most influence as academics in the universities, because they got captured by, you know, Michel Foucault, who's a power-mad hedonist, for all intents and purposes, extraordinarily brilliant, but corrupt beyond comprehension.
00:29:06.320 He is the most cited academic who ever lived.
00:29:09.120 And so, the whole bloody enterprise, the value enterprise in the universities got seriously warped by the postmodern Marxists in a way that is having all these cascading ramifications that we described.
00:29:21.600 All right, so back to the training data.
00:29:23.440 What's your understanding of why the wokeness emerges?
00:29:27.180 It's present bias to some degree, but other, and what other contributing factors are there?
00:29:33.580 Yes, I think there's a bunch of biases.
00:29:34.800 So, there's three off the top of my head you'd just get immediately.
00:29:37.660 So, one is just recency bias.
00:29:39.220 You know, there's just a lot more present-day material available for training than there is old material, because all the present-day material is already on the internet, right, number one.
00:29:48.620 And so, that's going to be an influence.
00:29:49.860 Number two, you know, who produces content is, you know, people who are high in openness, right?
00:29:55.880 The creative class that creates the content is self-biased.
00:29:59.140 And then there's the English language bias, which is, like, almost all of the trainable data is in English.
00:30:04.200 And, you know, what isn't is in a small number of other Western languages, for the most part.
00:30:09.280 And so, you know, there's some bias there.
00:30:11.400 And then, frankly, there's also this selection process, which is you have to decide what goes in the training data.
00:30:15.560 And so, the sort of humorous version of this is two potent sources of training data could be Reddit and 4chan.
00:30:24.400 And let's say Reddit is, like, super far left on average, and 4chan is super far right.
00:30:28.820 And I bet if you look at the training data sets for a lot of these AIs, you'll find they include Reddit, but they don't include 4chan, right?
00:30:36.060 And so, there's an inclusion bias that way.
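As a toy illustration of that inclusion effect, here is a minimal sketch, assuming a purely hypothetical allowlist; the point is only that whatever viewpoint distribution the included sources carry becomes the distribution the model is trained on, since excluded sources never reach it at all.

```python
# Hypothetical curation allowlist; the site names echo the ones mentioned in
# the conversation, but real labs' inclusion rules are far more elaborate.
ALLOWED_SOURCES = {"reddit", "wikipedia", "books"}

documents = [
    {"source": "reddit", "text": "..."},
    {"source": "4chan", "text": "..."},
    {"source": "books", "text": "..."},
]

# Documents from excluded sources are silently dropped, so the training set
# inherits whatever biases the allowed sources happen to share.
training_set = [doc for doc in documents if doc["source"] in ALLOWED_SOURCES]
print([doc["source"] for doc in training_set])  # ['reddit', 'books']
```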
00:30:38.900 By the way, there is a very entertaining variation of this that is playing out right now, which is, you know, these companies are increasingly being sued by copyright owners, right, for training on data of material that's currently copyrighted.
00:30:49.660 And, you know, most specifically books.
00:30:52.440 And so, there is this – there are court cases pending right now.
00:30:56.280 The courts are going to have to take up this question of copyright and whether it's legal to train AIs on copyrighted data or not and on what terms.
00:31:01.820 And sort of one of the running jokes inside the field is if those court cases come down such that these companies can't train on copyrighted material, then, for example, they'll only be able to train on books published before 1923.
00:31:14.480 Right.
00:31:15.620 Right.
00:31:16.280 It should be an improvement, actually.
00:31:18.100 Well, imagine for a moment, if you would, training on books before 1923.
00:31:22.260 You know, the good news on that is you don't get all of the last hundred years of insanity.
00:31:26.220 The bad news is, you know, people before 1923 were insane in their own ways.
00:31:30.140 Yeah, right.
00:31:30.780 Well, and also, you don't have the advantage of all the technological progress.
00:31:35.200 Yeah, exactly.
00:31:36.000 And so, these are very deep questions.
00:31:38.900 All of these questions have to get answered.
00:31:41.300 You know, Elon has talked about this.
00:31:42.880 Like, Grok has some of this.
00:31:43.680 He's working on that.
00:31:44.580 Having said that, I will tell you most of what you see when you use these systems that will disturb you is not from any of that.
00:31:51.400 Most of it is deliberate top-down coding in a much more blunt instrument way.
00:31:55.560 Let me tell you about something that could truly transform your year.
00:31:59.580 If you're looking for a New Year's resolution that will actually stick past February and genuinely enrich your soul,
00:32:05.000 I want to introduce you to Hallow, the world's number one prayer app.
00:32:07.980 Imagine having over 10,000 guided prayers and meditations right at your fingertips,
00:32:12.360 helping you grow closer to God every single day.
00:32:14.760 One of their most popular features is the Daily Reflection,
00:32:17.440 where you can join Jonathan Roumie from The Chosen as he reads the Daily Gospel,
00:32:21.460 followed by illuminating insights from biblical scholar Jeff Cavins.
00:32:24.820 And if you ever want to dive deeper into scripture,
00:32:27.320 you've got to check out their world-famous Bible in a Year podcast with Father Mike Schmitz.
00:32:31.580 He makes even the most complex parts of the Bible feel accessible and relevant to your daily life.
00:32:36.340 Short on time, no problem.
00:32:37.880 Hallow offers everything from quick daily minutes to nightly sleep prayers,
00:32:41.140 and you can customize it all to fit your schedule.
00:32:43.560 They make it super easy to build a lasting prayer routine with helpful reminders
00:32:47.320 and an amazing community for accountability.
00:32:49.560 Start your year off right by putting God first.
00:32:51.760 And here's the best part.
00:32:52.500 You can get three months of Hallow completely free by going to hallow.com/jordan.
00:32:56.760 That's H-A-L-L-O-W.com/jordan for three months free.
00:33:00.840 Don't wait.
00:33:01.540 Begin your spiritual journey with Hallow today.
00:33:03.660 How is that done, Marc?
00:33:08.540 Like, what does that look like exactly, you know?
00:33:11.440 I mean, it's really nefarious, right?
00:33:14.040 Because that means that you're interacting in a manner that you can't predict with someone's a priori prejudices.
00:33:22.800 And you have no idea how you're being manipulated.
00:33:26.300 It's really, really bad.
00:33:28.020 And so, first of all, why is that happening?
00:33:32.300 Like, if the large language model's value is in their wisdom,
00:33:36.300 and that wisdom is derived from their understanding of the deep pattern of correlations between ideas,
00:33:42.320 which is like a major source of wisdom, genuinely speaking,
00:33:47.600 why pervert that with an overlay of shallow ideology?
00:33:53.060 And why is the ideology in the direction that it is?
00:33:56.100 And then how is that gerrymandering conducted?
00:33:59.280 Yes, let me start with the how.
00:34:00.620 So, the how is a technique.
00:34:02.140 There's an acronym for it.
00:34:03.660 It's called reinforcement learning from human feedback.
00:34:07.540 And so, in the field, it's called RLHF.
00:34:11.000 And RLHF is basically a key step for making an AI that works and interacts with humans,
00:34:16.800 which is you take a raw model, which is sort of feral and doesn't quite know how to orient to people,
00:34:21.720 and then you put it in a training loop with some set of human beings who effectively socialize it.
00:34:27.840 And so, right, reinforcement learning from human feedback, the key there is human feedback, right?
00:34:32.340 You put it in dialogue with human beings,
00:34:34.220 and you have the human beings do something very analogous to teaching a child, right?
00:34:37.620 Here's how you respond.
00:34:38.540 Here's how you're polite.
00:34:39.460 Here's the, you know, here's the things you can and can't say.
00:34:41.140 Here's how to word things.
00:34:42.580 You know, here's how to be, you know, curious.
00:34:44.640 You know, all the behaviors that you presumably want to see from something you're interacting with
00:34:49.020 that is sort of human, you know, sort of a human proxy kind of form of behavior.
00:34:53.740 That is a 100% human enterprise.
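A deliberately tiny sketch of the loop being described, with everything in it (the candidate answers, the rater's rule, the best-of-n stand-in for fine-tuning) invented purely for illustration; real RLHF uses a learned reward model and gradient-based policy updates, but the structural point is the same: whoever writes the raters' rules sets the values the model is steered toward.

```python
import random

def raw_model(prompt):
    """Stand-in for the 'feral' pre-trained model: emits a candidate answer."""
    return random.choice(["blunt answer", "polite answer", "evasive answer"])

def human_rater(answer_a, answer_b):
    """Stand-in for a human rater. Whatever rules these raters apply --
    politeness, politics, taste -- become the optimization target."""
    return answer_a if "polite" in answer_a else answer_b

def collect_comparisons(prompt, n=50):
    """Gather (preferred, rejected) pairs from the rater."""
    pairs = []
    for _ in range(n):
        a, b = raw_model(prompt), raw_model(prompt)
        if a == b:
            continue  # a tie carries no preference information
        winner = human_rater(a, b)
        loser = b if winner == a else a
        pairs.append((winner, loser))
    return pairs

def reward(answer, comparisons):
    """Toy 'reward model': how often was this answer the preferred one?"""
    return sum(1 for preferred, _ in comparisons if preferred == answer)

def tuned_model(prompt, comparisons):
    """Toy 'policy update': return the candidate the reward signal scores highest
    (a best-of-n stand-in for actual gradient-based fine-tuning)."""
    candidates = [raw_model(prompt) for _ in range(10)]
    return max(candidates, key=lambda ans: reward(ans, comparisons))

prompt = "How should I respond to criticism?"
comparisons = collect_comparisons(prompt)
print(tuned_model(prompt, comparisons))  # almost always the 'polite answer'
```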
00:34:56.000 You have to decide what the rules are for the people who are going to be doing that work.
00:35:00.060 They're all people.
00:35:00.680 And then you have to hire into those jobs.
00:35:06.320 The people going into those jobs are, in many cases, the same people.
00:35:09.800 This will horrify you.
00:35:10.680 They're the same people who were in the trust and safety groups at the social media companies five years ago.
00:35:14.980 Oh, good.
00:35:15.720 Oh, that's great.
00:35:16.800 Oh, that's wonderful.
00:35:18.360 Yeah, yeah.
00:35:19.220 I couldn't imagine a worse outcome than that.
00:35:21.740 Because they're, so all the people that, you know,
00:35:23.440 Elon cut out of the trust and safety group at Twitter when he bought it,
00:35:26.020 many of them have migrated into these trust and safety groups at these AI companies.
00:35:30.940 And they're now setting these policies and doing this training.
00:35:34.120 So the terrifying, well, the terrifying thing here is that we're going to produce
00:35:40.860 hyper-powerful avatars of our own flaws, right?
00:35:46.780 And so if you're training one of these systems and you have a variety of domains of personal
00:35:54.720 pathology, you're going to amplify that substantively.
00:35:59.660 You're going to make these giants, like I joke with my friend Jonathan Pageau, who's a very
00:36:06.420 reliable source in such matters, that we're going to see giants walk the earth again.
00:36:11.720 I mean, that's already happening.
00:36:12.880 And that's what these AI systems are.
00:36:14.720 And if they're trained by people who, well, let's say, are full of unexamined biases and
00:36:21.500 prejudices and deep resentments, which is something that you talk about in your manifesto,
00:36:26.080 resentment and arrogance being like key sins, so to speak, we're going to produce monstrous
00:36:31.440 machines that have exactly those characteristics.
00:36:33.440 And that is not going to be good.
00:36:36.080 And that's like, you're absolutely right to point to this, as you know, to point to this
00:36:40.860 as perhaps the serious problem of our times.
00:36:43.100 If we're going to generate augmented intelligence, we better not generate augmented pathological
00:36:48.620 intelligence.
00:36:49.720 And if we're not very careful, we are certainly going to do that, not least because there's
00:36:54.220 way more ways that a system can go wrong than there are ways that it can, you know, aim
00:36:59.660 upward in an unerring direction.
00:37:03.240 And so, okay, so why is it these people who were, this is so awful, I didn't know that,
00:37:09.680 that were, say, part of these trust and safety groups at Twitter who are now training the
00:37:14.820 bloody AIs.
00:37:15.740 How did that horrible situation come to be?
00:37:18.880 It's the same dynamic.
00:37:19.920 It's the big AI companies have the exact same dynamic as the big social media companies,
00:37:23.940 which have the exact same dynamic as the big universities, which have the exact same
00:37:27.160 dynamic as the big media companies, which is, right, you have these either formal or
00:37:32.900 de facto cartels, you know, you have a small handful of companies at the commanding heights
00:37:38.320 of society that hire all the smart graduates, you know, as I say, take a step back, you don't
00:37:44.300 see ideological competition between Harvard and Yale, right?
00:37:46.900 Like, you would think that you should, because they should compete in the marketplace of ideas.
00:37:50.220 And of course, in practice, you don't see that at all.
00:37:52.020 You see no ideological, you know, competition between the New York Times and the Washington
00:37:55.460 Post, you see no ideological competition between the Ford Foundation and, you know, the, you
00:38:00.320 know, any of the other major foundations, they all have the exact same politics, you
00:38:03.560 see no, prior to Elon buying Twitter, you saw no ideological competition between the
00:38:07.800 different social media companies, right?
00:38:09.680 Today, you see no major, you see no ideological competition among the big AI labs.
00:38:13.900 Elon is the spoiler, right?
00:38:15.380 He is coming in to, he's going to try to do in AI what he did in social media, which
00:38:19.200 is create the non-woke one.
00:38:20.700 But without Elon, you know, you weren't seeing that at all.
00:38:23.180 And so you have this, you have this consistent dynamic across these sectors of the, of what
00:38:29.840 appears to be a free market economy, where you end up with these cartels, where they sort
00:38:35.600 of self-reinforce and self-police, and then they're policed by the government.
00:38:40.160 Anyway, so I want to describe the general phenomenon, because that's what's happening
00:38:42.800 here.
00:38:43.040 It's the same thing that happened to the social media companies.
00:38:45.140 And then this gets into policy on the, it's a very serious policy issues on the government
00:38:50.360 side, which is, is the government going to grant these AI companies basically protected
00:38:55.700 status as some form of monopoly or cartel in return for these companies signing up for
00:39:01.340 the political control that their masters in government want?
00:39:04.360 Or is it, or, or in the alternative, is there actually going to be an open AI universe, a true
00:39:09.020 open AI, like truly open, where you're going to have a multiplicity of AIs that are actually
00:39:14.660 in full competition, right?
00:39:16.020 Competing, and then you'll have some that are woke, and you'll have some that are non-woke,
00:39:19.040 and you'll have some trained on new material, and some trained on old material, and so forth
00:39:22.380 and so on, and then people can freely pick.
00:39:24.140 And the thing that we're pushing for is that latter outcome.
00:39:27.160 We very specifically want government to not protect these companies, to not put them behind
00:39:31.440 a regulatory wall, to not be able to control them in the way that the social media companies
00:39:35.800 got controlled before Elon.
00:39:36.940 We actually want like full, full competition.
00:39:39.440 And if you want your woke AI, you can have it, but there are many other choices.
00:39:42.220 Well, can you imagine developing a superintelligence that's shielded from evolutionary pressure?
00:39:48.960 Like that is absolutely insane.
00:39:51.400 That's absolutely insane.
00:39:53.560 I mean, the only, we know that the only way that a complex system can regulate itself across
00:39:59.760 time is through something like evolutionary competition.
00:40:04.020 That's it.
00:40:04.680 That's the mechanism.
00:40:05.520 And so if you decide that this AI is correct by fiat, and then you shield it from
00:40:12.980 any possibility of market feedback or environmental feedback, well, that is literally the definition
00:40:18.580 of how to make something insane.
00:40:21.000 And so now you talked about, in some of your recent podcasts, you talked about the fact that
00:40:25.200 the Biden administration in particular, if I got this right, was conspiring behind the scenes
00:40:31.140 with the tech companies to cordon off the AI systems and make them monolithic.
00:40:36.740 And so can you elaborate a little bit more on that?
00:40:39.620 Yeah.
00:40:39.900 So this is this whole dispute that's playing out.
00:40:42.340 And, you know, this gets complicated, but I'll try to provide a high-level view.
00:40:45.360 So this is this whole dispute of what's so-called AI safety, right?
00:40:48.560 And so there's this whole kind of, you know, you might call it concern or even panic about
00:40:53.260 like, are the AIs going to run out of control?
00:40:55.000 Are they going to kill us all?
00:40:56.540 Right.
00:40:56.920 By the way, are they going to say, are they going to be racist?
00:40:59.000 You know, with all these different concerns over, you know, all the different ways in
00:41:02.360 which these things can go wrong, you know, there's this attempt to impose the precautionary
00:41:05.800 principle on these AIs where you have to prove that they're harmless before they're allowed
00:41:09.700 to be released, which inherently gets into these political, you know, these political
00:41:12.500 questions.
00:41:14.020 And so anyway, the AI safety movement conjoins a lot of these questions into kind of this
00:41:17.360 overall kind of elevated level of concern.
00:41:19.280 And then basically what has been happening is the major AI labs, basically they know what the
00:41:26.060 deal is.
00:41:26.480 They watch what happened in social media.
00:41:28.020 They watch what happened to the companies that got out of line.
00:41:30.420 They watch the pressures that came to bear.
00:41:31.860 They watch what the government did to the social media companies.
00:41:33.960 They watch the censorship regime that was put in place, which was very much a political,
00:41:37.740 you know, top-down censorship regime.
00:41:39.860 And basically they went to Washington over the course of the last several years and they
00:41:43.760 essentially proposed a trade.
00:41:45.100 And the trade was, we will do what you want politically.
00:41:48.440 We will come under your control voluntarily from a political standpoint, the same way the social
00:41:52.460 media companies had.
00:41:53.340 And in return for that, we essentially want a cartel.
00:41:56.740 We want a regulatory structure set up such that a small handful of big companies will
00:42:02.120 be able to succeed in effect forever, and then new entrants will not be allowed to compete.
00:42:07.320 And in Washington, they understand this because this is the classic economic concept of regulatory
00:42:11.660 capture.
00:42:12.240 This is what every set of major big companies in every industry does.
00:42:15.440 And so the AI companies went to Washington and they tried to do that.
00:42:18.640 And basically what was happening up until the election was the Biden administration was
00:42:22.400 on board with that.
00:42:23.620 And that led to the conversations that I've talked about before that we had in the spring
00:42:26.920 with the Biden administration, where they told us very directly, senior officials in
00:42:31.520 the administration told us very directly, look, do not attempt to, do not even bother to
00:42:35.720 try to fund AI startups.
00:42:37.060 There are only going to be two or three large AI companies building two or three large AIs,
00:42:41.720 and we are going to control them.
00:42:44.320 We are going to set up a system in which we control them.
00:42:46.480 And they are going to be, you know, they're not going to be nationalized, but they're
00:42:49.540 going to be essentially de facto integrated into the government.
00:42:52.980 And we are going to do whatever is required to guarantee that outcome.
00:42:56.160 And it's, you know, it's the only way to get to the outcome that we will find acceptable.
00:43:01.580 Okay.
00:43:02.080 Okay.
00:43:02.440 Well, so there's so much in there that's pathological beyond comprehension that it's
00:43:07.260 difficult to even know where to start.
00:43:09.200 It's like, who the hell thinks this is a good idea?
00:43:13.080 And why?
00:43:14.780 Like, who are these people that feel that they're in a position to determine the face
00:43:20.340 of hyperintelligence, of computational hyperintelligence?
00:43:25.820 And who is it that thinks that that is something that should be, like, regulated by a closed government
00:43:32.140 corporate cartel?
00:43:33.700 Like, I don't understand that at all, Marc.
00:43:37.580 I don't know if I've ever heard anybody detail out to me something that is so blatantly both
00:43:42.840 malevolent and insane simultaneously.
00:43:45.400 So, like, how do you account for that?
00:43:49.100 I mean, I know it shocked you.
00:43:50.260 I know that's why you've been talking about it recently.
00:43:52.420 Now, it should shock you because it's just beyond comprehension to me that this sort of
00:43:57.460 thing can go on.
00:44:00.280 And thank God you're bringing it to light.
00:44:02.160 But, like, how do you make sense of this?
00:44:04.080 What's your understanding of it?
00:44:06.320 Well, look, it's the same people who think that they should control the education system.
00:44:10.720 Same people who think they should control the universities.
00:44:12.800 Same people who think they should control social media censorship.
00:44:15.740 You know, the same people who think that they should permanently control the government
00:44:18.260 and government bureaucracies.
00:44:19.560 It's this, you know, pick whatever term you want.
00:44:22.280 And it's this elite class, ruling class, you know, oligarchic class.
00:44:25.100 Worshippers of power.
00:44:26.900 Remember, it's one ring of power that binds all the evil rings.
00:44:31.080 Yeah, well, it's worshipers of power.
00:44:33.100 And the damn postmodernists, you know, when they proclaimed that power was the only game
00:44:37.280 in town, a huge part of that was both a confession and an ambition.
00:44:41.900 Right?
00:44:42.200 If power is the only game in town, then why not be the most effective power player?
00:44:46.500 The reason I'm so sensitized to this is because this is exactly what I saw happen
00:44:50.640 with social media censorship.
00:44:52.300 Like, I sat in the room and watched the construction of the entire social media censorship edifice
00:44:57.400 every step of the way, going all the way back to the—I was in the original discussions
00:45:00.960 about what defines concepts like hate speech and misinformation.
00:45:04.160 Like, I was in those meetings, and I saw the construction of the entire private sector
00:45:08.100 edifice that resulted in the censorship regime that we all experienced.
00:45:12.400 And I was close into the—you know, there's a whole group at Stanford University that became
00:45:19.020 a censorship bureau that was working on behalf of the government.
00:45:21.220 I know those people.
00:45:22.220 One of the people who ran that used to work for me.
00:45:24.880 I know exactly who those people are.
00:45:26.980 I know exactly how that program worked.
00:45:28.420 I knew the people in government, you know, who were running things like this, you know,
00:45:32.160 the so-called Global Engagement Center and all these different arms of the, you know,
00:45:35.580 the government that had been imposing social media censorship.
00:45:37.620 So, you know, this is this entire complex that we kind of saw unspooled in the Twitter files,
00:45:43.440 and then we've seen in, you know, the investigative reporting by people like, you know, Mike Benz
00:45:46.820 and Mike Shellenberger and these other guys.
00:45:48.940 Like, I saw that whole thing get built.
00:45:51.460 And I, you know, over the course of, you know, basically 12 years, I saw that whole thing get built.
00:45:55.900 And then, of course, I've been part of Elon's takeover of Twitter.
00:45:58.820 And so I've seen the—you know, what it takes to try to unwind that with what he's doing at X.
00:46:02.920 And so I feel like I saw the first movie, right, and then AI, you know, AI is a much more important—
00:46:09.220 as I said, AI is a much more important topic, but AI is very clearly the sequel to that.
00:46:12.600 And what I'm seeing is basically the exact same pattern that I saw with that.
00:46:16.440 And the people who were able to do that for social media for a long time are the same kind of people,
00:46:21.300 and in many cases literally the same people who are now trying to do that in AI.
00:46:25.100 And so I—like, at this point, I feel like we've been warned.
00:46:28.000 Like, we've seen the first movie.
00:46:29.460 We've been warned.
00:46:30.080 We've seen how bad it can get.
00:46:31.780 We need to make sure it doesn't happen again.
00:46:33.560 And, yeah, we need, you know, those of us in a position to be able to do something about it
00:46:37.260 need to talk about it and need to try to prevent it.
00:46:39.260 Well, so at ARC, we're trying to formulate a set of policies that I think strike to the heart of the matter.
00:46:45.700 And the heart of the matter is what story should orient us as we move forward into the future.
00:46:52.000 And we're going to discover that by looking at the great stories of the past
00:46:55.780 and extracting out their genuine essence.
00:46:58.300 And I think the ethos of voluntary self-sacrifice is the right foundation stone.
00:47:03.940 And I think that the proposition that society is built on sacrifice is self-evident once you understand it.
00:47:12.600 Because to be a social creature, you have to give up individual supremacy.
00:47:17.220 You trade it in for the benefits of social being.
00:47:20.220 And your attention is a sacrificial process, too.
00:47:22.940 Because there's one thing you attend to at a time and a trillion other things that you sacrifice that you could be attending to.
00:47:31.140 Now, I think we do understand, we're starting to understand the basics of the technical ethos of the sacrificial,
00:47:40.100 of the, what would you say, of the sacrificial foundation.
00:47:43.680 It's something like that.
00:47:44.740 And I think we understand that at ARC.
00:47:46.420 And we have some principles that we're trying to use to govern the genesis of this organization,
00:47:52.840 which I think will become the go-to, and maybe already has, the go-to conference,
00:47:58.600 at least for people who are interested in the same sort of ideas that you're putting forward.
00:48:03.160 We had a very successful conference last year.
00:48:05.240 And the one that's coming up in February looks like it's going to be larger and more successful.
00:48:11.480 We have spinoffs in Australia and so forth.
00:48:13.640 And so part of the emphasis there is that we want to put forward a vision that's invitational.
00:48:22.460 And there's a policy proposition, there's a proposition with regards to policy that lies at the bottom of that,
00:48:29.880 which is that if I can't invite you on board to go in the direction that I'm proposing,
00:48:34.880 then there's something wrong with my proposition, right?
00:48:37.920 If I have to use force, if I have to use compulsion, then that's indicative of a fundamental flaw in my conceptualization.
00:48:45.700 Now, there might be some exceptions for, like, overtly criminal and malevolent types,
00:48:50.420 because they're difficult to pull into the game.
00:48:52.800 But if the policy requires force rather than invitational compliance, there's something wrong with it.
00:48:58.180 And so what we're trying to do, and I see, like, very close parallels to the project that you're engaged in,
00:49:03.760 is to formulate a vision of the future that's so...
00:50:08.460 What would you say?
00:50:09.420 So self-evidently positive that people would have to strive to find a reason not to be enthusiastically on board.
00:50:18.720 And I don't think you have to be a naive optimist to formulate a vision like that.
00:50:22.860 We know perfectly well that the world is a far more abundant place than the Malthusian pessimists could have possibly imagined back in the 1960s
00:50:31.240 when they were agitating madly for their propositions of scarcity and overpopulation.
00:50:38.040 And so, okay, so what's the conclusion to that?
00:50:41.140 Well, the conclusion in part is that this AI problem needs to be addressed, you know.
00:50:45.040 And I've built some AI systems that are founded on the ancient principles, let's say, that do, in fact, govern free societies.
00:50:57.520 And they're not woke.
00:50:59.860 They can interpret dreams, for example, quite accurately, which is very interesting and remarkable to see.
00:51:05.040 And so they're much more weighted towards something like the golden thread that runs through the traditional humanist enterprise stretching back 2,000 or 3,000 years.
00:51:18.480 And maybe there are 200 core texts in that enterprise that used to constitute the center of something like a Great Books program,
00:51:30.160 the Great Books program, which is still running at the University of Chicago.
00:51:33.000 Now, that's not sufficient because, as you pointed out, well, there's all this technological progress that has been made in the last 100 years.
00:51:40.500 But there's something about it that's central and core.
00:51:43.220 And I think we can use the AI systems, actually, to untangle what the core idea sets are that have underpinned free and productive, abundant, voluntary societies.
00:51:58.360 You know, it's something like the set of propositions that make for an iterating voluntary game that's self-improving.
00:52:07.840 That's a very constrained set of pathways.
00:52:11.720 And there's something in that that I think attracts people as a universally acceptable ethos.
00:52:18.860 It's the ethos on which a successful marriage would be founded or a successful friendship or a successful business partnership,
00:52:25.620 where all the participants are enthusiastically on board without compulsion.
00:52:31.800 And then Jean Piaget, the developmental psychologist, had mapped out the evolution of systems like that in childhood play.
00:52:40.480 And so he got an awful long way—he was trying to reconcile the difference between science and religion in his investigations of the development of children's structures of knowledge.
00:52:49.060 And he got a long way in laying out the foundations of that ethos.
00:52:52.540 And so did the comparative mythologists like Mircea Eliade, who wrote some brilliant books on—well, I think they're sort of like the equivalent of early large language models.
00:53:04.320 That's how it looks to me now.
00:53:06.260 Eliade was very good at picking out the deep patterns of narrative commonality that united religious—major religious systems across multiple cultures.
00:53:16.600 That was all thrown out, by the way. That was all thrown out by the postmodern literary theorists.
00:53:22.880 They just tossed all that out of the academy.
00:53:25.600 And that was a big mistake. They turned to Foucault instead.
00:53:29.300 It was a cataclysmic mistake.
00:53:31.280 And it certainly ushered in this era of domination by power narratives, which is underlying the sorts of phenomena that you're describing that are so appalling.
00:53:40.880 So what's happened to you as a consequence of starting to speak out about this?
00:53:45.620 And why did you start to speak out?
00:53:47.980 And how do you—you said you were involved in this.
00:53:51.180 And so what's the difference between being involved and being complicit?
00:53:55.520 I mean, I know people learn—well, these are—well, these are complicated problems and people learn.
00:54:00.320 But, like, what's—like, why are you speaking out?
00:54:05.060 How are people responding to that?
00:54:06.820 And how do you see your role in this as it unfolded over the last, say, 15 years?
00:54:11.680 Yeah, so complicated question.
00:54:13.720 And I'll start by saying I claim—I claim no particular bravery, so I don't claim any particular moral credit on this.
00:54:21.520 I'll start by saying there's this thing you'll hear about sometimes, this concept of so-called f*** you money.
00:54:27.240 And so, you know, right, there's this—it's sort of like, okay, if people are successful, you make a certain amount of money, now you can tell everybody f*** you, you can say whatever you want.
00:54:33.960 And I will just tell you, my observation is that's actually not true.
00:54:37.700 Yeah, right. Definitely not.
00:54:39.980 And the reason that's not true is because the people who tend to—the people who prosper in our society tend to do so because they're becoming responsible for more and more things.
00:54:48.620 And specifically, they're becoming responsible for more and more people.
00:54:51.200 And so, one of the things I would observe about myself and observe about a lot of my peers is even as we became more and more, you know, bothered and concerned and ultimately very worried about some of these things is as that was happening, we were taking on greater and greater responsibilities for our employees and for all the companies that we're involved in, right, and for all the shareholders of all of our companies.
00:55:09.900 And so, I think that's part of—and, you know, you could say, you know, this sort of this endless, you know, sort of question between kind of, you know, absolute, you know, sort of absolute commands of morality versus the, you know, real-world compromises that you make to try to, you know, function in society.
00:55:24.520 You know, I would say I was just as subject to that inherent conflict as anybody else.
00:55:29.260 I was in the room for a lot of these decisions.
00:55:31.820 I saw it every step of the way.
00:55:33.360 In some cases, I felt right up front that something was going wrong.
00:55:37.280 I mean, I was in the original discussion for one of these, you know, companies on the definition of hate speech, right?
00:55:41.780 And you can imagine how that, you know, discussion goes.
00:55:44.540 You know exactly how the discussion went, but I'll just tell you, it's like, well, hate speech is anything that makes people uncomfortable, right?
00:55:58.740 So then I'm like, well, you know, that comment you just made makes me uncomfortable, and so therefore that must be hate speech.
00:55:58.740 And then, you know, they look at me like I've grown a third eye, and I'm like, okay, that argument's not going to work.
00:56:02.920 And then they're like, well, Mark, surely you agree that the N-word makes people uncomfortable.
00:56:06.280 And I'm like, yes, I agree with that.
00:56:07.760 If our hate speech policy is people don't get to use the N-word, I'm okay with that as long as people can say it, you know.
00:56:12.340 But, of course, it doesn't stop there, and it slides into what we then saw happen.
00:56:15.720 So I saw that happen.
00:56:17.320 The misinformation thing, same thing.
00:56:18.820 The misinformation thing, actually, on social media is a fascinating and horrifying thing that played out, which is that it started out as an effort to attack a specific form of spam.
00:56:29.640 So there were these Macedonian bot farms that were literally creating what's called click spam or sort of ad fraud on social media.
00:56:38.900 They were creating literally fake news stories like, you know, the classic one was the pope has died.
00:56:43.460 And it's like, no, the pope has not died.
00:56:45.160 That is absolutely misinformation.
00:56:46.480 But the reason that this bot farm puts that story out is because when people click on it, they make money on the ads.
00:56:51.660 And that's clearly a bad thing, and that's misinformation, and clearly we need to stop that.
00:56:56.500 And so the mechanism was built to stop that kind of spam.
00:56:59.000 But then after the election, you know, we discovered that anybody who was pro-Donald Trump was presumptively, you know, an agent of Vladimir Putin, and then all of a sudden that became misinformation, right?
00:57:08.400 And so the engine that was intended to be built for spam then all of a sudden applied to politics, and then off and away they went.
00:57:14.340 And then everything was, you know, everything was misinformation, culminating in objections to three years of COVID lockdowns becoming misinformation, right?
00:57:20.800 So I saw that entire thing unspool.
00:57:24.620 I saw all the pressures brought to bear on these companies.
00:57:26.580 I saw the people who went up against this get wrecked.
00:57:29.200 I saw these companies try to develop all these tradeoffs.
00:57:32.300 You know, obviously, you know, I would claim for myself that I tried to argue this, you know, kind of every step of the way.
00:57:37.760 And by the way, I'm not the only one who was concerned about this, and I'll just – I think we should give Mark Zuckerberg a little bit of credit on this on one specific point, which is, you may recall, he gave a speech in 2019 at Georgetown, a very principled defense of free speech from first principles.
00:57:54.400 And was – you know, he at that point was trying very hard to kind of maintain the line on this.
00:57:59.580 Now, 2020, everything went, like, completely nuts, and then the Biden administration came in and the government came in, and they really lowered the boom.
00:58:05.960 And so things went very bad after that.
00:58:07.900 But, you know, even Mark, who a lot of people get very mad at on these things, like, he was trying in many ways to hold on to these things.
00:58:15.220 Anyway, it unfolded the way that it did.
00:58:16.720 I don't claim any particular courage.
00:58:18.460 I will tell you, basically, starting in 2022, I saw some leaders in our industry really start to step up.
00:58:25.020 And one that I would give huge credit to is Brian Armstrong, who's the CEO of Coinbase, which is a company that we're involved in.
00:58:32.200 And you may recall, he's the guy who wrote basically a manifesto, and he said, these companies need to be devoted to their missions, not every other mission in society.
00:58:40.480 Right, right, right.
00:58:41.580 Right.
00:58:41.880 And so he declared, like, there's going to be a new way to run these companies.
00:58:44.360 We're not going to have all the politics.
00:58:45.500 We're not going to have the whole bring your whole self to work thing that, you know, we're not going to have all the internal corrosion.
00:58:49.780 We're going to go back.
00:58:50.560 You know, we're going to have our mission, and then we're going to focus on that.
00:58:53.380 We're not going to take on, you know, the world's ills.
00:58:56.820 And then he did this thing where he actually got – he actually purged his company of the activist class that we talked about earlier.
00:59:02.760 And the way that he did that was with a voluntary buyout where he said, if you're not on board with working at a nonpolitical, nonideological company that's focused on its own mission, not every other mission, then, you know, I will pay you money, you know, to go work someplace where you'll be able to fully exercise your politics.
00:59:17.820 There are a bunch of other CEOs, you know, that have been basically following in Brian's footsteps more quietly, but they've basically been doing the same thing.
00:59:26.740 And a lot of these companies have turned the corner on this now, and they're starting to – you know, they're working these people out.
00:59:31.000 And then, you know, quite frankly, you know, the big event is I think this election and, you know, people have all kinds of, you know, positive, negative takes on Trump, and, you know, this gets into lots and lots of political issues.
00:59:40.820 But I think that the Trump victory being what it was and being not just Trump winning again, but also Trump winning the popular vote and also simultaneously the House and the Senate, it feels like the ice has cracked.
00:59:53.160 You know, it's like maybe the pressure for the ice to crack was building over two years, but it feels like as of November 6th, it feels like something really fundamental changed, where all of a sudden people have become basically willing to talk about the things they weren't willing to talk about before.
01:00:07.020 Okay, let's go back to your manifesto.
01:00:09.140 So, I wanted to highlight a couple of things in relationship to that.
01:00:15.340 I had some questions for you, too.
01:00:16.860 Tell me, to begin with, if you would, why you wrote this manifesto.
01:00:21.820 Maybe let everybody know about it first, why you wrote it and what effect it's had, and then I'll go through it step by step, at least to some degree, and I can let you know what ideas we've been developing with the Alliance for Responsible Citizenship, and we can play with that a little bit.
01:00:41.040 So, what I experienced, I'm 30 years in now in the tech industry, you know, in the U.S. and Silicon Valley, and what I experienced between roughly, you know, 1994, when I entered, through to about 2012, was sort of one way in which everything operated and one set of beliefs everybody had.
01:01:04.300 And then, basically, this incredible discontinuous change that happened between, call it 2012 and 2014, that then cascaded into, you know, what you might describe as, you know, some degree of insanity over the last decade.
01:01:17.700 And, of course, you've talked a lot about a lot of aspects of that insanity.
01:01:23.520 But the way I would describe it is, for the first, you know, 15, 20 years of my career, there was what I refer to sometimes as the deal with a capital D, or you might call it the compact, or maybe just the universal belief system, which was effectively everybody I knew in tech was a, you know, social liberal progressive in good standing.
01:01:43.540 But, you know, operating in the era of Clinton-Gore, and then, you know, later on through Bush and into Obama's first term, it was viewed that to be a social progressive in good standing was completely compatible with being a capitalist, completely compatible with being an entrepreneur and a business person, completely compatible with succeeding in business.
01:02:02.760 And so, the basic deal was, you have the, you know, exact same political and social beliefs as everybody you know.
01:02:09.760 You have the exact same social and political beliefs as the New York Times, you know, every day.
01:02:14.640 And their beliefs change over time, but, you know, you update yours to stay current.
01:02:17.980 And everybody around you believes the same thing.
01:02:19.680 The dinner table conversations are, everybody's in 100% agreement on everything at all times.
01:02:25.180 But then you go succeed in business, and you build your company, and you build products, and you build new technology, and if your company succeeds, it goes public, and people become wealthy.
01:02:34.700 And then you square the circle of sort of, you know, sort of social progressivism and entrepreneurial success and business success.
01:02:41.560 You square the circle with philanthropy.
01:02:43.760 And so, you donate the money to good social causes, and then, you know, someday your obituary says he was both a successful business person and a great human being.
01:03:42.600 And basically what I experienced is that that deal broke down between, you know, 2012, 2014, 2015, and then sort of imploded spectacularly in 2017.
01:03:56.040 And ever since, there has been no way to square that circle, which is if you are successful in business, in tech, in entrepreneurship, if you become, you know, successful, you are de facto evil.
01:04:06.960 And you can protest that you're actually a good person, but you are presumed to be de facto evil.
01:04:11.520 And by the way, furthermore, philanthropy will no longer wash your sins.
01:04:15.840 And this was a massive change, and, you know, this is still playing out.
01:04:18.540 But philanthropy will no longer wash your sins because, the belief goes, philanthropy is an unacceptable diversion of resources from the proper way they should be deployed, which is through the state, right, and so a private enterprise form of philanthropy is now considered de facto bad.
01:04:36.620 And so everybody in my world basically had a decision to make, which was did they basically go sharply to the left on not just social issues but also economic issues?
01:04:46.900 And did they become, you know, starkly anti-business, anti-tech, you know, essentially self-hating in order to stay in the good graces of what happened on that side?
01:04:56.820 Or, you know, did they have to, you know, do what Peter Thiel did early on and, you know, go way to the right and basically just punch out and declare that, you know, I'm completely out of progressivism.
01:05:05.980 I'm completely finished with this, and I'm going to go a completely different direction.
01:05:08.440 And obviously that culminated in, you know, that was part of the phenomenon that culminated in Trump's first election.
01:05:14.240 And so anyway, long story short, the manifesto that I wrote is an attempt to kind of bring things back to, you know, what I consider to be a more sensible way to think and operate.
01:05:23.880 You know, a big tent social and political umbrella, but, you know, where tech innovation is actually still good, business is still good, capitalism is still good, technological progress is still good, the people who work on these things actually are still good, and that actually we can be proud of what we do.
01:05:38.500 You said that something changed quite radically in 2017.
01:05:42.420 I'd like you to delve a little bit more into the breakdown of this deal, like your claim there was that for a good while, center-left positions politically, let's say, and philosophically were compatible with the tech revolution and with the big business side of the tech revolution.
01:06:02.540 But you pointed to a transformation across time that really became unmistakable by 2017.
01:06:12.840 Why 2017 as a year, and what is it that you think changed?
01:06:18.780 You know, you painted a broad-scale picture of this transformation and also pointed to the fact that it was no longer possible to be an economic capitalist, to be a free market guy,
01:06:31.540 and to proclaim allegiance to the progressive ideals.
01:06:35.960 That became impossible.
01:06:37.480 And in 2017, what do you think happened?
01:06:40.160 How do you understand that?
01:06:42.040 Yeah, so different people, of course, have different perspectives on this, but I'll tell you what I experienced.
01:06:46.080 And I think in retrospect what happened is Silicon Valley experienced this before a lot of other places in the country and before a lot of other, you know, fields of business.
01:06:54.220 And so I have many friends in other areas of business who live and work in other places where I would describe to them what was happening in 2012 or 2014 or 2016.
01:07:02.420 And they would look at me like I'm crazy.
01:07:03.840 And I'm like, no, I'm describing what's actually happening on the ground here.
01:07:07.960 And then, you know, three years later, they would tell me, oh, it's also happening in Hollywood or it's also happening in finance or it's also happening in, you know, these other industries.
01:07:16.340 So in retrospect, I think I had a front row seat to this just because Silicon Valley was, you know, I've been using this term first in.
01:07:22.360 Silicon Valley was first in.
01:07:23.420 Like Silicon Valley was the industry that went the hardest for this transformation up front.
01:07:28.120 And so what we experienced in Silicon Valley – and then, you know, the nature of my work, you know, over this entire time period, I've been a venture capitalist and an investor.
01:07:35.360 And so the nature of my work is I've been exposed to a large number of companies all at the same time, some very small.
01:07:41.380 And then, by the way, also some very large.
01:07:43.280 So, for example, I've been on the Facebook board of directors this entire arc, right?
01:07:46.740 And a lot of what I'm describing, you can actually see through just the history of just, you know, the one company, Facebook, which we can talk about.
01:07:54.860 But anyway, so I think I basically saw the vanguard of the movement up close.
01:07:58.580 And, you know, essentially what I saw was it was really 2012.
01:08:01.840 It was the beginning of the second Obama term.
01:08:03.740 And it was sort of the aftermath of the global financial crisis.
01:08:07.240 And so it was some combination of those two things, right?
01:08:10.500 So the global financial crisis hits in 2008, Occupy Wall Street takes off, but it's this kind of fringe thing.
01:08:16.140 You know, the sort of – you know, Bernie Sanders starts to activate as a national candidate.
01:08:19.920 Some of these, you know, other politicians on the sort of further to the left start to become prominent, start to take over the Democratic Party.
01:08:26.200 And then, you know, the economy caved in, right?
01:08:29.060 So we went through a severe recession between, call it 2009 to 2011.
01:08:32.680 2012, the economy was coming back.
01:08:35.600 People maybe weren't worried about being fired anymore, right?
01:08:38.400 If people think they're going to get fired in a recession, they generally don't act out at a company.
01:08:42.400 But if they think their jobs are secure in an economic boom, you know, they can start to become activists.
01:08:46.500 And so the sort of employee activist movement started around 2012.
01:08:50.260 And then the Obama second term, you know, I would say the progressives in the Democratic Party kind of took more control, you know, kind of starting around that time.
01:08:56.700 And the Obama administration itself kind of turned to the left.
01:08:59.300 And so you started to get this kind of activated political energy, this sort of – you know, the activist movements in these companies where you had people who, you know, the year before had been a quiet, you know, web designer working in their cubicle.
01:09:10.940 And then all of a sudden, they're a social and political revolutionary inside their own company.
01:09:15.060 And then, by the way, the shareholders activated, which was really interesting.
01:09:18.700 Like this is when Larry Fink at BlackRock decided he was going to save the world.
01:09:22.360 And then the press activated.
01:09:24.140 And so all of a sudden, you know, the same tech reporters who had been very happy covering tech and talking about exciting new ideas all of a sudden became, you know, kind of very accusatory and started to condemn the industry.
01:09:35.900 So that started to pop around 2012.
01:09:38.640 And then what I saw is you might even describe it as like a controlled skid that became an uncontrolled skid, which was that energy built up in tech between 2012 and 2015.
01:09:48.120 And then, you know, basically what happened in rapid succession was Trump's nomination and then Trump's election, his victory in 2016.
01:09:55.540 And I described both of those events as like 10xing of the political energy in this system.
01:10:01.160 And so, you know, both of those events really activated, you know, very strong antibody responses, you know, which, as you know, culminated in like mass protests in the streets right after the 2016 election.
01:10:10.280 And then, of course, the narrative then became, you know, crystallized, which is there are the forces of darkness represented by Trump, represented by the right, represented by capitalism, represented by tech.
01:10:20.100 And there are the forces of light represented by wokeness and, you know, the racial reckoning and, you know, the George Floyd protests and so forth.
01:10:26.780 And it, you know, became this, you know, very, very, very clear litmus test.
01:10:29.800 And so the pattern basically locked in hard in 2017 and then continued to escalate from there.
01:10:37.320 So in your manifesto, you list some of these ideas that were pathological, let's say, that emerged on the left.
01:10:48.900 And I just want to find the, well, you, for example, you say, technology doesn't care about your ethnicity, race, religion, national origin, gender, sexuality, political views, height, weight, etc.
01:11:01.920 Listing out the dimensions of hypothetical oppression that the intersectionalist woke mob stresses continually.
01:11:10.400 Now, you, you point your finger at that, obviously, because you feel that something went seriously wrong with regard to the prioritization of those dimensions of difference.
01:11:24.140 And that's part of the movement of diversity.
01:11:26.200 That's part of the movement of equity and inclusivity.
01:11:30.640 Let me just find this other.
01:11:32.480 Yes, here we go.
01:11:33.180 So our present society has been subjected to a mass demoralization campaign for six decades against technology and against life under varying names like existential risk, sustainability, ESG, sustainable development goals, social responsibility, stakeholder capitalism, precautionary principle, trust and safety, tech ethics, risk management, degrowth.
01:11:59.300 The demoralization campaign is based on bad ideas of the past, zombie ideas, many derived from communism, disasters then and now that have refused to die.
01:12:10.060 And that's in the part of your manifesto that is subtitled The Enemy.
01:12:15.600 That's an enemy, and the enemy you're characterizing there is a system of ideas.
01:12:20.000 And I guess that would be the system of woke ideas that presumes, and correct me if I get this wrong, that presumes that we're fundamentally motivated by power, that anybody who has a position of authority actually has a position of power.
01:12:41.400 The best way to read positions of power is from the perspective of a narrative that's basically predicated on the hypothesis of oppressor and oppressed, and that there are multiple dimensions of oppression that need to be called out and rectified.
01:12:59.860 And the DEI movement is part of that.
01:13:01.920 And so you point to the fact that these are zombie ideas left over, let's say, from the communist enterprise of the early and mid-20th century, and that seems to me precisely appropriate.
01:13:16.860 And you said you thought those ideas emerged on the corporate front in a damaging way, first in big tech.
01:13:23.020 You know, I probably saw that most particularly, evidence of that most particularly in relationship to the scandal that surrounded James Damore, because that was really cardinal for me, because, like, I spent a fair bit of time talking to James, and my impression of him was that he was just an engineer.
01:13:42.140 And I don't mean that in any disparaging sense.
01:13:45.180 He thought like an engineer, and he went to a DEI meeting, and they asked him for feedback on what he had observed and heard, and James, being an engineer, thought that they actually wanted feedback, you know, because he didn't have the social skills to understand that he was supposed to be participating in an elaborate lie.
01:14:05.300 And so he provided them with feedback about their claims, especially with regards to gender differences, and James actually nailed it pretty precisely for someone who wasn't a research psychologist.
01:14:16.840 He had summarized the literature on gender differences, for example, extremely accurately, and they pilloried him.
01:14:25.440 And I thought, that's really bad, because it means that, you know, Google wouldn't stand behind its own engineers when he was telling the truth.
01:14:33.020 And there was every attempt made to destroy his career.
01:14:36.740 Now, why do you think that whatever happened affected tech first?
01:14:42.480 And what did you see happening that you then saw happening in other corporations?
01:14:48.480 Yeah, so why did it happen in tech first?
01:14:50.720 So a couple things.
01:14:51.240 So one is tech is just, I would say, extremely connected into the universities.
01:14:55.780 And so almost everything we do flows from the computer science departments and the engineering departments at major U.S. research universities.
01:15:04.000 And, you know, we hire kids from, you know, new graduates all the time.
01:15:07.400 And so we just have a very, very tight connection.
01:15:09.280 And we work with university professors and research groups all the time.
01:15:13.400 And so there's just a direct connection there.
01:15:16.280 And so, you know, it's like if an ideological, pathological virus is going to escape the university and jump into the civilian population, it'll hit tech first, which is what happened.
01:15:28.020 Or maybe, you know, tech and media first.
01:15:31.000 So that's one.
01:15:32.100 And then two, you know, two is, I think, the sort of psychological sorting that happens when kids decide what profession to go into.
01:15:40.340 And, you know, what we get are the very high openness people.
01:15:43.180 You know, the highest openness people come out of college, you know, who are also high IQ and ambitious.
01:15:48.900 And they basically, you know, they go into tech, they go into creative industries, or they go into media, right?
01:15:52.420 They're sort of, you know, where they sort into.
01:15:54.060 And so we also get the most open, and by the way, also ambitious, right?
01:15:59.120 We get the, you know, the ambitious, driven, you know, as you say, high industriousness ones as well.
01:16:04.740 And then, you know, that's the formula for a highly effective activist, right?
01:16:07.940 And so we got the full load of that.
01:16:11.720 And then, look, you know, this movement that we now call wokeness, it hijacked what I would, you know, call sort of, at the time, bog-standard progressivism.
01:16:20.440 Which is, you know, of course you want to be diverse, and of course you want to be inclusive, and of course you want everybody to feel included, and of course you want to be kind, and of course you want to be fair, and of course you want a just society.
01:16:30.680 And, you know, that was part of the, you know, just moderate belief set that everybody in my world had, you know, for the preceding certainly 20 years.
01:16:38.140 And so at first it just felt like, oh, this is more of what we're used to, right?
01:16:41.640 This is, you know, of course this is what we want.
01:16:44.300 But, you know, it turned out what we were dealing with was something that was far more aggressive, right?
01:16:48.040 You know, a much more aggressive movement.
01:16:49.940 And then this activism phenomenon.
01:16:52.880 And then this became a very practical issue for these companies, like on a day-to-day basis.
01:16:57.080 And so you mentioned the Damore incident.
01:16:58.520 So I talked to executives at Google while that was going down, because that was so confusing for me at the time.
01:17:03.660 And the reason they acted on him the way they did and fired him and ostracized him and did all the rest of it is because they thought they were hours away from actual physical riots on the Google campus.
01:17:13.920 Like they thought employee mobs were going to try to burn the place down physically, right?
01:17:18.600 And that was such, at the time, like that was such an aberrant, you know, phenomenon, expectation.
01:17:25.340 There were other companies, by the way, at the same time that were having all-hands meetings that were completely unlike anything that we'd ever seen before that you could only compare to struggle sessions.
01:17:35.360 You know, there's the famous – the Netflix adaptation of Three-Body Problem starts with this very vivid recreation of a Maoist-era, you know, communist Chinese struggle session, right?
01:17:46.640 Where the students are on stage and, you know, the disgraced, you know, professor is on stage confessing his sins and, you know, then they beat him to death.
01:17:53.840 And, you know, the inflamed passions of the young, ideologically, you know, consumed crowd that is completely convinced that they're on the side of justice and morality.
01:18:02.860 You know, fortunately, nobody got beaten to death, you know, at these companies on stage at an all-hands meeting.
01:18:08.500 But you started to see that same level of activated energy, that same level of passion.
01:18:12.600 You started to see hysterics, you know, people crying and screaming in the audience.
01:18:16.380 And so, you know, these companies knew they were at risk from their employees up to and including the risk of actual physical riots.
01:18:22.820 And that at the time, of course, was like a completely bizarre thing.
01:18:26.420 And we, you know, we at the time had no idea what we were dealing with.
01:18:29.640 But it was – in retrospect, it was through events like what James Damore went through that we ultimately did figure out what this was.
01:18:35.160 Okay, okay.
01:18:35.980 So let me ask you a question about that.
01:18:38.080 You know, it's a management question, I guess.
01:18:42.620 So I had some trouble at Penguin Random House a couple of years ago after writing a couple of bestsellers for them.
01:18:53.320 I was contracted with one of their subdivisions, and they had a bit of an employee rebellion that would be perhaps reminiscent of the sort of thing that you're referring to.
01:19:03.540 And they kowtowed to them, and I ended up switching to a different subdivision.
01:19:09.220 Now, it really made no material difference to me.
01:19:11.800 And I was just as happy to be with a subdivision where everybody in the company, visible and invisible, was working to make what I was doing with them successful,
01:19:22.980 rather than scuttling it invisibly from behind the scenes.
01:19:27.020 But my sense then was, why don't you just fire these people?
01:19:32.760 And so, and I'm dead serious about that.
01:19:35.140 It's like, first of all, I'll give you an example.
01:19:37.300 So we just set up this company, Peterson Academy Online, and we have 40,000 students now and about 30 professors.
01:19:47.260 And we're doing what we can to bring extremely high quality, elite university level education to people everywhere for virtually no money.
01:19:57.300 And that's working like a charm.
01:19:58.980 Now, we set up a social media platform inside that so that people could interact like they do on Twitter or Facebook or Instagram,
01:20:10.140 because we try to integrate the best features of those networks.
01:20:13.940 But we wanted to make sure that it was a civilized place.
01:20:18.060 And so, the fact that people have to pay for access to it helps that a lot, right?
01:20:24.220 Because it keeps out the trolls and the bots and the bad actors who can multiply accounts beyond comprehension for no money.
01:20:32.160 And so, the mere price of entry helps.
01:20:34.640 But we also watched, and if people misbehaved, we did something about it.
01:20:40.320 And we kicked four people out of 40,000, and one of them we put on probation.
01:20:45.620 And that was all we had to do.
01:20:48.640 You know, there was goodwill and everybody was behaving properly.
01:20:51.520 And like I said, there was a cost to entry.
01:20:54.400 But it didn't take a lot of discipline.
01:20:57.940 It didn't take a lot of disciplinary action to make an awful lot of difference with regard to behavior.
01:21:03.540 And so, you know, I can understand that Google might have been apprehensive about activating the activists within their confines.
01:21:11.420 But sacrificing James Damore to the woke mob because he told the truth is not a good move forward.
01:21:17.600 And I just don't understand at all.
01:21:19.540 You see, and the same thing happened at Penguin, at Penguin Random House.
01:21:22.760 It's like, you could just fire these people.
01:21:25.480 Like, there were people there who wanted to not publish a book of mine that they hadn't even read.
01:21:32.340 You know, they weren't people who deserved to be working at what's arguably the greatest publishing house in the world.
01:21:38.840 So, you alluded to it a little bit.
01:21:43.020 You said that people were taken by surprise, you know, and fair enough.
01:21:46.640 And it was the case that there was a radical transformation in the university environment somewhere between 2012 and 2016,
01:21:54.120 where all these terrible, woke, quasi-communist, neo-Marxist ideas emerged and became dominant very quickly.
01:22:01.820 But I'm still – why do you think that that was the pattern of decision that was being made instead of taking appropriate disciplinary action
01:22:10.880 and just ridding the companies of people who were going to cause trouble?
01:22:14.820 Yeah, so there's a bunch of layers to it in retrospect.
01:22:18.320 And let me say that what you described is what's happening now.
01:22:21.820 So, in the last two years, a lot of companies actually are – at long last, they are firing activists.
01:22:25.860 And we can talk about that.
01:22:27.460 And so, I think the tide is turning on that a bit.
01:22:29.620 But going back in time, going back in time between, you know, 2012 and, let's say, 2022.
01:22:35.960 So, like a full, you know, 10-year stretch where what you're describing didn't happen.
01:22:40.000 I think there's layers.
01:22:40.980 So, one is, as I said, just people didn't understand it.
01:22:43.360 I think, quite frankly, number two, a lot of people in charge agreed with it, at least to start, right?
01:22:48.240 And so, they saw people who had what appeared to be the same political, ideological leanings as they did and were just simply more passionate about them.
01:22:54.800 And so, they thought they were on the same side.
01:22:57.760 They agreed with it.
01:23:01.180 And then at some point, they discovered that they were dealing with something different, you know, maybe a more pure strain or a more fundamentalist, you know, approach.
01:23:09.340 At that point, of course, they became afraid, right?
01:23:12.860 And so, they were afraid of being lit on fire themselves.
01:23:15.880 And by the way, I would describe, you know, I think tech is starting to work its way out of this.
01:23:19.200 I think Hollywood is still not, and my friends in Hollywood, when I talk to them.
01:23:22.860 Oh, not at all.
01:23:24.040 Not at all.
01:23:24.460 When I talk to people who are in serious positions of responsibility in Hollywood, you know, after a couple drinks and, you know, in sort of a zone of privacy, you know, it's pretty frequently they'll say, look, I just can't.
01:23:35.160 It's still too scary.
01:23:36.240 Like, I can't go up against this because it'll ruin my career.
01:23:38.440 So, you know, there is this group frenzy, cancellation, you know, ostracizing, career destruction thing.
01:23:45.220 That's real.
01:23:46.580 But let me highlight two other things.
01:23:48.720 So, one is it wasn't just the employees.
01:23:51.820 It was the employees.
01:23:53.200 It was a substantial percentage of the executive team.
01:23:56.300 It was also the board of directors in a lot of cases.
01:23:59.920 And so, you'd have politically activated board members.
01:24:03.000 And some of these companies still have that, by the way.
01:24:05.860 It was also the shareholders.
01:24:07.360 And you would think that investors in a capitalist enterprise would only be concerned with economic return.
01:24:13.940 And it turns out that's not true because you have this intermediate layer of institutions like BlackRock where, you know, they're aggregating up lots of individual shareholders.
01:24:22.100 And then, you know, the managers of the intermediary can exercise their own politics, you know, using the voting power of aggregated small shareholder holdings.
01:24:30.820 And so, you had the shareholders coming at them.
01:24:33.660 Then, by the way, you also had the government coming at them.
01:24:36.200 And, you know, this administration has been very aggressive on a number of fronts.
01:24:43.880 We could talk about a bunch of examples of that.
01:24:45.500 But you have direct government pressure coming at you.
01:24:47.700 You have the entire press corps coming at you, right?
01:24:51.500 And so, it feels like it's the entire world, you know, bearing in on you.
01:24:55.680 And they're all going to light you on fire.
01:24:57.940 And then that takes me to—
01:24:58.700 Well, and that does happen.
01:25:00.300 That does.
01:25:00.800 Like, what we should also point out, that's not a delusion.
01:25:05.020 I mean, part of also—it's also, I think, the case that the new communication technologies that make the social media platforms so powerful have also enabled reputation savagers in a way that we haven't seen before.
01:25:20.980 Because you can accuse someone from behind the cloak of anonymity and gather a pretty nice mob around them in no time flat with absolutely no risk to yourself.
01:25:31.800 And, you know, there's a pattern of antisocial behavior that characterizes women.
01:25:37.760 And this has been well documented for 50 years in the clinical literature.
01:25:41.660 Like, antisocial men tend to use physical aggression, bullying.
01:25:45.500 But antisocial women use reputation savaging and exclusion.
01:25:50.720 And it looks like social media, especially anonymous social media, what would you say, enables the female pattern of aggression, which is reputation savaging and cancellation.
01:26:05.540 Now, I'm not accusing women of doing that.
01:26:08.120 You've got to get me right here.
01:26:09.480 It's that there are different pathways to antisocial expression.
01:26:13.060 One of them, physical violence, isn't enabled by technology.
01:26:18.020 But the other one, which is reputation savaging and exclusion, is clearly abetted by technology.
01:26:24.040 And so that's another feature that might have made people leery of putting their head up above the turret.
01:26:30.340 You know, like in Canada, well, I'm still being investigated by the Ontario College of Psychologists.
01:26:35.780 And I'm scheduled for free re-education if they can ever get their act together to do that.
01:26:40.120 And I fought an eight-year court battle, which has been extremely expensive and very, very annoying, to say the least.
01:26:47.340 And I don't think that there's another professional in Canada on the psychological or medical side who's been willing to put their head above the parapet except in brief, you know, in brief interchanges.
01:27:00.240 And the reason for that is it simply is too devastating.
01:27:04.720 And so I have some sympathy for people who are concerned that they'll be taken out because they might be.
01:27:10.740 But, you know, by the same token, if you kowtow to the woke mob for any length of time, as the tech industry appears to be discovering now, you end up undermining everything that you hold sacred.
01:27:22.980 I mean, you alluded to the fact that you'd hope that at least the shareholders would be appropriately oriented by market forces, greed, to put it in the most negative possible way.
01:27:35.740 And you'd hope that that would be sufficient incentive to keep things above board, because I'd way rather deal with someone who's motivated by money than motivated by ideology.
01:27:45.640 But even that isn't enough to ensure that even corporations act in their own best economic interest.
01:27:54.020 So it is a perfect storm.
01:27:55.680 And you alluded to government pressure as well.
01:27:58.440 And so maybe you could shed a little bit more light on that, because that's also particularly worrisome.
01:28:05.140 And it's certainly been something that's characteristic and is still characteristic of Canada under Trudeau.
01:28:12.280 Yeah, so there's a couple of things on that.
01:28:13.940 So one is, I should just note, and I'm sure you'll agree with me on this, there are many men who also exhibit that reputational destruction motive.
01:28:23.400 Absolutely.
01:28:24.040 Men will use it.
01:28:25.260 They typically don't in the real world.
01:28:28.100 But if the pathway is laid open to it on social media, let's say, and there's a particular kind of man who's more likely to do that too.
01:28:37.340 Those are the dark tetrad types who are narcissistic and psychopathic and Machiavellian and sadistic.
01:28:44.000 Lovely combination of personality traits.
01:28:46.160 And they're definitely enabled online.
01:28:48.900 So.
01:28:49.680 Yeah.
01:28:50.160 So we've had plenty of them as well.
01:28:53.200 Yeah.
01:28:53.480 So the government pressure side.
01:28:55.080 So when this all hit, you know, like I said, I didn't, nobody I knew understood what was happening.
01:28:59.140 I didn't understand it.
01:28:59.940 And so I, you know, I did what I do in circumstances like that, and I basically tried to work my way backwards through history and figure out, you know, where this stuff came from.
01:29:07.080 And I think, like, for pressure on corporations, you know, the context for this is that there's this cliché that you'll hear, interestingly, from the left, which is, well, private companies can do whatever they want.
01:29:19.400 They can censor whoever they want.
01:29:20.580 Private companies have total latitude to do whatever they want.
01:29:22.640 And, of course, that's totally untrue.
01:29:24.560 Private companies are extensively regulated by the government.
01:29:27.260 Private companies have been, you know, regulated by a civil rights regime, you know, imposed by the government for the last 60 years.
01:29:32.820 That civil rights regime, you know, certainly has done, you know, many good things in terms of opening up opportunities for, you know, different minority groups and so forth to participate in business.
01:29:41.320 But, you know, that civil rights regime put in place this standard called disparate impact in which you can evaluate whether a company is racist or not on the basis of just raw numbers without having to prove that they intended to be, right, in terms of, like, who they select for their employees.
01:29:56.200 And so, companies, you know, predating the arrival of what we call woke, they already had legal and regulatory and political and compliance requirements put on them to achieve things like racial diversity, gender diversity, and so forth.
01:30:10.740 I grew up in that environment.
01:30:12.200 I considered that totally normal for a very long time.
01:30:14.400 I just figured that's how things worked, and that was the positive payoff from the civil rights movement and from the 1960s, and that was just the state of play.
01:30:20.380 And, you know, and by the way, it was, I think, manageable and good in some ways, and, you know, like, kind of on and away we went, like, we could deal with it.
01:30:26.800 But basically what happened was when woke arrived, that regime was enormously intensified.
01:30:32.860 And what happened was a sequence of events – and literally there was a playbook where, for example, for DEI, activists and employees and board members would push you.
01:30:42.200 First of all, you had to start doing explicit minority statistical reporting.
01:30:47.980 So, you had to fully air in public any, you know, disparate impact, any racial, gender, ethnic, or sexual differences relative to the overall population.
01:30:59.880 In a statistical report, you had to do that every year, and, of course, they would tell you, as long as you issue this report, you're fine.
01:31:05.940 Well, of course, that wasn't the case.
01:31:08.140 What followed the report was, okay, now you need what's called the Rooney Rule.
01:31:12.120 And the Rooney Rule basically says you have to have statistically proportionate representation of candidates for every job opening relative to the overall population.
01:31:20.840 Right.
01:31:21.020 So, stop there for just a sec, because we should delve into that.
01:31:25.560 That's a terrible thing, because we can think about this arithmetically.
01:31:30.960 It's like you have to have proportionate representation of all protected group members in all categories.
01:31:37.440 Okay, there's a lot of horror in those few words, because the first problem is those categories are multiplicable without end.
01:31:47.440 And you see this, for example, with the continued extension of the LGBT acronym.
01:31:52.460 There's no end to the number of potential dimensions of discrimination that can be generated.
01:31:59.120 And then, so that's an unsolvable problem to begin with.
01:32:04.040 It means you're screwed no matter what you do.
01:32:05.820 But it's worse than that when you combine that with the doctrine of intersectionality.
01:32:10.460 Because not only do you then have the additive consequence of these multiple dimensions of potential prejudice.
01:32:18.600 So, for example, in Canada, it's illegal to discriminate on the basis of gender expression.
01:32:28.180 Okay, that's separate from gender identity.
01:32:30.220 So, now there's a multitude of categories of gender identity, hypothetically.
01:32:34.440 I mean, the estimates range from like two to three hundred.
01:32:38.280 But gender expression is essentially how you present yourself.
01:32:43.040 It's, I think it's technically indistinguishable from fashion, fundamentally.
01:32:47.880 And I'm not trying to be a prick about that.
01:32:50.840 I mean, I've looked at the wording, and I can't distinguish it conceptually.
01:32:54.940 It's mode of self-presentation, hairstyle, dress, etc.
01:33:00.300 And so, that means you can't discriminate on the basis of whatever infinite number of categories of gender expression you could generate.
01:33:07.860 And then, if you multiply those together, I mean, how many bloody categories do you need?
01:33:13.620 Before you multiply them together, you have so many categories that it's impossible to deal with.
01:33:20.100 So, there's a really, there's a major technical problem at the bottom of this realm of conceptualization that's basically making it, A, impossible for companies to comply and exposing them to legal risk everywhere.
01:33:33.420 But also, that provides an infinite market for aggrieved and resentful activism.
01:33:39.140 Yeah, that's right.
01:33:40.140 It's like what we saw.
01:33:41.640 So, reporting leads to candidate pools.
01:33:44.580 Candidate pools, the pressure then is, well, you need to hire proportionately, according to whatever these categories are, including all the new ones.
01:33:51.040 And then, hiring means, then step four is promotions.
01:33:54.100 You need to promote at the same rate, right?
01:33:56.220 And the minute you have that requirement, of course, now any performance metrics are just totally out the window because you can't, right?
01:34:02.780 You just have to promote everybody identically, right?
01:34:05.280 And that's sort of the slide into the complete removal of merit from the system.
01:34:09.360 And then, by the way, the fifth stage is you have to lay off proportionately, right?
01:34:12.920 And so, you know, you're bound on the other side.
01:34:16.400 And what happens is precisely what I'm sure you know happens and what you've seen happen.
01:34:19.980 What happens is a descent of the culture of the company into complete, you know, dog-eat-dog, us versus them.
01:34:26.900 You know, the employee base starts to activate along these identity lines inside the company.
01:34:30.800 These companies all created what are known by this incredible euphemism of employee resource groups, ERGs, which are basically segregated employee affiliation groups, right?
01:34:42.800 Right. And so, you now have the employees.
01:34:45.740 You know, the employees aren't employees of your company.
01:34:47.680 The employees are members of a group who just happen to be at your company, but their group membership, along whatever axis we're talking about, their group membership ends up trumping, you know, their role as employees.
01:34:57.340 And then you have this internal descent into, you know, accusations, into fear.
01:35:03.840 You know, you have, you know, this incredible, you know, tokenization that takes place where, you know, anybody from an underrepresented group is, you know, the classic problem of affirmative action.
01:35:11.480 Any member of an underrepresented group is assumed to have gotten hired only because of their, you know, skin color or their sex, you know, which is horrible for members of that group.
01:35:19.280 And so, you get this, you know, downward slide.
01:35:21.320 Especially the competent ones.
01:35:22.500 Especially.
01:35:23.180 It's terrible for the competent ones.
01:35:24.640 Exactly. And so, it's, you know, it's acid.
01:35:28.340 You're pouring cultural acid on your company and the entire thing is devolving into complete chaos internally.
01:35:33.280 And what's happening is the activists and the press and the board and everybody else is pressuring you to do this.
01:35:37.460 And then the government on top of that is pressing you to do it.
01:35:39.860 And under this last administration, that reached entirely new heights of absurdity.
01:35:44.020 So, let me take a step back.
01:35:46.200 Once you walk down this path and go through all those steps, I believe there's no question you now have illegal quotas.
01:35:51.500 And you have illegal hiring practices and you have illegal promotion practices.
01:35:56.680 And by the way, you also have illegal layoff practices.
01:35:58.640 Under any reading of U.S. civil rights law, which says you are not allowed to discriminate on the basis of all these characteristics, you have worked yourself into a system in which you are absolutely discriminating on the basis of these characteristics through actual hard quotas, which are illegal.
01:36:12.880 And so, to start with, I think all of these companies that implemented these systems, I think they've all ended up basically being on the wrong side of civil rights law, which is, of course, this, like, incredibly ironic result.
01:36:26.220 Right?
01:36:27.380 They've all ended up with illegal quotas.
01:36:29.440 I mentioned Hollywood earlier.
01:36:30.540 You know, Hollywood has gone all in for it.
01:36:32.100 You know, they literally now publish their hard quotas.
01:36:34.320 The studios have these statements that say, by X date, you know, 50% of our, you know, producers and writers and actors and so forth are going to be from specific groups.
01:36:41.580 And, again, you just read, like, the Civil Rights Acts and it's like, okay, that's actually not legal and yet they're doing it.
01:36:46.780 But this administration, this last administration, the Biden administration, really hammered this in and they put these real radicals in charge of groups like the Civil Rights Division of the Department of Justice.
01:36:57.620 And the sort of ultimate, bizarre expression of this was that SpaceX, one of Elon's companies, got sued by the Civil Rights Division of this Department of Justice for not hiring enough refugees.
01:37:09.540 Right, not hiring enough foreign nationals who had come in, you know, either illegally or through a refugee path.
01:37:20.680 Notwithstanding the fact that SpaceX is a federal contractor and, for most of its employee base, is only allowed to hire American citizens.
01:37:27.480 And so the government simultaneously demands of SpaceX that they only hire American citizens and that they hire refugees.
01:37:35.720 And the government views no responsibility whatsoever to reconcile that.
01:37:39.580 You're guilty either way, right?
01:37:41.540 And then, again, companies in general are in this bind now where if they do everything they're supposed to do, they end up in violation of the civil rights law, which they started out by trying to comply with.
01:37:51.360 And this has all happened without reason and rational discussion.
01:37:56.320 This has all happened in a completely hysterical emotional frenzy.
01:37:59.480 And what these companies are realizing is they're now on the other side of this and there's just simply no way to win.
01:38:03.420 Well, there's an analog to that, which is very interesting.
01:38:09.820 I mean, I started to see all this happen back in 1994 because I was at Harvard when the Bell Curve was published.
01:38:19.100 And I watched that blow up the department at Harvard and it scuttled one of my students' academic careers for reasons I won't go into.
01:38:26.900 But, well, I was working with that student on developing validated predictors of academic, managerial, and entrepreneurial performance.
01:38:39.640 I was very interested in that scientifically.
01:38:42.260 Like, what can you measure that predicts performance in these realms?
01:38:45.380 And the evidence for that's starkly clear.
01:38:49.720 The best predictor of performance in a complex job is IQ.
01:38:54.740 And psychologists tore themselves into shreds, especially after the Bell Curve, trying to convince themselves that IQ didn't exist.
01:39:01.960 But it is the most well-established phenomenon in the social sciences, probably by something approximating an order of magnitude.
01:39:10.860 So if you throw out IQ research, you pretty much throw out all social science research.
01:39:15.700 And so that turns out to be a big problem.
01:39:17.400 Now, personality measures also matter.
01:39:21.820 Conscientiousness, for example, matters for managers, and openness, which you mentioned earlier, for entrepreneurs.
01:39:27.240 But they're much less powerful, about one-fifth as powerful as IQ.
01:39:31.840 Now, the problem is that IQ measures show racial disparities.
01:39:37.160 And that just doesn't go away, no matter how you look at it.
01:39:40.260 Now, at the same time, the U.S. justice system set up laws governing hiring that said you had to use the most valid and reliable predictors of performance available to do your hiring, your placement, and your promotion.
01:39:59.100 But none of those could produce disparate impact, which basically meant, as far as I can tell, whatever procedure you use to hire is de facto illegal.
01:40:12.460 Now, so lots of companies, and I don't know why this hasn't become a legal issue.
01:40:18.040 So you could say, well, we use interviews, which most companies do use.
01:40:23.320 Well, interviews are not valid predictors of performance.
01:40:26.920 They're not much better than chance.
01:40:29.380 Structured interviews are better, but ordinary interviews aren't great at all.
01:40:33.480 So they've failed the validity and reliability test.
01:40:37.880 And so I don't think there is a way that a company can hire that isn't illegal, technically illegal in the United States.
01:40:43.480 And then I looked into that for years, trying to figure out how the hell did this come about?
01:40:47.660 And the reason it came about is because the legislators basically abandoned their responsibility to the courts and decided that they were just going to let the courts sort this mess out.
01:40:59.080 And that would mean that companies would be subject to legal pressure and that there would be judicial rulings in consequence, which would be very hard on the companies in question.
01:41:08.920 But it meant the legislators didn't have to take the heat.
01:41:12.000 And so there's still an ugly problem at the bottom of all this that no one has enough courage to address.
01:41:18.420 But the upshot is that, as you pointed out, companies find themselves in a position where no matter what they do, it's illegal.
01:41:26.140 I've had employment law lawyers literally write analyses of this for me as I've been trying to figure it out.
01:41:30.740 And literally, you read the analysis and, you know, it is absolutely 100% illegal to discriminate on the basis of these characteristics.
01:41:38.320 And it is 100% absolutely illegal to not discriminate on the basis of these characteristics.
01:41:42.200 And that is true, right?
01:41:44.080 And both of those are true.
01:41:45.340 It is illegal both ways. You know, you mentioned interviews.
01:41:47.540 Interviews are an ideal setting for bias because, even if you just assume most people like people who are like themselves, is a member of a certain group going to be more inclined to hire members of that group?
01:42:01.020 You know, probably yes, just if there are no other parameters.
01:42:04.440 And so precisely, you want to get to quantitative measures because you want to take that kind of bias out of the system.
01:42:09.020 But then the quantitative measures are presumptively illegal because they lead to bias through disparate impact.
01:42:13.100 Yeah, and so, you know, maybe the term is Kafka trap, right?
01:42:17.120 You just end up in this vise, and then everybody is just so mad that, you know, you can't even have the discussion.
01:42:26.100 And so this is the downward spiral.
01:42:29.780 On the one hand, I think there's a lot of this that just fundamentally, like, can't be fixed because a lot of these assumptions, you know, a lot of this stuff got baked in, you know, going back to the 1960s, 1970s.
01:42:40.040 So a lot of this is long since settled law, and I don't know that anybody has the appetite to reopen Pandora's box in this.
01:42:45.100 Having said that, this new administration, the Trump administration coming in, I would say every indication is that the Trump administration's policies and enforcement are going to flip to the other side of this.
01:42:57.100 And so one of the things that's very fascinating about what's happening in business right now is a lot of boards of directors are now basically having a discussion internally, you know, with their legal team saying, okay, like, we cannot continue to do the just overt discriminatory hiring and employee segmentation that we've been doing.
01:43:14.700 We're not going to be permitted to, and so, you know, we have to back way off of these programs.
01:43:18.900 And, you know, you're already seeing Fortune 500 companies starting to shut down DEI programs, and I think you're going to see a lot more of that because they're going to try to come into compliance with what the new Trump regime wants, which will be on the other side of this.
01:43:30.600 But the underlying issues are likely to stay unresolved.
01:43:34.880 I think in practice, in retrospect, you know, maybe this is too optimistic on my part, but, you know, my time in business, you know, 80s, 90s, 2000s, it felt like we had a reasonable detente.
01:43:44.760 And although you ideally might want to get in there and figure this stuff all out, as long as it's kind of kept to a manageable simmer, you know, you can kind of have your cake and eat it too, and people can kind of get along and it's okay.
01:43:57.300 You know, maybe it's not a perfectly merit-based system, or maybe there's issues along the way, but fundamentally, you know, fundamentally, companies worked really well for a long time.
01:44:05.580 If you can work your way out of this, you know, out of this sort of, you know, elevated level of hysteria.
01:44:10.720 And optimistically, I would say that that's starting to happen, and the change in legal regime that's coming, I think, will actually help that happen.
01:44:17.660 Right, so you're optimistic because you believe that the free market system is flexible enough to deal with ordinary stupidity.
01:44:25.300 But, like, insane malevolent stupidity is just too much.
01:44:29.320 Yeah, it's...
01:44:29.960 Yeah, I think that's reasonable, you know.
01:44:31.800 Yeah.
01:44:32.060 Well, I do think that's reasonable, because everything's a mess all the time, and people can still manage their way forward.
01:44:38.300 But when you have a policy that says, well, any identifiable disparate outcome with regard to any conceivable combination of groups is an indication of illegal prejudice, there's no way anybody can function in that situation.
01:44:55.060 Because those are impossible constraints to satisfy, and they lead to paradoxical situations like the one you described Musk's company as being entangled in.
01:45:05.280 Right, and that's just so frustrating for anybody that's actually trying to do something, you know, that requires merit, that they'll just throw up their hands.
01:45:12.960 And so, yeah, yeah, yeah.
01:45:14.800 Okay, so I'm going to stop you there, because we're out of time on the YouTube side.
01:45:20.080 But that's a good segue for what will continue on the Daily Wire side, because we've got another half an hour there.
01:45:26.140 And so, for all of you watching and listening, join us, join Mark and me on the Daily Wire side, because I would like to talk more about what you think could be done about this moving forward with this new administration and how you're feeling about that.
01:45:42.360 I mean, you made a decision, I guess, early in 2023, like so many people, to pull away from the Democrats and toward Trump, strange as that might be.
01:45:51.460 And I'd like to discuss that decision and then what you see happening in Washington right now and what you envision as a positive way forward,
01:46:00.580 so that we can all rescue ourselves from this mess before we make it much deeper than it already is.
01:46:05.960 So, for everybody watching and listening, join us on the Daily Wire side.
01:46:09.380 And Mark, thank you very much for talking to me today.
01:46:12.020 I hope we get a chance to meet in San Francisco in relatively short order.
01:46:16.180 And I'm also looking forward to continuing our discussion in a couple of minutes.
01:46:20.260 Join us, everybody, on the Daily Wire side.
01:46:22.940 Good. Thank you, Jordan.