The Ben Shapiro Show - November 10, 2024


Harnessing the Power of AI | Avichal Garg


Episode Stats

Length: 51 minutes
Words per Minute: 227.98
Word Count: 11,817
Sentence Count: 657
Misogynist Sentences: 1
Hate Speech Sentences: 4


Summary

Avichal Garg is a successful venture capitalist, serial entrepreneur, and founding partner at Electric Capital, where he focuses on investing in innovative technologies in the Web3 space. In this episode, he outlines the problems with legacy tech and cryptocurrency's role in the declining power of the American dollar, and why he's an optimist about the burgeoning AI revolution.


Transcript

00:00:00.000 And bringing it back to the crypto conversation we were having earlier, I mean, this is, you see this in the data, you can look at Pew data or Gallup data or any survey data, and you see, you know, do Americans, and frankly, it's all Western democracies, you know, how much do you trust your public schools?
00:00:12.000 And it used to be, in the 1970s, it would be 70%, and now it's down to 30%.
00:00:15.000 How much do you trust newspapers?
00:00:17.000 It was 70%, now it's down to 30%.
00:00:18.000 How much do you trust the banks?
00:00:19.000 It was 70%, now it's down to 30%.
00:00:21.000 So you can go through basically every institution in society that we sort of looked to to make the system work, and it's just inverted from the 60s and 70s.
00:00:30.000 Avichal Garg is a successful venture capitalist, serial entrepreneur, and founding partner at Electric Capital, where he focuses on investing in innovative technologies in the Web3 space.
00:00:39.000 Garg's background in computer science and engineering at Stanford led him to executive roles at Google and Facebook, yet Garg emerged from the legacy companies of Silicon Valley a strong advocate for decentralized technologies like cryptocurrency.
00:00:50.000 Garg views crypto as a democratizing force in tech, enhancing the average citizen's access to digital and financial services.
00:00:56.000 At Electric Capital, Garg is now carving a path for early-stage startups in crypto.
00:01:04.000 In today's episode, Avichal outlines the problems with legacy big tech and cryptocurrency's role in the declining power of the American dollar.
00:01:11.000 He also talks about why he's an optimist about the burgeoning AI revolution and how we can all best prepare for the changes it will almost certainly make in our everyday lives.
00:01:18.000 Stay tuned to hear Avichal Garg's perspective on the intersection of technology and politics on this episode of the Sunday Special.
00:01:24.000 Avichal, thanks so much for stopping by.
00:01:34.000 I really appreciate it.
00:01:35.000 Good to see you.
00:01:36.000 Thanks for having me.
00:01:36.000 Yeah, absolutely.
00:01:37.000 So, you know, your specialty is the crypto world and crypto.
00:01:41.000 So for people who have like no background in crypto, what is it?
00:01:45.000 How does it work?
00:01:46.000 Why is it important?
00:01:47.000 Yeah, so it depends how far you want to go down the rabbit hole.
00:01:49.000 But crypto, well, the old school version of crypto is just cryptography.
00:01:53.000 And all that is, is math that lets you secure data and prove facts about the world.
00:01:58.000 And it's actually really important to the internet in general.
00:02:01.000 It's why we can do things like credit cards on the internet.
00:02:03.000 That's cryptography.
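To make "secure data and prove facts" concrete, here is a minimal Python sketch of one such primitive, the cryptographic hash; the message and variable names are illustrative, not anything from the payment systems he mentions.

    import hashlib

    def digest(data: bytes) -> str:
        # SHA-256 maps any input to a fixed-size fingerprint; changing
        # even one byte of the input produces a completely different output.
        return hashlib.sha256(data).hexdigest()

    original = b"pay merchant $25.00"
    fingerprint = digest(original)

    # Anyone holding the fingerprint can later check that the message was
    # not altered in transit, without trusting the channel it traveled over.
    assert digest(b"pay merchant $25.00") == fingerprint
    assert digest(b"pay merchant $95.00") != fingerprint

Real payment security layers signatures and encryption on top of this, but the hash is the workhorse primitive underneath.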
00:02:04.000 Crypto, as it's come to be known over the last decade, is really how do you use cryptography and distributed systems to move money around and do new kinds of computation.
00:02:14.000 And so this really started with Bitcoin circa 2009.
00:02:16.000 That sort of kicked off what we call the modern crypto.
00:02:20.000 And that came out of sort of an observation or sort of really a political belief that the legacy financial systems were messed up and broken.
00:02:29.000 And so if people go back to 2008, you know, the economy was falling apart, the banks had sort of done a bunch of things that they shouldn't have been doing.
00:02:38.000 And so there was a computer scientist or a scientist by the name of Satoshi.
00:02:42.000 It was sort of a pen name.
00:02:43.000 And they published a white paper articulating how you might do peer-to-peer cash without the need for a bank in the middle.
00:02:49.000 And so how could I transfer money and transfer value to somebody else without a bank in the middle?
00:02:54.000 And it actually was a combination of both...
00:02:57.000 Computer science, like legitimate technology breakthroughs, and some really interesting social design.
00:03:02.000 And kind of the fundamental thing that they did as a trade-off, they said, well, the legacy systems are really designed around efficiency.
00:03:08.000 And what if instead of saying, I'm going to optimize for efficiency, I optimize for security?
00:03:13.000 And so you optimize for things like self-sovereignty, I'm going to own my own data.
00:03:18.000 And I don't want to trust an intermediary.
00:03:20.000 And so rather than trusting these banks, I choose to not trust them.
00:03:24.000 Could I do the same things?
00:03:25.000 Could I actually move my money around?
00:03:27.000 And it turns out you can.
00:03:28.000 And it was kind of a crazy idea.
00:03:30.000 It was published anonymously.
00:03:31.000 A white paper was put out.
00:03:32.000 Some code, open source code was put out.
00:03:35.000 And it started to take off.
00:03:36.000 And here we are almost 14 years, 15 years later.
00:03:40.000 And that...
00:03:41.000 The core sort of insight around Bitcoin is now almost a trillion dollar asset.
00:03:45.000 There's been a whole sort of explosion of other crypto assets around that, cryptocurrencies and crypto projects around that.
00:03:51.000 That whole ecosystem is worth, you know, somewhere around $2 trillion at this point.
00:03:55.000 And so, you know, a lot has evolved over the last 15 years, but kind of the core original insight was, what if I wanted to do things and I don't trust these intermediaries anymore?
00:04:03.000 Which I think touches on a lot of sort of things that people have really started to feel over the last 5 or 10 years in particular.
00:04:08.000 But this was sort of the computer scientist version of If I want to do the things I wanted to do before, how to do them without trusting any intermediaries?
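A minimal sketch of the idea underneath this, in Python: a ledger where each record commits to the hash of the record before it, so anyone can check the history without trusting whoever stored it. This illustrates the concept only; Bitcoin's actual data structures and consensus rules are far more involved.

    import hashlib, json

    def block_hash(block: dict) -> str:
        # Hash a canonical serialization of the block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(chain: list, payment: str) -> None:
        prev = block_hash(chain[-1]) if chain else "0" * 64
        # Each new block commits to the previous block's hash, so
        # rewriting any old record invalidates everything after it.
        chain.append({"prev": prev, "payment": payment})

    def verify(chain: list) -> bool:
        return all(chain[i]["prev"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    ledger = []
    append(ledger, "alice -> bob: 1.0")
    append(ledger, "bob -> carol: 0.4")
    assert verify(ledger)
    ledger[0]["payment"] = "alice -> mallory: 99.0"  # tamper with history...
    assert not verify(ledger)                        # ...and anyone can detect it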
00:04:15.000 So you can certainly see why established institutions would not like this, because it's quite disruptive.
00:04:19.000 If you're the banking industry and you're used to people using you for those mechanisms of exchange, no longer that's being used, you're not going to like that.
00:04:26.000 If you're a government, you might not particularly like it, because you've been in charge of the monetary supply, and now when you're looking at something like Bitcoin, it has its own established value. I want to get to the institutional opposition to Bitcoin and to crypto generally.
00:04:42.000 But first, what do you think that crypto has become?
00:04:46.000 There was a lot of talk early on about wide adoption of crypto as a mechanism of exchange as opposed to a second property of it which has a repository of value.
00:04:55.000 Basically, it's like digital gold.
00:04:56.000 I'm hedging against inflation in the economy, and so I buy a bunch of gold, I put it in my safe, and a year from now it's worth more money.
00:05:02.000 Some people have used Bitcoin like that, but there are obviously other uses for crypto.
00:05:08.000 What are the various uses for crypto?
00:05:10.000 Yeah, so what it's really become is sort of from that original insight of potentially the ability to move money around, I think people started to say, well, actually, this is just a new computational infrastructure.
00:05:20.000 What it really does is, if you look at Amazon or Facebook or Google, they really built their entire infrastructure around speed and efficiency and scalability.
00:05:29.000 And so you got massive centralization, which you see on the internet.
00:05:31.000 You got these intermediaries like Facebook or Google or Amazon that sort of concentrated all of the users and concentrated all of the money and all the power.
00:05:38.000 And it's partly an outgrowth of what the infrastructure lets you do.
00:05:41.000 So the way the internet was designed, you sort of got this necessary centralization effect because scale sort of makes things more efficient.
00:05:49.000 In this world, if you start from an assumption of, well, what if I don't want centralization?
00:05:53.000 What if I don't want to trust my counterparty?
00:05:55.000 And what if I'm willing to trade off some efficiency to get these sorts of properties?
00:05:58.000 You sort of start from a different premise.
00:06:00.000 And so the infrastructure has grown in a different way.
00:06:02.000 And Bitcoin was just the first application, this idea of a non-sovereign store of value, almost like a digital gold.
00:06:09.000 And as that sort of started to take off, people started to say, well, you know, what if you could write code around this?
00:06:14.000 What if it's not just about transferring the value, but what if I could, because, you know, as soon as the money becomes ones and zeros...
00:06:21.000 Well, ones and zeros is just a thing that a computer can recognize.
00:06:25.000 And so what if you wrote code around that?
00:06:27.000 And so starting in about 2015 with Ethereum, people started to say, well, if I can write code around money, I can now program money.
00:06:34.000 And when you start thinking about a lot of the world, a lot of the infrastructure of the world, whether it's wills or trusts or escrow agreements if you bought a house, or insurance, securities, derivatives.
00:06:45.000 What are all of these things?
00:06:46.000 Fundamentally, what they are is here's a pile of resources, here's a pile of money, and I have some rules around who can access that money, and I have some rules around what happens over time with that money.
00:06:56.000 There's cash flows that come off of it, or in the event of my passing, something needs to happen with this pile of resources.
00:07:02.000 And now, instead of all of that being written in legal documents, we could write it in code.
00:07:06.000 We could write it in software.
00:07:07.000 And that insight sort of kicked off an entirely new sort of arm and branch of what's known as crypto today that started to explore this idea of what would it look like if you could write code that owned money and operated around money.
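A minimal sketch of what "an escrow agreement written in code" can mean, in plain Python rather than any real smart-contract language; the parties, amount, and rules here are invented for illustration.

    class Escrow:
        """Toy escrow: a pile of funds plus rules about who can take it.
        Funds release to the seller only once the named arbiter confirms
        delivery; until then, the buyer can reclaim them."""

        def __init__(self, buyer: str, seller: str, arbiter: str, amount: int):
            self.buyer, self.seller, self.arbiter = buyer, seller, arbiter
            self.amount, self.delivered, self.settled = amount, False, False

        def confirm_delivery(self, caller: str) -> None:
            if caller != self.arbiter:
                raise PermissionError("only the arbiter can confirm delivery")
            self.delivered = True

        def withdraw(self, caller: str) -> tuple[str, int]:
            if self.settled:
                raise RuntimeError("already settled")
            if caller == self.seller and self.delivered:
                self.settled = True
                return (self.seller, self.amount)  # conditions met: pay seller
            if caller == self.buyer and not self.delivered:
                self.settled = True
                return (self.buyer, self.amount)   # deal fell through: refund
            raise PermissionError("conditions not met for this caller")

    deal = Escrow("alice", "bob", "carol", 500_000)
    deal.confirm_delivery("carol")
    assert deal.withdraw("bob") == ("bob", 500_000)

On a chain, the same rules would live in a contract that actually holds the funds, so no bank or escrow agent has to be trusted to follow them.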
00:07:20.000 And so now the kinds of use cases that you're starting to see that have really taken off are, for example, stablecoins.
00:07:25.000 And this is the idea that actually a lot of the world wants to transact U.S. dollars.
00:07:28.000 And it's quite painful to do that.
00:07:31.000 The example we always use is, If you wanted to get money from, let's say, San Francisco to London, and it's Friday afternoon, you are actually better off going to the bank, pulling out $20,000, putting it into a briefcase, buying a flight, getting on a plane, and going to London and then handing somebody the suitcase.
00:07:49.000 Because if you send a wire, it's not going to get there until Tuesday.
00:07:52.000 The banks are closed on the weekends.
00:07:53.000 They don't settle wires.
00:07:54.000 It'll take T plus one to settle on Monday.
00:07:56.000 And so maybe the money shows up on Tuesday.
00:07:58.000 And you could actually have the money in a suitcase there Friday night or Saturday morning.
00:08:01.000 And that's kind of crazy.
00:08:02.000 With 2025 almost here, there are very few parts of the world where atoms actually move faster than bits.
00:08:10.000 And money continues to be one of them.
00:08:11.000 And so now you can actually send money via USD stablecoins.
00:08:15.000 And it's 24-7.
00:08:17.000 It shows up instantly.
00:08:18.000 It shows up in somebody's wallet, and then they can off-ramp it back to their bank account when the banks open.
00:08:22.000 But it's actually the best, fastest, cheapest way to move U.S. dollars in the world now.
00:08:25.000 And so you're seeing on the order of about a trillion dollars a quarter of money movement happening with this stuff.
00:08:30.000 And for a lot of sort of international use cases.
00:08:32.000 So you're seeing, for example, a lot of remittance volumes start to go through stablecoins in the Gulf or...
00:08:37.000 In Southeast Asia, where a lot of people want dollars and they can't get access to dollars, and there are large populations of people in the West from those markets.
00:08:44.000 And it's just a better way to do business.
00:08:46.000 I mean, we use this for investing in startups, because we can just send money over the weekend after you invest in a company.
00:08:52.000 And so you're starting to see now businesses start to look at this and adopt this as one of the novel and interesting things.
00:08:58.000 And so once you realize that money is digital and you can write code around money, it opens up this whole new universe of, well, what are all the things that were technology built in the 1970s, and we can do those things in a drastically different way today.
00:09:10.000 And so you're starting to see new emerging use cases even beyond just the basic sort of U.S. dollar money movement into things like lending, into things like stable swaps, so like Forex sorts of use cases.
00:09:20.000 Like really sort of base infrastructure that we haven't really had to think about since maybe World War II. You know, Bretton Woods in the 1950s where we invented a lot of this infrastructure.
00:09:29.000 But we're reimagining it now with 2024 computer science, which is just faster, cheaper, better, more scalable.
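To make the settlement math concrete, here is a small Python sketch of the two paths he describes: a wire sent Friday afternoon that waits out the weekend and then settles T plus one, versus a stablecoin transfer that clears around the clock. The cutoff behavior is a simplified assumption, not any specific bank's rules.

    from datetime import datetime, timedelta

    def wire_settles(sent: datetime) -> datetime:
        # Simplified model: a wire sent after the day's cutoff queues until
        # the next business day (Monday), then settles T+1 (Tuesday).
        day = sent
        while day.weekday() >= 5 or day.date() == sent.date():
            day += timedelta(days=1)   # skip the send day and the weekend
        day += timedelta(days=1)       # T+1 settlement
        while day.weekday() >= 5:
            day += timedelta(days=1)
        return day

    def stablecoin_settles(sent: datetime) -> datetime:
        return sent  # 24/7: it shows up in the recipient's wallet immediately

    friday = datetime(2024, 11, 8, 16, 30)            # a Friday afternoon
    print(wire_settles(friday).strftime("%A"))        # Tuesday
    print(stablecoin_settles(friday).strftime("%A"))  # Friday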
00:09:34.000 So, I mean, all of this sounds pretty amazing.
00:09:36.000 And yet there's tremendous regulatory heartburn over all of this.
00:09:40.000 So to sort of steel man the regulatory worries, what exactly are people most worried about that they are now suggesting there needs to be a heavy regulatory regime around some of the use cases or around the original technology?
00:09:52.000 Yeah.
00:09:52.000 So the backdrop here is, I think it's important to also contextualize this globally: it turns out the U.S. is a bit of an outlier in this.
00:09:58.000 Actually, most of the world is very, very happy to have these technologies.
00:10:02.000 And I think it's an interesting history lesson, which is if you go back to the 90s and the U.S. Internet, you know, there's Section 230 that sort of happens, and it was part of the Telco Act of '96.
00:10:13.000 And it was a really, really important piece of legislation because what it did was it opened the door to the internet, the modern internet as we know it.
00:10:20.000 And really what it said was, you know what, these technologies are a little bit different.
00:10:24.000 They're not exactly like newspapers that have editorial staff and you can sort of control these things in the way that you did the newspapers.
00:10:29.000 We've got to think about these rules differently.
00:10:31.000 And so, for example, it set aside rules that said, well, you know, if you're a digital forum, like a Facebook or YouTube, you have different rules than a newspaper might have.
00:10:39.000 And actually, like, if you, you know, as a media person, you don't need to go to the FCC to get a media license, a radio license, to publish on the Internet.
00:10:48.000 And so these sort of basic ground rules understood that the technology was different in some fundamental ways, and so we needed some new rules to account for how it was different.
00:10:55.000 That ushered in the modern Internet.
00:10:57.000 The reason America won the internet was because some of these rules were put in place in the late 90s.
00:11:02.000 And so we got Amazon, we got Facebook, we got YouTube, and we got Google, and all the sort of mega tech companies.
00:11:09.000 Now the rest of the world looked at this, and today has a set of challenges.
00:11:13.000 Like if you're Europe, if you're Brazil, if you're India, if you're China, you're looking at these things and you're saying they're so, so, so important in society as sort of fundamental pieces of infrastructure, but you have zero leverage over them.
00:11:26.000 If you want to call in one of the CEOs of these companies in Europe, are the CEOs going to come?
00:11:31.000 No.
00:11:31.000 They're American.
00:11:32.000 They're American citizens.
00:11:32.000 They're in America.
00:11:33.000 They're subject to U.S. law.
00:11:35.000 They will abide by most of these laws in these other places.
00:11:38.000 But you have relatively little leverage, actually, over these companies because fundamentally they're American companies.
00:11:43.000 So it's a tremendous national security asset that these ended up being American companies.
00:11:47.000 So now the rest of the world looked at this and said, oh, we kind of lost the Internet.
00:11:51.000 We don't want to repeat that mistake.
00:11:53.000 And so if you look at what the rest of the world is doing around things like crypto or AI, they're really leaning into it.
00:11:59.000 And what they're trying to do is figure out how do we make these companies successful in our domestic markets.
00:12:04.000 Because if we can do that, we'll actually in the long term have leverage.
00:12:07.000 And we won't be beholden to these American companies over whom we have no leverage.
00:12:11.000 So actually the rest of the world has gone very pro-crypto.
00:12:14.000 UK, Germany, Singapore, many of these markets are actually embracing it and leaning into the technology.
00:12:19.000 They're giving visas to people to buy.
00:12:21.000 They're giving visas to people to come build these companies.
00:12:24.000 United States, on the other hand, I think we learned some wrong lessons, which is because we were the beneficiaries.
00:12:30.000 It's almost like a resource curse.
00:12:31.000 We sort of won the internet, and we forgot, in many ways, why we won.
00:12:35.000 And we sort of take it for granted, and how powerful these entities are, and how important they are from an economic perspective and a national security perspective.
00:12:42.000 And so we're kind of botching it a little bit.
00:12:44.000 And so there's a lot of sort of dragging your feet.
00:12:46.000 There's a lot of opposition.
00:12:47.000 There's a lot of sort of fear about what these things might do rather than sort of optimism about how amazing things could be.
00:12:56.000 And so the US is a bit of an outlier.
00:12:57.000 And specifically in the US, a lot of the concern fundamentally comes down to, in my opinion, a philosophical concern, which is who gets to control these things.
00:13:05.000 And I think from a government perspective, the legacy financial system and a lot of these computational companies like Amazon or Google, you can call the CEO. You can have them come testify.
00:13:17.000 You can yell at them.
00:13:18.000 You can get your PR moment.
00:13:20.000 Or you can fine them.
00:13:21.000 How do you do that with a decentralized crypto network?
00:13:24.000 How do you really think about that?
00:13:26.000 And the reality is you can't.
00:13:27.000 Who are you going to come yell at about Bitcoin?
00:13:29.000 You can't.
00:13:31.000 And even if you could, there's nothing they can do about it.
00:13:33.000 The code is open source and it's running and it's autonomous.
00:13:36.000 And this presents a real challenge, I think.
00:13:38.000 And we don't really have a great framework for how to think about that.
00:13:40.000 I think the last time, in my opinion, that we had to think about these kinds of issues was frankly like the invention of the LLC, like in the 1850s.
00:13:48.000 So if you go back and look at the history of limited liability corporations, when we invented them, it was actually a fantastic technological breakthrough: something like the Dutch East India Company, which said, let's pool a bunch of capital and we'll go pursue this venture.
00:14:00.000 And that charter, you had to go get that charter from the king.
00:14:03.000 So the king would give you a charter to start a corporation, and you could pool assets and then go pursue risk opportunities.
00:14:08.000 And it was really a separation of capital risk from the people pursuing that with their skills.
00:14:14.000 It took about 150 or 200 years, but a lot of people started to say, well, wait a second, why do you need to go to the king to talk about these things?
00:14:20.000 Like, why can't I just start a company?
00:14:21.000 And so around 1850...
00:14:23.000 In the UK and the US, we created really simple ways for people to create corporations.
00:14:30.000 And it's actually really fascinating.
00:14:31.000 If you go back and look at some of the debates that were happening at the time, there were really interesting questions like, well, you know, are these things people?
00:14:38.000 Like, do they have rights?
00:14:39.000 What happens if there is a fire and somebody dies?
00:14:41.000 Like, who's liable?
00:14:42.000 Is it the owner of the thing or is it the company or who pays?
00:14:45.000 Will these things have free speech rights?
00:14:47.000 Will they be able to, like, participate in elections?
00:14:49.000 What does that mean?
00:14:50.000 Will they have too much power and money?
00:14:51.000 And there were actually a lot of really interesting questions that were surfaced at the time.
00:14:54.000 But what we said was, you know what, this is a useful new technological construct, and so let's create some rules around things like corporations and LLCs.
00:15:00.000 And that was a really big breakthrough because that was a big part of what allowed things like railroads or the Industrial Revolution.
00:15:05.000 Because now you could aggregate capital to pursue opportunities at a scale that you just previously could not.
00:15:11.000 And that sort of fundamental set of questions of like, how does this work?
00:15:15.000 And what are the rules?
00:15:16.000 People have to really sit down and think about things like liability and ownership, free speech rights and all these things.
00:15:22.000 We haven't really had to face that for 150 years.
00:15:25.000 You know, we are now faced with a similar set of questions here around, well, these are decentralized networks.
00:15:30.000 There isn't a CEO that you can just call.
00:15:32.000 So how does liability work?
00:15:35.000 How does free speech work?
00:15:36.000 If I can write code and just put it on the internet, isn't that free speech?
00:15:40.000 What happens if something goes wrong?
00:15:42.000 How do we think about things like KYC and anti-money laundering?
00:15:46.000 And how do we prevent terrorist financing?
00:15:47.000 There's some really fundamental questions here that we need to sort through.
00:15:51.000 I don't think the answer is to just say no, because the reality is that that's not an answer, because this stuff is happening globally now.
00:15:58.000 And so I think the U.S. has actually sort of dragged its feet, and there's been a lot of opposition, but I think it's actually to the detriment of the U.S. to not have people thinking about these questions and trying to answer them.
00:16:08.000 This past year, we've witnessed not only the ongoing war in the Holy Land, but also something even more disturbing in some ways, an absolute explosion of anti-Semitism across the West.
00:16:17.000 When Jews in Israel need support most, I'm proud to partner with an organization that's been there for decades, the International Fellowship of Christians and Jews.
00:16:24.000 For over 40 years, the fellowship has been building bridges between Christians and Jews.
00:16:27.000 Their work has never been more critical than it is today.
00:16:29.000 Right now, they're on the ground in Israel and Ukraine providing real help, food, security, and essential supplies to those who need it most.
00:16:35.000 Your support during this critical time makes a concrete difference.
00:16:38.000 Every gift you give to the Fellowship goes directly to helping people with food, necessities, and security measures that save lives.
00:16:44.000 Here's what I need you to do right now.
00:16:46.000 Go to benforthefellowship.org.
00:16:48.000 That's benforthefellowship.org.
00:16:49.000 Make a gift today.
00:16:51.000 In the face of these many threats, the Fellowship's ongoing work providing security to Israelis has never been more important.
00:16:56.000 Remember, that's benforthefellowship.org.
00:16:58.000 They're doing important work on the ground, in the Holy Land, every single day.
00:17:01.000 A lot of suffering people there.
00:17:01.000 Give generously.
00:17:02.000 Head on over to benforthefellowship.org.
00:17:05.000 God bless and thank you.
00:17:06.000 So when you look at sort of the current regulators and the regulatory environment, give me some examples of some badly thought out legislation or badly thought out regulation.
00:17:17.000 Obviously, if you spend any time in this world at all, even like me sort of casually, the name Gary Gensler comes up a lot.
00:17:24.000 There are regulators who are widely, shall we say, disliked in these communities, in the crypto community, for being heavy-handed, for believing that you can put the genie back in the bottle.
00:17:24.000 Yeah.
00:17:37.000 What does bad regulation look like?
00:17:38.000 What are the problems?
00:17:39.000 Yeah, well, bad regulation is actually no regulation.
00:17:41.000 It's regulation by enforcement.
00:17:42.000 And so there's no legislation.
00:17:44.000 And so what we really need are clarity of the rules.
00:17:48.000 And it's actually, I've never, it's pretty crazy.
00:17:50.000 I've never worked in a space that is demanding legislation the way that the crypto industry is.
00:17:54.000 People go to senators and congresspeople and they're like, please regulate us.
00:17:57.000 Tell us what the rules are because we want to follow the rules.
00:18:00.000 Because in the absence of these rules, what you really end up doing is empowering the bad actors.
00:18:04.000 What you end up doing is the people who are willing to skirt the law, who are not willing to spend time working with the regulators or the legislators to figure this stuff out, are the ones who just go off and start doing scammy things, and then people ultimately lose their money.
00:18:15.000 And so, for example, if you look at the really famous example of this from a few years ago is FTX, which is sort of an American company.
00:18:23.000 It was a bunch of American founders, but they moved to the Bahamas to run their business.
00:18:26.000 And so it looks like an American company, but in practice was not an American company.
00:18:29.000 It was not regulated by the United States.
00:18:31.000 They were not subject to disclosure requirements in the US. But a lot of people trusted them because they looked like an American company.
00:18:37.000 And it ended up being a $10 billion fraud, and Sam Bankman-Fried is going to jail and so on.
00:18:42.000 But that's a perfect example of the lack of legislation and clarity creates the opportunity for these sorts of bad actors to come in and exploit the system.
00:18:51.000 Meanwhile, the good actors are trying to figure out how to comply and it's unclear how to do that.
00:18:55.000 And so when it comes to some of these agencies, what you really see happening is that they are taking this ambiguity and in the absence of clarity from the legislators are essentially deciding the rules and dictating those rules.
00:19:09.000 And sometimes those rules are applied arbitrarily and capriciously.
00:19:13.000 These are literally words that judges have used.
00:19:15.000 And a handful of companies that have the resources are now taking the SEC to court and winning.
00:19:21.000 And it's actually pretty remarkable.
00:19:23.000 For anybody who's studied the history of the agency model and Chevron and sort of these recent rulings that have happened, it's pretty fascinating because historically, you know, if the SEC gave you a Wells notice or said, we're taking you to court, their hit rate was so good.
00:19:37.000 They were so trusted.
00:19:38.000 Like your stock would tank if you got a Wells notice.
00:19:39.000 These days, because they've taken such a different strategy of issuing Wells notices and frankly losing in court multiple times, the market just doesn't even take that seriously anymore, which is a real, I think, hit to the credibility of the agency at the end of the day.
00:19:52.000 And so what's happened is we actually don't have legislation.
00:19:55.000 We don't have regulatory clarity.
00:19:57.000 The agencies are trying to do this through enforcement.
00:20:00.000 And then when it goes to court, they're in many cases losing.
00:20:03.000 And that's just an extremely expensive way to run an industry.
00:20:08.000 And so this is why the industry is going to, you know, there are many great legislators on both sides, on Democrats and Republicans.
00:20:14.000 You know, Senator Gillibrand from New York.
00:20:16.000 You have Richie Torres from New York on the Dem side.
00:20:18.000 You have Ro Khanna here from California.
00:20:21.000 You have Patrick McHenry, who's the chair of the House Financial Services Committee.
00:20:24.000 You know, you have people who are trying to push this forward.
00:20:28.000 But that's what we ultimately need is we need some sort of legislation to start clarifying what the rules actually are and actually telling the agencies what the rules are.
00:20:36.000 In the absence of that, the agencies are sort of taking this into their own hands.
00:20:40.000 So you mentioned economic growth and innovation.
00:20:43.000 Obviously, the area that's seen tremendous amounts of investment right now is in AI. So let's talk about AI. A lot of people are very, very nervous about AI, about what it's going to do.
00:20:54.000 There's sort of the tech optimist view, which is it's an amazing new technology.
00:20:57.000 It's going to allow for innovation at a level that we've never seen before because essentially, at a certain point, you are going to reach artificial general intelligence, which means... well, why don't you define it?
00:21:07.000 I don't want to misdefine it.
00:21:08.000 Why don't you define what artificial general intelligence is?
00:21:10.000 Yeah, well, it's interesting because AGI sort of has a couple of different flavors of it.
00:21:14.000 I think one version of it is—well, there's another test that we talked about, which is the Turing test.
00:21:19.000 But AGI, generally the way people will describe it, is for an arbitrary task, this software could perform as well as an arbitrary human.
00:21:26.000 So, you know, an average typical human.
00:21:28.000 Now there's a lighter weight version of that that you might say, well, for some universe of tasks, will this outperform?
00:21:35.000 So it's not universal in the sense that a human is very good at learning how to drive a car and ride a bicycle and also do their taxes and also learn email and learn new software.
00:21:45.000 And so maybe you don't get AGI that is necessarily good at doing all of those things, but you might have some sort of human level intelligence that for certain types of tasks it does as well as human.
00:21:53.000 I think we're more likely to be able to get that level first. The broader version is one that can learn anything with very limited data.
00:22:01.000 Humans are phenomenal with a couple of data points.
00:22:04.000 You can teach a four-year-old how to ride a bike in a couple hours.
00:22:06.000 It's pretty remarkable that we can do that.
00:22:09.000 That, I think, is some time away.
00:22:10.000 And so if you define AGI that way, that's probably many years away.
00:22:13.000 If you define AGI as, for certain types of intellectual tasks, performing as well as a typical human, I think that's on the order of two years away.
00:22:20.000 Now, that whole sort of can of worms opens up a whole set of questions.
00:22:26.000 I think this is the white-collar analog of what happened in the 80s with manufacturing, as robotics sort of entered, and has real consequences on society, because I think there will be something, if I had to guess, something like 50% to 70% of jobs that people do today, a computer will be better at doing in two to three years.
00:22:44.000 And so what this means is there's real economic pressure for businesses to not have as many employees to get the same output.
00:22:51.000 Over time, I don't think this means there are fewer jobs.
00:22:53.000 I think this means that companies will do more with fewer people.
00:22:56.000 And so you can actually do 10x as much with the same staff.
00:22:59.000 But it does present some challenges in the short term, which is that a lot of these jobs that humans are doing today take things like, you know, doing your taxes, you know, scheduling emails, you know, think about the number of financial transactions that you do that involve just filling out some PDF and sending it back and forth.
00:23:16.000 Like a huge swath of the economy, actually computers will now be better at than humans.
00:23:21.000 And we don't know what the consequences of that will be societally.
00:23:24.000 From a productivity perspective and a growth perspective, it's going to be phenomenal.
00:23:26.000 It's going to make every business that much more efficient.
00:23:28.000 It's going to make the economy boom and sort of take productivity through the roof, I think.
00:23:34.000 The second-order effects, I think, are still a little bit TBD. Yeah, so when we get to the second-order effects, which is, I think, what people are mostly worried about, the societal effects of what happens when half the population, at least in the temporary term, hasn't adjusted yet to the shift.
00:23:46.000 Because, as you mentioned, if you look at the history of American economic growth, there have been a number of industries where it used to represent a huge percentage of the workforce and now represents a tiny percentage of the workforce, and people moved into other jobs.
00:23:58.000 But there is this lag effect.
00:23:59.000 I mean, America used to be a heavily agricultural economy.
00:24:01.000 It was a heavy manufacturing-based economy.
00:24:01.000 Yeah.
00:24:03.000 Now it's a heavy service economy.
00:24:04.000 And so when that is taken over by AI, when AI is doing a lot of those tasks, what do you think is the sort of next thing that human beings are still going to be better at than machines?
00:24:16.000 Presumably, there will be a cadre of experts in every field who will still be better than the machines are because they'll be the geniuses who are...
00:24:23.000 If you're a Nobel Prize winning nuclear physicist and you have a bunch of grad level students who are working for you, but they're all AIs, it wipes out the grad level students, it doesn't necessarily wipe out the Nobel Prize winning nuclear physicist.
00:24:35.000 But if you're not that guy, if you're everybody else, which is 98% of the population, what exactly do you think is left for those people to do?
00:24:45.000 Yeah, I think a lot of it will become human-to-human interactions.
00:24:48.000 So there are certain things that humans ultimately will still want human interfaces for.
00:24:53.000 So for example, going to the doctor.
00:24:55.000 So an AI is going to be better at diagnosing you.
00:24:57.000 AI is already better at diagnosing than most doctors today.
00:25:00.000 But there's something fundamentally human about going to the doctor's office and having somebody talk to you and listen to you.
00:25:05.000 That doesn't go away.
00:25:06.000 And so the modality of how you do your job in a lot of human-to-human interactions will change pretty fundamentally, and you'll have to retrain a bunch of people on how to use these tools properly.
00:25:14.000 But I think what you'll see is anything that involves a human, you know, real-world interaction, I think will probably persist.
00:25:21.000 And then beyond that, I think it's a big open question.
00:25:24.000 You know, I think this is a big, big economic alignment.
00:25:27.000 And so, for example, what does education mean in this new world?
00:25:31.000 Yes, teaching is still a very human thing, but how do we prepare our students for this new world?
00:25:37.000 Or how do we think about, you know, you're talking about the Nobel Prize-winning researcher.
00:25:43.000 I think it's entirely possible that in 7 to 10 years, for some of these domains, that the AI is actually smarter than the Nobel Prize winning physicist or whatever.
00:25:51.000 And so when you get the superhuman intelligence on the other side of that, what are the consequences?
00:25:55.000 And I don't think anybody really has a great answer to that yet, other than the human systems will persist.
00:26:00.000 Anything that involves information, anything that involves data, anything that involves computer code, the AI is probably going to be better than humans at in relatively short order.
00:26:09.000 That sounds pretty scary to a lot of folks, obviously, especially because we're an economy that is optimized, for lack of a better term, for IQ. We're an economy that is optimized toward intelligence, and now you're saying that basically intelligence is no longer the metric for success because it's going to be available to everybody.
00:26:26.000 In the same way that the manufacturing resources have now become unbelievably available to everybody.
00:26:30.000 You don't have to have a manufacturing facility to get anything you want at your front door in a day.
00:26:34.000 It's going to be like that, except with intelligence.
00:26:37.000 And so where the mind goes is to places like, okay, so now you're going to be optimizing for emotional quotient.
00:26:45.000 You're going to be optimizing for how well people deal with people.
00:26:47.000 If you're an extrovert, you're going to do really well in this new economy.
00:26:50.000 If you're an introvert, you might be a little bit screwed.
00:26:52.000 The engineer types are going to have a real tough time.
00:26:54.000 But if you're a face for the AI to teach kids, then that'll be okay.
00:27:00.000 It also raises serious questions, presumably, about the educational process itself.
00:27:04.000 What do you even teach kids?
00:27:05.000 I see this even with my own kids now.
00:27:08.000 My son, who's a smart kid, I bet he was delayed by about a year because he understood how voice-to-text worked.
00:27:12.000 He would just grab my phone and instead of actually typing in the question the way that he might if he were able to read, he would just, you know, actually just voice the question and then he would have the phone read it back to him.
00:27:22.000 He never actually had to do that.
00:27:23.000 So what exactly are we going to be teaching kids and how does that affect the brain?
00:27:27.000 Because obviously, I mean, these are all open questions.
00:27:29.000 It seems to me that if you stop teaching kids math, for example, because it's so easy to just do everything using AI, doesn't that atrophy a part of the brain that you might actually have to use?
00:27:39.000 I mean, there are general uses that are taught via specific skill sets and that apply more broadly.
00:27:46.000 Yeah.
00:27:46.000 All excellent questions.
00:27:47.000 I don't think anybody has the answer.
00:27:48.000 I think there are two ways to sort of think about it.
00:27:50.000 Two data points you might consider.
00:27:51.000 So one is...
00:27:53.000 A lot of this was sort of what happened on the internet.
00:27:55.000 So imagine you're in 1990 and you can see the internet coming.
00:27:58.000 You can see e-commerce coming.
00:27:59.000 You can see video conferencing coming.
00:28:00.000 You can see social media coming.
00:28:02.000 What would you do?
00:28:04.000 That is the question I'd be asking myself if I were a parent right now, if I were a teacher, if I were a white-collar worker.
00:28:10.000 If you understood that the internet was happening in 1990, what would you have done?
00:28:13.000 And I think probably what you do is you start getting really, really familiar with the technology.
00:28:16.000 You start writing emails.
00:28:17.000 You start buying stuff online.
00:28:19.000 You start buying PCs and sort of staying on top of it, so that by the time you're in the year 2002, you've sort of upskilled yourself.
00:28:27.000 You've gotten to a place where you understand what these tools are capable of, and you've re-skilled, and so you're able to sort of move into this new world.
00:28:34.000 Could anybody have predicted that in 2010 or 2020 you would have, you know, the entire media landscape would look fundamentally different than it did in 1995?
00:28:42.000 I think that was a big leap.
00:28:44.000 But if you understood the tools, you were well positioned to sort of move into that new world as it happened.
00:28:48.000 So I think that's one mental model you might have: what happened in the 90s, and what would you have done?
00:28:53.000 If you knew everything that you knew about the internet today, but you're in 1990, what would you do?
00:28:56.000 And I think that's probably the best you could do is let's just get really, really familiar with the technology.
00:29:00.000 Let's make sure we buy our kids a PC. Let's make sure that they play with it every day.
00:29:03.000 And by the time they're 10 or 12 or 15, they'll have an intuitive sense for how this technology works and then they'll be okay.
00:29:09.000 The second thing I would observe is that all of these transitions are very scary in the moment.
00:29:13.000 Whether it's the internet, whether it was PCs, whether it was factories, whether it was railroads, whether it was the car, all the way back to the printing press.
00:29:23.000 And there's always this fundamental debate that happens.
00:29:26.000 There's always this fundamental question that happens between two groups of people.
00:29:29.000 One group of people says, you know what, the current system is fine, and actually we don't need this technology, we don't want this technology, because it's going to be so disruptive that a bunch of people will be left behind.
00:29:40.000 And there's another group of people who say, no, no, no, as scary as this thing is, it's a tool that will make life better.
00:29:48.000 And I think every time you look at the tools that humanity creates, if we choose to adopt them, life gets better.
00:29:54.000 So take something like the printing press.
00:29:56.000 It was a big driver in why we have literacy.
00:29:57.000 And I think you would be hard-pressed to argue that people being able to read is a bad thing.
00:30:01.000 But there were actually people around that time that said, actually, learning to read is a bad thing.
00:30:05.000 We don't want people to learn how to read.
00:30:06.000 This creates a negative social consequence.
00:30:09.000 You know, what if people can actually read the Bible and understand what it says?
00:30:12.000 Like, that's really dangerous.
00:30:13.000 We don't want that.
00:30:14.000 But I don't think anybody would make that argument today.
00:30:16.000 And so I think AI we're probably going to look at similarly in 20 years.
00:30:19.000 I think we're going to say, you know what, like, yes, it was a transition, but on balance, it was extremely positive.
00:30:26.000 And it does create some societal questions about how we manage that transition for people so we don't end up with a rust belt the way that we sort of, I think, abandoned a bunch of people in the 80s and 90s and didn't reskill them.
00:30:35.000 And this is a problem that we need to think about.
00:30:38.000 But I think on balance, it's going to be extremely positive.
00:30:41.000 And the best you can really do is just immerse yourself in it.
00:30:43.000 So, for example, at work in our company, what we've done is we literally, every person is required to use an AI. And every day you have to use it.
00:30:52.000 And we share productivity tips.
00:30:54.000 And so we have a Slack channel where people are sharing these tips.
00:30:56.000 We have an engineering team internally and we task them with going and actually getting these open source models and running the code and actually trying to build agents that will be able to do the job of a finance person or an operations person so that we can scale our business 10x without hiring more people.
00:31:11.000 And so I think the people that just sort of jump in the deep end today will be at a tremendous advantage in two to five years.
00:31:17.000 Alrighty folks, let's talk about dressing sharp without sacrificing comfort.
00:31:20.000 If you're tired of choosing between looking professional and feeling relaxed, I have great news for you.
00:31:24.000 Collars & Co.
00:31:25.000 is revolutionizing menswear with their famous dress collar polo.
00:31:29.000 Imagine this, the comfort of a polo combined with the sharp look of a dress shirt.
00:31:32.000 It's the best of both worlds, giving you that professional edge without the stuffiness.
00:31:35.000 Gone are the days of floppy collars that make you look like you just rolled out of bed.
00:31:38.000 These polos feature a firm collar that stands up straight all day long.
00:31:42.000 The four-way stretch fabric means you can freely move comfortably throughout the day.
00:31:45.000 It's office-approved, so you can look professional without feeling like you're trapped in a suit.
00:31:49.000 And get this, it travels really well, so whether you're commuting to work or jetting off for a business trip, you'll arrive looking crisp and feeling great.
00:31:54.000 But Collars & Co.
00:31:55.000 isn't just about polos.
00:31:56.000 They've expanded their line impressively.
00:31:58.000 They've got merino sweaters, quarter zips, stretch chinos, even a performance blazer.
00:32:01.000 They call the Maverick.
00:32:02.000 It's versatility at its finest.
00:32:04.000 These pieces look great by themselves, under a sweater, or with a blazer.
00:32:07.000 Look at what I'm wearing right now.
00:32:08.000 Don't I look spiffy?
00:32:09.000 I mean, check out this jacket.
00:32:11.000 This thing is great.
00:32:11.000 You see this thing?
00:32:12.000 So if you want to look sharp, you know, the way that I do, feel comfortable, support a fast-growing American company, head on over to collarsandco.com.
00:32:19.000 Use the code BEN for 20% off your very first order.
00:32:22.000 That's collarsandco.com, code BEN, collarsandco, because you shouldn't have to choose between looking good and feeling good.
00:32:29.000 So one of the challenges that people have brought up as a possible theoretical challenge is what's called the data wall.
00:32:33.000 The basic idea being that what happens when, right now, LLMs, large language models, are learning based on the amount of data that's on the internet.
00:32:41.000 What happens when people stop producing the data because the LLMs are actually producing all the data, and then they're really not producing the data, they're synthesizing the data that's already there.
00:32:49.000 So we hit a point Where because there's no new human-created data that's actually being input, that basically the internet runs out of data.
00:32:56.000 And there are some people who sort of suggest that that's not going to happen, and there are some people who suggest that it will.
00:32:59.000 What do you make of that question?
00:33:00.000 Yeah, it's a very interesting question.
00:33:01.000 I tend to think we will probably not run out of data anytime soon.
00:33:04.000 I think there are large domains that are private data that are not yet fully tapped.
00:33:09.000 And so I think when you think about things like medical records, bank records, financial data, government data, there's a lot of data actually that's not yet been ingested into these systems.
00:33:17.000 I also think that the AI creating stuff, it does create sort of a self-referential feedback loop, but I think a lot of that data is quite valuable.
00:33:24.000 And so you will have a lot of quote-unquote AI-generated data or potentially even synthetic data.
00:33:28.000 And I think if you can put a layer on top of that to have some sort of filtering that says, actually, this data is good data, you might actually get really interesting feedback loops that actually the AI can sort of learn from the AI. So, I think it's a big open question.
00:33:41.000 I'm sort of on the side of, actually, I think we'll get better and better using the existing data that we have, and there are still large buckets of data that are untapped, and so I don't think we've hit sort of the top of the S-curve in terms of being able to hit, you know, diminishing returns with data yet.
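A minimal sketch of the filtering loop he is gesturing at, assuming a hypothetical quality_score function standing in for a real filter (a reward model, a verifier, human review); the point is just the shape of the pipeline: generate, score, keep the good samples, fold them back into the training data.

    def quality_score(sample: str) -> float:
        # Hypothetical stand-in for a real quality filter; here,
        # just a placeholder heuristic based on vocabulary variety.
        return min(len(set(sample.split())) / 10, 1.0)

    def augment(training_data: list[str], generate, rounds: int,
                threshold: float = 0.8) -> list[str]:
        for _ in range(rounds):
            candidates = [generate(seed) for seed in training_data]
            # Keep only outputs the filter rates as good data, so the
            # model can learn from AI-generated text without drifting on junk.
            training_data += [c for c in candidates
                              if quality_score(c) >= threshold]
        return training_data

    data = ["the cat sat on the mat", "stablecoins settle around the clock"]
    data = augment(data, generate=lambda s: s + " and so on", rounds=2)
    print(len(data))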
00:33:54.000 Another question I've seen raised about this is the question of innovation.
00:33:57.000 How much can AI actually innovate as opposed to performing sophisticated recombination?
00:34:02.000 Which I suppose actually goes to, what exactly is innovation?
00:34:05.000 Is it just sophisticated recombination?
00:34:07.000 Or is it something else?
00:34:08.000 It's sort of like you're in the shower and the flux capacitor hits you when you fall off the toilet.
00:34:11.000 What exactly is that?
00:34:14.000 How far can AI go in terms of what we would perceive to be innovation and Also because whenever there's a sophisticated technology, human beings tend to then map their own model of the brain onto the sophisticated technology.
00:34:25.000 So for a while it was, the brain is a computer, and now the brain is an AI. And how accurate is that?
00:34:31.000 Yeah, I think...
00:34:33.000 It's a tricky question.
00:34:34.000 I think there's going to be a class of innovation that this stuff is really good at, which is when you have large amounts of data and you can interpolate inside those data sets, you can infer a lot.
00:34:42.000 You can learn new things.
00:34:44.000 And so you see, for example, things like protein folding.
00:34:47.000 We have a lot of data and we can start to learn how these things might work and how these systems might work.
00:34:51.000 I think there's some people who think that actually extrapolating beyond the data, creating entirely new frameworks and new ways of thinking is something that humans are uniquely good at.
00:35:01.000 I tend to be on the side of, you know what, every time humans say that somehow we're special and unique, it turns out to be incorrect.
00:35:08.000 And so whether we're talking about, you know...
00:35:13.000 Whether we're talking about the Turing test.
00:35:15.000 For a long time, Alan Turing was one of the pioneers of computer science.
00:35:19.000 He created this test that basically said, if you can talk to a computer, and maybe that's textual, maybe that's voice, but let's say it's text, and you cannot distinguish, let's say you have a black box and you sort of are asking questions and it gives you answers.
00:35:31.000 If you cannot tell if on the other side of it is a human or a computer, that is intelligence that's indistinguishable from a human.
00:35:39.000 And we've passed the Turing test at this point.
00:35:41.000 You can talk to an LLM and it looks like a human.
00:35:43.000 For all you know, it's a human typing in that response, and it appears to be self-aware and cognizant.
00:35:49.000 It's not, but it appears to be that way.
00:35:51.000 And so we actually passed this really important milestone that most people don't talk about, which is the Turing test.
00:35:56.000 And so I look at that and I say, well, there are a whole set of things that humans have claimed for a long time are uniquely human that these things based in silicon are now doing.
00:36:07.000 If we have an existence proof for a set of things that we thought were uniquely human that now these computers can do, why would it be the case that there are a whole set of other things that these computers would not be able to do?
00:36:15.000 So I'm kind of on the side of it's just a matter of time that these things are able to replicate all of these things, including the ability to extrapolate and the ability to create new frameworks and to be creative.
00:36:25.000 I think it may take longer, but I think those things will ultimately fall too.
00:36:29.000 So when we look at the world of AI and tons of money is pouring into AI, you see sovereign wealth funds all over the planet, people building their own AIs.
00:36:37.000 What exactly does that mean?
00:36:38.000 We say we're building an AI. What does that mean?
00:36:40.000 Yeah, it's a good question.
00:36:41.000 There's a lot of pieces to it.
00:36:43.000 The thing that most people ultimately see is some sort of a model that they're interacting with that you can type in text to or maybe speak to.
00:36:50.000 The voice transcription models are getting really good.
00:36:53.000 And the text-to-voice and voice-to-text, all that stuff, all those pipelines are getting really good.
00:36:57.000 That's generally what people think of.
00:36:58.000 There's a whole bunch of infrastructure behind the scenes that makes that possible.
00:37:01.000 Everything from how do you acquire the data to how do you clean the data to how do you get some sort of evaluation on it to how do you actually train these things on very, very large clusters with lots and lots of GPUs and doing that at a scale that requires many, many tens of thousands of GPUs and coordinating all of that.
00:37:16.000 There's a lot of infrastructure and computer science that goes into that.
00:37:19.000 And so that's where all the money is going.
00:37:21.000 Ultimately, it's how do you take large, large amounts of data, the entire internet, and compress it into these very, very small models that hopefully can run on your phone one day.
00:37:31.000 And the entire process of doing that is extremely capital intensive because the sheer amount of computational resource and data centers and power that you need, the GPUs that you need, are so immense to actually produce these models that you then interact with as an AI. It just requires tens of billions of dollars to be able to actually have all that computational power to create these models.
00:37:48.000 And how do you tell the difference between the models?
00:37:50.000 Are they all using the same data set, or is it just they're using different data sets?
00:37:53.000 Is it weighting?
00:37:54.000 What is the difference?
00:37:55.000 It's an excellent question.
00:37:56.000 I think at the end of the day, they will basically all be using the same data sets.
00:38:00.000 And so a lot of it comes into how do you do the training.
00:38:02.000 And there's a lot of know-how and proprietary knowledge around how you actually train these things.
00:38:09.000 A relatively small number of people in the world know how to do that today.
00:38:12.000 And out of that come these weights.
00:38:14.000 And those weights effectively let you understand, you know, given a context and a word, what is sort of the next word that the system would predict.
00:38:24.000 And so those weights, in the case of, let's say, Llama, are open source; in the case of something like OpenAI or Anthropic, they're not open source. But ultimately, that's what you're trying to produce.
00:38:31.000 You're trying to produce, sort of take all the data, compress it, and the compression ultimately is producing this sort of set of weights, and the weights are just, all they're really doing is saying, given a word, what is the next most likely word?
00:38:42.000 I'm glossing over a lot of details, obviously.
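To un-gloss one of those details: stripped of the tens of thousands of GPUs, "given a word, what is the next most likely word" looks like this. A toy bigram counter in Python stands in for a real neural network, purely to illustrate what the weights encode; real models condition on long contexts, not a single word.

    from collections import Counter, defaultdict

    corpus = "the dog chased the cat and the cat chased the mouse".split()

    # "Training": compress the corpus into weights, here just
    # counts of which word follows which.
    weights = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        weights[current][nxt] += 1

    def predict_next(word: str) -> str:
        # Inference: given a word, return the most likely next word.
        return weights[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat', since it follows 'the' most often here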
00:38:44.000 And that's what we call sort of the AI model.
00:38:46.000 And there's a lot of sort of how do you do the compression.
00:38:50.000 That's sort of all the proprietary stuff.
00:38:51.000 Which speaks to actually this sort of really...
00:38:53.000 It kind of goes back actually to this technology problem we've talked about, which is that stuff is very opaque, actually.
00:39:02.000 Even though you may be able to see the weights on the final model, you actually have no idea the data that it was trained on.
00:39:08.000 That is not disclosed.
00:39:10.000 And so very subtle biases could be entering into the data that we have no idea about, which presents a real challenge because these are private corporations.
00:39:18.000 They're not subject to...
00:39:20.000 Our vote, ultimately, at the end of the day.
00:39:23.000 And so we don't really know what's going into these data sets.
00:39:25.000 We don't really know what's being kept out of the data sets.
00:39:28.000 That's all opaque.
00:39:29.000 And there's no real accountability at this point in terms of what that means.
00:39:32.000 And so there's a lot of sort of, I think, open philosophical questions here about, like, who should get to dictate what goes into the data sets and why?
00:39:40.000 And what does that mean downstream for what these models are?
00:39:42.000 And so there's a real push from some in this community to make all of this open source.
00:39:45.000 Not just the weights, but actually open source the entire process.
00:39:48.000 Like, tell me what data you're using.
00:39:49.000 Tell me what data you left out.
00:39:51.000 Tell me why you're using this data.
00:39:52.000 Tell me what your eval scores are.
00:39:54.000 And open source the entire thing.
00:39:55.000 And we'll see if that happens.
00:39:57.000 There's sort of some tricky economic questions there because there's so much money to be made that you kind of don't want to disclose those trade secrets because you don't want somebody else to be able to replicate this.
00:40:06.000 And so right now, that whole pipeline is effectively closed.
00:40:10.000 The closed nature of it does create the possibility of some pretty dystopian issues.
00:40:14.000 I mean, just to take a quick sort of practical example, when you used to use Google, you would Google a term, and then you'd get a series of webpages, prioritized in a certain way, but you could always scroll the page, too, and sort of figure out what exactly you were looking for.
00:40:27.000 Now, you use Google's AI, and it just spits out an answer. It's destroying internet traffic.
00:40:33.000 Nobody has used a click-through link on Google for several months at this point.
00:40:37.000 And by the time my kids are using it a lot, they won't be clicking links at all.
00:40:40.000 I mean, it'll just be they type in a prompt and then the answer comes back out.
00:40:43.000 And then how you determine what that answer actually looks like, how they got to that answer, is going to be very difficult.
00:40:48.000 And so you can certainly see the possibility of bias in the informational environment.
00:40:53.000 And obviously, conservatives are worried about this all the time.
00:40:56.000 Remember when we were first seeing ChatGPT, conservatives were typing in things and it was coming out very left-wing.
00:41:00.000 We were saying, well, what are the parameters that are being used on all of this?
00:41:04.000 And there's no visibility into that.
00:41:06.000 And actually, the argument on the other side is, well, this stuff is really dangerous.
00:41:10.000 We don't want to open source this stuff.
00:41:12.000 What happens if it gets exploited by a rogue government?
00:41:14.000 Or what if bad actors take this technology and do things with it?
00:41:17.000 And so there's this really strong sort of safety movement around some of this.
00:41:21.000 And I don't think it's misguided necessarily, but I think that the form that it can take is essentially trust us.
00:41:27.000 Like we are the arbiters of truth.
00:41:28.000 We will do the right thing.
00:41:30.000 But I don't think that's actually the right way to go about doing these things.
00:41:34.000 Just to bring it back to the earlier conversation we were having, I think there's actually a very interesting historical analog here around religion, which is that there's a handful of people who really understand how these systems work and are ultimately making decisions about what data goes into these systems.
00:41:49.000 It's sort of like the church, you know, in the 1400s, where there's a set of people and they get to dictate what is truth.
00:41:59.000 And over time, even with the best of intentions, even with the best of people, if these things are long-lived enough, there is generational transfer.
00:42:06.000 There are new people that come in, and if there's no accountability, those systems have a tendency to become corrupt.
00:42:12.000 And this is what Martin Luther noted when he nailed those 95 Theses to the church door: wait a second, this system has become corrupted in all sorts of ways, things like indulgences.
00:42:24.000 And there's no accountability.
00:42:27.000 And what people on one side of it said was effectively, hey look, knowing how to read and read the Bible and understand this stuff is really dangerous.
00:42:34.000 You need an interpreter.
00:42:35.000 You need an intermediary to help you understand how to do this and trust us.
00:42:39.000 We're going to filter it in the right way and we'll do what's right for you and we'll do what's right for society.
00:42:43.000 And there's another argument that said, no, no, no, you should have a direct relationship with God.
00:42:46.000 You should be able to read the Bible and decide what you do.
00:42:49.000 And actually, the 95 Theses and the printing press happening at the same time was sort of this confluence of philosophy and technology coming together.
00:42:55.000 And I think there's a very similar philosophical, technological argument and tension happening right now.
00:43:00.000 Do a small group of people get to interpret the data and get to decide what goes into these models and they get to dictate truth and you have very little say over that and whatever they say is true is true?
00:43:10.000 Or do you get to have a direct relationship with the AI? Do you get to have a direct relationship with what data goes into it and how you train it and so on?
00:43:16.000 And there's a lot of people that sort of fall into this world.
00:43:18.000 But it's a scary world because what you have to do if you take that position is you have to believe that humans are fundamentally good.
00:43:24.000 That if you give them this technology and you give them this knowledge, they'll do the right things with it.
00:43:27.000 On balance, humans are good.
00:43:28.000 But that's a big leap for a lot of people.
00:43:30.000 It's a scary thing to hand over that powerful technology to everybody in the world.
00:43:35.000 So when you look at the risks of AI, you see sort of the catastrophist risks, the idea that it's going to take over the nuclear missiles and just start firing them at people.
00:43:42.000 How seriously ought we to take this sort of World War III end of humanity risk, the Skynet scenario?
00:43:49.000 Yeah, I think it's probably overblown, personally.
00:43:51.000 I think it's unlikely that this thing runs away from us in a way that we can't control.
00:43:59.000 I mean, literally, you just shut off the data center, right?
00:44:02.000 We have a kill switch.
00:44:03.000 And so that's not to say you shouldn't think about these things and battle-harden your systems and so on.
00:44:09.000 But I think there's a lot of sort of religious fervor around this.
00:44:13.000 And I think the place where it actually comes from is a desire for control.
00:44:17.000 And it's a fear-based motivation.
00:44:20.000 And it's sort of a degrowth mindset.
00:44:22.000 And I think if you have that kind of a mindset, it's very easy to fall into this trap.
00:44:25.000 And then, of course, the question is like, well, then who gets to decide the safety?
00:44:28.000 And the answer is, well, me.
00:44:29.000 And so it's actually sort of like, it's a perverse incentive, I think.
00:44:32.000 It's like, by and large, the people that are pushing this argument are very self-interested in wanting to push this argument.
00:44:38.000 And hence, I think the argument gets overblown.
00:44:41.000 I think this also sort of feeds from Hollywood.
00:44:43.000 I think for the last 20 to 30 years, the way that the media has really thought of technology has been through this dystopian lens.
00:44:50.000 I think it's actually like a really fundamental problem in society.
00:44:52.000 If the interface that most people have with technology is that they use the technology, but then they see media about technology, and by and large that media is negative, then people's gut reaction to technology is going to be dystopian.
00:45:06.000 Whereas if you go back to the 60s, you see these beautiful posters: we're going to have cities on the moon, or underwater cities, and we're going to be living on Mars.
00:45:12.000 And so there's this sort of like positivity around technology.
00:45:15.000 And so I think a lot of this is actually, it's cultural and societal.
00:45:19.000 It's like, actually, I think it's not about the technology.
00:45:21.000 It's actually kind of, where are we as a country and as a society?
00:45:25.000 And I think there's a lot of dystopian sort of undercurrents just generally in society right now.
00:45:28.000 It's a fascinating point.
00:45:29.000 You can see where that's coming from, meaning lack of social cohesion leads to mistrust of one another.
00:45:35.000 Mistrust of one another means we're afraid that the technologies that we all control are going to go completely awry and we'll use the technologies to murder one another.
If it were you and your family, and you're just like, wow, this is a great new technology that we just adopted in our house, then you're really not super worried that your family is going to blow each other up. Yeah, totally. But, you know, I'm not sure how to... It seems like very often I come back to this, but without the institution of social cohesion coming back, you will see a sort of revanchist Luddism rise. And I think you do see that on the rise. I think you see people being like, whoa, this is all moving too fast, this is all too scary, we need to, you know, burn the looms. Yeah, that's right.
00:46:09.000 And to bring it back to the crypto conversation we were having earlier, I mean, you see this in the data.
00:46:13.000 You can look at Pew data or Gallup data or any survey data, and you see, you know, do Americans, and frankly, it's all Western democracies, you know, how much do you trust your public schools?
00:46:22.000 And it used to be, in the 1970s, it would be 70%, and now it's down to 30%.
00:46:26.000 How much do you trust newspapers?
00:46:27.000 It was 70%, now it's down to 30%.
00:46:29.000 How much do you trust the banks?
00:46:30.000 It was 70%, now it's down to 30%.
00:46:31.000 So you can go through basically every institution in society that we sort of looked to to make the system work, and it's just inverted from the 60s and 70s.
00:46:40.000 It was that people trusted these institutions and they don't anymore.
00:46:44.000 And I think it's actually warranted.
00:46:46.000 Most of these institutions, if you think about it, were created after World War II. And they've really atrophied.
00:46:50.000 And it is time for a reboot in a bunch of these institutions.
00:46:53.000 But I think you're right that the root of it is social cohesion.
00:46:55.000 It's like, how do we think about who we are as a people, as a society, and what are our values?
00:47:00.000 Because the institutions are a reflection of that.
00:47:02.000 And the institutions atrophying and the social cohesion being lost over the last 50 years, I think, kind of go hand in hand.
00:47:07.000 Do you think that technology can exacerbate that?
00:47:09.000 Meaning that when you look at crypto, it's actually a great example of this, in the sense that...
00:47:15.000 Let's say that there are higher levels of social cohesion, and you wanted to transfer money from, say, San Francisco to London over the weekend, but all the banks are closed.
00:47:25.000 In the old days, or if you had a close-knit community, what you might do is get on the phone with somebody who you knew: hey, take a certain amount of money out of your bank this Friday afternoon, and I'll pay you back on Monday.
00:47:35.000 Okay, I know you.
00:47:37.000 I know your family.
00:47:38.000 I know you're good for it.
00:47:39.000 No problem.
00:47:40.000 And as social cohesion breaks down, it's like, okay, we actually need to now control for that by creating trust-free systems that allow for that sort of monetary transfer.
00:47:48.000 And the more you rely on the trust-free systems, the less actually there's a requirement of trust societally in order to get to the same sort of outcome.
00:47:56.000 Yeah, that's an interesting take.
00:47:56.000 Hmm.
00:47:57.000 You know, as a backstory, this is how the Rothschild empire started, actually.
00:48:01.000 The brothers all trusted each other, and so you could do this kind of stuff in a...
00:48:01.000 Exactly.
00:48:04.000 This was their first-mover advantage.
00:48:05.000 It really was.
00:48:06.000 I mean, you had, like, brothers all over the continent, and they would just wire money to one another or send a note to one another in code.
00:48:12.000 Usually, it was actually in a Hebrew script.
00:48:14.000 It would be, like, transliterated English, but in Hebrew.
00:48:17.000 So people would think it was Yiddish, but it didn't read like Yiddish.
00:48:19.000 And that's how they had that first mover advantage.
00:48:21.000 It's still how what Thomas Sowell calls middleman minorities tend to thrive in most societies, specifically because of this.
00:48:28.000 So to take another Jewish example, Hasidic diamond dealers.
00:48:31.000 Everybody's very angry at Hasidic diamond dealers because it turns out that they'll have a cousin in Jerusalem, and their cousin in Jerusalem will be like, okay, I'm going to call Chaim over in New York, and he's just going to do it for me because we're cousins.
00:48:41.000 And so it turns out kinship networks are an amazing source of social cohesion and data transfer.
00:48:47.000 And as that sort of stuff comes apart, as we become more atomized, you have to create tools that control for the atomization, but then that actually tends to create a spiral of, okay, well, I don't need to trust the person, so why even bother to trust the person?
00:49:00.000 Yeah, I think that's all right.
00:49:02.000 I tend to think of these things in different buckets.
00:49:04.000 Which is, you know, take just the invention of money as a technology to scale this idea of, I don't need to trust you, because I can pay you money and you'll give me the service that I need, rather than having a barter system or a debt-based system.
00:49:18.000 And so I think they kind of have to coexist.
00:49:20.000 And the reality is we exist now in a global world.
00:49:23.000 We exist, you know, in a highly interdependent world.
00:49:26.000 The internet allows us to talk to anybody in the world and communicate and learn from anybody in the world.
00:49:31.000 And so the technologies that allow us to sort of interact with each other and transact with each other without knowing each other are, I think, great enablers for productivity and learning and, you know, advancement and progress and innovation and all the things that we need at the species level.
00:49:45.000 I don't think they're a substitute for social cohesion.
00:49:47.000 I think it's a separate problem.
00:49:49.000 We need to solve both.
00:49:50.000 I think we need to solve the problem of what does it actually mean to be in a society and be in a community and how do we foster those kinds of bonds because at the end of the day we're still atoms and we're still humans.
00:49:59.000 We're still social primates and that's a very important part of society.
00:50:02.000 We also have to solve this other problem: there are now 7 billion people on Earth, and we can create tremendous productivity.
00:50:12.000 We can create tremendous growth.
00:50:14.000 We can educate.
00:50:14.000 If you think about it, say, from a meritocratic perspective, or sort of an opportunity perspective: of the 7 billion brains on Earth, how many have we really tapped?
00:50:24.000 There's one Elon.
00:50:25.000 There's one Zuck.
00:50:25.000 There's one Bezos.
00:50:28.000 They're not... I would argue maybe we've tapped 7 million people, you know, and properly fed them and given them the resources and education and a smartphone in their pocket.
00:50:38.000 And really, that number should be much bigger, because the top 10% of humans are actually pretty smart.
00:50:41.000 So we should have 700 million people who are of that caliber.
00:50:44.000 And so there's like 100x left in humanity if you can get the resources to these people.
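Spelled out, the back-of-the-envelope arithmetic here is simple; the figures below are the speaker's own rough estimates from the conversation, not measured data.

```python
population = 7_000_000_000        # "7 billion brains on Earth"
tapped = 7_000_000                # his rough guess at who has been fully resourced
potential = population // 10      # "the top 10%": 700 million people
print(potential // tapped)        # 100, i.e. "there's like 100x left in humanity"
```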
00:50:47.000 But to do that, you have to have these tools that can operate at global scale.
00:50:50.000 So you kind of, I think, need both.
00:50:52.000 I think that one is not a substitute for the other.
00:50:54.000 Well, Avichal, this has been awesome, really interesting.
00:50:57.000 And where should folks go to check out more of your work?
00:51:01.000 On Twitter, probably.
00:51:02.000 Just @avichal on Twitter.
00:51:03.000 I publish a lot of thoughts there.
00:51:06.000 But a lot of it, frankly, is in private conversations.
00:51:08.000 The really fun stuff is in private.
00:51:09.000 This has been awesome.
00:51:10.000 Really appreciate the time.
00:51:11.000 Great to see you.
00:51:12.000 The Ben Shapiro Sunday Special is produced by Savannah Morris and Matt Kemp.
00:51:21.000 Associate Producers are Jake Pollock and John Crick.
00:51:24.000 Production Coordinator is Jessica Kranz.
00:51:26.000 Production Assistant is Sarah Steele.
00:51:28.000 Editing is by Olivia Stewart.
00:51:30.000 Audio is mixed by Mike Koremina.
00:51:32.000 Camera and Lighting is by Zach Ginta.
00:51:34.000 Hair, Makeup, and Wardrobe by Fabiola Christina.
00:51:37.000 Title Graphics are by Cynthia Angulo.
00:51:39.000 Executive Assistant Kelly Carvalho.
00:51:47.000 The Ben Shapiro Show Sunday Special is a Daily Wire production.