Avichal Garg is a successful venture capitalist, serial entrepreneur, and founding partner at Electric Capital, where he focuses on investing in innovative technologies in the Web3 space. In this episode, he outlines the problems with legacy tech and cryptocurrency's role in the declining power of the American dollar, and why he's an optimist about the burgeoning AI revolution.
00:00:00.000And bringing it back to the crypto conversation we were having earlier, I mean, this is, you see this in the data, you can look at Pew data or Gallup data or any survey data, and you see, you know, do Americans, and frankly, it's all Western democracies, you know, how much do you trust your public schools?
00:00:12.000And it used to be, in the 1970s, it would be 70%, and now it's down to 30%.
00:00:21.000So you can go through basically every institution in society that we sort of looked to to make the system work, and it's just inverted from the 60s and 70s.
00:00:30.000Avishal Garg is a successful venture capitalist, serial entrepreneur, and founding partner at Electric Capital, where he focuses on investing in innovative technologies in the Web3 space.
00:00:39.000Garg's background in computer science and engineering at Stanford led him to executive roles at Google and Facebook, yet Garg emerged from the legacy companies of Silicon Valley a strong advocate for decentralized technologies like cryptocurrency.
00:00:50.000Garg views crypto as a democratizing force in tech, enhancing the average citizen's access to digital and financial services.
00:00:56.000At Electric Capital, Garg is now carving a path for early-stage startups in crypto.
00:01:04.000In today's episode, Avichal outlines the problems with legacy big tech and cryptocurrency's role in the declining power of the American dollar.
00:01:11.000He also talks about why he's an optimist about the burgeoning AI revolution and how we can all best prepare for the changes it will almost certainly make in our everyday lives.
00:01:18.000Stay tuned to hear Avichal Garg's perspective on the intersection of technology and politics on this episode of the Sunday Special.
00:01:24.000Avichal, thanks so much for stopping by.
00:02:04.000Crypto, as it's come to be known over the last decade, is really about how you use cryptography and distributed systems to move money around and do new kinds of computation.
00:02:14.000And so this really started with Bitcoin circa 2009.
00:02:16.000That sort of kicked off what we call modern crypto.
00:02:20.000And that came out of sort of an observation or sort of really a political belief that the legacy financial systems were messed up and broken.
00:02:29.000And so if people go back to 2008, you know, the economy was falling apart, the banks had sort of done a bunch of things that they shouldn't have been doing.
00:02:38.000And so there was a pseudonymous computer scientist by the name of Satoshi Nakamoto.
00:03:41.000That core insight became Bitcoin, which is now almost a trillion-dollar asset.
00:03:45.000There's been a whole sort of explosion of other crypto assets around that, cryptocurrencies and crypto projects around that.
00:03:51.000That whole ecosystem is worth, you know, somewhere around $2 trillion at this point.
00:03:55.000And so, you know, a lot has evolved over the last 15 years, but kind of the core original insight was, what if I wanted to do things and I don't trust these intermediaries anymore?
00:04:03.000Which I think touches on a lot of sort of things that people have really started to feel over the last 5 or 10 years in particular.
00:04:08.000But this was sort of the computer scientist version of: if I want to do the things I wanted to do before, how do I do them without trusting any intermediaries?
00:04:15.000So you can certainly see why established institutions would not like this, because it's quite disruptive.
00:04:19.000If you're the banking industry and you're used to people using you for those mechanisms of exchange, and now that's no longer being used, you're not going to like that.
00:04:26.000If you're a government, you might not particularly like it, because you've been in charge of the monetary supply, and now when you're looking at something like Bitcoin, it has its own established value. I want to get to the institutional opposition to Bitcoin and to crypto generally.
00:04:42.000But first, what do you think that crypto has become?
00:04:46.000There was a lot of talk early on about wide adoption of crypto as a mechanism of exchange, as opposed to a second property of it, which is as a repository of value.
00:04:56.000I'm hedging against inflation in the economy, and so I buy a bunch of gold, I put it in my safe, and a year from now it's worth more money.
00:05:02.000Some people have used Bitcoin like that, but there are obviously other uses for crypto.
00:05:10.000Yeah, so what it's really become is sort of from that original insight of potentially the ability to move money around, I think people started to say, well, actually, this is just a new computational infrastructure.
00:05:20.000What it really does is, if you look at Amazon or Facebook or Google, they really built their entire infrastructure around speed and efficiency and scalability.
00:05:29.000And so you got massive centralization, which you see on the internet.
00:05:31.000You got these intermediaries like Facebook or Google or Amazon that sort of concentrated all of the users and concentrated all of the money and all the power.
00:05:38.000And it's partly an outgrowth of what the infrastructure lets you do.
00:05:41.000So the way the internet was designed, you sort of got this necessary centralization effect because scale sort of makes things more efficient.
00:05:49.000In this world, if you start from an assumption of, well, what if I don't want centralization?
00:05:53.000What if I don't want to trust my counterparty?
00:05:55.000And what if I'm willing to trade off some efficiency to get these sorts of properties?
00:05:58.000You sort of start from a different premise.
00:06:00.000And so the infrastructure has grown in a different way.
00:06:02.000And Bitcoin was just the first application, this idea of a non-sovereign store of value, almost like a digital gold.
00:06:09.000And as that sort of started to take off, people started to say, well, you know, what if you could write code around this?
00:06:14.000What if it's not just about transferring the value, but what if I could, because, you know, as soon as the money becomes ones and zeros...
00:06:21.000Well, ones and zeros is just a thing that a computer can recognize.
00:06:25.000And so what if you wrote code around that?
00:06:27.000And so starting in about 2015 with Ethereum, people started to say, well, if I can write code around money, I can now program money.
00:06:34.000And when you start thinking about a lot of the world, a lot of the infrastructure of the world, whether it's wills or trusts or escrow agreements if you bought a house, or insurance, securities, derivatives.
00:06:46.000Fundamentally, what they are is here's a pile of resources, here's a pile of money, and I have some rules around who can access that money, and I have some rules around what happens over time with that money.
00:06:56.000There's cash flows that come off of it, or in the event of my passing, something needs to happen with this pile of resources.
00:07:02.000And now, instead of all of that being written in legal documents, we could write it in code.
00:07:07.000And that insight sort of kicked off an entirely new sort of arm and branch of what's known as crypto today that started to explore this idea of what would it look like if you could write code that owned money and operated around money.
00:07:20.000And so now the kinds of use cases that you're starting to see that have really taken off are, for example, stablecoins.
00:07:25.000And this is the idea that actually a lot of the world wants to transact U.S. dollars.
00:07:31.000The example we always use is, if you wanted to get money from, let's say, San Francisco to London, and it's Friday afternoon, you are actually better off going to the bank, pulling out $20,000, putting it into a briefcase, buying a flight, getting on a plane, and going to London and then handing somebody the suitcase.
00:07:49.000Because if you send a wire, it's not going to get there until Tuesday.
00:08:18.000It shows up in somebody's wallet, and then they can off-ramp it back to their bank account when the banks open.
00:08:22.000But it's actually the best, fastest, cheapest way to move U.S. dollars in the world now.
00:08:25.000And so you're seeing on the order of about a trillion dollars a quarter of money movement happening with this stuff.
00:08:30.000And for a lot of sort of international use cases.
00:08:32.000So you're seeing, for example, a lot of remittance volumes start to go through stablecoins in the Gulf or...
00:08:37.000In Southeast Asia, where a lot of people want dollars and they can't get access to dollars, and there are large populations of people in the West from those markets.
00:08:44.000And it's just a better way to do business.
00:08:46.000I mean, we use this for investing in startups, because we can just send money over the weekend after you invest in a company.
00:08:52.000And so you're starting to see now businesses start to look at this and adopt this as one of the novel and interesting things.
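The stablecoin flow described above (dollars on-ramped into tokens, moved wallet to wallet at any hour, then off-ramped back to a bank account when banks open) is, at its core, simple bookkeeping. A minimal sketch, purely illustrative, with all names invented; real stablecoins run this ledger on a blockchain with cryptographic guarantees:

```python
# Illustrative bookkeeping behind a dollar-backed stablecoin.
class StablecoinLedger:
    def __init__(self):
        self.balances: dict[str, int] = {}  # wallet -> token balance ($)
        self.reserve = 0                    # dollars backing the tokens

    def on_ramp(self, wallet: str, dollars: int) -> None:
        # Dollars deposited, tokens minted 1:1.
        self.reserve += dollars
        self.balances[wallet] = self.balances.get(wallet, 0) + dollars

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # Wallet-to-wallet movement: works on a Friday night, no wire needed.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def off_ramp(self, wallet: str, amount: int) -> int:
        # Tokens burned, dollars paid out when the bank opens.
        if self.balances.get(wallet, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[wallet] -= amount
        self.reserve -= amount
        return amount
```

The San Francisco-to-London example then becomes: on-ramp $20,000 on Friday afternoon, transfer it to the London wallet immediately, and off-ramp it to a UK bank account on Monday morning.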
00:08:58.000And so once you realize that money is digital and you can write code around money, it opens up this whole new universe of, well, what are all the things where we're using technology built in the 1970s, and we can do those things in a drastically different way today?
00:09:10.000And so you're starting to see new emerging use cases even beyond just the basic sort of U.S. dollar money movement into things like lending, into things like stable swaps, so like Forex sorts of use cases.
00:09:20.000Like really sort of base infrastructure that we haven't really had to think about since maybe World War II, you know, Bretton Woods in the 1940s, where we invented a lot of this infrastructure.
00:09:29.000But we're reimagining it now with 2024 computer science, which is just faster, cheaper, better, more scalable.
00:09:34.000So, I mean, all of this sounds pretty amazing.
00:09:36.000And yet there's tremendous regulatory heartburn over all of this.
00:09:40.000So to sort of steel man the regulatory worries, what exactly are people most worried about that they are now suggesting there needs to be a heavy regulatory regime around some of the use cases or around the original technology?
00:09:52.000So the backdrop here is, I think it's important to also contextualize this globally. It turns out the U.S. is a bit of an outlier in this.
00:09:58.000Actually, most of the world is very, very happy to have these technologies.
00:10:02.000And I think it's an interesting history lesson, which is if you go back to the 90s and the U.S. internet, you know, there's Section 230 that sort of happens, and it was part of the Telecom Act of '96.
00:10:13.000And it was a really, really important piece of legislation because what it did was it opened the door to the internet, the modern internet as we know it.
00:10:20.000And really what it said was, you know what, these technologies are a little bit different.
00:10:24.000They're not exactly like newspapers that have editorial staff and you can sort of control these things in the way that you did the newspapers.
00:10:29.000We've got to think about these rules differently.
00:10:31.000And so, for example, it set aside rules that said, well, you know, if you're a digital forum, like a Facebook or YouTube, you have different rules than a newspaper might have.
00:10:39.000And actually, like, you know, as a media person, you don't need to go to the FCC to get a media license, a radio license, to publish on the Internet.
00:10:48.000And so these sort of basic ground rules understood that the technology was different in some fundamental ways, and so we needed some new rules to account for how it was different.
00:10:57.000The reason America won the internet was because some of these rules were put in place in the late 90s.
00:11:02.000And so we got Amazon, we got Facebook, we got YouTube, and we got Google, and all the sort of mega tech companies.
00:11:09.000Now the rest of the world looked at this, and today has a set of challenges.
00:11:13.000Like if you're Europe, if you're Brazil, if you're India, if you're China, you're looking at these things and you're saying they're so, so, so important in society as sort of fundamental pieces of infrastructure, but you have zero leverage over them.
00:11:26.000If you want to call in one of the CEOs of these companies in Europe, are the CEOs going to come?
00:12:31.000We sort of won the internet, and we forgot, in many ways, why we won.
00:12:35.000And we sort of take it for granted, and how powerful these entities are, and how important they are from an economic perspective and a national security perspective.
00:12:42.000And so we're kind of botching it a little bit.
00:12:44.000And so there's a lot of sort of dragging your feet.
00:12:57.000And specifically in the US, a lot of the concern fundamentally comes down to, in my opinion, a philosophical concern, which is who gets to control these things.
00:13:05.000And I think from a government perspective, the legacy financial system and a lot of these computational companies like Amazon or Google, you can call the CEO. You can have them come testify.
00:13:31.000And even if you could, there's nothing they can do about it.
00:13:33.000The code is open source and it's running and it's autonomous.
00:13:36.000And this presents a real challenge, I think.
00:13:38.000And we don't really have a great framework for how to think about that.
00:13:40.000I think the last time, in my opinion, that we had to think about these kinds of issues was frankly like the invention of the LLC, like in the 1850s.
00:13:48.000So if you go back and look at the history of limited liability corporations, it was actually a fantastic technological breakthrough, something like the Dutch East India Company, which said, let's pool a bunch of capital and we'll go pursue this venture.
00:14:00.000And that charter, you had to go get that charter from the king.
00:14:03.000So the king would give you a charter to start a corporation, and you could pool assets and then go pursue risk opportunities.
00:14:08.000And it was really a separation of capital risk from the people pursuing that with their skills.
00:14:14.000It took about 150 or 200 years, but a lot of people started to say, well, wait a second, why do you need to go to the king to talk about these things?
00:14:20.000Like, why can't I just start a company?
00:14:31.000If you go back and look at some of the debates that were happening at the time, there were really interesting questions like, well, you know, are these things people?
00:14:50.000Will they have too much power and money?
00:14:51.000And there were actually a lot of really interesting questions that were surfaced at the time.
00:14:54.000But what we said was, you know what, this is a useful new technological construct, and so let's create some rules around things like corporations and LLCs.
00:15:00.000And that was a really big breakthrough because that was a big part of what allowed things like railroads or the Industrial Revolution.
00:15:05.000Because now you could aggregate capital to pursue opportunities at a scale that you just previously could not.
00:15:11.000And that sort of fundamental set of questions of like, how does this work?
00:15:42.000How do we think about things like KYC and anti-money laundering?
00:15:46.000And how do we prevent terrorist financing?
00:15:47.000There's some really fundamental questions here that we need to sort through.
00:15:51.000I don't think the answer is to just say no, because the reality is that that's not an answer, because this stuff is happening globally now.
00:15:58.000And so I think the U.S. has actually sort of dragged its feet, and there's been a lot of opposition, but I think it's actually to the detriment of the U.S. to not have people thinking about these questions and trying to answer them.
00:17:06.000So when you look at sort of the current regulators and the regulatory environment, give me some examples of some badly thought out legislation or badly thought out regulation.
00:17:17.000Obviously, if you spend any time in this world at all, even like me sort of casually, the name Gary Gensler comes up a lot.
00:17:24.000There are regulators who are widely, shall we say, disliked in these communities, in the crypto community, for being heavy-handed, for believing that you can put the genie back in the bottle.
00:17:50.000I've never worked in a space that is demanding legislation the way that the crypto industry is.
00:17:54.000People go to senators and congresspeople and they're like, please regulate us.
00:17:57.000Tell us what the rules are because we want to follow the rules.
00:18:00.000Because in the absence of these rules, what you really end up doing is empowering the bad actors.
00:18:04.000What you end up doing is the people who are willing to skirt the law, who are not willing to spend time working with the regulators or the legislators to figure this stuff out, are the ones who just go off and start doing scammy things, and then people ultimately lose their money.
00:18:15.000And so, for example, if you look at the really famous example of this from a few years ago is FTX, which is sort of an American company.
00:18:23.000It was a bunch of American founders, but they moved to the Bahamas to run their business.
00:18:26.000And so it looks like an American company, but in practice was not an American company.
00:18:29.000It was not regulated by the United States.
00:18:31.000They were not subject to disclosure requirements in the US. But a lot of people trusted them because they looked like an American company.
00:18:37.000And it ended up being a $10 billion fraud, and Sam Bankman-Fried is going to jail and so on.
00:18:42.000But that's a perfect example of how the lack of legislation and clarity creates the opportunity for these sorts of bad actors to come in and exploit the system.
00:18:51.000Meanwhile, the good actors are trying to figure out how to comply and it's unclear how to do that.
00:18:55.000And so when it comes to some of these agencies, what you really see happening is that they are taking this ambiguity and in the absence of clarity from the legislators are essentially deciding the rules and dictating those rules.
00:19:09.000And sometimes those rules are applied arbitrarily and capriciously.
00:19:13.000These are literally words that judges have used.
00:19:15.000And a handful of companies that have the resources are now taking the SEC to court and winning.
00:19:23.000For anybody who's studied the history of the agency model and Chevron and sort of these recent rulings that have happened, it's pretty fascinating because historically, you know, if the SEC gave you a Wells notice or said, we're taking you to court, their hit rate was so good.
00:19:38.000Like your stock would tank if you got a Wells notice.
00:19:39.000These days, because they've taken such a different strategy of issuing Wells notices and frankly losing in court multiple times, the market just doesn't even take that seriously anymore, which is a real, I think, hit to the credibility of the agency at the end of the day.
00:19:52.000And so what's happened is we actually don't have legislation.
00:19:57.000The agencies are trying to do this through enforcement.
00:20:00.000And then when it goes to court, they're in many cases losing.
00:20:03.000And that's just an extremely expensive way to run an industry.
00:20:08.000And so this is why the industry is going to legislators. You know, there are many great legislators on both sides, Democrats and Republicans.
00:20:14.000You know, Senator Gillibrand from New York.
00:20:16.000You have Richie Torres from New York on the Dem side.
00:20:18.000You have Ro Khanna here from California.
00:20:21.000You have Patrick McHenry, who's the chair of the House Financial Services Committee.
00:20:24.000You know, you have people who are trying to push this forward.
00:20:28.000But that's what we ultimately need is we need some sort of legislation to start clarifying what the rules actually are and actually telling the agencies what the rules are.
00:20:36.000In the absence of that, the agencies are sort of taking this into their own hands.
00:20:40.000So you mentioned economic growth and innovation.
00:20:43.000Obviously, the area that's seen tremendous amounts of investment right now is in AI. So let's talk about AI. A lot of people are very, very nervous about AI, about what it's going to do.
00:20:54.000There's sort of the tech optimist view, which is it's an amazing new technology.
00:20:57.000It's going to allow for innovation at a level that we've never seen before because essentially, at a certain point, you are going to reach artificial general intelligence, which means... why don't you define it?
00:21:08.000Why don't you define what artificial general intelligence is?
00:21:10.000Yeah, well, it's interesting because AGI sort of has a couple of different flavors of it.
00:21:14.000I think one version of it is—well, there's another test that we talked about, which is the Turing test.
00:21:19.000But AGI, generally the way people will describe it, is for an arbitrary task, this software could perform as well as an arbitrary human.
00:21:26.000So, you know, an average typical human.
00:21:28.000Now there's a lighter weight version of that that you might say, well, for some universe of tasks, will this outperform?
00:21:35.000So it's not universal in the sense that a human is very good at learning how to drive a car and ride a bicycle and also do their taxes and also learn email and learn new software.
00:21:45.000And so maybe you don't get AGI that is necessarily good at doing all of those things, but you might have some sort of human level intelligence that for certain types of tasks it does as well as human.
00:21:53.000I think we're more likely to be able to get that level first. The broader version is AGI that can learn anything with very limited data.
00:22:01.000Humans are phenomenal with a couple of data points.
00:22:04.000You can teach a four-year-old how to ride a bike in a couple hours.
00:22:06.000It's pretty remarkable that we can do that.
00:22:10.000And so if you define AGI that way, that's probably many years away.
00:22:13.000If you define AGI as, for certain types of intellectual tasks, it will do as well as a typical human, I think that's on the order of two years away.
00:22:20.000Now, that whole sort of can of worms opens up a whole set of questions.
00:22:26.000I think this is the white-collar analog of what happened in the 80s with manufacturing, as robotics sort of entered, and has real consequences on society, because I think there will be something, if I had to guess, something like 50% to 70% of jobs that people do today, a computer will be better at doing in two to three years.
00:22:44.000And so what this means is there's real economic pressure for businesses to not have as many employees to get the same output.
00:22:51.000Over time, I don't think this means there are fewer jobs.
00:22:53.000I think this means that companies will do more with fewer people.
00:22:56.000And so you can actually do 10x as much with the same staff.
00:22:59.000But it does present some challenges in the short term, which is that a lot of these jobs that humans are doing today are things like, you know, doing your taxes, scheduling emails, you know, think about the number of financial transactions that you do that involve just filling out some PDF and sending it back and forth.
00:23:16.000Like a huge swath of the economy, actually computers will now be better at than humans.
00:23:21.000And we don't know what the consequences of that will be societally.
00:23:24.000From a productivity perspective and a growth perspective, it's going to be phenomenal.
00:23:26.000It's going to make every business that much more efficient.
00:23:28.000It's going to make the economy boom and sort of take productivity through the roof, I think.
00:23:34.000The second-order effects, I think, are still a little bit TBD. Yeah, so when we get to the second-order effects, which are, I think, what people are mostly worried about: the societal effects of what happens when half the population, at least in the temporary term, hasn't adjusted yet to the shift.
00:23:46.000Because, as you mentioned, if you look at the history of American economic growth, there have been a number of industries where it used to represent a huge percentage of the workforce and now represents a tiny percentage of the workforce, and people moved into other jobs.
00:24:04.000And so when that is taken over by AI, when AI is doing a lot of those tasks, what do you think is the sort of next thing that human beings are still going to be better at than machines?
00:24:16.000Presumably, there will be a cadre of experts in every field who will still be better than the machines are because they'll be the geniuses who are...
00:24:23.000If you're a Nobel Prize winning nuclear physicist and you have a bunch of grad level students who are working for you, but they're all AIs, it wipes out the grad level students, it doesn't necessarily wipe out the Nobel Prize winning nuclear physicist.
00:24:35.000But if you're not that guy, if you're everybody else, which is 98% of the population, what exactly do you think is left for those people to do?
00:24:45.000Yeah, I think a lot of it will become human-to-human interactions.
00:24:48.000So there are certain things that humans ultimately will still want human interfaces for.
00:25:06.000And so the modality of how you do your job in a lot of human-to-human interactions will change pretty fundamentally, and you'll have to retrain a bunch of people on how to use these tools properly.
00:25:14.000But I think what you'll see is anything that involves a human, you know, real-world interaction, I think will probably persist.
00:25:21.000And then beyond that, I think it's a big open question.
00:25:24.000You know, I think this is a big, big economic alignment.
00:25:27.000And so, for example, what does education mean in this new world?
00:25:31.000Yes, teaching is still a very human thing, but how do we prepare our students for this new world?
00:25:37.000Or how do we think about, you know, you're talking about the Nobel Prize-winning researcher.
00:25:43.000I think it's entirely possible that in 7 to 10 years, for some of these domains, that the AI is actually smarter than the Nobel Prize winning physicist or whatever.
00:25:51.000And so when you get the superhuman intelligence on the other side of that, what are the consequences?
00:25:55.000And I don't think anybody really has a great answer to that yet, other than the human systems will persist.
00:26:00.000Anything that involves information, anything that involves data, anything that involves computer code, the AI is probably going to be better than humans at in relatively short order.
00:26:09.000That sounds pretty scary to a lot of folks, obviously, especially because we're an economy that is optimized, for lack of a better term, for IQ. We're an economy that is optimized toward intelligence, and now you're saying that basically intelligence is no longer the metric for success because it's going to be available to everybody.
00:26:26.000In the same way that the manufacturing resources have now become unbelievably available to everybody.
00:26:30.000You don't have to have a manufacturing facility to get anything you want at your front door in a day.
00:26:34.000It's going to be like that, except with intelligence.
00:26:37.000And so where the mind goes is to places like, okay, so now you're going to be optimizing for emotional quotient.
00:26:45.000You're going to be optimizing for how well people deal with people.
00:26:47.000If you're an extrovert, you're going to do really well in this new economy.
00:26:50.000If you're an introvert, you might be a little bit screwed.
00:26:52.000The engineer types are going to have a real tough time.
00:26:54.000But if you're a face for the AI to teach kids, then that'll be okay.
00:27:00.000It also raises serious questions, presumably, about the educational process itself.
00:27:08.000My son, who's a smart kid, I bet he was delayed by about a year because he understood how voice-to-text worked.
00:27:12.000He would just grab my phone, and instead of actually typing in the question the way that he might if he were able to read, he would just, you know, actually voice the question and then have the phone read it back to him.
00:27:23.000So what exactly are we going to be teaching kids and how does that affect the brain?
00:27:27.000Because obviously, I mean, these are all open questions.
00:27:29.000When it comes to, it seems to me that if you stop teaching kids math, for example, because it's so easy to just do everything using AI, doesn't that atrophy a part of the brain that you might actually have to use?
00:27:39.000I mean, there are general uses that are taught via specific skill sets and that apply more broadly.
00:28:19.000You start buying PCs and sort of staying on top of it, so that by the time you're in the year 2002, you've sort of upskilled yourself.
00:28:27.000You've gotten to a place where you understand what these tools are capable of, and you've re-skilled, and so you're able to sort of move into this new world.
00:28:34.000Could anybody have predicted that in 2010 or 2020 you would have, you know, the entire media landscape would look fundamentally different than it did in 1995?
00:28:44.000But if you understood the tools, you were well positioned to sort of move into that new world as it happened.
00:28:48.000So I think that's One mental model you might have is what happened in the 90s and what would you have done?
00:28:53.000If you knew everything that you knew about the internet today, but you're in 1990, what would you do?
00:28:56.000And I think that's probably the best you could do is let's just get really, really familiar with the technology.
00:29:00.000Let's make sure we buy our kids a PC. Let's make sure that they play with it every day.
00:29:03.000And by the time they're 10 or 12 or 15, they'll have an intuitive sense for how this technology works and then they'll be okay.
00:29:09.000The second thing I would observe is that all of these transitions are very scary in the moment.
00:29:13.000Whether it's the internet, whether it was PCs, whether it was factories, whether it was railroads, whether it was the car, all the way back to the printing press.
00:29:23.000And there's always this fundamental debate that happens.
00:29:26.000There's always this fundamental question that happens between two groups of people.
00:29:29.000One group of people says, you know what, the current system is fine, and actually we don't need this technology, we don't want this technology, because it's going to be so disruptive that a bunch of people will be left behind.
00:29:40.000And there's another group of people who say, no, no, no, as scary as this thing is, it's a tool that will make life better.
00:29:48.000And I think every time you look at the tools that humanity creates, if we choose to adopt them, life gets better.
00:29:54.000So it takes something like the printing press.
00:29:56.000It was a big driver in why we have literacy.
00:29:57.000And I think you would be hard-pressed to argue that people being able to read is a bad thing.
00:30:01.000But there were actually people around that time that said, actually, learning to read is a bad thing.
00:30:05.000We don't want people to learn how to read.
00:30:06.000This creates a negative social consequence.
00:30:09.000You know, what if people can actually read the Bible and understand what it says?
00:30:14.000But I don't think anybody would make that argument today.
00:30:16.000And so I think AI we're probably going to look at similarly in 20 years.
00:30:19.000I think we're going to say, you know what, like, yes, it was a transition, but on balance, it was extremely positive.
00:30:26.000And it does create some societal questions about how we manage that transition for people so we don't end up with a rust belt the way that we sort of, I think, abandoned a bunch of people in the 80s and 90s and didn't reskill them.
00:30:35.000And this is a problem that we need to think about.
00:30:38.000But I think on balance, it's going to be extremely positive.
00:30:41.000And the best you can really do is just immerse yourself in it.
00:30:43.000So, for example, at our company, what we've done is literally require every person to use an AI, and every day you have to use it.
00:30:54.000And so we have a Slack channel where people are sharing these tips.
00:30:56.000We have an engineering team internally and we task them with going and actually getting these open source models and running the code and actually trying to build agents that will be able to do the job of a finance person or an operations person so that we can scale our business 10x without hiring more people.
00:31:11.000And so I think the people that just sort of jump in the deep end today will be at a tremendous advantage in two to five years.
00:31:17.000Alrighty folks, let's talk about dressing sharp without sacrificing comfort.
00:31:20.000If you're tired of choosing between looking professional and feeling relaxed, I have great news for you.
00:31:25.000Collars & Co. is revolutionizing menswear with their famous dress collar polo.
00:31:29.000Imagine this, the comfort of a polo combined with the sharp look of a dress shirt.
00:31:32.000It's the best of both worlds, giving you that professional edge without the stuffiness.
00:31:35.000Gone are the days of floppy collars that make you look like you just rolled out of bed.
00:31:38.000These polos feature a firm collar that stands up straight all day long.
00:31:42.000The four-way stretch fabric means you can freely move comfortably throughout the day.
00:31:45.000It's office-approved, so you can look professional without feeling like you're trapped in a suit.
00:31:49.000And get this, it travels really well, so whether you're commuting to work or jetting off for a business trip, you'll arrive looking crisp and feeling great.
00:32:12.000So if you want to look sharp, you know, the way that I do, feel comfortable, support a fast-growing American company, head on over to collarsandco.com.
00:32:19.000Use the code BEN for 20% off your very first order.
00:32:22.000That's collarsandco.com, code BEN, collarsandco, because you shouldn't have to choose between looking good and feeling good.
00:32:29.000So a couple of the challenges that people brought up as possible theoretical challenges is one is called the data wall.
00:32:33.000The basic idea being that what happens when, right now, LLMs, large language models, are learning based on the amount of data that's on the internet.
00:32:41.000What happens when people stop producing the data because the LLMs are actually producing all the data, and then they're really not producing the data, they're synthesizing the data that's already there.
00:32:49.000So we hit a point where, because there's no new human-created data actually being input, the internet basically runs out of data.
00:32:56.000And there are some people who sort of suggest that that's not going to happen, and there are some people who suggest that it will.
00:33:00.000Yeah, it's a very interesting question.
00:33:01.000I tend to think we will probably not run out of data anytime soon.
00:33:04.000I think there are large domains that are private data that are not yet fully tapped.
00:33:09.000And so I think when you think about things like medical records, bank records, financial data, government data, there's a lot of data actually that's not yet been ingested into these systems.
00:33:17.000I also think that the AI creating stuff, it does create sort of a self-referential feedback loop, but I think a lot of that data is quite valuable.
00:33:24.000And so you will have a lot of quote-unquote AI-generated data or potentially even synthetic data.
00:33:28.000And I think if you can put a layer on top of that, some sort of filtering that says, actually, this data is good data, you might get really interesting feedback loops where the AI can learn from the AI. So I think it's a big open question.
00:33:41.000I'm sort of on the side of, actually, I think we'll get better and better using the existing data that we have, and there are still large buckets of data that are untapped, and so I don't think we've hit sort of the top of the S-curve in terms of being able to hit, you know, diminishing returns with data yet.
00:33:54.000Another question I've seen raised about this is the question of innovation.
00:33:57.000How much can AI actually innovate as opposed to performing sophisticated recombination?
00:34:02.000Which I suppose actually goes to, what exactly is innovation?
00:34:05.000Is it just sophisticated recombination?
00:34:14.000How far can AI go in terms of what we would perceive to be innovation? And also, whenever there's a sophisticated technology, human beings tend to map their own model of the brain onto that technology.
00:34:25.000So for a while it was, the brain is a computer, and now, the brain is an AI. How accurate is that?
00:34:34.000I think there's going to be a class of innovation that this stuff is really good at, which is when you have large amounts of data and you can interpolate inside those data sets, you can infer a lot.
00:34:44.000And so you see, for example, things like protein folding.
00:34:47.000We have a lot of data and we can start to learn how these things might work and how these systems might work.
00:34:51.000I think there's some people who think that actually extrapolating beyond the data, creating entirely new frameworks and new ways of thinking is something that humans are uniquely good at.
00:35:01.000I tend to be on the side of, you know what, every time humans say that somehow we're special and unique, it turns out to be incorrect.
00:35:08.000And so whether we're talking about, you know...
00:35:13.000Whether we're talking about the Turing test.
00:35:15.000Alan Turing was one of the pioneers of computer science.
00:35:19.000He created this test that basically said: suppose you can talk to a computer, maybe by text, maybe by voice, but let's say it's text. You have a black box, you ask it questions, and it gives you answers.
00:35:31.000If you cannot tell whether on the other side of it is a human or a computer, that is intelligence indistinguishable from a human.
00:35:39.000And we've passed the Turing test at this point.
00:35:41.000You can talk to an LLM and it looks like a human.
00:35:43.000For all you know, it's a human typing in that response, and it appears to be self-aware and cognizant.
00:35:49.000It's not, but it appears to be that way.
00:35:51.000And so we actually passed this really important milestone that most people don't talk about, which is the Turing test.
00:35:56.000And so I look at that and I say, well, there are a whole set of things that humans have claimed for a long time are uniquely human that these things based in silicon are now doing.
00:36:07.000If we have an existence proof for a set of things that we thought were uniquely human that now these computers can do, why would it be the case that there are a whole set of other things that these computers would not be able to do?
00:36:15.000So I'm kind of on the side of it's just a matter of time that these things are able to replicate all of these things, including the ability to extrapolate and the ability to create new frameworks and to be creative.
00:36:25.000I think it may take longer, but I think those things will ultimately fall too.
00:36:29.000So when we look at the world of AI and tons of money is pouring into AI, you see sovereign wealth funds all over the planet, people building their own AIs.
00:36:43.000The thing that most people ultimately see is some sort of a model that they're interacting with that you can type in text to or maybe speak to.
00:36:50.000The voice transcription models are getting really good.
00:36:53.000And the text-to-voice and voice-to-text, all that stuff, all those pipelines are getting really good.
00:36:57.000That's generally what people think of.
00:36:58.000There's a whole bunch of infrastructure behind the scenes that makes that possible.
00:37:01.000Everything from how do you acquire the data, to how do you clean the data, to how do you get some sort of evaluation on it, to how do you actually train these things on very, very large clusters with lots and lots of GPUs, doing that at a scale that requires many tens of thousands of GPUs, and coordinating all of that.
00:37:16.000There's a lot of infrastructure and computer science that goes into that.
00:37:19.000And so that's where all the money is going.
00:37:21.000Ultimately, it's how do you take large, large amounts of data, the entire internet, and compress it into these very, very small models that hopefully can run on your phone one day.
00:37:31.000And the entire process of doing that is extremely capital intensive. The sheer amount of computational resources, data centers, power, and GPUs that you need to actually produce these models that you then interact with as an AI is so immense that it requires tens of billions of dollars.
00:37:48.000And how do you tell the difference between the models?
00:37:50.000Are they all using the same data set, or is it just they're using different data sets?
00:38:14.000And those weights effectively let you say, given a word in a given context, what is the next word that the system would predict.
00:38:24.000And so those weights are, in the case of something like Llama, open source; in the case of something like OpenAI or Anthropic, not open source. But ultimately, that's what you're trying to produce.
00:38:31.000You're trying to produce, sort of take all the data, compress it, and the compression ultimately is producing this sort of set of weights, and the weights are just, all they're really doing is saying, given a word, what is the next most likely word?
00:38:42.000I'm glossing over a lot of details, obviously.
00:38:44.000And that's what we call sort of the AI model.
00:38:46.000And there's a lot of sort of how do you do the compression.
00:38:50.000That's sort of all the proprietary stuff.
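The "given a word, what is the next most likely word?" idea Garg describes can be sketched with a toy bigram counter. This is a drastic simplification: a real LLM learns billions of weights over long contexts rather than counting word pairs, and the corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "all the data on the internet."
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count, for each word, which word follows it and how often.
# These counts play the role that learned weights play in a real model.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("on"))   # -> "the" ("on" is followed by "the" twice)
print(predict_next("cat"))  # "cat" is followed by "sat" and "slept" once each
```

In a real model, the counts are replaced by learned parameters and the context is far longer than one word, but the interface is the same: context in, a prediction over next words out.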
00:38:51.000Which speaks to actually this sort of really...
00:38:53.000It kind of goes back actually to this technology problem we've talked about, which is that stuff is very opaque, actually.
00:39:02.000Even though you may be able to see the weights on the final model, you actually have no idea the data that it was trained on.
00:39:10.000And so very subtle biases could be entering into the data that we have no idea about, which presents a real challenge because these are private corporations.
00:39:29.000And there's no real accountability at this point in terms of what that means.
00:39:32.000And so there's a lot of sort of, I think, open philosophical questions here about, like, who should get to dictate what goes into the data sets and why?
00:39:40.000And what does that mean downstream for what these models are?
00:39:42.000And so there's a real push from some in this community to make all of this open source.
00:39:45.000Not just the weights, but actually open source the entire process.
00:39:57.000There's sort of some tricky economic questions there because there's so much money to be made that you kind of don't want to disclose those trade secrets because you don't want somebody else to be able to replicate this.
00:40:06.000And so right now, that whole pipeline is effectively closed.
00:40:10.000The closedness of it does create the possibility of some pretty dystopian issues.
00:40:14.000I mean, just to take a quick sort of practical example, when you used to use Google, you would Google a term, and then you'd get a series of webpages, and then you sort of prioritize them in a certain way, but you could always scroll the page, too, and sort of figure out what exactly you were looking for.
00:40:27.000Now, you use Google's AI, and it spits out an answer. It's destroying internet traffic.
00:40:33.000Nobody has used a click-through link on Google for several months at this point.
00:40:37.000And by the time my kids are using it a lot, they won't be using it at all.
00:40:40.000I mean, it'll just be they type in a prompt and then the answer comes back out.
00:40:43.000And then how you determine what that answer actually looks like, how they got to that answer, is going to be very difficult.
00:40:48.000And so you can certainly see the possibility of bias in the informational environment.
00:40:53.000And obviously, conservatives are worried about this all the time.
00:40:56.000Remember when we were first seeing ChatGPT, conservatives were typing in things and it was coming out very left-wing.
00:41:00.000We were saying, well, what are the parameters that are being used on all of this?
00:41:30.000But I don't think that's actually the right way to go about doing these things.
00:41:34.000Just to bring it back to the earlier conversation we were having, I think there's actually a very interesting historical analog here around religion, which is that a handful of people really understand how these systems work and ultimately are making decisions about what data goes into these systems.
00:41:49.000It's sort of like the church in the 1400s, where there's a set of people who get to dictate what is truth.
00:41:59.000And over time, even with the best of intentions, even with the best of people, if these things are long-lived enough, there is generational transfer.
00:42:06.000There are new people that come in, and if there's no accountability, those systems have a tendency to become corrupt.
00:42:12.000And this is what Martin Luther noted when he nailed those 95 Theses to the church door: wait a second, this system has become corrupted in all sorts of ways, with things like indulgences.
00:42:27.000And what people on one side of it said was effectively, hey look, knowing how to read and read the Bible and understand this stuff is really dangerous.
00:42:35.000You need an intermediary to help you understand how to do this and trust us.
00:42:39.000We're going to filter it in the right way and we'll do what's right for you and we'll do what's right for society.
00:42:43.000And there's another argument that said, no, no, no, you should have a direct relationship with God.
00:42:46.000You should be able to read the Bible and decide what you do.
00:42:49.000And actually, the 95 Theses and the printing press happening at the same time was sort of this confluence of philosophy and technology coming together.
00:42:55.000And I think there's a very similar philosophical, technological argument and tension happening right now.
00:43:00.000Do a small group of people get to interpret the data and get to decide what goes into these models and they get to dictate truth and you have very little say over that and whatever they say is true is true?
00:43:10.000Or do you get to have a direct relationship with the AI? Do you get to have a direct relationship with what data goes into it and how you train it and so on?
00:43:16.000And there's a lot of people that sort of fall into this world.
00:43:18.000But it's a scary world because what you have to do if you take that position is you have to believe that humans are fundamentally good.
00:43:24.000That if you give them this technology and you give them this knowledge, they'll do the right things with it.
00:43:28.000But that's a big leap for a lot of people.
00:43:30.000It's a scary thing to hand over that powerful technology to everybody in the world.
00:43:35.000So when you look at the risks of AI, you see sort of the catastrophist risks, the idea that it's going to take over the nuclear missiles and just start firing them at people.
00:43:42.000How seriously ought we to take this sort of World War III end of humanity risk, the Skynet scenario?
00:43:49.000Yeah, I think it's probably overblown, personally.
00:44:29.000And so it's actually sort of like, it's a perverse incentive, I think.
00:44:32.000It's like, By and large, the people that are pushing this argument are very self-interested in wanting to push this argument.
00:44:38.000And hence, I think the argument gets overblown.
00:44:41.000I think this also sort of feeds from Hollywood.
00:44:43.000I think for the last 20 to 30 years, the way that the media has really thought of technology has been through this dystopian lens.
00:44:50.000I think it's actually like a really fundamental problem in society.
00:44:52.000If the interface that most people have with technology is that they use the technology, but then they see media about technology, and by and large that media is negative, then their gut reaction is going to be dystopian.
00:45:06.000Whereas if you go back to the 60s, you see there's like beautiful posters that we're going to have cities on the moon, or we're going to have underwater cities, and we're going to be living on Mars.
00:45:12.000And so there's this sort of like positivity around technology.
00:45:15.000And so I think a lot of this is actually, it's cultural and societal.
00:45:19.000It's like, actually, I think it's not about the technology.
00:45:21.000It's actually kind of, where are we as a country and as a society?
00:45:25.000And I think there's a lot of dystopian sort of undercurrents just generally in society right now.
00:45:29.000You can see where that's coming from, meaning lack of social cohesion leads to mistrust of one another.
00:45:35.000Mistrust of one another means we're afraid that the technologies that we all control are going to go completely awry and we'll use the technologies to murder one another.
00:45:41.000If it were you and your family, and you're just like, wow, this is a great new technology that we just adopted in our house, then you're really not super worried that your family is going to blow each other up. Yeah, totally. But, you know, I'm not sure how to... It seems like very often I come back to this, but without the institution of social cohesion coming back, you will see a sort of revanchist Luddism rise, and I think you do see that on the rise. I think you see people being like, whoa, this is all moving too fast, this is all too scary, we need to, you know, burn the looms. Yeah, that's right.
00:46:09.000And to bring it back to the crypto conversation we were having earlier, I mean, you see this in the data.
00:46:13.000You can look at Pew data or Gallup data or any survey data, and you see, you know, do Americans, and frankly, it's all Western democracies, you know, how much do you trust your public schools?
00:46:22.000And it used to be, in the 1970s, it would be 70%, and now it's down to 30%.
00:46:31.000So you can go through basically every institution in society that we sort of looked to to make the system work, and it's just inverted from the 60s and 70s.
00:46:40.000It was that people trusted these institutions and they don't anymore.
00:46:46.000Most of these institutions, if you think about it, were created after World War II. And they've really atrophied.
00:46:50.000And it is time for a reboot in a bunch of these institutions.
00:46:53.000But I think you're right that the root of it is social cohesion.
00:46:55.000It's like, how do we think about who we are as a people, as a society, as what are our values?
00:47:00.000Because the institutions are a reflection of that.
00:47:02.000And the institutions atrophying and the social cohesion being lost over the last 50 years, I think, kind of go hand in hand.
00:47:07.000Do you think that technology can exacerbate that?
00:47:09.000Meaning that when you look at crypto, it's actually a great example of this, in the sense that...
00:47:15.000Let's say that there are higher levels of social cohesion, and you wanted to transfer money from, say, San Francisco to London, and you wanted to do it over the weekend, but all the banks are closed.
00:47:25.000In the old days, or if you had a close-knit community, what you might do is get on the phone with somebody who you knew.
00:47:31.000A certain amount of money out of your bank.
00:47:40.000And as social cohesion breaks down, it's like, okay, we actually need to now control for that by creating trust-free systems that allow for that sort of monetary transfer.
00:47:48.000And the more you rely on the trust-free systems, the less actually there's a requirement of trust societally in order to get to the same sort of outcome.
00:48:06.000I mean, you had, like, brothers all over the continent, and they would just wire money to one another or send a note to one another in code.
00:48:12.000Usually, it was actually in a Hebrew script.
00:48:14.000It would be, like, transliterated English, but in Hebrew.
00:48:17.000So people would think it was Yiddish, but it didn't read like Yiddish.
00:48:19.000And that's how they had that first mover advantage.
00:48:21.000It's still how what Thomas Sowell calls middleman minorities tend to thrive in most societies, specifically because of this.
00:48:28.000So to take another Jewish example, Hasidic diamond dealers.
00:48:31.000Everybody's very angry at Hasidic diamond dealers because it turns out that they'll have a cousin in Jerusalem, and their cousin in Jerusalem will be like, okay, I'm going to call Chaim over in New York, and he's just going to do it for me because we're cousins.
00:48:41.000And so it turns out kinship networks are an amazing source of social cohesion and data transfer.
00:48:47.000And as that sort of stuff comes apart, as we become more atomized, you have to create tools that control for the atomization, but then that actually tends to create a spiral of, okay, well, I don't need to trust the person, so why even bother to trust the person?
00:49:02.000I tend to think of these things in different buckets.
00:49:04.000Which is, you know, take just the invention of money as a technology to scale this idea of I don't need to trust you because I can pay you money and you'll give me the service that I need and then therefore I don't have to trust you and have this barter system or have a debt-based system.
00:49:18.000And so I think they kind of have to coexist.
00:49:20.000And the reality is we exist now in a global world.
00:49:23.000We exist, you know, in a highly interdependent world.
00:49:26.000The internet allows us to talk to anybody in the world and communicate and learn from anybody in the world.
00:49:31.000And so the technologies that allow us to interact with each other and transact with each other without knowing each other are, I think, great enablers for productivity and learning and advancement and progress and innovation and all the things that we need at the species level.
00:49:45.000I don't think they're a substitute for social cohesion.
00:49:50.000I think we need to solve the problem of what does it actually mean to be in a society and be in a community and how do we foster those kinds of bonds because at the end of the day we're still atoms and we're still humans.
00:49:59.000We're still social primates and that's a very important part of society.
00:50:02.000We also have to solve this problem: there are now 7 billion people on Earth, and we can create tremendous productivity.
00:50:14.000If you think about, just say, from a meritocratic perspective or sort of an opportunity perspective, of the 7 billion brains on Earth, how many have we really tapped?
00:50:31.000Maybe we've tapped 7 million people, you know, and properly fed them and given them the resources and education and given them a smart phone in their pocket.
00:50:38.000And really, that number should be more like the top 10% of humans, who are actually pretty smart.
00:50:41.000So we should have 700 million people who are of that caliber.
00:50:44.000And so there's like 100x left in humanity if you can get the resources to these people.
00:50:47.000But to do that, you have to have these tools that can operate at global scale.