Ep 28 | Ryan Khurana | The Glenn Beck Podcast
Episode Stats
Length
1 hour and 32 minutes
Words per Minute
168
Summary
The world is about to change, and if you feel overwhelmed, or don't know what to make of it, you don't want to miss my conversation with Ryan Khurana, technology policy fellow at Young Voices and executive director of the Institute for Advancing Prosperity, a Canadian nonprofit focused on the social impacts of technology.
Transcript
00:00:00.000
The world is about to change, and if you feel a little overwhelmed, or you're not sure what to
00:00:05.880
make of it. Is it a sci-fi movie? Is any of this stuff possible? Drones that can kill people automatically
00:00:12.740
and identify people? Is social media, is Google, nudging us one way or another? What does it mean
00:00:20.300
to even have privacy? Gene splicing, making genetically perfect children. What is the future?
00:00:27.600
You don't want to miss my conversation today with Ryan Khurana, technology policy fellow from Young Voices. He's
00:00:33.420
also the executive director of the Institute for Advancing Prosperity. It's a Canadian nonprofit
00:00:38.160
organization focusing on the social impacts of technology. He graduated from the University of
00:00:45.280
Manchester, where his dissertation was on the impact of artificial intelligence. So how do we
00:00:51.680
navigate all of these pitfalls of these revolutionary technologies? What does the world look like in a
00:00:58.660
year, five years, 10 years? What are the benefits? What are the safeguards to liberty? We are experiencing
00:01:05.940
now emerging technology that is going to change all of our lives. What does it mean for you?
00:01:13.060
So let me get a feel for you before we go into this.
00:01:33.620
I am someone who believes that there are two possibilities, and maybe a mixture of the two,
00:01:44.780
but I think it's going to lean hard one way or another. That the future is going to provide
00:01:50.740
mankind this new technology with more freedom, experiences that we can't even imagine now.
00:01:58.020
Um, literally in 10 years, our life will be completely different. And it could be fantastic.
00:02:07.080
It also could, um, either be the biggest prison, uh, or, uh, the end of, of humans as they are today.
00:02:21.620
I would say I'm in neither camp. I think both of those are far flung possibilities. And if we look at
00:02:29.960
technological advance throughout history, it's always been that as soon as a new technology comes
00:02:35.880
out, it causes mass panic. It causes a lot of crisis. Uh, one of the most famous examples would
00:02:42.540
be the printing press. As soon as the printing press comes out, you completely change the way society
00:02:47.300
functions, 30 years of war and chaos. And Europe has to completely reorganize the very conception of
00:02:53.720
how it works. After that, you have a lot more prosperity. What technologies do is they challenge
00:03:01.080
existing orders and it doesn't inevitably lead to prosperity and it doesn't inevitably lead to chaos,
00:03:08.020
but people have an incentive that while that change is occurring to try to figure out how to best
00:03:14.240
manage it, how to best utilize them, how to adapt to the new world they create. And then you find this
00:03:19.540
equilibrium where things are slightly better or much better or slightly worse. And that's manageable.
00:03:26.220
Okay. So, so I think some of what people are feeling right now is that everything seems to be in
00:03:32.260
chaos. And that's because the systems, no matter what you're talking about, it's all breaking down
00:03:39.020
because it's, it doesn't work this way anymore. You know, where we have all this new technology,
00:03:44.640
which is not functioning with something that feels like 1950, you know? Um,
00:03:54.480
and so, you know, that we're on the verge of change that's causing anxiety, but like, for instance,
00:04:01.200
the industrial revolution that changed a lot, but it was over a hundred years. Um, this change
00:04:07.820
is happening so fast and it is so dynamic and it is so pervasive. How do you not see us? I mean,
00:04:16.580
let's, let's just start with, um, surveillance capitalism,
00:04:22.880
blessing and a curse. It is providing us with, with services that are beyond our understanding,
00:04:34.760
even 10 years ago, but it is also monitoring us at all times. And it could be used like it's being
00:04:41.380
used in China. And are you concerned at all about that here? Let's, when we talk about, uh,
00:04:49.440
surveillance capitalism, the production of so much data, we have to really step back and ask,
00:04:54.740
what are we worried about? Are we worried about the data collecting or are we worried about people
00:04:58.860
using it in harmful ways? Yes. Using it in harmful ways. And in many
00:05:03.900
cases, what we need to do there is kind of step back and let it sit in a system. Um, for example,
00:05:11.020
the way a lot of companies use your data for their, their algorithms, nobody's looking at that data.
00:05:17.240
Nobody's really analyzing what you're doing, and no human being is making a decision
00:05:23.220
that can affect your life. But a system is working to isolate points of that data,
00:05:27.920
which are beneficial to you. And as long as the correct incentives are in place, um, for companies
00:05:33.960
to use that in a way that's beneficial to you, I don't find that worrisome. What I do find worrisome
00:05:39.240
is if we have institutions that start to break down, if we have these companies, um, act with your
00:05:46.100
data in such a way that they can do anything and no one holds them accountable, but there is no reason
00:05:52.140
that that would be the case. And there's no reason that that data collection alone enables that to be
00:05:56.220
the case. Yeah, but we know that they are nudging us. Um, you know, and that's, that's, that is, uh,
00:06:05.220
just as evil as some guy with a, you know, curly mustache. Who's like, I'm going to control the
00:06:10.940
world. They are set on a mission that they believe, and it's going to be left or right, but they believe
00:06:17.120
what they're doing is right. That they know the voices that are hateful. They know the voices of peace
00:06:23.420
and prosperity and they're selecting and just through their algorithms, they can nudge. And,
00:06:29.520
and, and we know that to be true. We know that they're doing that now.
00:06:34.400
But that's, that nudging exists in all spheres of life. It's not like before the internet,
00:06:40.120
when we just had cable TV, we weren't being nudged. There was a much lower selection
00:06:43.900
of channels and options and each one had its agenda and each one pushed you in a certain direction.
00:06:49.600
Right now, what the concern is, is not about the nudging. It's about how many points of contact
00:06:56.620
do I have on the internet to make decisions? Does one person nudge everybody or are there different
00:07:03.160
options for me to go to and I can choose and select based on what I like the most? So this is
00:07:09.320
really a competition question. If, if these companies are monopolies, all right, we have some concerns
00:07:15.460
that that nudging is worrisome. But if those companies are in bed with a government. Yes.
00:07:21.420
That's also similarly concerning. Um, you have that in places like China, what you were mentioning
00:07:26.560
earlier, where you have a member of Congress this week suggesting that when it comes to, uh,
00:07:33.340
vaccinations, that there should be a public private partnership between YouTube and Google and,
00:07:38.980
and Twitter to remove those voices that say vaccinations are bad. I happen to be pro vaccination,
00:07:46.640
but I think everybody should be able to make their own decision and you should never ban books,
00:07:53.580
ban opinions. When it comes to a question like vaccinations, I actually kind of believe that
00:07:59.480
most of these companies are really, uh, headed in a different direction than the United States
00:08:04.380
government. Uh, we have, uh, uh, Google pulling out of Department of Defense contracts and the like,
00:08:10.040
they're not that embedded the way a lot of large companies were during the Cold War. At the
00:08:14.560
same time, though, Google is in bed with China. I wouldn't call it in bed. Uh, they do have a
00:08:21.380
Beijing research center. They are trying to leverage the vast amount of data that's produced in China.
00:08:27.420
Uh, remember they have a lot more people that use the internet far more than Americans do.
00:08:31.500
And so that's a very valuable resource, um, for these companies to develop better technologies
00:08:37.120
and for them to, to, to open to new markets and be profitable. But that doesn't mean they're in bed
00:08:42.320
with the Chinese government. Dragonfly. And they said they weren't doing it, but new reports out now,
00:08:49.340
internal reports say they are still working on Dragonfly. So we have to remember what something like
00:08:55.160
Project Dragonfly is. Project Dragonfly, just as Google tried to go into China many times and
00:09:00.320
always had some resistance and had to pull out, is Google's attempt to try to make their search
00:09:07.200
engine compatible with what the Chinese allow in their country. And to them, that's a market to make
00:09:13.560
more profit and also a market to, uh, protect themselves against Chinese competitors that become
00:09:21.060
Okay. So let's, let's, um, let's see if we can, and if this doesn't work, just let me know. Let's see
00:09:31.700
if we can divide this into, um, two camps to start. Okay. One is the 1984 camp, or I would call it 2025,
00:09:42.180
China 2025, which, you know, they are, would you agree that the 5g network, uh, from China is a way
00:09:54.000
for them to control information around the world? I wouldn't go that far. I would not go that far.
00:10:02.000
I do believe that companies like Huawei who are world leaders in 5g infrastructure may present
00:10:10.680
national security concerns if they control the majority of the infrastructure that is built in
00:10:16.420
a country like the United States. That is different than saying that it is a 5g plan to control
00:10:23.180
information around the world. I think that that is, it's 2025, that's China. That's their stated plan.
00:10:29.580
Well, Made in China 2025 is more about being the technological superpower. And that goes in
00:10:37.160
line with what the United States has tried to do forever. It's, it's that the Chinese want to be richer
00:10:41.020
than us. That makes sense for a country as large as them to want to be. But you, but you also have
00:10:46.200
China doing something that we would never do. And that is full surveillance in China by 2020: full
00:10:52.900
surveillance with a social credit score that is so far out, so dystopian, that we can't even get our
00:11:01.440
arms around that. So they come, they come at things differently than we do. Oh no, absolutely. And I
00:11:08.520
think that's what makes the conversation about China's technological vision more complicated.
00:11:14.060
They come at things very differently than the United States does. When we talk about the social credit
00:11:18.820
system. Um, if you look at the way that it's being implemented in a lot of different areas in
00:11:23.720
China, they don't have that unified national vision yet. That's what they're trying to get to.
00:11:27.800
But in some, some places I was reading about in rural towns, they have the elderly go around and
00:11:33.700
give people stars when they do good things. And if you look at, these are not government reports.
00:11:39.840
These are independent scholars going in and interviewing people. Most people like the program. It has a very
00:11:44.480
high approval rating because they view their society as so untrustworthy that these little
00:11:50.580
nudges to care about your community more and be a more moral citizen are welcomed. Now, the reason
00:11:56.320
why that wouldn't fly in a place like the United States is historically a nation like China sees it
00:12:01.540
as a role of the government to help, uh, boost up moral values and make them a more unified community.
00:12:08.820
And I don't think the United States would allow our government to enforce moral values here.
00:12:14.400
You are the, you are the happiest guy I think I've met in tech. Um, enforcing moral values.
00:12:23.160
They're also the country that slaughters people by the millions, and they're
00:12:29.400
building, uh, what do they call them? Uh, they're not reeducation camps. They're, uh, it's,
00:12:36.340
it's almost like a community college for the, you know, for the Muslims, uh, over in, in China.
00:12:44.000
So I'm not going so far as to defend what China's doing or welcome it here, but I'm saying it fits
00:12:50.560
with the cultural vision, not only that the Chinese have of their government, but what the government
00:12:56.680
is, um, has gotten away with doing before. Right. Russia is the same. Russia is the same way that
00:13:03.220
people are used to being, we are not used to being spied on and we wouldn't, well, maybe
00:13:08.700
we would, I'd, I'd hope we wouldn't tolerate, um, that. However, um, we seem to be headed in
00:13:17.100
that direction. And so one, one is 1984, where if you get out of line, um, you know, I think
00:13:25.860
one of the reasons why they're doing this is they are afraid of their own people in revolution.
00:13:29.760
If, if there's a real downturn economically, they, they need to have control. Um, we have
00:13:38.000
it on the other hand, where I don't think anybody is necessarily nefarious, uh, here in America.
00:13:46.340
I think everybody's trying to do the right thing. However, at some point the government
00:13:52.460
is going to say, you know, you guys have an awful lot of information on people and you
00:13:57.740
can help us. I'm not a fan of the Patriot Act. Maybe you are. Um, uh, but you can help us
00:14:04.460
and Silicon Valley will also say, you know, we don't want Washington against us. Washington will
00:14:12.220
say we don't want Silicon Valley. So let's work together a little bit. Um, and, and to
00:14:17.620
me that is, uh, frightening because it's, it's more of a brave new world. We're handing, for
00:14:24.080
convenience, we're just handing everything to people. I think between those two scenarios,
00:14:30.020
the brave new world one is far more likely. Um, and the reason why I think is a lot of
00:14:36.100
people, I would call it uncritically adopt new things. It's convenient and I don't know
00:14:42.240
what I'm getting into. And that convenience is worth the trade-off. And by the time that
00:14:47.380
trade-off is made known to you, your life is so convenient with something new that you
00:14:51.020
can't, you can't go back. And so if one of those two, um, possibilities were to happen,
00:14:56.580
it would likely be the one where we agree to pacify ourselves. That is not to say that
00:15:02.600
this is the path that we're necessarily on. And, and to me, this is the reason why I'm,
00:15:07.980
uh, in tech policy and why I think that this is such an important field because what is lacking
00:15:13.400
is this communication. What scientists and technologists do, it's impressive stuff, but
00:15:19.900
it's hard for most people to understand. And most of them aren't that great at communicating
00:15:23.440
what they're doing, and the public en masse can't get into that. And the journalists in between,
00:15:29.260
most of the people commentating, people who have historically been those translators, have an
00:15:34.340
incentive to hype it up to not really make it clear to you. And, and there's a gap in people
00:15:40.080
that can translate the stuff effectively so the public can be engaged.
00:15:43.700
So that's why I'm excited to have you on. I've talked to several people in Silicon Valley.
00:15:48.920
I've had Ray Kurzweil on and, and talked about, uh, the future with him. And it is important
00:15:55.220
because I don't think anybody in Silicon Valley is talking to or being heard in 90% of the country.
00:16:04.300
And what they're doing is game changing and it will cause real panic as it just hits.
00:16:12.380
Uh, and you have a bunch of politicians who are still saying, we're going to bring those jobs
00:16:17.660
back. No, you're not. That's not the goal for most people in Silicon Valley. The goal for a lot of
00:16:24.000
people is 100% unemployment. How great would it be if we, if no one had to work, you worked on what
00:16:32.620
you wanted to. So you have one group going this direction. Then you have the politician saying,
00:16:37.620
we're going to bring jobs back. At some point, there's going to be a breakdown and people are
00:16:43.320
going to have to retool. Um, uh, you have people, I'm trying to remember Mitt Romney's old, um, uh,
00:16:51.500
company that he was with, um, Bain Capital. Bain Capital says we're going to have 30%
00:16:57.920
permanent unemployment by 2030. Um, I don't know if that's true. People always say those things.
00:17:04.540
However, you and I both know, I think that our jobs are not going to be like they are now.
00:17:11.720
Oh, absolutely. Right. So there's at least a lot of upheaval and retraining, and that's going to be
00:17:16.860
hard for people over 50. Um, uh, and nobody's talking to them. Yeah. And I think that's, that's a
00:17:25.460
very important concern. And I think there's two points that you brought up that I think are useful
00:17:30.480
to touch on. One is, yes, retraining is hard for people over 50. And this is what's happened in
00:17:36.480
almost every industrial revolution we've had thus far. Um, we remember the industrial revolution
00:17:42.340
as being, we have all these new technologies, the world is much more productive. We're all happier.
00:17:46.500
It was a misery for a lot of the people living through it who had to uproot themselves from rural
00:17:50.920
communities and pack into unsanitary urban centers. It took time for us to learn how to develop the
00:17:58.600
institutions and the kind of governance needed to make sure that this is better than it was before,
00:18:05.380
that this opportunity was taken advantage of. And we're going through a similar upheaval right now.
00:18:10.360
And the people that are most affected by the kinds of, um, automation occurring are usually older
00:18:17.780
people who have been at one company for their entire life, who've learned something very specialized
00:18:22.860
and applicable to that company. When that job disappears, they don't really know how to apply
00:18:27.820
those skills to something different. Right. And number two are young people just entering the
00:18:32.980
workforce. A lot of them do routine work. Routine work is easier to automate. Those jobs aren't as
00:18:37.880
common. And I think this pretty well parallels the two types of, um, people who are most frustrated
00:18:44.480
with the current political scene, young millennials looking for work and older people who've lost their
00:18:49.580
classical jobs. And so you're right. We have to talk to them. We have to figure out how do we address
00:18:54.740
their concerns. But the second point that you brought up is Silicon Valley doesn't talk to 90%
00:18:59.880
of the country, but they're going to get their way anyways. I don't agree that that's the case.
00:19:04.460
And the reason why that's not the case is most of these technologies, um, if you look at the cool
00:19:10.280
advancements happening in artificial intelligence right now, they're not being filtered into the real
00:19:15.540
world at all. They're, they're fancy lab experiments. And the reason why is most people have no idea how to
00:19:21.660
use them. They don't know how to put them into their businesses. They don't know how to reorganize
00:19:26.320
their, their factories to leverage these improvements. And unless Silicon Valley talks
00:19:31.600
to the other 90%, these technologies will be for them. And you'll have a couple of people be really
00:19:36.520
rich off of them. They don't really make that wide of an impact though. Um, but historically they
00:19:41.860
have diffused. The best example of this is electricity. Uh, at the end of the 1800s, you open your first
00:19:47.700
electrical power plant. It takes till the 1930s for the United States to be 50% electrified.
00:19:53.240
That's because if you go to a business and tell them to use electricity, they think,
00:19:57.640
in the early 1900s, okay, I save a couple of bucks on my power bill. But then over time you realize
00:20:03.200
I can completely change my factory layout. I can do a lot more cool things. I can really revolutionize
00:20:09.280
the way I organize society with electricity. And then you get a boom of change and you really make
00:20:16.680
everyone's lives much better because you realize what power you had. And until you realize what
00:20:21.520
power you had, only a few benefit, a lot of people are unaffected, and a few are negatively
00:20:27.140
affected. So I agree with you. The only difference is the speed at which we're traveling. You know,
00:20:34.200
I was, it's funny you brought up electricity cause that was going to be my example to you.
00:20:38.200
Late 1800s, you know, for the, the Chicago exhibition, we have Niagara Falls generating
00:20:46.400
power. So the first time we'd ever seen a city light up is the Chicago World's Fair.
00:20:54.060
1930s, you know, because of the depression, Hey, let's build some power plants all around
00:20:58.580
people. That's a long time for them to get used to it. Um, you know, most people will
00:21:08.100
say that the next 10 years, by the time we hit 2030, 2035, the rate of change is
00:21:16.580
going to be breathtaking. That's true that there's a lot more coming out today. So it's not something
00:21:24.580
isolated. Um, and we adapt faster. I mean, when you think we've only had an iPhone, a smartphone
00:21:31.900
since what, 2008? Yeah, 2006, I think, was the first. Crazy, that's crazy. It's everywhere now around the
00:21:39.500
whole world. And, and this, this, this goes back to the, the point that cultural adaptation can get
00:21:45.880
rapid when these things diffuse rapidly. Um, the question is, is all of those other, um, institutions
00:21:54.240
that build around it. So you, you brought up the iPhone, the smartphone really enabled a lot of
00:22:00.520
the kinds of revolutionary potential that people predicted from the internet when it was announced
00:22:04.900
in the 1980s. They're like, Oh, this is going to change the way we work, the way we communicate,
00:22:09.220
the way we do business, all of that kind of happened. Smartphone comes out. You're like, okay,
00:22:14.320
now that potential is realized. Cause I have the internet with me wherever I go. Right. And so there are
00:22:19.840
people that try to make us aware of what's happening, try to adapt us to it before, because
00:22:26.360
we can all kind of see into the near future. It's when we get slightly further into the future becomes
00:22:31.400
fuzzier and people are competing on their predictions. And the people that get to, to voice
00:22:36.340
their opinions most are either the ones that are the most optimistic or most dystopian as, as the
00:22:41.260
starting of this discussion pointed out and they dominate the public view. But if we start talking
00:22:46.400
about, Hey, how do we think in 10 years, and we have these more modest understandings of what's
00:22:52.540
happening, people can adapt to them pretty quickly and they can use that adaptation time to
00:22:59.220
understand what they're getting into and use it positively, which is the main point:
00:23:03.220
we're hoping that they can do that, that people can use technology critically as well.
00:23:08.620
Let's go to 5g here because 5g, wouldn't you say is the biggest game changer on the horizon?
00:23:14.760
I think 5g is a crucial infrastructure for all of the other interesting technologies that are,
00:23:20.800
are in development to actually make a, a dent. Okay. So explain to people what 5g means, what it
00:23:28.180
can do. So 5g, which is just the next step of, of wireless communications after 4g, much lower
00:23:34.960
latency, faster speeds should be cheaper for everyone to use. And what that enables, if, if all of, uh,
00:23:42.680
if there's a universal access to 5g is, so let's take, for example, cloud computing, which right now
00:23:48.700
is used by a lot of enterprise companies. Um, Google, Microsoft, and Amazon are the three big
00:23:53.440
providers of it for most, uh, most people. What cloud computing does is you don't have to spend a
00:23:59.100
lot of money on storage. You don't have to spend a lot of money on software. You don't have to spend
00:24:02.720
a lot of money on, on computing power. You use the internet, you use our servers. You can do that at
00:24:08.120
home. Now we have these cool AI technologies that optimize things really well. These are very data
00:24:14.980
hungry. These are very hardware intensive. If you don't have cloud computing, only the richest of the
00:24:20.680
rich can have access to this. But then as you have cloud computing, now everyone has access to it on a
00:24:26.060
rental basis. But if the internet speeds are too low, no one can really take advantage of this. And the
00:24:31.880
bias is towards people with that physical hardware. 5g enables this to spread. And so a lot of the kinds
00:24:38.860
of technologies we want to see make an impact in the world can't really do it as much unless there
00:24:44.120
is 5g infrastructure. So with 5g, the latency, um, issue pretty much goes away. Um, that will
00:24:52.220
allow us, we've talked about doctors performing surgeries around the world with a robot. That's that 5g
00:24:59.060
technology, as long as everybody has it, um, allows that doctor to go in and, and do that surgery
00:25:07.400
now. Correct. It allows, um, anything that requires use of something over large spaces would be much
00:25:16.160
easier and more efficient with 5g technology. Right. And so right now we still have, you need to have
00:25:21.720
something physical and you need to be in, in the room for a lot of things to occur because the internet
00:25:26.260
is slow and not as reliable. Right. Let me ask you this. It's my understanding that 5g makes
00:25:32.680
self-driving cars much more of a reality. Absolutely. Because it is my understanding, my understanding, and
00:25:41.080
help me out if I'm wrong. Um, right now we think the car just needs to know where it's going and what's
00:25:49.240
in front of it. But the way it's really imagined is it will know, it will connect with everything
00:25:56.580
around it. So it will know who's in the car and that you won't know, but the car will know who's
00:26:01.940
in the car next to you, who's in the car in front of you, behind you on the sidewalk, et cetera,
00:26:08.100
et cetera. Because eventually it will make the decision of who's the best, what's the best way
00:26:14.980
to go? Well, we have to be careful with the word know. Uh, it will make a judgment. It'll be a moral
00:26:21.440
machine. So when we have a self-driving car, it doesn't actually see around it. Computers can't
00:26:31.640
really understand the world or represent it the way humans do. Right. Right. Right. And so the way it
00:26:36.340
has to work is it's pinging off everything around it and creates a network and it makes decisions based
00:26:43.460
on that network. If we didn't have 5g and we have a low penetration of self-driving cars,
00:26:48.920
it's only a couple of people that have it, like the people on Tesla Autopilot, we're not taking
00:26:52.880
advantage of the revolutionary potential of the technology. Because if you think about it,
00:26:58.700
what's one of the reasons why traffic is so horrible in most cities? It's because stoplights
00:27:05.140
and turns are really inefficient. And because every time one person makes a turn or one person
00:27:11.080
stops, it's not just everyone stops immediately. They stop slightly slower and this piles up and
00:27:16.340
this makes the entire grid very inefficient. Self-driving cars, they don't have to worry about
00:27:21.080
that. They can like with millimeters of difference, understand how far the other car is. And that
00:27:27.020
requires that connectivity. And then beyond that, if you have this interconnection between cars,
00:27:32.060
we can allow cars to work constantly. And if we can do that, we don't need as much parking space
00:27:38.360
as we use right now. And parking space is one of the biggest wasted spaces that we have in this
00:27:43.360
country. And if we can free that up, we can build lots more things. We can make cities denser. We
00:27:48.900
can build more parks. We can make people's lives more fulfilling if we didn't need to waste that space
00:27:54.100
on parking. And so 5g is crucial for ensuring that that technology is safe and reliable and has that
00:28:01.200
kind of revolutionary potential. When do you think that becomes a reality?
00:28:06.100
So the big issues with self-driving cars right now, part of it is just technological. They make
00:28:12.660
mistakes still and we need better, we need more data to be collected from test drives. But a lot of that
00:28:19.740
stuff is policy-based. Our infrastructure is just not optimized for these cars to be as present as
00:28:26.660
they are. We don't really understand what the best liability rules are for these cars. And so
00:28:32.040
these risks based on already existing rules are what hold people back. If we can start thinking
00:28:41.060
about, hey, how do we attach liability well for self-driving cars? How do we govern their use on the
00:28:46.120
roads? How do we respond to these companies and help invest in the right infrastructure to make
00:28:51.620
these more of a reality? We can accelerate their deployment pretty quickly.
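As a rough illustration of the car-to-car connectivity described above, here is a minimal Python sketch. The message fields, thresholds, and control rule are invented placeholders rather than any real vehicle stack; the point is simply that a smaller assumed reaction time, the low-latency argument for 5g, lets vehicles safely run with tighter gaps.

    # A rough sketch of the car-to-car "pinging" idea discussed above.
    # Message fields, thresholds, and the control rule are invented for
    # illustration; no real vehicle stack works exactly this way.
    from dataclasses import dataclass

    @dataclass
    class Beacon:
        vehicle_id: str
        position_m: float  # distance along the road, in meters
        speed_mps: float   # speed, in meters per second

    def safe_gap(speed_mps: float, reaction_s: float) -> float:
        # Lower network latency lets the assumed reaction time shrink,
        # which is the core of the low-latency argument for 5g here.
        return max(2.0, speed_mps * reaction_s)

    def follower_speed(me: Beacon, lead: Beacon, reaction_s: float = 0.1) -> float:
        gap = lead.position_m - me.position_m
        if gap < safe_gap(me.speed_mps, reaction_s):
            return max(0.0, lead.speed_mps - 1.0)  # ease off to open the gap
        return lead.speed_mps  # otherwise match the lead vehicle

    lead = Beacon("A", position_m=30.0, speed_mps=25.0)
    me = Beacon("B", position_m=28.0, speed_mps=25.0)
    print(follower_speed(me, lead))  # 24.0: the 2 m gap is under the 2.5 m threshold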
00:29:06.860
I want to go back to 5g in a second, but let me stay on cars for a second.
00:29:17.680
So I personally do not think that this is a possibility.
00:29:23.140
Yeah. So when we're talking about artificial general intelligence, so that's the idea that
00:29:28.500
a machine can perform any task a human being can, at least at human level.
00:29:35.320
That requires an understanding of the world, an understanding of concepts of causality,
00:29:43.760
an understanding of being able to abstract and reason the way we do and have conversations about
00:29:49.240
purely abstract topics. Machines can't do these things.
00:29:55.740
So we can talk about it on two points then. One is, do we think that the current techniques of AI
00:30:01.300
will lead to this general intelligence? The current major technique is something called
00:30:06.300
deep learning. It uses a lot of data, processes it, comes up with all these correlations, sees
00:30:11.080
patterns. If you believe that that's all the human brain does, maybe that can lead to AGI.
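To make the "learns correlations from data" description concrete, here is a tiny Python sketch with synthetic numbers; it is not deep learning itself, just the simplest version of fitting a pattern by minimizing prediction error, which is the sense of pattern-finding being discussed.

    # A tiny sketch of what "learns patterns from data" means in practice.
    # The data is synthetic (y = 2x + 1 plus noise); the "learning" is just
    # solving for weights that minimize prediction error.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=(200, 1))
    y = 2 * x + 1 + rng.normal(scale=0.1, size=(200, 1))

    X = np.hstack([x, np.ones_like(x)])  # add a bias column
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(weights.ravel())  # roughly [2.0, 1.0]: the recovered statistical pattern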
00:30:17.240
I firmly disagree that that's all the human brain does. But when we think of what it means to be
00:30:24.300
human and how human beings think in the world, it's more complicated than just our brain looks at
00:30:29.940
things and makes a decision. We have bodies that understand the environment we're in. We respond
00:30:34.620
to our environments really well. We understand the thoughts happening in other people so we can
00:30:38.800
communicate with them. This is a level of reasoning complexity that I do not think a machine will
00:30:47.900
You don't think we'll even make AGI, let alone ASI.
00:30:51.820
So the super intelligence idea is about an intelligence explosion, that once you have a machine that can
00:30:58.100
improve itself to human level, there's nothing stopping it from quickly going beyond to a level
00:31:03.640
that it can do anything conceivable. But if you can't, I deny the idea that human consciousness and
00:31:13.040
understanding are so easily reduced to machine capabilities.
00:31:19.280
A lot of what couldn't, what, what, what are you saying cannot be replicated?
00:31:25.220
So the, the kind of, let's say the idea of an artificial general intelligence relies on this idea of
00:31:33.000
Alan Turing's theory of computation, that anything that can be formally modeled mathematically can be
00:31:40.160
processed and done by a computer. I do not think human consciousness can be formally modeled
00:31:45.180
mathematically. I do not think that the human mind and what it means to be a reasoning agent in the
00:31:52.040
world is just about processing. I may be wrong. These are my, my philosophical beliefs on, on the
00:31:58.480
matter, but it's clear to me that what we do and what it means to be human involves so many components
00:32:08.320
and so much complexity that it can't be reduced to simply learning from data or an agent, um, being
00:32:16.700
programmed to, to execute some policy decisions. It means a lot to be human. Uh, going back to
00:32:23.180
Aristotle, we're political animals. We, we understand things socially and our minds are far more than just
00:32:30.160
interacting with the world. They're interacting with other people. They're interacting with levels of
00:32:35.020
abstractions that can't be formally understood. And that level of reasoning, I do not think a machine
00:32:41.360
could ever do. So I, I, I tend to agree with you, which, which, um, you know, makes me fearful of people
00:32:48.580
like Ray Kurzweil because he does think that it's just a pattern. It's just a pattern. And I do think
00:32:56.120
that you could put a pattern together that is so good that people will say, yeah, well, that's,
00:33:01.460
that's life. Um, and, uh, no, it's, it's, it's not, it's a machine. It's not life. Um, but Ray will tell
00:33:10.620
you that, um, by 2030, we'll be able to download your experiences and everything else and you'll
00:33:16.660
live forever. Yeah. And as I explained to him, no, that's not me, Ray. That's a box. It's a machine.
00:33:22.360
Um, but there are those that believe that that's all we are. Yeah. Uh, so that kind of like
00:33:31.000
Kurzweil's, uh, transhumanist beliefs, I think that's a somewhat separate and I think, uh, kind
00:33:38.800
of an insane set of beliefs. Um, it relies on this philosophy, um, that goes back to Descartes, you know,
00:33:45.320
the evil demon experiment. It's this idea that we can remove our brains and exist in a vat.
00:33:51.020
And that would be us. Um, I don't think that that's the case. Um, there's been quite a lot
00:33:57.900
of philosophers who have made very compelling arguments about why that just doesn't make
00:34:01.880
sense as a theory of, of human minds. Uh, two that jump to mind are Saul Kripke and
00:34:06.080
Hilary Putnam, which if anyone has the time. You're the only person that I've ever met that has
00:34:10.740
mentioned Saul Kripke. I've, I've, I've mentioned Saul Kripke to some of the smartest people I know.
00:34:18.040
And they're all like, I don't know who that is. I've never read it. It's wild.
00:34:22.500
Yeah. Well, in his book, Naming and Necessity, uh, he makes a long argument. That's a
00:34:31.780
very technical, mundane point about something called a posteriori necessity: that if we find
00:34:37.780
out water is H2O, that must be the case. That's, that's what he's trying to do. And then at the end,
00:34:42.540
he's like, so my proof proves that the mind cannot be the brain. And it's like a little
00:34:45.920
line in it, but that was kind of like mind blowing to me when I first came across it.
00:34:51.000
And it's shaped a lot of my views that the mind and the brain are not reducible to each
00:34:55.480
other. And, and so that kind of transhumanist view that we can upload your consciousness
00:35:00.440
because we can map the neural patterns on your brain. It doesn't make sense to me.
00:35:05.120
So I think we're on the same page because I, I have a problem with, I'm also a spiritual
00:35:10.500
being and the choices that I have made in my life, the changes, the big pivot points
00:35:17.920
have been spiritual. And, and if, if you're just taking my pattern, that's who I am now.
00:35:27.000
But, uh, just like when you're, you're putting, you're finding my pattern on Twitter and we found
00:35:33.360
it goes darker and darker and darker, you know, as a, as an algorithm
00:35:40.120
tries to recreate my voice or anybody's voice. I think the same thing would happen.
00:35:46.920
There would be a decay of that because you wouldn't have those, those little things that
00:35:53.660
are innately human that are spiritual. Maybe I would describe them in nature. That is a pull
00:36:01.560
to be better. You know, that is a course changer. I mean, how could you find that pattern?
00:36:08.800
Well, to me, that's, that's, I think one of the things that can't be programmed, which
00:36:12.940
is that human beings have this desire. And that, I think that comes from the spiritual
00:36:18.480
side that you're talking about. We have a desire to know, we have a desire to find meaning.
00:36:22.460
Right. We have, we're pulled by desires to do things in life. Now they can be pulled to
00:36:27.340
bad things. It can be pulled to good things, but we are pulled by desires. Machines don't
00:36:32.720
really have desires. They don't have the, the inherent bias towards survival or self-improvement
00:36:39.260
or anything like that. Any desire it has is because a programmer has asked it to do something
00:36:44.320
or it's, it's embedded to do something. It's not autonomous in what we're talking about
00:36:50.800
when we talk about AI in the world today. Autonomous doesn't mean it reasons on its own
00:36:55.360
or it comes up with its own goals. Autonomous means it can execute on human goals without
00:37:02.300
I say often, and maybe correct me if I'm wrong, don't fear the machine. Don't fear the, even
00:37:11.060
the, um, the algorithm. Fear the goal that's put into it, because it will execute
00:37:23.360
it perfectly. It will go on that goal. So what are you teaching it? Yeah. And I think this is the
00:37:30.520
point, which is like, even if I don't believe that, um, AGI or super intelligence are possible,
00:37:36.400
a lot of those safety concerns that researchers who do believe it's possible are thinking about.
00:37:41.560
Um, one of them is something called AI alignment. How do I ensure that what the algorithm does is
00:37:48.440
what I want it to do? These are still valuable things to think about and work on because if
00:37:54.080
we're giving, if we're embedding these techniques into really serious infrastructure and decisions
00:38:00.340
that can impact millions of lives, we want to make sure that when we ask it to do something,
00:38:05.860
it does what we've actually asked it to do and not misinterpret it. So the concerns that people
00:38:13.340
in that community who do have these views on, on, on super intelligence have are still valid
00:38:17.900
concerns. Um, but also we can just have a view where we're like, there are certain things which
00:38:24.180
are very important to us. We want a human in the loop to make that decision that, uh, and, and
00:38:30.560
that's, that's, that's also just a policy decision that we make. We don't want to give AI access
00:38:36.320
to the nuclear launch codes because what if it makes a mistake? Well, what if the president makes
00:38:42.220
a mistake, but we, we, we have a little more trust that a human being isn't that irrational,
00:38:46.260
right? And so that kind of, um, those kinds of checks will help us ensure that we put these
00:38:53.600
things in places where the payoff is great and the risk is not existential.
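As a sketch of the human-in-the-loop idea just described, here is a minimal Python example; the risk scores, threshold, and actions are hypothetical placeholders, not any deployed policy or system.

    # A minimal human-in-the-loop gate: low-risk actions proceed automatically,
    # anything above a threshold requires explicit human approval. The risk
    # scores, threshold, and example actions are hypothetical placeholders.
    def approve(action: str, risk: float, threshold: float = 0.3) -> bool:
        if risk < threshold:
            return True  # low stakes: let the automated system act
        answer = input(f"High-risk action '{action}' (risk={risk:.2f}). Approve? [y/N] ")
        return answer.strip().lower() == "y"

    if __name__ == "__main__":
        print(approve("reroute a delivery drone", risk=0.05))      # auto-approved
        print(approve("shut down a power substation", risk=0.80))  # asks a human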
00:39:00.620
So our Pentagon right now is, is, um, perfecting AI to the point of being able to see who the
00:39:09.560
aggressor is, um, in a crowd. You know, if there's a, if there's a mob and they're all fighting,
00:39:15.140
it can reduce the, the image to the aggressors, you know, and the ones being beaten the way they're
00:39:23.460
moving and cowering. Um, they're obviously the oppressed and it, it can analyze a scene and then
00:39:30.760
you can tell it, you know, get rid of the aggressors. Um, that's the idea behind it. So far, we have
00:39:38.680
said there has to be someone in the loop with a kill switch. Actually, it's the opposite of the
00:39:44.800
kill switch. Usually it stops the machine. This one allows the machine to execute. Um,
00:39:51.300
but that's America. Well, I think, uh, there is no, uh, law in the United States that actually
00:39:58.600
says you need to keep a human in the loop for military decisions. Yeah. I'm not sure. This
00:40:03.500
is what they say. Do you think that they're not doing that? Uh, no. So I'm just saying that this is
00:40:09.860
a, we, we just don't have the technology to allow it to kill on its own yet. We
00:40:16.400
haven't programmed it to do that, but it's not a, a legal barrier. I have complicated views
00:40:23.880
on autonomous weapons. Um, to me, I think the laws of war are pretty ethical when we, we have
00:40:32.240
just war theory and we have the Geneva Convention. We're not teaching just war theory anymore.
00:40:37.080
So we have like a body of, of military literature that teaches you ethical combat. And I think
00:40:44.680
those standards are pretty high. The problem is if you're in a combat situation and it's
00:40:51.980
a do or die situation, you're not thinking through those combat procedures always. And
00:40:58.360
also when you're, when you're in really tight knit, um, military, um, platoons, you have
00:41:05.520
an incentive to cover for your, for your colleagues. If they, if they violate some rule, because that's
00:41:10.640
the camaraderie you build. So a lot of the unethical things that happened during war are down to human
00:41:18.440
error. And I find we can have a robot internalize our very good rules pretty well. And I think robots
00:41:27.560
deployed alongside humans would really improve, um, the accuracy of targeting. They would reduce
00:41:34.700
unintended casualties. To what extent we want to remove humans from the battlefield and just let
00:41:40.140
wars fight. That's a little. Yeah. Cause you're not, I mean, cause if you don't believe in AGI,
00:41:45.000
if you don't believe that it, it can take on a, and I'm not saying, you know, go back to spiritual
00:41:51.760
machines. Um, at some point, a machine will say, don't, don't leave me. Don't turn me off. I'm lonely.
00:42:01.580
I'm this, I'm that. Um, and it could believe that it's alive. You don't believe, you don't believe
00:42:07.560
that. I doubt it. No. Um, well, a lot of really smart people don't believe that. And if they,
00:42:15.100
at that point, do you, you don't want it to, you don't want to have taught anything to kill humans.
00:42:24.740
That's a, it's pretty good, um, reasoning to have. And I think that that, that kind of shows why the
00:42:32.200
autonomous weapons conversation is more complex than a simple yes or a simple no. I don't, I don't like a
00:42:38.660
lot of the, um, autonomous weapons are bad by virtue of, we don't want to take a human, uh,
00:42:46.200
decision-making away kind of argument. I think they can do a lot of good. The policies that we
00:42:52.240
enact for them, um, are dependent on when we're saying autonomous to what degree of authority do
00:43:00.020
we mean specifically in a very narrow targeted situation? Because I don't want a robot making
00:43:08.180
the decision of a general, but maybe the robot making the decision of a soldier in a combat situation
00:43:12.680
isn't as bad. Um, maybe the drone strike where we are saying that here is our, um, a terrorist
00:43:20.580
encampment. Here are all the details about it. Once you've found it and you know that you're not violating
00:43:26.260
all these other rules, let the drone fire. It's different than teaching a system, manning all the
00:43:33.880
robots for the military to know how to kill humans. Like I, that Skynet scenario is very different than
00:43:39.900
these targeted scenarios. So, um, Elon Musk is concerned. I mean, I saw a speech where he said
00:43:47.660
it's the only thing that gives him hope is thinking about getting off to Mars and getting off this
00:43:51.580
planet. Um, uh, you have, um, uh, Bill Gates, Stephen Hawking, Stephen Hawking, I think was grossly
00:44:01.340
misunderstood when he said humans will be extinct by 2025. He didn't mean that humans are all going
00:44:09.560
to die. He just meant that we're going to be upgraded and merge. Do you believe that? Uh,
00:44:15.120
no, I, I, I, I understand the risks that a lot of these people are fearing. Um, I do not believe
00:44:22.940
that human beings are going to be upgraded or merged with a machine. Um, what would you call the
00:44:28.920
experiments that are being done now with robotics and bionics to where you think about moving your
00:44:35.240
arm and that new arm moves? So the fact that we can do certain things does not mean that we will.
00:44:41.740
Um, I was pretty happy to see that when, in China, uh, a rogue scientist injected two fetuses
00:44:50.520
with CRISPR to try to remove, um, the gene that would make them able to contract HIV, even though that
00:44:57.360
was totally unnecessary, I was impressed that the international community condemned it,
00:45:03.540
saying that that is, that is not something that we think we can do. We should not edit humans on the
00:45:07.660
germline. These kinds of ethical and policy restrictions on what we're allowed to do with
00:45:14.440
technologies give us hope that we won't go down the path of, of human enhancement. And I don't
00:45:20.880
want us to go down a human enhancement path in any way, because you can frame it in the sense of human
00:45:27.440
choice. I'm, I'm just making my child slightly better, or I'm giving myself a cooler arm. The
00:45:33.400
second someone does it, they're much better than everyone else. So everyone's got to do it.
00:45:37.340
Right. And so that's, that's such a slippery slope that I don't want that to happen at all.
00:45:43.260
That was Ray Kurzweil's point. And it would become so common that it would be
00:45:47.040
so cheap that everybody would do it. I mean, who wouldn't want to do it? Well, I wouldn't want to
00:45:51.020
do it. I I've seen, I've seen arguments by philosophers who say, once we can genetically
00:45:57.600
upgrade your children, it's immoral for you not to genetically upgrade your children.
00:46:01.680
You'll be a bad parent if you don't genetically upgrade them. Yeah. Because everybody will be so
00:46:06.860
far ahead. You just don't think that's going to happen. I think if there's any, uh, here's my faith
00:46:13.140
in humanity. If there's any decency among lawmakers and the like, they will not allow that to happen.
00:46:17.900
And the ethical community will understand the limits on, you can use gene editing on animals.
00:46:24.600
You can use it to, um, save people. If that's the last case scenario, we can help a lot of people,
00:46:30.660
uh, live without life threatening conditions, but to, to do like designer babies and the like,
00:46:36.640
that's where we would draw a line. Um, let me, one more question on this. And that is, uh, right now,
00:46:44.640
I think it's Iceland, in Reykjavik. They say they have eliminated Down syndrome.
00:46:51.780
Uh, and that's just because, just because they can test for it and kill them. Um, I I'm, I'm as a,
00:47:01.280
as a father of a child with special needs, I'm really against, uh, getting rid of,
00:47:09.040
you know, cerebral palsy or, or, uh, Down syndrome. Uh, where, where, where do you think
00:47:18.160
we're headed on that one? I think that that's one of the reasons why when I said
00:47:21.480
intervention on a child to remove a life threatening condition, it needs to be as a last
00:47:28.340
resort. Because if we did that so that any child with any, uh, disease whatsoever, we remove it,
00:47:36.060
even if there's good treatment available, what occurs as a result is no one's going to invest in
00:47:41.660
helping the people who are already living with that condition. Um, and I think that that's
00:47:48.860
worrisome both in the fact that, okay, you're, you're treating these people who are living with
00:47:53.540
the condition worse off medically, but more on an ethical level where you see people who are
00:47:59.360
diseased as less human. This would change our perception of what it, what gives someone dignity
00:48:05.260
or worth. And I don't think that anyone just because they're disabled has less dignity. And so
00:48:11.320
if we, if we have that, and would he have been the same man if he didn't have polio?
00:48:16.380
Probably not. No, our, our hardships, uh, even if you go to Teddy Roosevelt, his hardships as a young
00:48:22.700
person made him who he was when he, when he grew up. And I think if we have this view that,
00:48:28.320
oh, your child is, is going to be sick. Let's completely change your child's genetic makeup to
00:48:35.000
make him healthy so that your child lives a higher quality life. It's, it deprives them of that feeling
00:48:43.920
of, I would, I would go back to dignity because our hardships and our struggles make us more
00:48:49.840
dignified. I think a lot of people don't have my, the ethical view that I have. Um, they want us to
00:48:55.800
just live happy lives without having to struggle for it. I don't know if you could ever be your
00:48:59.880
highest self. Yes. Uh, I think that would pacify a lot of people. Um, it would take away from a lot of
00:49:06.340
the triumph of the spirit. Absolutely. Um, and it goes back to what you were saying earlier about the
00:49:11.800
brave new world thing. If we just wanted to live happy lives without struggling, we could do that.
00:49:17.400
It just wouldn't be as satisfying. I think, um, uh, let, let's, uh, let's talk about, uh, medicine a
00:49:28.820
little deeper. What do you think is, is coming? I saw a report, uh, that was from Goldman Sachs and I
00:49:38.200
don't, I, I don't, uh, fault Goldman Sachs for saying this. This is their job. Their job is to
00:49:45.180
advise people on, is that a good investment or not? And they were looking at the investments of
00:49:52.820
medicine that actually wipes diseases out. And they say, it's a really good investment for the
00:49:58.520
first five years. And then as the disease goes away, the return on the investment is horrible.
00:50:02.840
And so they were saying, as we start to advance, should people, should we recommend that people
00:50:10.800
invest in these things unless they're just do-gooders? Okay. Um, we are going to start to have
00:50:19.100
these kinds of massive ethical problems. Are we not? Or questions? Well, um, I, this is the reason
00:50:26.460
why I think most of the world's really happy that we're not relying on banks to fund all medical
00:50:30.520
research. Um, but for them, that might change their business model. Uh, uh, and I think
00:50:37.780
that I'm sure that the person who said that got some reprimand from his higher ups for, for letting
00:50:43.780
people know that. Um, but yeah, even if they think that it's a bad investment for them, that doesn't
00:50:50.300
mean we as a society think it's a bad investment and we'll figure out investment vehicles to fund these
00:50:55.000
types of medicine. And you see a lot of people coming up with ways to do drug discovery and medical
00:51:01.920
treatment that could potentially figure out cures for things, but the process makes money. Um, so
00:51:09.100
take for example, um, the application of artificial intelligence to drug discovery when we're doing
00:51:15.500
medical trials, uh, and the like, we produce vast amounts of data. The medical literature is huge.
00:51:21.120
No human being could ever dream to read it. And so there's a lot of failed medications in history,
00:51:25.820
which probably work really well. And we just don't know it. Um, so if we apply these, um,
00:51:31.140
statistical algorithms to go through all these papers a human can't read, we can find out, hey,
00:51:35.840
here's a new cocktail that we can try and it'll work for this person. If you cure that person,
00:51:41.040
you're not charging that person anymore, but a drug company might want to pay this company to help
00:51:45.640
them save on their costs of R and D. Right. Um, and it changes the dynamics of how they're selling
00:51:50.840
products and everyone's kind of benefiting. Uh, and so that's still a technology leading to better
00:51:55.720
cures. Uh, but it's not this finance driven way with the old business model of how we're selling
00:52:01.720
drugs and figuring out new business models, I think is a more crucial question than how do we make it
00:52:08.200
appealing for Goldman Sachs to invest in it. What do you think is, um, what do you think is most likely
00:52:15.180
on the horizon in medicine? So personalized medicine, I think is probably going to be the
00:52:22.720
bigger breakthroughs, uh, in the coming years. And the reason why is we usually go through like
00:52:28.760
animal testing stages and then human trial stages and then product comes to market. Um, animal testing
00:52:35.760
is more or less useless because the distribution on what works in a rat and what works in a human is
00:52:40.960
more or less random. It'll tell you if this is harmful. It doesn't tell you if it
00:52:45.220
works. Um, and so we waste a lot of money on that. Um, I, I, I know like Alexander Fleming, for example,
00:52:53.480
developed penicillin. Uh, he's like, if I had to test it on animals to get it to work, I would have
00:52:59.720
never come to market kind of, uh, because it just didn't work on the initial tests on animals at all.
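As a toy sketch of the literature-mining idea raised a moment ago, here is a short Python example that tallies drug and condition co-mentions across abstracts; the abstracts and names are invented placeholders, and a real system would use far more careful statistics than simple counting.

    # A toy sketch of mining trial literature for overlooked drug-condition
    # matches. The abstracts and names are invented placeholders.
    from collections import Counter

    abstracts = [
        "drug_a showed modest benefit in patients with condition_x",
        "no effect of drug_b on condition_x; drug_a was well tolerated",
        "drug_a plus drug_c improved outcomes in a condition_x cohort",
    ]
    drugs = ["drug_a", "drug_b", "drug_c"]

    hits = Counter()
    for text in abstracts:
        if "condition_x" in text:
            for drug in drugs:
                if drug in text:
                    hits[drug] += 1

    # Candidates worth a closer statistical look, ranked by co-mention count.
    print(hits.most_common())  # [('drug_a', 3), ('drug_b', 1), ('drug_c', 1)]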
00:53:05.140
Um, but if we, if we now look at the new technologies coming out, we've decoded the
00:53:12.240
human genome much better. We can understand you, your DNA much better. We can understand the
00:53:17.420
history of all the drug cocktails we've ever made much better. We can try to do some matching and
00:53:21.920
see, hey, here are some trials we'll run on you as an individual to help, like, tailor to your
00:53:27.800
specific medical needs. And, and that would really revolutionize care because the way doctors
00:53:34.460
prescribe things right now is based on averages. And so you as an individual meet
00:53:40.820
most of the symptoms for this disease, I think you have this, it's high probability. You could
00:53:45.680
have a rare condition. Most people don't, those are kind of off to the side. Most people do have
00:53:50.300
the average condition, but if we could get it down to that individual level, think of how many lives
00:53:54.860
we could save as a result. So that, that brings me to, again, one of the massive changes that are
00:54:00.540
coming, um, insurance. People don't, people don't really understand insurance, I think,
00:54:06.940
or they don't want to, cause they see it as a big cash cow. You know, I, I've got my car. Well,
00:54:12.020
I'm going to get that check and maybe I'll fix my car a little less and I'll take this money.
00:54:16.700
Um, and they don't understand that insurance is not a guaranteed thing. Insurance works because it's a
00:54:23.980
gamble. You know, the insurance company is saying, if I, if I, if I bet on enough people that they're
00:54:30.380
going to be well, only a few of them are going to be sick. But if, but the collection of data now
00:54:38.640
and with DNA testing, et cetera, et cetera, you know, the goal of all of this data is certainty,
00:54:45.460
you know, that we can get as close to certain as we can. How would insurance work?
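A back-of-the-envelope Python sketch of the pooling point in that question, with invented numbers: when only the pool average is known, everyone pays roughly the same premium, but as individual prediction approaches certainty, each quote converges on that person's own expected loss, which stops functioning as insurance for anyone predicted to get sick.

    # Back-of-the-envelope: premiums as expected loss plus a margin.
    # All numbers here are invented for illustration.
    def premium(prob_of_claim: float, claim_cost: float, load: float = 0.1) -> float:
        return prob_of_claim * claim_cost * (1 + load)

    # Pool pricing: only the average risk is known, so everyone pays about the same.
    print(premium(0.05, 20_000))  # about 1,100 for every member of the pool

    # Near-certain individual prediction: the gamble collapses into a quote.
    print(premium(0.01, 20_000))  # about 220 for someone predicted to stay healthy
    print(premium(0.60, 20_000))  # about 13,200 for someone predicted to get sick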
00:54:53.600
So I think, okay, when we, when we're talking about insurance, there's a lot of reasons why
00:54:58.780
we shouldn't allow, um, these kinds of automated decision-making in insurance and, and, and using
00:55:05.920
vast quantities of data because it'll take all this data from a ton of people and it'll figure out
00:55:12.020
connections on how to predict whether someone will pay it back. We don't know what it actually
00:55:18.960
picked out. A lot of that's kind of inscrutable. And we, I don't think we should have like explainability
00:55:24.780
requirements like they have in the EU simply because we know the stuff that's better at prediction is
00:55:30.020
the stuff that's harder to understand. But when it comes to insurance, explainability is more
00:55:35.700
important than prediction. Insurance is not simply a prediction thing. People don't want to know that
00:55:40.720
they got denied because the computer said it. They want to know why I got denied. Right. And so
00:55:45.360
things that are good at prediction work in a lot of domains. They work in, uh, in, in medicine. For
00:55:51.460
example, if I have, uh, your radiology test, I simply want to look at the image and say, is that
00:55:57.640
cancer? Is that not cancer? I don't need to explain to you why, and you don't care why I can use an
00:56:01.760
algorithm and your life is better. In insurance, it's not about prediction. Prediction is a part of it,
00:56:06.780
but it's about you understanding what you're getting into and that relationship with the
00:56:11.420
customer. And we shouldn't try to reduce it to a prediction decision. And that that's a reason why
00:56:18.760
we need to have legal rules on what insurance is allowed to do. And we might have to think about
00:56:22.800
different models for insurance that incentivize care better. Um, does this, I mean, as I'm listening
00:56:29.000
to you, I keep thinking, you know, I disagree with you on some things, but I keep thinking, yes,
00:56:33.220
yes, that's the conversation that we should be having. Tell me anyone in Washington is
00:56:38.000
having any of these conversations. Well, so yeah, with insurance, it's a complicated thing
00:56:42.320
because insurance throughout most of history was done on a local community level. And that makes
00:56:46.360
a lot of sense. If we all just pool our money, it's a community, and whoever gets sick, we pay for
00:56:51.440
it. Everyone makes sure everyone else is healthy because no one wants to pay out. Um, those kinds of
00:56:57.200
models, where you're a shareholder as a purchaser of the insurance, I think are much
00:57:03.200
better for the kinds of like data that we have now. It would make everyone, uh, be incentivized
00:57:08.180
to be healthier and wiser in their decisions. And they can really understand better how to
00:57:13.460
make those wise decisions? That's not the kinds of insurance models we do have. They're very
00:57:17.880
centralized by big companies. And we're talking about even more centralized. Yeah. And so there,
00:57:24.700
there should be a political conversation. How do we regulate insurance differently to encourage
00:57:29.040
people to be more knowledgeable about their plans and to incentivize better care. Whether this is something
00:57:34.840
anyone in Washington is talking about? I don't know. I doubt it. Are you seeing anybody who's having
00:57:39.460
a, uh, there's one candidate, he's a Democrat, um, who's talking about basic minimum income.
00:57:46.020
I am dead set against basic minimum income, but I think people have to have the conversation,
00:57:52.180
the mental exercise, because there are people that are going to be saying 30% unemployment.
00:57:58.580
Now, whether that happens or not, I don't know, but there are going to be experts that will say
00:58:04.440
that's coming. And a lot of people may be unemployed. We hit a re a massive recession.
00:58:11.460
You're going to hear people talk about basic minimum income. We're not even having that conversation.
00:58:17.600
Yeah. I've spoken to Andrew Yang before. And what I liked is, well, I too disagree
00:58:24.100
with universal basic income proposals. Um, mainly because no one really proposes them as a way to
00:58:31.720
replace our social safety systems. It's, it's kind of like an additional, which is, uh, very
00:58:39.400
unsustainable, but I do, I agree with you. I do like the fact that he's one of the few people having
00:58:44.280
that conversation and we do need to be more forward thinking. And, um, and I've commented
00:58:49.620
on this before, which is we, we actually see more of these daring thinking on the left, which is sad
00:58:54.920
than on the right. There's too much still old thinking in a way. Um, I think a lot of the new
00:59:04.080
thinking is like wrong fundamentally. Um, but it, it, it is new, uh, in the sense that it's trying to
00:59:13.340
grapple with the new challenges. You, you see this resurgent antitrust movement on the left
00:59:20.160
and okay, you can say it's old because antitrust is an old measure, but it's, it's new in the sense
00:59:26.280
that it's saying, Hey, antitrust needs importance now because of digital concerns, the way the digital
00:59:33.020
markets work, you it's winner take most markets. So it's new thinking in the sense that we are
00:59:38.660
thinking about how to deal with new problems. Um, I see very little discussion on the right of
00:59:44.520
how do we grapple with the digital economy? Um, and these, these are important conversations.
00:59:49.440
And I think that we need to have models to understand how to best deal with the digital
00:59:57.200
world in a way that makes people better off. Um, I've seen a few people on the right, um,
01:00:02.440
the information technology and innovation foundation, um, ITIF, they, they publish a book,
01:00:10.140
um, recently called, uh, about why big business is good and, and, and trying to dismantle this
01:00:16.680
belief that all the dynamism in an economy comes from small business. And it's a really interesting
01:00:21.840
approach on, on a right-wing view that a country needs an industrial strategy for, um, for, for it to
01:00:28.540
leverage technology benefits. And I think that's a classically conservative view as well, that we
01:00:33.380
need our country to be able to understand what it's, what its resources are and to be nationally
01:00:39.360
competitive on the global stage. But that's not yet a chorus of conservative voices.
01:00:46.360
Let's talk about the digital economy. What are we going to be impacted
01:00:59.220
by first in the digital economy? What is going to be the biggest,
01:01:07.480
the first thing that comes to us that we go, Oh, Oh, we should have talked about that.
01:01:13.420
I think people are already grappling with it. The biggest change of the digital economy is
01:01:17.420
the complete change in how media works. Social media is very different from news media,
01:01:23.740
which is very different from television media, which is different from print
01:01:26.740
media. Um, and I think people are realizing this, they, people started to realize this after the 2016
01:01:33.380
election. I think that's when they first realized that the game is different now. Um, and we still
01:01:38.740
haven't fully understood what it is that social media does. I don't even think we're
01:01:44.300
having any, even. But tell me, what are the deep conversations, the philosophical conversations that you have?
01:01:54.580
So I think the, the best, well, I, these conversations were actually happening from the
01:01:59.140
dawn of the internet. They just kind of lost their prominence now. Um, I think it actually
01:02:04.680
goes back to, to, to even before the internet, uh, Marshall McLuhan, who, who is a kind of the
01:02:10.740
father of media theory, wrote a book called The Gutenberg Galaxy. And in it, he said that
01:02:16.580
before the printing press, human beings were oral. We told stories, and who was important as a result?
01:02:23.020
Politicians, military leaders, religious figures. Why? Because they were the best at communicating,
01:02:28.040
grabbing your attention on what matters. Our society was organized along this hierarchical
01:02:33.360
kind of understanding of who's on top, what's your place. That's how oral cultures function.
01:02:40.340
Then when you get to printing and you may move to a visual culture and everyone can read and you
01:02:44.920
completely change what you're listening to, you have this explosion in arts and sciences.
01:02:49.980
The person who is the best orator is no longer the most famous. The person who tells the best story is
01:02:54.360
now the most famous and that's a scientist or a writer. And he's like, he, he analyzed that this
01:03:00.060
led to individualism. This led to, to people demanding democracy because they felt empowered
01:03:04.920
as individuals. He said, when you move to the, the cable companies, you're going back to an oral
01:03:11.320
culture. You're going back to, I, I understand my trusted source for things. Now, when you go and
01:03:17.360
this, what, what, what surprises me is this is even before we had social media, he said, but when you have
01:03:23.060
the global village was his term, when you have these people that are so interconnected and you
01:03:27.800
break down barriers of place and class, there's going to be a return to identity as
01:03:32.700
what's going to matter. I'm going to need my group to parse this vast amount
01:03:39.180
of information and to make us think in a way that I can understand rather than having to read
01:03:45.860
everything because that's impossible rather than having to go through all the various voices,
01:03:50.180
which is impossible. I need this filter and this kind of shared group identity that creates this
01:03:57.200
reputation. I think that's exactly what we're seeing. Yeah. We're tribes. We are literally tribes.
01:04:01.920
You find a group of people that generally you agree with that see the world the same way.
01:04:08.960
And, uh, and we now are tending to believe that each tribe is like, they're coming to get us.
01:04:13.980
Um, but there, Oh, I mean, how else would you function? But yeah. And so the, the, the question
01:04:19.860
then is not, I think there is a, again, the, the approach of people who want to go back or like
01:04:25.260
tribalism is bad. Let's go back to thinking for yourself. It's very hard to read everything on the
01:04:31.060
internet. It's really hard to know what's trustworthy. Right. And so you can't go back to this naive view,
01:04:37.440
read, read the books you'll, you'll learn for yourself.
01:04:39.980
So how do you do it? We, I think we need to find a way for tribes to interact peaceably for us to
01:04:48.380
understand. Here's my view of the world. Here's your view of the world. Let's negotiate as groups
01:04:53.580
rather than as individuals. And that's, that's just a new way that, that an interconnected society
01:04:59.980
has to live because there's too much information out there. And we are, we're splitting ourselves into
01:05:05.700
so many tribes. There's every, and seemingly all of them are saying the same thing, my way or the
01:05:11.440
highway. Well, it's, I think, um, good ways of, of kind of getting by. So we don't want tribes to be
01:05:18.520
collectivist in the sense that my entire identity is my tribes, but we can't go back to individualists.
01:05:24.740
Like I'm, I have no shared tribal group. We need something that's more fluid that I understand.
01:05:29.980
I belong to this category of groups. I see myself as this way with these people and this way with
01:05:35.500
these people and us as a group, we interact with this group in this way. And you, you get this web
01:05:41.460
of interrelations. And, and if we see our identities as more fluid and come up with mechanisms that
01:05:48.420
respect both individuals and, and groups, um, I think the best kind of political theory in history
01:05:56.880
to think of this is, I don't know if you know much of G.K. Chesterton, but in
01:06:04.900
Catholic social teaching, there's a term called subsidiarity, where you are an individual, but the most
01:06:11.220
basic unit of decision-making is the household. And so that's a tribe, in a way; it's a collective
01:06:16.780
body. How we should think of society is decisions about most things should be made on the level of
01:06:23.220
the household; decisions that can't be made at the household are made at the local community level;
01:06:27.560
and then decisions that can't be made there move up. So it's moving up, bottom up, and then it moves
01:06:32.340
back top down. They feed both directions. We don't have a political organization that has this diversity
01:06:41.920
of decision-making. Well, we, we do, we just haven't used it in over a hundred years.
01:06:46.620
They haven't used it, uh, for quite some time, but that's, that was the premise. I don't think the
01:06:53.060
founders have ever looked more genius than they do right now. I mean, everybody says, oh, they couldn't see this
01:07:01.860
coming. Well, no, they couldn't have seen this coming. They couldn't have, but doesn't that make
01:07:06.000
it more genius? Because as we are becoming more tribal, as, as we are, um, living in pockets of
01:07:18.500
our own kind of tribes, we don't have to separate. We just have to agree to live with one another and
01:07:25.540
not rule over one another. And that was the point of America. And it's never been more apparent
01:07:33.380
how far ahead of their time they were than right now. Well, yeah, you go back to what was the vibrancy
01:07:39.860
of early America. It's most of the stuff was done by these various civil society organizations,
01:07:44.200
uh, these local groups interacting with each other. There, there is a need now more than there
01:07:51.320
was then for, for a central government to do things, but it's about delineating correct
01:07:56.180
responsibilities. There are things that only a central government can do. There's a lot of things
01:07:59.860
that local governments can do and things outside of government can do that we need to talk about
01:08:04.800
getting their responsibilities on track. And I think at least in recent memory, the, the conversation
01:08:11.560
among conservatives was this very simplistic government is bad. Private companies are good.
01:08:18.380
And there it's more complicated than that. Not all private companies are good. The government's
01:08:23.540
not always bad. You know, I'm a big fan of Winston Churchill and, um, uh, I've read so much, uh, on
01:08:30.660
him and I just love him, love him, love him, love him. Then I decided to read about Winston Churchill
01:08:35.760
in India from the Indian perspective. He's not that big, fat, lovable guy that everybody thinks,
01:08:43.900
you know what I mean? He, he's a monster there. And I think what we've done is we are trying to put
01:08:51.740
people into boxes where they don't fit. I struggled for a while. So is he a good guy? Is he a bad guy?
01:08:57.200
And then it dawned on me: he's both. He's both. And we have to understand that government
01:09:05.280
is not all bad; it's bad and good people. Companies are not all bad; they're bad and good. And we just have to
01:09:16.000
ask: which way is it growing? Is it growing more dark or is it growing more light?
01:09:22.760
Um, and I think, did you ever read, um, Carl Sagan's book, The Demon-Haunted World? I
01:09:30.240
have not. Okay. He talks about how there will come a time, this is in the early nineties, a time
01:09:36.260
when, uh, things are going to be so far beyond the average person's understanding with technology,
01:09:45.740
that it'll almost become like magic. And if we're not careful, those people will be the new
01:09:52.720
masters, you know, there'll be the new high priests that they can make this magic happen.
01:09:58.520
And, um, I, I think that's what I fear, uh, somewhat is these giant corporations. I've always
01:10:06.720
been for corporations, but now I'm starting to think, you know, these corporations are so powerful.
01:10:14.540
They're spending so much money in Washington. Um, they're getting all kinds of special deals and
01:10:20.240
special breaks and they're accumulating so much power. I could see for the first time in my life,
01:10:27.260
Blade Runner. I've never thought of that. I've always looked at that and gone, that's ridiculous.
01:10:31.200
That is 2019. But, uh, the, we can even go back before Sagan, uh, Eisenhower and his farewell
01:10:39.200
address, not only talked about the military industrial complex, but the scientific technological
01:10:43.540
elite. And that, that to me is, is the, the policy question, because it's about whether we've had this
01:10:52.860
tendency to defer to experts for so long that it's eroded democracy. And how do we put this complex
01:11:01.360
stuff back in the power of the public? And I've seen some, um, very interesting proposals about it.
01:11:08.180
There's an economist at Microsoft Research, his name is Glen Weyl, and he wrote a book last
01:11:13.460
year called Radical Markets. Um, and it comes up with these fascinating mechanisms. And what these
01:11:21.180
mechanisms do is you still have technical decision-making in executing decisions, but you allow
01:11:32.060
more democratic control on, on how people understand things, uh, and, and what, where their voice is
01:11:38.660
represented. And I'll give you a practical example. Um, he, he calls for something called data as labor.
01:11:44.800
And, and I'm a big proponent of this philosophy. And the reason why is when we look at these large
01:11:50.380
companies, which have tons of data and which make a lot of money, the reason why is legally we really
01:11:56.840
think of the value in the physical assets that they have. The data is, is an input and the output is the
01:12:02.220
physical things. And so they own all the servers. And so when you're operating on their site on top of
01:12:09.020
their server, even if you're creating tons of value, you're, you have to accept the agreement that
01:12:13.980
they've given you because I own the infrastructure here. If we start treating data as an input of value,
01:12:21.380
you increase the bargaining power of people working on these sites and they can ask for more money and
01:12:26.780
you take away a pool of the income these companies have. Does that mean that everyone's going
01:12:30.700
to get a lot more money? No. Like, if you took all of Amazon's profit, down to where they have
01:12:36.100
no profits left, and distributed it among Amazon's users, you'd get like 30 bucks each. Right. But when you
01:12:41.840
reduce the level of profitability Amazon has itself, and you diffuse that bargaining power
01:12:47.780
and that little bit of money to each person, you have a far more competitive landscape on, uh, on top
01:12:52.980
of their site. You generate a lot more businesses on top of their site and you give a lot of those
01:12:58.380
users a lot more power and an interest in what's happening. And that generates not only a lot more
01:13:03.120
economic activity, but it, it, it allows people to have the incentive to care about how to govern
01:13:09.580
their, their interactions online. It gives them a voice online. Um, let's, um, let me go back to 5g here
01:13:18.460
for just a second and then we'll, we'll move on while we're here with government and corporations. Uh, this
01:13:26.680
week they were talking about, um, the government just doing 5g, having it a government project. I've talked to
01:13:36.380
people about AI and should we have a Manhattan project on AI? If it can be done, you know, we have to have it
01:13:46.540
first because I don't want China having it first or Russia having it first. Um, should the government
01:13:53.300
be doing 5g? Um, so the Trump administration's approach to 5g is kind of, it's a little all over
01:14:01.820
the place. Do they understand it? Well, I don't know what their goals are, and I'm going to
01:14:10.540
put it like that because so a year ago you have these kinds of restrictions on what Chinese companies
01:14:16.200
can sell in terms of 5g infrastructure and whether the, um, people in the national security,
01:14:22.160
uh, like in, uh, with clearance can use Huawei products. All right. You say I have national
01:14:28.280
security concerns. I don't want to use foreign companies. Makes sense. But now even this week,
01:14:34.140
he's like, I want to ban Huawei from working here. And that's, that's now really extreme.
01:14:40.320
It's going beyond that. Um, when, uh, so in Canada, the CFO of Huawei, um, is facing extradition
01:14:47.320
on alleged sanctions violations, and the president announces he wants to use this as a bargaining
01:14:53.180
chip in the trade war. Now that's politicizing what should be a national security
01:14:59.100
conversation. And when you do that politicizing, it's, it's, what do you want when you're banning
01:15:04.660
this company? Is it because you have national security concerns? Is it because you're worried
01:15:08.700
that we're behind on, on these technologies? If you're worried that we're behind help domestic
01:15:13.020
companies compete, don't punish a foreign competitor, or is this, I want to punish China,
01:15:18.380
in which case you'll harm Americans to punish China without good cause. And so I'm not clear what
01:15:23.880
the goals are there. And so I can't say whether it makes, makes a lot of sense what they're doing,
01:15:28.140
but I do think it's very erratic. And I think that this, this 5g announcement is part of this
01:15:35.560
erraticism in which they say, Hey, we don't want these Chinese companies having the lead.
01:15:41.720
We don't really want to do anything to make it more profitable for, for domestic companies to
01:15:46.020
invest. Let's just say the government will do it. I doubt it's a well thought out plan. I doubt they
01:15:50.620
actually have the funding or the mechanics done. Uh, even with, uh, the American AI initiative,
01:15:55.560
the executive order Trump announced, uh, for, for AI, there was very little by the way of,
01:16:01.620
of how this money is going to come up. So when it comes to, to the executive's approach to tech
01:16:07.080
policy, I don't think that there's, there's that vision or understanding of what we want.
01:16:12.700
Um, Silicon Valley, Silicon Valley and the government, I don't know. I mean, it's like a
01:16:20.480
clown car every time somebody goes to Washington and, and the clowns get up and they start questioning
01:16:25.740
the guys in Silicon Valley. I just don't have any faith that they have any idea what they're even
01:16:30.520
talking about. Um, and they, they keep going back to old solutions about, we have to regulate you.
01:16:38.600
Um, I keep hearing about regulation that we need to make sure that voices are heard. I think that's
01:16:45.180
the worst possible idea. I think there's a, uh, a misunderstanding of a platform and a publisher
01:16:53.360
and you can pick one, but you can't be both. I have no problem with Facebook saying, yeah,
01:17:00.900
we're changing the algorithm. We're a private company. We're changing the algorithm any way we
01:17:04.540
want. Okay. But you should not have the protection of a platform.
01:17:11.120
So if we, you brought up several points there, you brought up both the technical literacy in
01:17:18.440
Congress and the decisions being made, uh, by, by social media platforms. Um, when it comes to
01:17:24.720
the technical literacy, I think that there is a need for more competency. Um, there is a model
01:17:32.940
the United States used to have, and it got defunded in the nineties, called the Office of
01:17:36.240
Technology Assessment. And that used to provide reports for staffers,
01:17:42.860
who would read them and then tell the Congressmen what to say when
01:17:49.100
they go into hearings. Um, that doesn't really exist. And, uh, and the research capacities of
01:17:55.060
Congress have basically been gutted for a while. And that's why they, they seem so embarrassing when
01:18:00.440
they go into these hearings. And so that's definitely one point though. I've been assured
01:18:04.520
behind closed doors, they are more respectable than they are in these hearings. They do want to get a
01:18:08.780
good sound bite in obviously. Um, when it comes to the social media platforms thing.
01:18:14.580
So what protects these social media platforms is something called Section 230 of the
01:18:18.800
Communications Decency Act: a platform is not liable for the content posted by its users,
01:18:23.580
which was there for porn and copyright. Yes. Uh, mostly for copyright, I think, but probably
01:18:29.940
porn as well. Um, but that has allowed the internet to become what it is today. Cause think of how many
01:18:37.660
small sites would just not be able to fight off the lawsuits they would get. Correct. Um,
01:18:44.040
if we remove that liability, you're not going to see Facebook become less censorious. What you're
01:18:51.660
going to see is them removing most content off their site because the task of content moderation
01:18:57.840
is unbelievably complex and nobody has figured out how to do it efficiently. And these people are
01:19:04.120
learning, they're making tons of mistakes while they do it, but they're responding to the fact that
01:19:09.260
they have so many diverse interests. If I run my own, let's say I run a blog, right? And, and, uh,
01:19:16.280
I get some users saying, we don't like your opinion. I'll say, I don't care. This is, this is my blog.
01:19:20.940
Facebook has shareholders. It has its users that has all these people are telling it, no, no, no,
01:19:24.720
you have to do this for me. And it's so hard for them to, to actually execute that effectively.
01:19:31.680
If they're held liable on the content on top of it, you're going to see the amount of usage of
01:19:35.680
Facebook shrink to like 10% of what it is today. And so I do not think treating them like a publisher
01:19:40.940
is the way to go. Whether we need to see how do we incentivize new efforts in content moderation?
01:19:49.880
Do we need maybe, um, principles or guidelines on content moderation that everyone should operate
01:19:55.920
on? And then they can tweak within this framework for their own sites. Cause, um, obviously we shouldn't
01:20:01.080
have all of them moderating content the same way. We want them to compete and come up with better
01:20:05.140
rules, but whether we should nudge them in a certain direction, maybe, but treating them like
01:20:11.040
a publisher is probably the worst approach I can think of. Really? Because it would decimate online
01:20:17.020
activity. Yeah. Um, uh, except it's the rules that I have to abide by. It's the rules that everybody
01:20:26.080
else has to abide by. This there's the difference between how the New York times operates and how
01:20:32.720
Facebook operates. Cause the New York times, you submit them an op-ed or something. They have an
01:20:38.660
editor review it and say, go ahead. Facebook never gives you that initial go-ahead. Right. But what
01:20:43.280
I'm asking for, though, is this: if you're a platform, what you're saying is, I'm just an auditorium.
01:20:49.880
I rent to anybody and everybody. So unless it's illegal, I've got to rent this to anybody. You may
01:20:57.220
not like who was in here the night before, but I'm an open auditorium. I'm a platform for all.
01:21:02.860
I think this is, um, this is a misinterpretation of platform. A platform doesn't mean it's allowing
01:21:09.440
all voices or that it's showing them an equal regard. All it's saying is it's not making a decision
01:21:16.220
on whether the content is allowed from the moment you post it. They're not exercising editorial
01:21:24.420
control over types of content, but if their advertisers say we don't want, um, content
01:21:32.080
with nudity on it because we're not going to use your site anymore as a platform, they can still
01:21:37.460
understand that. All right. We want a platform where people can share their views and the like,
01:21:41.560
but we don't want this type of content on it because that's harmful for everyone else on the
01:21:45.780
platform. Okay. So what's the solution? Well, the solution in my view is simply to incentivize
01:21:55.120
more competition online. How? Well, the data-as-labor proposal that
01:22:01.300
I mentioned earlier, allowing more bargaining rights for the users of these sites with the sites
01:22:07.360
themselves will allow not only more democracy in their governments, but will allow people to make
01:22:15.180
small offshoots. The problem with what happens right now with competitor sites is, they always
01:22:20.720
go to the worst. Whenever you have content moderation saying, we won't allow this type of thing,
01:22:27.340
we won't allow hate speech on our site, the type of site that springs up is for people who are like,
01:22:32.300
we'll allow anything. There's maybe like three libertarians there and there's 5,000 witches who go
01:22:36.500
to that site. So, um, so that's not the model that usually works. You have, however, had successful
01:22:45.380
switches between sites, from MySpace to Facebook. And usually that happens because the site has made
01:22:53.000
decisions that don't just anger, like the small few, they anger the majority users on the site and
01:22:58.460
they no longer like it. And for some people, they think Facebook's going down the path.
01:23:02.460
But if we allow these sites to make mistakes, but also give the tools for their users to, um,
01:23:11.920
have more bargaining power with them, um, like change the way we treat data ownership. Um, you'd
01:23:19.900
probably have a far more competitive space because these sites would kind of have to listen to people
01:23:26.220
more. They would change a lot more and then you would have more churn and who's on top.
01:23:32.460
Are you concerned about voices being snuffed out?
01:23:44.480
I am not. I do not think that, uh, uh, a lot of what is called censorship online is actually
01:23:52.280
censorship. I think it's just in the viable business decision for, for Facebook or Twitter
01:23:57.720
to not allow certain people on. And the internet more or less is still a very open place where you
01:24:04.080
can, you can start up a website, you can post it, you can buy marketing tools. You'll be excluded from
01:24:09.460
a large platform. Sure. But that doesn't mean you're silenced. I don't think we should have this
01:24:17.160
expectation that I can rely on, on Facebook or YouTube to provide me my audience because they, I,
01:24:26.180
they don't have to give me their service. I agree. Google is in a different place. They changed the
01:24:31.380
algorithm and exclude you because they don't want to show those results. They, they tinker with the
01:24:37.520
results of the search engine that I think is different. So the algorithmic changes, um, I think
01:24:44.740
the, the, the most, um, famous claim was that, you know, there's only two conservative sites that ever
01:24:50.920
show up on Google News, which are generally Fox News and the Wall Street Journal. And the
01:24:56.940
reason why is if you just look at the page view rankings of these sites, they're the only two
01:25:01.760
large conservative sites. Um, the vast majority of conservative news media small is small and
01:25:08.400
fractured and competing with each other. Whereas left-wing media tends to be more centralized,
01:25:13.260
large stations. And so the algorithm favors that. I don't know if that's, I would suspect that
01:25:20.640
that's not a politically motivated decision, but I don't know if that's the case. I'll, I'll give you
01:25:25.360
that, um, that caution in my statement, but it makes sense for an algorithm based on your size and
01:25:32.160
your prevalence to kind of demote conservative leaning sites. Is there such a thing as privacy
01:25:37.560
anymore? Privacy is, I think it's a topic where people have very strong views because they think
01:25:46.300
it's older than it really is. The right to privacy is about a hundred years old. The right to privacy
01:25:51.980
came out because of the camera. Um, people were like, Hey, I'm with my mistress in a park and you
01:25:58.080
can now take a photo of me with her. Uh, I don't, I don't want that to be allowed. So the right to
01:26:02.580
privacy blows up. I think a lot of the concerns the media has over privacy
01:26:09.540
aren't the concerns most people have over privacy. I think a lot of the hype over, we don't
01:26:15.920
have enough privacy, comes from people in positions like government or media who have a lot of things
01:26:22.140
to hide. And they know that their, their, their position is based on, on, on privacy. Whereas the
01:26:27.340
average person is willing to trade most of their information for, for, for that, um, improvement
01:26:33.960
in their quality of life because we don't have much to hide. I would agree with you except that
01:26:44.280
information, when it's total information in the hands of nefarious individuals,
01:26:54.240
they can make you look any way they want, if they have control of all your information and you
01:27:00.760
don't have control of your information. Right. And I think that this is the, this is,
01:27:04.900
like, I think there's far more concern when you have a guy behind an NSA computer monitoring
01:27:11.080
you than there is when you have these aggregate pools of data at Google and Facebook,
01:27:17.020
but I would love you to have more power over that data. Um, and, and this is why I think we do need to
01:27:25.360
have conversations on what are the rights over data? How do we classify data? Is it a type of property?
01:27:30.760
Did you produce your data? So is it, is it, is it your labor? These are conversations we need to
01:27:35.180
have, um, because people need to feel that they can have a greater stake in how their data is used.
01:27:41.700
This is not the same as saying, let's just reduce data collection for privacy reasons like they're
01:27:46.960
doing in Europe. Cause I don't think that benefits a lot of people. Most people don't know what a deep
01:27:52.120
fake is. I, I believe by 2020, by the time the election is over, everybody's going to know what a
01:28:00.120
deep fake is. So deep fakes, uh, it's a tool based on a pretty recent version of, of, um, artificial
01:28:09.320
intelligence called a GAN, a generative adversarial network that's able to develop new types of
01:28:14.920
content. Uh, it can create new data out of old data. And, um, a lot of the applications that you
01:28:21.520
see right now are in the video game, uh, zone. You can create more realistic characters, like higher
01:28:27.140
resolution images. But there are a lot of positive uses as well, because it could be
01:28:33.280
applied to medicine, detecting weird anomalies, lots of security applications as well. But you can
01:28:39.540
also use it to, um, make it look like you said something you didn't say or put you in a compromising,
01:28:45.920
uh, video that you never participated in. Um, and it can look pretty realistic.
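For readers who want to see the mechanism he is describing, here is a minimal, illustrative sketch of the GAN idea in PyTorch. It is a toy one-dimensional example, not actual deepfake tooling, and all of the sizes, learning rates, and the target distribution are assumptions chosen for illustration: a generator learns to produce samples that a discriminator can no longer tell apart from real data.

```python
# Toy GAN sketch: a generator vs. a discriminator on 1-D data (not deepfake code).
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_data(n):          # "real" samples drawn from N(3.0, 0.5) (assumed target)
    return torch.randn(n, 1) * 0.5 + 3.0

def noise(n):              # random input the generator turns into fake samples
    return torch.randn(n, 8)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into labeling fakes as real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generated samples should drift toward the real distribution's mean (~3.0).
print("mean of generated samples:", G(noise(1000)).mean().item())
```

The same adversarial loop, scaled up to images, audio, and video models, is what makes the synthetic content he describes possible; "creating new data out of old data" is the generator learning the statistics of the training data well enough to produce convincing new samples.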
01:28:51.460
The really interesting thing about deep fakes is the second that the first few came out on the
01:29:00.360
internet and people realized how horrible this was, everybody responded. You have a near kind of,
01:29:07.780
and I know you were, you were complaining that these companies have this censorious capacity,
01:29:11.960
but you have this complete shutdown where we're not hosting these types of videos and we're not
01:29:16.380
hosting you teaching people how to make them, um, across a lot of, of, of, of sites.
01:29:22.840
I know DARPA is developing algorithms so that when a video is ingested, it'll warn that it's a
01:29:31.520
deep fake, and they want Facebook and YouTube and everybody else to run that
01:29:39.620
algorithm. Yeah. And so, yeah, there was, uh, yeah, and it's not even just DARPA. There was a lot,
01:29:43.580
lots of work coming out recently where detection is getting much better, even before
01:29:50.040
people have started making them en masse. There was a brief wave of these being made
01:29:57.660
to put celebrities in pornographic videos, but that got banned so quickly as something to
01:30:03.360
do that it's really decreased. And for a lot of the ones that slip through the cracks, if we have
01:30:11.900
these detection services that can prove it, they're covered under existing laws and cyber laws
01:30:16.460
about harassment, identity theft, libel. And so I like this idea that we're ramping
01:30:25.520
up the ability to enforce existing laws by saying, Hey, we can have the evidence that someone did this
01:30:30.560
and, and it needs to be taken down and we need to compensate you as a victim. And we've responded
01:30:36.700
to that really fast. And that makes me optimistic that we can respond to some of the more extreme
01:30:41.200
challenges as we're going in the future. The, the one interesting thing that comes along with deep
01:30:46.560
fakes though, and, and it hasn't been done yet, but I feel someone will do this as an experiment one
01:30:51.600
day. You can fake an entire news event. Now, um, you can generate people that don't exist. You can
01:30:59.100
generate landscapes that don't exist. You can generate audio that no one's actually said and try to come
01:31:05.580
up with a scenario. And I think that would be a warning shot if someone did this to try to like
01:31:11.040
game Twitter and convince everyone that this is real. And it would show that we need to, uh, have
01:31:16.900
some good regulatory approaches to identification. It is War of the Worlds, 1938. And then
01:31:27.020
the very next day Congress was talking about what do we do about radio, this powerful medium that's
01:31:34.900
all over the country that could spread panic. Um, so it's, it's in a way history repeating itself.
01:31:40.880
And so, yeah, the responses you've had historically are just: be more transparent
01:31:46.620
about what you are, label things well, and we have the tools to allow us to do this. Uh, it's the policy
01:31:52.540
that's behind. It's the regulatory approach we're taking that's behind. It's been great to talk to
01:31:57.480
you. Great to talk to you as well, Glenn. Just a reminder, I'd love you to rate and subscribe to
01:32:08.840
the podcast and pass this on to a friend so it can be discovered by other people.