TRIGGERnometry - September 16, 2021


Privacy is Power - Carissa Véliz


Episode Stats

Length

57 minutes

Words per Minute

191.75

Word Count

11,114

Sentence Count

462

Misogynist Sentences

2

Hate Speech Sentences

8


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.
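As a rough, hedged sketch of how such a summary could be produced with that model (only the model ID comes from the line above; the input file, chunk size, and generation lengths are illustrative assumptions):

    # A minimal sketch, assuming the Hugging Face transformers package.
    # Only the model ID is taken from this page; everything else is assumed.
    from transformers import pipeline

    summarizer = pipeline(
        "summarization",
        model="gmurro/bart-large-finetuned-filtered-spotify-podcast-summ",
    )

    with open("transcript.txt") as f:  # hypothetical transcript file
        text = f.read()

    # BART attends to ~1024 tokens, so a long transcript is summarized in chunks;
    # character-based chunking is a crude stand-in for token-aware splitting.
    chunks = [text[i:i + 3000] for i in range(0, len(text), 3000)]
    parts = [summarizer(c, max_length=150, min_length=30, truncation=True)[0]["summary_text"]
             for c in chunks]
    print(" ".join(parts))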

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
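For readers who want to reproduce this pipeline, a minimal sketch assuming the openai-whisper and Hugging Face transformers packages; the audio filename, the sentence splitting, and the label strings are illustrative assumptions rather than details from this page:

    # Hedged sketch of the transcription + per-sentence classification described above.
    import re

    import whisper  # openai-whisper
    from transformers import pipeline

    # Transcribe with Whisper's "turbo" checkpoint, as noted above.
    audio_model = whisper.load_model("turbo")
    text = audio_model.transcribe("episode.mp3")["text"]  # "episode.mp3" is a stand-in

    # Naive sentence split; the real pipeline may use a proper sentence tokenizer.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

    misogyny_clf = pipeline("text-classification",
                            model="MilaNLProc/bert-base-uncased-ear-misogyny")
    hate_clf = pipeline("text-classification",
                        model="facebook/roberta-hate-speech-dynabench-r4-target")

    # The label strings below are assumptions; check each model card for exact names.
    n_misogyny = sum(r["label"].lower().startswith("misogyn") for r in misogyny_clf(sentences))
    n_hate = sum(r["label"].lower() == "hate" for r in hate_clf(sentences))
    print(f"{len(sentences)} sentences, {n_misogyny} misogynist, {n_hate} hate speech")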
00:00:00.000 We are creating an architecture of surveillance that is so good that if it gets taken over by a bad government,
00:00:08.200 we are in serious trouble because it will be impossible to resist.
00:00:17.620 Hello and welcome to TRIGGERnometry. I'm Francis Foster.
00:00:22.260 I'm Konstantin Kisin.
00:00:23.400 And this is a show for you if you want honest conversations with fascinating people.
00:00:28.900 A brilliant guest we have for you today. She's an associate professor at Oxford University and
00:00:33.860 the author of Privacy is Power, a brilliant book. Carissa Véliz, welcome to TRIGGERnometry.
00:00:39.260 Thank you so much for having me.
00:00:40.760 It is great to have you on the show. Listen, before we get into talking about the subject
00:00:44.320 you cover in your brilliant book, just tell everybody a little bit about who are you,
00:00:48.360 how are you, where you are, what has been the journey that leads you here to be sitting here
00:00:52.100 talking to us? I'm a philosopher because I was never good enough at jokes to be a comedian,
00:00:57.640 although I would love to be a comedian. But, you know, second best is philosopher.
00:01:02.380 And I studied philosophy as a BA, MA, and then PhD. I was writing my dissertation on something
00:01:09.520 related to ethics, but very different. And then I started researching the history of my family.
00:01:14.580 My family were Spanish refugees in Mexico from the Spanish Civil War. And they had never talked
00:01:20.440 about the war. It was a very sensitive topic. And I went into the archives and uncovered so
00:01:25.960 much about them that they hadn't told us. And it made me wonder whether I had a right to know these
00:01:30.420 things that they hadn't told us, whether I had a right to tell my family, or maybe even publish
00:01:34.620 about it, because it was so interesting. And being a philosopher, I looked into the philosophy
00:01:39.100 of privacy and realized there was a huge gap in the literature. There was very little written
00:01:44.340 about it. The literature that there was was kind of outdated and didn't really address the questions
00:01:50.020 that I was asking myself. And then that same summer, Snowden came up with his revelations
00:01:54.760 that we were being surveilled at a mass scale.
00:01:57.080 And I thought, I need to change the topic of my dissertation.
00:02:00.720 So I did, and I started researching about privacy,
00:02:03.780 which eventually led me to write this book, Privacy is Power.
00:02:07.980 And there's so many fascinating insights in your book.
00:02:10.900 I'm actually going to quote you in my book
00:02:12.620 because you talk about some stuff that doesn't get talked about much at all.
00:02:18.480 And we'll get into some of that.
00:02:19.760 What were some of the things that shocked you or surprised you the most
00:02:23.100 when you were doing this research and talking about privacy?
00:02:26.280 It was shock after shock after shock.
00:02:28.520 Because I think one of the interesting things about studying data
00:02:32.140 is that it's so abstract that we're just not made to understand that kind of thing.
00:02:36.900 Our psychology just doesn't work that way.
00:02:38.900 So even now, when I read things that I have written myself,
00:02:41.780 that I have researched and written, and I know them by heart,
00:02:45.440 I still get shocked because it's so invisible.
00:02:50.160 So I'll give you a few examples.
00:02:52.740 Actually, the most shocking and most concerning example, I decided not to publish, because I think it's so dangerous that anyone could misuse it.
00:03:00.420 And, you know, I wouldn't want to facilitate that.
00:03:03.060 But to give some examples of just corporate surveillance: your phone at night is sending information that it has collected throughout the day.
00:03:10.200 And it sends it at night so that you don't notice that your battery is draining because, you know, people typically connect it at night and charge it, charge their phones at night.
00:03:18.220 And companies are tracking very sensitive things, like whether you sleep okay at night or not,
00:03:23.620 at what time you wake up, whom you sleep with, which can tell people, you know, who
00:03:29.260 your partner is, but also things like whether you're having an affair. And if you have a smart
00:03:33.680 car, it's tracking not only where you go and how fast you drive and how well you drive, but things
00:03:38.640 like the music you listen to and what that tells about your mood. But also the seats in your
00:03:44.100 car are measuring your weight and, you know, potentially selling that information to insurance
00:03:48.660 companies, who might want to know, you know, if you're getting a bit too slim or a bit too fat.
00:03:52.460 And they know things about your health records, your educational records, your purchasing power,
00:04:01.440 your browsing history. And people search the things that they care most about, the things that worry
00:04:07.520 them, their diseases, and whether they can, you know, pay their loan. And so it's very, very
00:04:13.020 sensitive. It's just shock after shock after shock. And Carissa, why is this a problem? Well, you know,
00:04:19.160 because people go, well, so what, all they're doing is collecting data, what does it matter?
00:04:23.260 It matters because privacy is power. Because data, whoever has the data in the digital age, will have
00:04:30.820 the power. Because data gives not only the possibility of selling that data, which gives
00:04:35.360 them money, which already makes them very powerful, but it gives them the possibility to try to predict
00:04:40.640 what you're going to do next and try to influence that and change that. And that is incredibly
00:04:45.500 attractive for companies, but also for governments. And it's something that we haven't been focusing
00:04:51.340 on enough. I was reading this book by the philosopher Bertrand Russell, called Power, and he
00:04:57.480 argues that we should think about power as something like energy, in that it can transform
00:05:02.140 itself from one thing into another. So if you have enough economic power, then you can buy votes or
00:05:07.100 buy politicians. If you have enough political power, you can buy military or get military power
00:05:12.220 and so on. And there's this really important kind of power in the digital age having to do
00:05:17.120 with forecasting and prediction that has always been there in a way. We've always known that the
00:05:23.000 more knowledge you have on somebody, the more power you have over them. But we have never had
00:05:27.500 this capability of amassing so much data and of analyzing it. So it matters because the more
00:05:32.780 others know about you, the more you're vulnerable to them in all sorts of ways that are invisible
00:05:36.860 to you, but you might get discriminated against for a job or a loan application or an apartment.
00:05:42.580 You might get extorted. Your identity can be stolen and your democracy can be stolen as well
00:05:49.100 because it influences how we relate to one another as citizens. And in your book, you use many,
00:05:55.940 many examples. The one that I found particularly powerful was the example of the man in Virginia.
00:06:01.040 Oh, yeah. So I have a friend who was training at the time to be a data analyst and he was
00:06:06.840 telling me like what it's like to go to that training. And one of the exercises that they
00:06:11.960 had to do is just pick a random person anywhere in the world and just research anything and
00:06:17.760 everything you can about them. So this random data analyst picked this random guy in Virginia
00:06:23.780 and he learned everything about him. So if I remember correctly, this was a man who had
00:06:28.100 diabetes, who was having an affair. He knew what kind of car he drove, what job he had,
00:06:34.160 who his friends were, who his family were.
00:06:36.620 And this person had no idea
00:06:38.420 that somebody was completely taking off his clothes
00:06:40.920 metaphorically online.
00:06:43.480 And, you know, I think one of the concerns
00:06:45.640 that people would have had in the past
00:06:47.840 is that the government is watching you, right?
00:06:50.700 The government is surveilling you.
00:06:52.340 And as we now know, as you say from Edward Snowden,
00:06:55.360 they were right to have that concern.
00:06:57.500 But I think, am I right in thinking
00:06:59.500 that now actually the biggest threats
00:07:01.520 in terms of our privacy,
00:07:03.080 in terms of being watched, in terms of being controlled,
00:07:06.260 in terms of people kind of tracking us
00:07:08.840 and trying to influence and shape our behavior
00:07:10.800 is not necessarily the government itself.
00:07:12.940 It's actually corporations, big tech,
00:07:15.960 people who want to sell you shit, basically.
00:07:18.940 Yeah, that's partly right
00:07:19.920 because most data is actually collected by companies
00:07:22.820 and only then do governments make a copy of the data.
00:07:25.280 But it's really the companies that are best
00:07:27.300 and have more resources to collect data.
00:07:30.100 And they have many reasons to use that data
00:07:32.940 against you. Like I mentioned, you can be discriminated against in all kinds of settings.
00:07:37.840 So essentially, they're undermining equality and equality of opportunity. You are not being treated
00:07:43.080 as an equal citizen. You're being treated on the basis of your data. But at the same time,
00:07:48.300 we shouldn't forget governments because it almost makes no sense to separate the corporate
00:07:52.980 surveillance from the government surveillance right now because they share data all the time.
00:07:58.620 So every time a company collects data, that data can potentially go to the government and the government very often just makes a copy of the data immediately.
00:08:08.380 But also every time the government collects data, that data also ends up in the hands of corporations.
00:08:12.500 So we've seen this in the coronavirus pandemic in the UK.
00:08:16.480 The NHS has given data to Palantir, this very shady data company that was partly funded by the CIA.
00:08:24.320 And they gave data not only that, you know, might be more kind of, I don't know, understandable about people's health to fight the coronavirus pandemic, but they also got data, for example, about people's criminal records.
00:08:37.060 And there was no explanation as to why exactly this company needed that data and what's going to happen to that data.
00:08:43.500 So the flow of information goes both ways to such a large extent that it almost makes no sense to differentiate between them.
00:08:51.020 And the other thing that people often say when I've talked to people about this is it's always
00:08:56.000 the same thing, which is, well, I'm not doing anything wrong. I don't care. What should I
00:09:01.140 worry about? What do you say to people who think about these things that way?
00:09:05.360 Well, there are at least two responses. One is you actually do care because nobody wants to have
00:09:10.500 their identity stolen. That takes, you know, that's a hassle. It can actually get you to jail
00:09:14.980 without having done anything wrong whatsoever because somebody else uses your name to commit
00:09:19.060 crimes in your name. But also, you are vulnerable in all sorts of ways, even if you do nothing wrong.
00:09:26.080 So, for instance, maybe you have a disease, or maybe you have a disease that you don't know about,
00:09:30.380 and that a company wants to pick up on before even you do, and then discriminate against you
00:09:35.860 next time you ask for a loan or a job or something like that. And so there are all kinds of reasons
00:09:41.020 why our privacy is important, even if we do nothing wrong. But furthermore, even if you didn't care about
00:09:45.660 yourself. You just said, you know, I'm a masochist. I want to be stolen and I want to be exposed and
00:09:50.340 extorted. I'm fine with that. It's just an interesting experience. You should protect your
00:09:55.000 privacy because privacy is actually a collective thing. This narrative that the tech companies have
00:10:00.600 sold us that privacy is just a personal preference, it's something individual. And if you're not shy
00:10:04.560 and you're not a criminal, then you have no reason to protect your privacy is totally misguided.
00:10:09.500 Because when you expose yourself, you expose others. So every time you share your location, you're
00:10:16.120 sharing data about your neighbors and your co-workers. Every time you share data about your
00:10:20.980 genetics, you're sharing data not only about your siblings and parents and kids and cousins, but very
00:10:26.580 distant kin that can suffer really bad consequences, like being deported or being denied health
00:10:31.700 insurance or life insurance, even if they didn't do the test themselves, and you don't
00:10:38.760 know that you're actually kin and you've never met that person. And in the same way, society
00:10:46.180 benefits from people protecting their data. So one example is Cambridge Analytica. Only 270,000
00:10:52.860 people gave their data to the firm. And with that data, the political firm managed to get their
00:10:57.880 hands on the data of 87 million people who were the friends of these original 270,000 people,
00:11:03.040 but who didn't consent to anything.
00:11:05.280 And then with that data, the company made a tool
00:11:08.900 that was supposed to profile voters around the world.
00:11:12.760 So that's a very clear case of how those persons
00:11:15.740 didn't have the moral authority to share their data
00:11:18.340 because that had consequences for everyone else.
00:11:21.260 And I think people often massively underestimate
00:11:23.360 the predictive ability of individual bits of information.
00:11:27.760 So I remember during the Cambridge Analytica thing,
00:11:30.580 there was a website where you could see what their website would essentially predict about you.
00:11:37.280 And when I did it, my Facebook profile literally just had like my music and movie preferences on
00:11:42.420 it at the time. And when I put that in, it was like terrifyingly accurate about my political
00:11:49.940 views, about all sorts of other things, just based on the movies and the music that I had
00:11:56.460 happened to enter. And this is one of the things I think that's massively underappreciated, just how
00:12:02.100 much you can predict about a person with often things that aren't even necessarily relevant to
00:12:07.820 the thing that you're predicting. Like, you know, we had Dr. Pippa Malmgren on the show talking
00:12:12.280 about this, how people who eat blueberries are more likely to be conservative or whatever, like
00:12:16.900 stuff that you would never think is actually predictive can be used to that effect, right?
00:12:23.780 Exactly. That's super important.
00:12:25.080 When people think that they're sharing their data,
00:12:27.440 they think about the data that they think they're sharing.
00:12:29.820 And then they say, you know, what do I care if Facebook knows my music preference?
00:12:33.280 But what we don't imagine is that being used to calculate our sexual orientation or IQ.
00:12:38.080 And that's being used by companies when we ask for a job or something.
00:12:42.240 So one example was, it turns out that people who like a Facebook page of curly fries
00:12:47.580 have a very high IQ.
00:12:49.880 And of course, when you like a Facebook page for curly fries,
00:12:52.200 It's never going to cross your mind that that's going to be used to calculate your IQ.
00:12:57.680 The hypothesis is that this Facebook page was probably created by somebody very smart.
00:13:02.420 Very smart people tend to have friends who are also very smart, and that's probably the explanation.
00:13:07.220 But that's one of the problems with algorithms, that they always work on correlations, not causation.
00:13:12.080 And there's no way of knowing what they're going to correlate.
00:13:14.980 So the concept of informed consent doesn't even make sense with data,
00:13:18.940 because you don't know what you're consenting to
00:13:21.000 because you don't know what kinds of inferences
00:13:22.700 are going to be made with that data in the future.
00:13:25.740 I bet you like the curly fries page, mate.
00:13:27.500 Yeah, I do, actually.
00:13:28.480 I always thought I was very intelligent as a result.
00:13:31.200 Carissa, why is it?
00:13:33.620 So we've seen what's happening.
00:13:35.200 We've seen that there's been a massive data grab
00:13:37.460 by these companies.
00:13:38.740 Why haven't laws been put in place
00:13:40.520 to stop this kind of behaviour?
00:13:44.120 It's a good question and it's complicated.
00:13:46.160 The first reason was because governments after 9-11 were very scared about what happened.
00:13:52.860 They wanted to prevent it at all costs.
00:13:54.980 And it was intuitive to think that the more data they had on people, the more they could do something about it and keep people safe.
00:14:00.820 Now, it just turns out that big data is not the kind of analysis that is good for preventing terrorism.
00:14:06.300 Big data is fantastic at knowing what you're going to buy tomorrow because we have data from billions of people who buy things every single day.
00:14:13.040 But terrorism will always be a very unusual event, and that makes it very hard to understand for big data.
00:14:18.860 So the first reason was because governments thought they had an interest in keeping the data.
00:14:23.440 Now, you know, 20 years have passed since then, and I think governments are slowly learning that having all that data stashed away is a national security danger.
00:14:35.160 So I think that's going to motivate them to change the law.
00:14:38.920 But another reason why laws haven't been put in place is because data is very hard to police.
00:14:45.800 So actually, the GDPR is quite good.
00:14:47.760 It's not perfect.
00:14:48.420 It has a lot of flaws, but it was a very important step in the right direction.
00:14:52.860 But it's just impossible to police because everybody's breaking the law all the time.
00:14:56.940 Many times, citizens can't even denounce it because we don't even know what's going on.
00:15:00.920 It's not like we can see our data being stolen and it hurts, right?
00:15:03.540 Or you can't breathe or something like that.
00:15:05.200 It's not tangible.
00:15:06.060 So how are you going to complain if you don't even know that it's happening?
00:15:09.860 So the implementation of the GDPR is facing a lot of difficulties.
00:15:13.220 They're underfunded as well, the data protection agencies, and we're dealing with giants.
00:15:18.900 And the third reason why we don't have more strict laws is, you know, there's a lot of money involved and there are a lot of interests.
00:15:24.620 And these companies lobby really hard and they pay more than any other company in the world to pressure politicians.
00:15:29.820 So it's a big challenge, but I am cautiously optimistic that eventually we're going to regulate it
00:15:37.180 because this is just unsustainable. It's a ticking bomb.
00:15:41.100 And you say it's a ticking bomb. Have things got worse under COVID?
00:15:47.060 Yes.
00:15:47.220 Because, yeah, I knew what the answer was. Can you explain why, though, Carissa?
00:15:53.520 Yeah. So one reason is that we've just been forced to use digital stuff more and more and more.
00:15:59.820 And, you know, one advantage is that the narrative that, you know, we can always opt out and it's up to us kind of has totally disintegrated, right?
00:16:08.640 It's obvious to everyone that you can't opt out if you want to be a participant in society.
00:16:12.660 There's no way to opt out of interacting with the digital.
00:16:17.140 But we have been forced to use it a lot more and therefore a lot more data is being collected.
00:16:22.640 Also, times of crisis are notoriously dangerous for civil liberties.
00:16:27.300 These are times in which governments very often pass laws and measures that wouldn't be accepted in other circumstances.
00:16:35.960 And so in this case, I think there was a well-intentioned idea that the more data we have, the more we can have tools to stop COVID.
00:16:44.340 It turns out that that actually hasn't been the case once again, that AI hasn't been helpful to stop COVID.
00:16:50.860 And, you know, the tracking applications haven't been the most important aspect of fighting COVID.
00:16:55.620 But the measures are there, and they don't have a sunset clause, so we don't know when they end
00:17:01.180 and what happens to all that data that has been collected. And one thing the government has done,
00:17:06.620 and nobody's talking about this, but I think that this is absolutely major: they've got rid of cash.
00:17:12.760 Can you explain why that is such a disastrous thing for our society?
00:17:17.620 Especially for me, because I don't want to pay tax. But that's beside the point.
00:17:23.700 Which cash are you from again?
00:17:25.280 Anyway.
00:17:26.640 People are not talking about that enough.
00:17:28.620 But cash is very important because it's the only way in which citizens can buy things without being tracked.
00:17:34.620 And, of course, you know, if we don't have a society in which we are interacting with each other,
00:17:41.300 we're just interacting through online, we're all using credit cards, we're all using PayPal,
00:17:45.280 and that gets tracked really easily.
00:17:47.680 And so there's no way to buy things or services, you know, that hasn't been tracked. And
00:17:53.340 you can say, well, but that's great because that fights crime, right? And yes, it fights crime, it can
00:17:58.300 be helpful to fight crime, but it also does away with certain kinds of privacy that are really
00:18:03.320 important. So here are a few examples. Paying a lawyer: that's very kind of revealing about what
00:18:09.960 you're interested in or what you might be worried about. Paying for a psychiatrist or a psychologist
00:18:14.540 or other kinds of support. And buying books.
00:18:20.100 So books are very revealing, again, of what you're interested in
00:18:24.400 and what you're worried about.
00:18:26.160 And a lot of people, I think, are very complacent because they think,
00:18:29.440 well, we live in a democracy.
00:18:30.620 Our government is not out to get me, you know, for what I read.
00:18:33.120 And fair enough.
00:18:34.360 But right now, the Taliban have gotten a hold of U.S. biometric systems.
00:18:39.620 And the point is that we are creating an architecture of surveillance that is so good
00:18:47.400 that if it gets taken over by a bad government, we are in serious trouble, because it will be
00:18:52.860 impossible to resist. And I mean, I think it's very, yeah, complacent to think that our
00:18:58.940 government will always be benevolent and democratic, because the best predictor that something will
00:19:03.820 happen in the future is if it's happened in the past, and it's happened in the past that we don't
00:19:07.120 have the best government possible. But even if you thought, okay, no, this country is amazing and
00:19:12.520 it's always going to have the right government, can you be sure that in 10 years' time we're not
00:19:16.300 going to be invaded by another government, by China, by Russia, I don't know, by somebody else?
00:19:20.320 I think that you have to be really confident in your, like, soothsaying abilities to not be
00:19:26.500 scared about having this architecture of surveillance. And the thing that I've been
00:19:31.600 worried about, Carissa, throughout this entire pandemic is the government has tried to introduce
00:19:35.980 mass surveillance in the form of the vaccine passports, and people have been so complacent.
00:19:42.760 They've just been, well, what's the problem? Yeah, I think it's really hard to be
00:19:47.840 critical in times of crisis, because the kind of opposition you face is, oh, well, we need to
00:19:52.440 save lives, aren't you in favor of that? Of course everybody wants to save lives, and so we don't
00:19:56.140 want to be a problem. But I think we have to question not only whether we need vaccine
00:20:02.260 passports, but even if we accepted them, in what format do we need them. So one question is, why do
00:20:07.840 they have to be digital? We've worked very well in the past with paper passports and
00:20:14.260 paper certificates of different kinds. The problem with digital is that it can be hacked, it collects
00:20:19.520 a lot of data, and there's just no justification. People just assume today that it has to be
00:20:26.860 digital. You know, coming back just a little bit to what you were talking about in terms of the
00:20:32.900 possibility of the data being used by foreign actors, I mean, you don't even necessarily have
00:20:37.200 to be invaded by China. The Chinese could just hack the database and then influence elections,
00:20:44.000 cause civil unrest, you know, undermine, you know, democracy in one way or another. And,
00:20:50.260 you know, we saw in 2016 the allegations about Russia, you know, where I'm from, you know,
00:20:55.840 causing Brexit, getting Trump elected. It doesn't look like that was what happened based on some of
00:21:01.700 the things we've seen. But it's certainly not impossible, right? And vice versa, you could see
00:21:07.380 powerful Western actors, the United States and Britain, influencing elections in other countries
00:21:13.680 using exactly the same technology. So, but before, you know what, actually, before we talk about that,
00:21:20.240 let me ask you the counter argument, which is increasingly my job on this show.
00:21:24.160 A lot of people might say, well, I hear what you're saying, but on the other hand, doesn't big data have
00:21:31.140 huge potential for identifying disease in people who don't even know they have it, and therefore
00:21:37.740 getting treatment ahead of time? You know, if it turns out that, you know, buying more of something
00:21:44.020 means you've got cancer, don't we want to know that so we can stop people from dying of
00:21:49.060 cancer? Don't we want to prevent certain things from escalating that we could prevent?
00:21:54.480 Surely those are all good things, Carissa? Absolutely, and I have a whole section in the
00:21:58.720 book about medicine, because it's such an important point, and medicine is such a good example of that.
00:22:02.480 And the first thing to say is that it has a lot of potential, but we shouldn't give up all
00:22:08.520 our data and our civil liberties for something that has potential. We have to have some
00:22:13.160 evidence that it's actually going to work, right? Having potential is really dreamy and fantastic
00:22:17.820 and promising, but that's not enough. And it's been very, very disappointing how millions and
00:22:23.480 millions and millions of pounds have gone into AI tools during the COVID pandemic. And the two
00:22:29.160 major studies that have researched this claim that out of the hundreds of tools that have been
00:22:34.220 developed, not one of them is clinically viable. And that should make us think twice, like, are we
00:22:39.200 making the right decisions here? It's not only about privacy, it's also about resources. Are we
00:22:43.340 Are we putting our resources in the right place?
00:22:45.600 But let's say that, you know, it can work or like we should give it a shot or we should research.
00:22:51.060 The devil is in the details.
00:22:52.360 It's not that we shouldn't use personal data at all.
00:22:55.000 We can use it.
00:22:55.740 And especially for medicine.
00:22:57.180 If you go to your doctor and you don't want to tell them what's wrong with you, they're not going to be able to help you.
00:23:01.020 It's as simple as that.
00:23:02.420 But we don't need to buy and sell personal data for that.
00:23:05.720 We don't need to allow data brokers to know everything about us.
00:23:08.600 We don't need to allow the government to know anything and everything about us.
00:23:11.400 So there's a huge difference between using personal data for things that are important and then having a market of personal data in which anyone can buy it. The highest bidder gets it. That's very different. And that's what we should avoid.
00:23:24.840 I mean, people might give another example of, let's say, the lending, which you've referred
00:23:30.300 to a number of times already. You could say, well, they're able to make better decisions. I mean,
00:23:35.280 2008 would suggest otherwise. But in the past, if you wanted to get a mortgage, you'd have to go to
00:23:41.000 the bank and the guy in the bank would have to make a personal decision based on how you were
00:23:45.880 dressed and whatever else. Now, we have all this data, which is very good at predicting whether
00:23:52.220 you know, you're going to be able to repay your mortgage. Surely that's a good thing, because,
00:23:56.180 you know, this is the argument, we're not giving people debt they can't handle, etc., etc. Well, it
00:24:01.240 depends. It has to be shown. It's not enough to just say this works; I want to see how it works,
00:24:05.940 and it hasn't been shown that way. So, for instance, you know, one argument is your banker might have
00:24:10.900 all kinds of prejudice. They might be a racist, they might be a sexist. That's actually quite
00:24:14.240 plausible and even likely. But it turns out that algorithms are... You've clearly met a lot of
00:24:19.400 bankers. No, I actually haven't. But it turns out that algorithms are just as biased or worse than
00:24:31.300 people. So one thing that I have proposed, and again, it depends on the kinds of proxies that
00:24:37.780 these algorithms are using. So in many cases, it's unfair that we are being treated as a category and
00:24:44.120 not as an individual. So say you live in a certain kind of postcode in which people tend to not pay
00:24:49.300 their loans. But you might be different, right? You might be like super serious and super
00:24:54.000 good at paying back your loan. And you're not going to get that loan because, you know,
00:24:58.140 your neighbors don't pay their loans or your friends on Facebook don't pay their loans. And
00:25:01.980 that seemed quite unfair. So one of the things that I have proposed recently in an article for
00:25:06.420 the Harvard Business Review is that we should only allow algorithms that have really important
00:25:12.460 decisions to make, like giving somebody a loan, to go out into the world if they have passed
00:25:18.140 a randomized control trial. So just like we do with medicines, we don't allow any medicine to
00:25:23.360 just go in the market without having been tested, not even in a crisis like the coronavirus pandemic.
00:25:27.460 We had to make sure that the vaccines were safe and they went randomized control trials. Well,
00:25:34.260 in the same way, an algorithm could go through a randomized control trial. We could have an
00:25:38.860 agency like in the United States, the FDA, the Food and Drugs Administration, to make sure that
00:25:44.340 an algorithm is safe and to prove that it's actually making better decisions than the
00:25:49.160 banker.
00:25:50.900 Carissa, isn't part of the problem here that we have this new technology, it's brand new,
00:25:56.700 we don't actually understand the full ramifications of this technology?
00:26:02.000 Yeah, so it's really reckless to just let it loose into the world. Essentially, we are
00:26:06.720 treating people as guinea pigs, and that's totally wrong. And we used to do it with medicine.
00:26:11.040 So when you went to the doctor in the 1950s, you could get signed up for an experiment without knowing.
00:26:18.240 And after the Nuremberg Code, which is like one of the most important medical ethics codes, we decided that we don't experiment on people without their consent.
00:26:27.100 So if you want to carry out clinical experiments, you get you inform people about what you're doing.
00:26:32.840 You get their consent and then you give them some kind of compensation.
00:26:36.520 And we should do the same thing. And right now we're not doing it.
00:26:38.920 We are being guinea pigs of algorithms all the time without even knowing it.
00:26:44.100 And one of the things I found really fascinating about your book, and this is the bit that I
00:26:49.400 mentioned I want to reference in mine, is the fact that people often don't understand that
00:26:54.700 intentionality and malice and the deliberate evil is actually completely unnecessary for harm to be
00:27:02.940 caused. So the collection of data today, completely innocently or indeed for beneficial purposes,
00:27:09.740 will often end up being useful to people who do want to use it for evil. Can you talk a little
00:27:15.920 bit about that? Yeah, that's really important to have in mind. Because I don't think, you know,
00:27:21.020 I think that most people in tech and most people in finance just have
00:27:26.360 good intentions, or at least not bad intentions. They just want to do well in life, they want to
00:27:30.140 innovate and so on. But Hannah Arendt had this amazing term called the banality of evil.
00:27:36.860 And the idea is when we think of evil, when we think of the Nazis, we think of
00:27:40.820 the paradigm of somebody like Hitler, who, you know, you imagine somebody who hates Jews and
00:27:46.700 wants to hurt people and wants to kill people. And that's our paradigm of evil. But in fact,
00:27:51.160 most of the time, evil gets perpetrated by perfectly normal people who are just bureaucrats.
00:27:56.420 And that was her conclusion with the Eichmann trial, that this guy was just a bureaucrat. He wasn't a monster. He was just following orders. He wasn't critical. He was just a cog in the system.
00:28:08.580 And it's really important that we don't become cogs in a system that creates injustice and that erodes democracy and even creates evil.
00:28:17.960 And it's not enough for us to think of ourselves as good people and have good intentions.
00:28:22.780 We need to be a lot smarter than that and a lot more critical to avoid evil.
00:28:27.580 And one of the examples you give on that very issue, speaking of the Nazis, is the fact that survival rates in different European countries for Jews were different because they had different practices of collecting data.
00:28:41.300 And those that collected data by ethnicity, which may have been perfectly reasonable under a legitimate, democratic, sensible, non-discriminatory government, then fell into the hands of people who wanted to use it to discriminate, to kill, to murder, to imprison.
00:28:57.860 And that innocent collection of data led to more people being killed because of it, right?
00:29:03.980 Yeah, I think it's a perfect example to show that personal data is a ticking bomb.
00:29:08.060 So the Dutch had a very good system of statistics.
00:29:11.320 They had a guy called Lentz who was one of the pioneers of statistics,
00:29:14.560 and he wanted to build a system that followed people from cradle to grave.
00:29:18.240 And in his census, there were a lot of questions,
00:29:21.020 and there was data collection about your religious affiliation,
00:29:24.080 but also your ancestry and things like where your grandparents lived.
00:29:26.900 Now, in contrast, in France, they had made a decision since 1872
00:29:32.120 not to collect that kind of data for privacy reasons.
00:29:35.800 And so when the Nazis arrived in France and asked, you know, where are the Jews?
00:29:39.360 They said, you know, we have no idea how many Jews we have, let alone where they live.
00:29:43.060 So good luck with that.
00:29:44.720 And the Nazis had to depend on either Jewish people turning themselves in or having neighbors
00:29:50.300 turn them in, which was very inefficient.
00:29:53.740 And the result is that in the Netherlands, the Nazis found and killed 73% of the Jewish
00:29:58.060 population.
00:29:58.680 And in France, 25% of the Jewish population.
00:30:01.480 And the difference is hundreds of thousands of people.
00:30:03.900 And if this has happened in Europe already, we really have to make sure that we don't make the same mistake again.
00:30:11.560 There's a reason why privacy is in the Declaration of Human Rights.
00:30:15.440 And we have forgotten that lesson and we have to relearn it pretty quickly before something really bad happens again.
00:30:22.480 And there's one story in particular in the Second World War that I think is very illustrative of what we need to avoid.
00:30:28.860 And that's because the Nazis, one of the first things they did when they invaded cities was go
00:30:34.260 to the registry because that's where the data was held. There was a resistance cell in Amsterdam
00:30:38.860 that wanted to destroy the registry in 1943. So they went into the building, they sedated the
00:30:44.100 guards to spare their lives, they set fire to the records and they had a deal with the fire
00:30:48.440 department that they were going to arrive late and that they were going to use more water than
00:30:52.960 necessary to destroy as many records as possible. And unfortunately they were very unsuccessful,
00:30:57.600 They only managed to destroy about 15% of the records.
00:31:00.800 They got caught and killed, and the Nazis found 70,000 Jews in Amsterdam.
00:31:06.620 And the Dutch had made two mistakes.
00:31:08.200 One, they had collected too much data that wasn't necessary to have a functional society.
00:31:13.040 And the second one is that they didn't have an easy way to delete that data in the event of an emergency.
00:31:17.740 And we are making both of those mistakes at a grand scale, and that should make us think twice.
00:31:24.020 Do you have a website, or do you plan to have a website?
00:31:27.000 Well, if you do, then EasyDNS are the company for you.
00:31:31.860 EasyDNS is the perfect domain name registrar provider and web host for you.
00:31:37.160 They have a track record of standing up for their clients, whether it be cancel culture,
00:31:42.320 de-platform attacks, or overzealous government agencies.
00:31:46.220 He knows a bit about that.
00:31:47.520 So will you in a second.
00:31:49.120 EasyDNS have rock solid network infrastructure and incredible customer support.
00:31:54.180 They're in your corner no matter what the world throws at you. Unless it's your ex-girlfriend, in
00:31:59.220 which case you're on your own. You'd know about that. Move your domains and websites over to
00:32:05.980 easyDNS right now. All you've got to do is head over to easyDNS.com forward slash triggered and use
00:32:11.960 our promo code, which is of course TRIGGERED as well, and you will get 50% off the initial purchase.
00:32:18.240 Sign up for their newsletter, AxisOfEasy, that tells you everything you need to know about technology, privacy and censorship.
00:32:27.980 Carissa, what would you say to those people who go, look, the cat's already out of the bag.
00:32:32.040 There's nothing that we can do. These companies are so huge, they're so powerful, they're so wealthy.
00:32:37.480 And we're just nothing but a collection of individuals.
00:32:40.460 I would say that's a lack of historical perspective.
00:32:43.580 We have had very powerful companies in the past.
00:32:45.720 We regulated from, you know, railroads to cars, airplanes, food, drugs.
00:32:50.260 There's no reason why we shouldn't be able to regulate tech.
00:32:52.820 In fact, tech is less complicated in many ways than finance.
00:32:56.440 And we regulated finance.
00:32:58.820 Furthermore, the United States, for instance, had to face Rockefeller on their own.
00:33:03.900 You know, France, the UK, they didn't care about Rockefeller.
00:33:06.860 But now there's so many countries that want to regulate big tech.
00:33:09.500 So, yes, they're very powerful.
00:33:10.660 But they're not more powerful, first, than all the collective users on which they depend, because they depend on our data.
00:33:17.060 And if we rebel and don't give them our data or obfuscate, they're in trouble.
00:33:21.540 And second, they're not more powerful than the collection of the UK, the US, Canada, South America, Australia, New Zealand, Europe, Japan.
00:33:31.600 We can gang up.
00:33:33.560 And don't you think part of the problem is as well is that people aren't getting angry about this?
00:33:38.980 Because a lot of the time, they don't really understand it.
00:33:42.820 They don't understand that it's happening to them, first of all.
00:33:45.480 And also, they don't understand the long-term implications of this data harvest.
00:33:50.760 Yeah, that is true.
00:33:51.480 And that's a challenge we have.
00:33:52.800 On the upside, more and more people, I mean, upside, downside, more and more people are
00:33:57.360 having bad experiences online.
00:33:58.880 So in a survey I did with a colleague, Siân Brooke, we found out that about 92% of people
00:34:04.060 have had some kind of bad experience related to privacy online.
00:34:07.260 Sometimes it's about getting your credit card number stolen.
00:34:09.460 Sometimes it's about being exposed on Twitter.
00:34:11.820 Sometimes it's about, you know, having an ex-boyfriend track you or something like that.
00:34:16.960 And the more we have bad experiences, the more we learn that actually privacy was important after all.
00:34:21.540 And the more we get angry that these companies are making us vulnerable.
00:34:25.240 But you're right that we need a lot more consciousness.
00:34:28.100 And in particular, we need to be much more aware of the political implications of privacy.
00:34:31.840 This is not an individual thing or it is an individual thing, but it's much more than that.
00:34:36.080 And do you think that, for instance, when you look at something like China, that is a warning for us all, particularly in this country?
00:34:43.980 It's such a huge warning. It's such an interesting case for so many reasons.
00:34:48.040 But one reason is that, you know, they claim that they are more ethical than we are because we're doing the same thing.
00:34:53.320 We're scoring people, but we're just not telling them. And there's actually some truth to that.
00:34:57.780 Of course, the flip side is that we have a lot more freedom.
00:35:02.360 And even if you're being scored, say, as a consumer, that's not going to impact whether you get, I don't know, a place at a university or a job.
00:35:10.740 So we're much more compartmentalized.
00:35:12.960 And that's part of what makes a liberal society liberal instead of being a totalitarian regime.
00:35:18.620 We have to make sure that we don't walk into that system.
00:35:21.160 We have to walk away from it.
00:35:22.420 And I am very worried that we're walking towards it.
00:35:24.920 But now a really interesting lesson from China is coming through right now as we speak.
00:35:30.840 And it's this: one argument from the West for not regulating data was that, you know, we have to have as much data as possible because, look at China, they're collecting so much data.
00:35:40.440 And if we don't collect enough data, we will be at a disadvantage.
00:35:43.720 And now it turns out that China is passing one of the strictest privacy laws in the world right now.
00:35:48.840 That's very interesting. There's a lot of speculation about why exactly are they doing it?
00:35:52.060 They're hurting their stock. And so why exactly are they doing it?
00:35:54.620 But one reason, I think, and it may not be the only reason, but one really important reason, is that they are realising how dangerous it is to have so much personal data stored, because it's a national security danger and the West is going to hack it sooner or later and make use of it.
00:36:12.300 So if even China is regulating privacy, we really have to get our act together quickly.
00:36:18.460 And how close do you think we are to that kind of dystopia, Carissa, where, you know, you go and get a loan and for reasons unbeknownst to yourself, you fail it. Your credit rating is excellent. You know, you've never been in debt. You've never defaulted on a payment yet because you don't know that you have a, you know, a genetic condition, but a company does, you're not going to get the money.
00:36:41.640 It's really hard to say. Part of me thinks that that probably is already happening, but it's just
00:36:48.240 very hard to tell, because everything's underground and we can't see it. So it wouldn't
00:36:53.380 surprise me that those things are already happening, but they just haven't come out to the
00:36:57.600 light. And then, you know, it's hard to predict, because if we get one
00:37:02.760 prime minister or another in a few years, it can really make a difference. And if the U.S., you know,
00:37:07.360 gets a different president. Things can change so quickly, for better and for worse.
00:37:13.680 Yeah, as we've seen in recent days. Carissa, moving on a little bit, I know that you are
00:37:18.680 an associate professor at the Institute for Ethics and AI. Do you mind if we talk a little
00:37:23.100 bit about AI more broadly? Yeah, sure. Because I remember I was listening to Elon Musk a few
00:37:29.980 weeks ago talking about, he was asked a bunch of broad questions, but one of them was like,
00:37:36.720 what terrifies you the most about the future.
00:37:39.320 And he was saying it's AI.
00:37:41.160 And I remember growing up as a kid,
00:37:43.940 I would read all the sci-fi, the Asimov, and the other stories,
00:37:48.580 a lot of which were really exploring the implications
00:37:51.080 of having artificial intelligence.
00:37:53.900 And again, from a point of view,
00:37:55.840 not of some evil plan to control the world or whatever,
00:38:00.020 but actually from a misplaced desire to make things better,
00:38:03.440 where people would give AI particular targets,
00:38:06.240 It's like, let's make human beings happier.
00:38:08.320 And before you know it, everyone is suddenly hooked up to like a heroin drip because that's
00:38:12.680 the way you make people happy.
00:38:13.940 Do you know what I mean?
00:38:14.780 Like, should we be worried about the increasing influence of artificial intelligence in our
00:38:21.600 lives?
00:38:22.920 Yeah, we should definitely be worried.
00:38:25.160 So research in AI, in the ethics of AI, I mean, one way to categorize it, there are
00:38:30.200 many ways, but one way is it broadly divides into two.
00:38:32.780 The people who are really worried about what is often called super intelligence, so the point at which AI becomes more intelligent than human beings, and then what are they going to do? Are they going to get us hooked in heroin or are they going to take over the world and so on?
00:38:48.780 And the people who are worried about more short-term problems, like, you know, the bias that algorithms are instituting and so on.
00:38:56.360 I think that those concerns about AI taking over the world and what are they going to do are legitimate and we should be thinking about it.
00:39:03.920 But my own take is that the short-term risks are much more real, tangible, and they're just like, they're here.
00:39:11.220 They're here and they can lead to a dystopia as bad as, you know, the AI that gets us hooked on heroin.
00:39:17.320 So I think it would be as bad to have this complacent attitude towards AI and data and get into a totalitarian regime that we can't resist because we're being surveilled all the time as it would be for robots to become advanced enough to take over the world.
00:39:31.700 I think we're pretty far from that still.
00:39:34.540 Give us some examples of the smaller, more short-term stuff that you're concerned about with artificial intelligence.
00:39:42.220 So one concern is, you know, how is AI impacting things like equality?
00:39:48.680 So one thing, you know, data and AI are very closely related because the most successful kind of AI at the moment uses a lot of data and much of that is personal data.
00:39:57.240 So it's hard to differentiate those two.
00:39:59.980 So one question is, you know, is it OK for us not to be treated as equals anymore, but to be treated on the basis of our data?
00:40:05.760 And what does that say about our society and what implications does that have?
00:40:09.200 Another very important issue is bias. It's really hard to have an AI that is not biased. And it's really difficult to identify the bias because, you know, when we're not aware of it. And there's a lot of evidence that shows that AI is intensifying inequalities that were already there. So, you know, we've been, as a society, we've been sexist for a long, long time. We're pretty good at it. But AI can make us even more sexist and without us realizing. And the same with racism.
00:40:39.200 Another issue that I'm worried about...
00:40:41.280 Sorry, Carissa, how can it do that?
00:40:43.420 How can...
00:40:44.160 Are we going to design a robot that is, you know...
00:40:46.960 Just goes around wolf whistling.
00:40:49.920 So there are many reasons why it can do that.
00:40:52.260 But to give you a couple of examples,
00:40:54.440 so a few years ago, Amazon used an algorithm to...
00:40:58.160 They tested an algorithm to try to hire people
00:41:01.280 and to filter candidates
00:41:02.320 because, you know, these big companies
00:41:03.760 get thousands and thousands and thousands of applications.
00:41:05.920 And they realized that the algorithm was being sexist. And the reason it was being sexist is
00:41:11.540 because in the past, say in the past 10 years, Amazon has tended to favor men. And so the kind
00:41:16.660 of ideal candidate for the algorithm is, you know, the white guy. And if, you know, a CV had things like
00:41:24.580 somebody played in the women's soccer team in high school, say, as an example, the algorithm
00:41:29.920 goes through the successful candidates that have applied to Amazon in the past, and nobody has been
00:41:34.740 in a women's soccer game, and so it discriminates against that person for no good reason.
00:41:40.000 And we have to be really vigilant to pick those up. But another example is, there's a lot of
00:41:46.740 sexism in medicine. And one of the reasons is that most of the data we have comes from men, so
00:41:54.240 the kind of paradigm of medicine is a white male. And it turns out that women in many cases
00:42:02.260 are very different for different things. So for instance, our neural pathways for processing pain
00:42:07.060 are different. So it turns out that painkillers are much more effective for men. And so it's very
00:42:12.660 easy to be sexist without the intention of being sexist. It's not like we're trying to design a
00:42:17.500 sexist algorithm. I think we wouldn't be able to do it as well if we tried it. It's more that
00:42:23.220 sexism is really baked into how we have seen and experienced the world and how we have hired and
00:42:28.120 treated people. And because algorithms work on historical data, they tend to reproduce that
00:42:33.900 reality. That's so interesting. That really is very interesting. So it's actually reproducing
00:42:41.960 inequality that may not actually exist in our minds today. I think very few people would choose
00:42:48.920 deliberately to discriminate against women now in employment. But the algorithm is actually more
00:42:54.340 sexist than human beings, potentially, because it's replicating data from 30 years ago. Wow,
00:43:00.720 that really is interesting, isn't it? And it's fascinating because it shows us that it can
00:43:05.440 actually stop progress, social progress and political progress. It's kind of regress in
00:43:12.960 many ways. And one of the important functions of forgetting is to progress, because when you
00:43:18.020 forget, you kind of let go of the past and you're able to see the future with fresh eyes.
00:43:23.020 In the past, we used to forget all the time because our memories are not perfect and because
00:43:27.720 recording was very effortful and very expensive.
00:43:30.700 And because even when we recorded, say you put something on paper, when the paper didn't
00:43:34.180 have acid, it just fell apart after a few years or there was a fire or a flood or something.
00:43:39.580 But now the whole economy of remembering and forgetting has gone upside down.
00:43:43.980 And now we're remembering everything and just by default collecting all the data we
00:43:48.020 can and storing it indefinitely.
00:43:50.120 And that's really unwise.
00:43:51.660 And one reason it's unwise is because it creates societies that are very harsh when they never forget, but also because it kind of stops progress.
00:43:59.840 And what about the other argument? Sorry, Francis, just to finish on this very point.
00:44:03.680 You know, we talk about equality, but you might say from a kind of ruthless Russian mentality like mine, you could argue, couldn't you, that, well, look, these algorithms actually, in many cases, I take the point about the disparity when it comes to sex.
00:44:20.480 But in many cases, they're making better decisions. So, for example, if you are about to get a
00:44:26.840 mortgage and you do actually have a condition which means you're highly likely to die in the
00:44:30.840 next five years, shouldn't the lender know that before they give you a loan? Isn't this just a way
00:44:36.260 of making better decisions and being fairer? Because we never really had equality, like the
00:44:42.220 fact that if you don't pay back your loans, you shouldn't be treated as equal by your bank to
00:44:48.320 someone who does. Do you see what I'm saying? And the algorithm is just amplifying the ability to
00:44:52.900 make accurate decisions, some people might argue. Yeah, so I'm reading an interesting book at the
00:44:57.800 moment called The Tyranny of Merit, about the kinds of effects that it has in society to
00:45:03.300 think that, you know, people are self-made and we should judge them accordingly,
00:45:09.720 and so I recommend that. But the first thing to say is, unless we have randomized control trials
00:45:15.820 that show that these algorithms are actually making better decisions and getting
00:45:21.400 better returns, then, you know, we shouldn't believe it. We shouldn't believe it just because
00:45:26.720 somebody says it; we need randomized control trials. So that's the first thing. But secondly,
00:45:30.100 let's say that they do work like that, and let's say that they are more effective. We also have to
00:45:34.320 ask ourselves, what kind of society do we want to live in? Do we want to live in a society in which,
00:45:38.400 because somebody gets cancer, they can't get a loan? Shouldn't we have rules for that, just
00:45:43.040 to not be like a super harsh society
00:45:45.040 in which it's really hard to thrive
00:45:47.380 and it becomes kind of unlivable.
00:45:49.860 Or maybe, you know, the government then should step in
00:45:52.440 and give those loans to people who might have more trouble
00:45:56.280 or, I don't know, we have to do something.
00:45:58.420 But we just can't leave it to the market
00:46:00.060 and think that we're going to end up
00:46:01.620 with this wonderful, fair society because we're not.
00:46:04.660 And moving forward, if AI continues the way it is doing
00:46:09.040 and the way it's been predicted to do,
00:46:10.880 that's going to have a huge effect on the labour market, isn't it?
00:46:14.440 There's going to be significant swathes of jobs
00:46:16.620 that are just going to be non-existent.
00:46:18.720 Yeah, there's a huge controversy about it.
00:46:20.500 Some people think that we're going to lose jobs
00:46:22.720 and we're never going to recover them.
00:46:24.460 Other people think that, no, no, no, in the past this has happened before
00:46:27.220 and what happens is that people just change their jobs
00:46:30.440 so when technology gets developed, you know,
00:46:33.360 in farming you used to take care of the cows and the horses
00:46:36.780 and now you take care of the tractors.
00:46:39.180 But some people think that because AI is a different kind of technology in the sense
00:46:42.860 that we're trying for it to be autonomous, for it to not need input, then we are going
00:46:48.020 to lose a lot of jobs.
00:46:49.420 Then there are people who think that actually the best kind of AI works in tandem with a
00:46:54.440 human being.
00:46:55.440 AI is really stupid on its own.
00:46:56.840 If you've had a conversation with Siri or Alexa, you will have noticed that.
00:47:01.140 So many people think that a more realistic future is one in which we design AI to work
00:47:07.060 with human beings. So yeah, that's a huge controversy, and I'm
00:47:12.920 kind of neutral about it. I'm not confident enough to say this will go one
00:47:18.260 way or the other. One of the fascinating things that I remember reading about recently is the
00:47:24.880 idea that in order to program AI to make decisions, for example in driverless cars, you're going to have
00:47:31.660 to start making philosophical decisions about right and wrong. Because let's say a car that is
00:47:38.040 not driven by a person but by AI has to make a decision: do you crash into that car
00:47:42.920 or do you crash into this car? Because you're trapped: do you kill three people there, or do
00:47:47.700 you kill one person there? How are we as humanity going to resolve some of these
00:47:52.940 moral dilemmas that pop up? That's very difficult, because on the one hand you're right that we have to
00:47:58.240 make sure that we understand that whenever we make a tool, it has values within it. Technology
00:48:04.440 is not neutral. And we need to make sure that the right values are in place so that we end up with
00:48:09.600 the kind of society that we want. But it turns out that ethics, arguably, is something that you
00:48:14.140 cannot code into a computer. A normal human being doesn't act morally in a way
00:48:21.720 that can be coded easily into a program. So there are many questions about how to deal with this.
00:48:27.080 One way to deal with it is to try to teach AI as if we were teaching a small child. So just like
00:48:35.840 we teach AI to recognize images by giving it, say, millions of images of what is a dog and
00:48:41.000 what is a horse or whatever it is, we should just give it millions of cases of what's
00:48:45.360 the right thing to do and what's the wrong thing to do.
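To make the analogy concrete: the proposal is ordinary supervised learning, with hand-labelled moral cases standing in for labelled images. A minimal sketch, assuming scikit-learn; the scenarios, labels, and model choice are invented for illustration and describe no real system:

    # Toy illustration of "teaching morality like image recognition":
    # ordinary supervised text classification over hand-labelled cases.
    # The scenarios and labels below are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    scenarios = [
        "return the lost wallet to its owner",
        "lie to a customer about hidden fees",
        "share credit with a colleague who helped",
        "read a partner's messages without consent",
    ]
    labels = ["right", "wrong", "right", "wrong"]  # human judgments

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(scenarios, labels)

    # The model only mimics the patterns in its labels -- it has no
    # understanding of *why* an action is right or wrong.
    print(model.predict(["keep a found wallet and say nothing"]))

The last comment is the crux: such a model imitates its labels, which is exactly where the next objection bites.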
00:48:49.380 Now, other people say, well, but actually people are really bad at being moral. I mean, we see examples of immorality every day, all the time.
00:48:56.600 So maybe we, you know, we should be more ambitious and actually have an AI that is perfect,
00:49:01.360 not just like human level, which is kind of pathetic, but that is actually virtuous.
00:49:07.060 But then, you know, what kind of morality do we accept?
00:49:10.220 So just to give an example, and something I've been thinking about lately,
00:49:13.980 utilitarianism is a very attractive moral philosophy for many people.
00:49:17.160 The main idea of utilitarianism is that you should maximize good consequences.
00:49:21.060 You should maximize utility.
00:49:22.240 How we cash out utility can vary, you know,
00:49:24.740 but let's say we should maximize well-being, something like that. And that sounds great.
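In code, that idea is literally an argmax: score each available action by the total well-being it produces, summed over everyone affected, and pick the maximum. A toy sketch; the actions and numbers are invented:

    # Toy utilitarian decision rule: pick whichever action maximizes
    # total well-being, summed over everyone affected.
    # The actions and well-being scores below are invented examples.
    from typing import Dict

    def total_wellbeing(effects: Dict[str, float]) -> float:
        """Sum the well-being change this action causes each person."""
        return sum(effects.values())

    def choose(actions: Dict[str, Dict[str, float]]) -> str:
        """Return the action with the highest total well-being."""
        return max(actions, key=lambda a: total_wellbeing(actions[a]))

    options = {
        "help a stranger": {"stranger": 5.0, "me": -1.0},
        "pay for my kid's college": {"my kid": 3.0, "me": 1.0},
        "donate to charity": {"several strangers": 8.0, "me": -2.0},
    }
    print(choose(options))  # -> "donate to charity"

Note that nothing in choose() gives friends or family any extra weight, which is exactly the point she makes next.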
00:49:29.460 And if you ever meet a human utilitarian, they're very imperfect, right? Because
00:49:35.760 they think they should do one thing, but it's very hard to do. So, for instance, they think that in
00:49:40.180 order to maximize well-being they should donate most of their money to charity, to the poor,
00:49:45.160 because that's going to save lives and that's going to be better for the world. But when they
00:49:48.520 have a family and they have a kid, it's really hard to do that, and they tend to prioritize their kid
00:49:52.720 and pay for college. And the way they explain that is: I'm imperfect, I know
00:49:57.760 I shouldn't do that, but I have these psychological constraints as a human being,
00:50:02.940 and I can't help myself, and I'm sorry about it. For them, that's a bad thing. For most
00:50:07.520 other people, it's kind of a relief that they can't be a true utilitarian, because a true
00:50:12.420 utilitarian would always be strategic, right? So when they are your friend, they would only do
00:50:16.680 what maximizes well-being in the world, so they wouldn't favor you as a friend. And so I'm really
00:50:21.900 relieved that I don't have a friend who's a perfect utilitarian. But with AI, we could turn
00:50:26.100 it into a perfect utilitarian. They don't have those psychological constraints of loving someone
00:50:30.680 and being partial to someone. They could be a perfect utilitarian. And that's kind of a frightening
00:50:34.980 thought. Well, you make it more dystopian if you want, because what if the truth is, let's say,
00:50:41.600 that we know statistically most crimes in society are committed by a small number of people.
00:50:47.540 Many of them have psychopathic disorders.
00:50:49.620 Now, what if you were a perfect AI utilitarian
00:50:51.940 and you wanted to maximize well-being by, I don't know,
00:50:54.420 killing all those people, right?
00:50:55.840 And then, you know, you kill off 2% of the population
00:50:58.000 and everybody else is really happy.
00:51:00.000 We get rid of Boris Johnson.
00:51:02.660 Yeah, not to mention that, you know,
00:51:04.220 the easiest way to get rid of suffering
00:51:05.940 is just to explode the world, right?
00:51:09.500 So, yeah, we have to make really, really sure
00:51:11.660 that we're sensible in how we program AI.
00:51:15.360 But I think at the moment we should be focusing more on the short-term challenges than the long-term ones.
00:51:22.240 Because they're so urgent and it's not going well.
00:51:25.520 Because it also opens a whole Pandora's box when it comes to the law.
00:51:32.040 For instance, if a driverless car malfunctions and swerves into the road,
00:51:37.620 or onto the sidewalk or pavement, I should say, and kills six people, who is ultimately responsible for that?
00:51:43.020 Because it's not a human being.
00:51:44.820 You know, it's very simple if a human being does that: you go, well, the human being is in charge of
00:51:49.160 the technology. But who is in charge of that technology? Is it the person programming the
00:51:53.700 algorithm? Is it the company? Was there a malfunction? Was it the factory that produced it?
00:51:58.900 Doesn't this open a whole minefield legally? It does. It creates a lot of what are called
00:52:04.200 responsibility gaps, in which something goes terribly wrong and everybody can say it wasn't me,
00:52:08.700 and then we don't have incentives for people to be careful with these things, because nobody pays
00:52:14.180 the price. And, you know, that example with the car is a very good one, but there are so many
00:52:18.420 others. So, for instance, a few years ago, the Michigan Unemployment Agency used an algorithm
00:52:23.520 to detect fraud, and it accused thousands of people, about 34,000 people, of fraud falsely.
00:52:30.700 They were false accusations. So these were people who were in a very precarious situation already.
00:52:36.920 They took their checks away, and these are people who lost their families, lost their homes;
00:52:41.000 some of them committed suicide. Two years later, they realized that the algorithm got it wrong 93
00:52:46.020 percent of the time. And who goes to jail? No one, because nobody knows who's responsible. Was it that
00:52:51.660 they set the objective wrong? Was it that they hired the wrong programmer? Which
00:52:56.380 programmer? Because a lot of people work on these algorithms; sometimes they have millions of lines of
00:53:00.860 code. So one of the things we need to do, before an AI project starts, is design a
00:53:07.520 chain of responsibility and say exactly who's responsible for what, before anything goes wrong,
00:53:12.880 so that when things go wrong, we actually know who to turn to.
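One way to cash that out: the chain of responsibility becomes a concrete artifact agreed before launch, mapping each failure mode to an accountable role so that nothing falls into a gap. A hedged sketch of what such a record might look like; the roles and failure modes are invented:

    # Sketch of a pre-registered responsibility chain for an AI project.
    # The roles and failure modes below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Responsibility:
        failure_mode: str   # what can go wrong
        owner_role: str     # who answers for it
        escalation: str     # who they answer to

    CHAIN = [
        Responsibility("false fraud accusation", "model owner", "agency director"),
        Responsibility("training-data bias", "data steward", "model owner"),
        Responsibility("unreviewed deployment", "release manager", "agency director"),
    ]

    def who_answers(failure_mode: str) -> str:
        """Look up the accountable role for a failure, defined in advance."""
        for r in CHAIN:
            if r.failure_mode == failure_mode:
                return r.owner_role
        # No gap allowed: an unmapped failure defaults to the top of the chain.
        return "agency director"

    print(who_answers("false fraud accusation"))  # -> "model owner"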
00:53:19.500 Carissa, are you optimistic about the future? I'm cautiously optimistic. I think it can go both ways. I think we're in a 50-50
00:53:26.720 kind of place, but I think we can turn things around, and I think we have done in the past. So,
00:53:33.320 you know, one example I gave in the book is how we managed to recover the ozone layer. We were
00:53:38.520 really in danger and about to lose it. And we were using CFCs all over the place. And we learned what
00:53:45.120 we were doing, we regulated, and now the ozone layer is recovering, and it's going to completely
00:53:50.140 recover in a few years. So I think, you know, I think it's possible. I think we can do it. And
00:53:54.320 my only hope is that we do it in time before something really bad happens. I think eventually
00:53:58.900 we're going to get it right. The question is, are we going to get it right now before something
00:54:02.540 really bad happens, or are we going to wait for something like, you know, the Nazis using personal
00:54:07.360 data for genocide in the Second World War, in the West, before we get our act together?
00:54:13.400 So just before we finish: we're talking, and I'm miserable and a pessimist, so that's my
00:54:19.120 position. You talk about something really bad happening, and you've given the example of the Nazis,
00:54:24.880 and a lot of people will then say, well, look, that's a once-in-
00:54:30.240 however-many-generations thing. Could you give other examples of what this type of misuse of
00:54:36.380 AI and algorithms could actually lead to? Sure. So one example of personal data being misused
00:54:42.740 was in the Rwanda genocide in 1994 as well. Personal data was misused in the Second World
00:54:49.480 War with Japanese people in the United States. But also more recently with algorithms, the best
00:54:56.380 example is probably China using it against the Uyghurs. And this is a minority in China. They
00:55:01.360 are usually Muslims. And China has developed algorithms to detect facial features that are
00:55:09.400 racially distinctive. And it has put these people in camps, and they are essentially persecuted. And there's
00:55:16.860 no reason why that couldn't happen in the West. Another example is Hong Kong. Hong Kong was a
00:55:21.720 very high-tech society that was really in favor of democracy, and suddenly they realized
00:55:28.380 that a lot of the tech they had developed was being used against them when China
00:55:33.360 changed its policies and wanted to gain some control back. So we had these really impressive
00:55:38.380 images of people trying to take down facial recognition cameras in the street, or people
00:55:43.580 queuing for blocks and blocks to buy their subway tickets with cash instead of using the machines that
00:55:50.480 were much more prevalent. So that's one example in which, once things go wrong, it's really hard
00:55:56.860 to go back. So we have to make sure that we have a system in place as a buffer in case things go
00:56:02.760 wrong. And I was going to say as well, in terms of personal things that we can do, I understand
00:56:08.780 the regulatory stuff that you're talking about. As individuals, what can we do to protect ourselves?
00:56:15.580 There's a lot we can do, because these companies depend on us and our data. So choose privacy-
00:56:19.940 friendly apps and devices. Don't ever buy a device from a company like Google that
00:56:26.580 earns its keep through collecting personal data. Instead of using WhatsApp, use
00:56:30.840 Signal. Instead of using Google Search, use DuckDuckGo. Instead of using Gmail, use something like
00:56:37.360 ProtonMail. Instead of using Dropbox, use Jottacloud. There are many alternatives out
00:56:45.200 there. Protect other people's privacy. Contact your political representatives and tell them
00:56:49.360 that you care about this. And in general, let's create a culture of privacy and not a
00:56:54.620 culture of exposure. And don't use that facial recognition software. Isn't that one that you
00:57:00.040 should absolutely avoid with the apps? Yeah, if you can, definitely avoid biometrics whenever
00:57:05.760 possible. There we go. Carissa, thank you so much. Your book is called Privacy is Power. Thanks for
00:57:10.960 coming on. Where can people find your other work online if they want to follow you after this
00:57:14.560 interview? I rant on Twitter a lot, at Carissa Véliz, and they can find me on my website. It's
00:57:20.760 just my name dot com. Fantastic. Thank you so much. We'll ask a couple of questions for our Locals supporters,
00:57:26.420 but for now, thank you for coming on, and thank you guys for watching at home. We'll see you very
00:57:30.980 soon with another brilliant interview like this one, or our show. They all go out at 7 p.m. UK time,
00:57:35.740 which is 2 p.m. Eastern. Take care and see you soon, guys. We hope you've enjoyed this incredible
00:57:42.720 interview. Remember to subscribe and hit the bell button so that you never miss another fantastic
00:57:48.720 episode. And if you believe that the work we do here at TRIGGERnometry is important,
00:57:53.540 support us by joining our Locals community using the link below.