Making Sense - Sam Harris - April 14, 2017


#71 — What Is Technology Doing to Us?


Episode Stats

Length

45 minutes

Words per Minute

193

Word Count

8,858

Sentence Count

464


Summary

Tristan Harris has been called by The Atlantic the closest thing Silicon Valley has to a conscience. He was a design ethicist at Google, and then left the company to start Time Well Spent, a movement whose purpose is to align technology with our deepest interests. Tristan was recently profiled on 60 Minutes. He graduated from Stanford with a degree in computer science, having focused on human-computer interaction, and has worked at various companies, including Apple, Wikia, and Google. In this episode, Sam and Tristan discuss the ethics of human persuasion, what information technology is doing to us, and what it is allowing us to do to ourselves. The podcast runs no ads and is made possible entirely through the support of subscribers; if you enjoy what's being done here, please consider becoming one at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.440 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.140 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.520 Today I'm speaking with Tristan Harris.
00:00:50.660 Tristan has been called by The Atlantic Magazine the closest thing that Silicon Valley has to
00:00:55.660 a conscience.
00:00:56.180 He was a design ethicist at Google, and then left the company to start a foundation called
00:01:03.920 Time Well Spent, which is a movement whose purpose is to align technology with our deepest
00:01:10.560 interests.
00:01:11.880 Tristan was recently profiled on 60 Minutes.
00:01:14.600 That happened last week.
00:01:16.240 He's worked at various companies, Apple, Wikia, Apture, and Google.
00:01:20.680 And he graduated from Stanford with a degree in computer science, having focused on human-computer
00:01:27.760 interaction.
00:01:28.960 We talk a lot about the ethics of human persuasion, and about what information technology is doing
00:01:35.200 to us and allowing us to do to ourselves.
00:01:37.080 This is an area which I frankly haven't thought much about, so listening to Tristan was a bit
00:01:43.520 of an education.
00:01:45.160 But needless to say, this is an area that is not going away.
00:01:48.540 We are all going to have to think much more about this in the years ahead.
00:01:52.960 In any case, it was great to talk to Tristan.
00:01:54.500 Tristan, I have since discovered that I was mispronouncing his name.
00:01:58.540 Apologies, Tristan.
00:02:01.740 Sometimes having a person merely say his name in your presence proves insufficient.
00:02:08.580 Such are the caprices of the human brain.
00:02:11.800 But however you pronounce his name, Tristan has a lot of wisdom to share.
00:02:16.680 And he's a very nice guy as well.
00:02:19.060 So, meet Tristan Harris.
00:02:24.500 I am here with Tristan Harris.
00:02:30.000 Tristan, thanks for coming on the podcast.
00:02:31.720 Thanks for having me, Sam.
00:02:32.660 So, we were set up by some mutual friends.
00:02:35.740 We have a few friends and acquaintances in common.
00:02:38.480 And you are in town doing an interview for 60 Minutes.
00:02:41.880 Yep.
00:02:42.020 Right?
00:02:42.360 So, I was actually, I confess, I was not aware of your work.
00:02:45.360 I think I'd seen the Atlantic article that came out on you recently.
00:02:48.820 But I think I had only seen it.
00:02:50.500 I don't think I had read it.
00:02:51.600 But what you're doing is fascinating and incredibly timely, given our dependence on this technology.
00:02:56.980 And I think this conversation we're going to have, I'm imagining it's going to be something like a field guide to what technology is doing to the human mind.
00:03:06.800 I think we'll talk about how we can decide to move intentionally in that space of possibilities in a way that's healthier for all of us.
00:03:15.360 And this is obviously something you're focused on.
00:03:17.040 But to bring everyone up to speed, because even I was not up to speed until just a few days ago, what is your background?
00:03:25.240 And I've heard you've had some very interesting job titles at Google, perhaps among other places.
00:03:29.980 One was the resident product philosopher and design ethicist at Google.
00:03:35.640 So, how did Tristan Harris get to be Tristan Harris?
00:03:39.260 And what are you doing now?
00:03:41.140 Well, first, thanks for having me, really.
00:03:42.760 It's an honor to be here.
00:03:43.960 I'm a big fan of this podcast.
00:03:46.380 So, yeah, my role at Google, that was an interesting name.
00:03:50.380 So, design ethicist and product philosopher, I was really interested in essentially when a small number of people in the tech industry, you know, influence how a billion people think every day without even knowing it.
00:04:04.200 If you think about your role as a designer, how do you ethically steer a billion people's thoughts, framings, cognitive frames, behavioral choices, basically the schedule of people's lives?
00:04:16.040 And so much of what happens on a screen, even though people feel as if they're making their own choices, will be determined by the design choices of the people at Apple and Google and Facebook.
00:04:24.920 So, we'll talk, I'm sure, a lot more about that.
00:04:26.840 I guess prior to that, when I was a kid, I was a magician very early.
00:04:30.980 And so, I was really interested in the limits of people's minds that they themselves don't see, because that's what magic is all about.
00:04:37.640 That there really is a kind of band of attention or short-term memory or ways that people make meaning or causality that you can exploit as a magician.
00:04:48.680 And that had me fascinated as a kid, and I did a few little magic shows.
00:04:53.680 And then, flash forward, when I was at Stanford, I did computer science, but I also studied as part of a lab called the Persuasive Technology Lab with BJ Fogg,
00:05:03.120 which basically taught engineering students this kind of library of persuasive techniques and habit-formation techniques in order to build more engaging products.
00:05:14.160 Basically, different ways of taking advantage of people's cognitive biases so that people fill out email forms,
00:05:20.420 so that people come back to the product, so that people register a form, so that they fill out their LinkedIn profiles, so that they tag each other in photos.
00:05:27.000 And I became aware, when I was at Stanford doing all this, that there was no conversation about the ethics of persuasion.
00:05:35.340 And just to ground how impactful that cohort was, in my year in that class in the Persuasive Technology Lab,
00:05:43.480 my project partners in that class and very close friends of mine were the founders of Instagram.
00:05:48.840 And many other alumni of that year in 2006 actually went on to join the executive ranks at many companies we know, LinkedIn and Facebook.
00:05:57.000 When they were just getting started.
00:05:59.100 And again, never before in history have such a small number of people with this tool set influenced how people think every day by explicitly using these persuasive techniques.
00:06:09.360 And so at Google, I just got very interested in how we do that.
00:06:13.680 And so you were studying computer science at Stanford?
00:06:15.820 Originally computer science, but I dabbled a ton in linguistics and actually symbolic systems.
00:06:20.960 Yeah.
00:06:21.240 Because you were at Stanford eventually.
00:06:23.120 Yeah, yeah.
00:06:23.660 So that was a great major at Stanford.
00:06:25.460 I was in the philosophy department.
00:06:26.840 There was overlap between philosophy and computer science for symbolic systems.
00:06:30.820 I think Reid Hoffman was one of the first symbolic systems majors at Stanford.
00:06:35.000 Yeah, yeah.
00:06:35.380 So yeah, so persuasion, the connection to magic is interesting.
00:06:38.660 There's an inordinate number of magicians and fans of magic in the skeptical community as well, perhaps somewhat due to the influence of James Randi.
00:06:48.620 But I mean, magic is really the ultimate act of persuasion.
00:06:53.040 You're persuading people of the impossible.
00:06:55.580 So you see a significant overlap between the kinds of hacks of people's attention that magicians rely on and our new persuasive technology.
00:07:04.940 Yeah, I think, well, I think if you just abstract away what persuasion is, it's the ability to do things to people's minds that they themselves won't even see how that process took place.
00:07:16.240 And I think that parallels your work in a big way in that beliefs do things.
00:07:20.800 To have a belief shapes the subsequent experience of what you have.
00:07:24.420 I mean, in fact, in magic there are principles where, you know, you kind of want to start bending reality and creating these aha moments so that you can do a little hypnosis trick later, for example, so that people will be more likely to believe, having gone through a few things that have kind of bent their reality into being more superstitious or more open.
00:07:41.780 And there's just so many ways of doing this that most people don't really recognize.
00:07:46.840 I wrote an article called How Technology Hijacks Your Mind that ended up going viral to about a million people.
00:07:53.020 And it goes through a bunch of these different techniques.
00:07:55.260 But yeah, that's not something people mostly think about.
00:07:57.760 You also said in the setup for this interview that you have an interest in cults.
00:08:01.660 Yeah.
00:08:01.860 What's that about?
00:08:03.060 And to what degree have you looked at cults?
00:08:06.440 Well, I find cults fascinating because they're kind of like vertically integrated persuasive environments.
00:08:13.660 Instead of just persuading someone's behavior or being the design of a supermarket or the design of, you know, a technology product, you are designing the social relationships, the power dynamic between a person standing in front of an audience.
00:08:31.540 You can control many more of the variables.
00:08:34.420 And so I've done a little bit of sort of undercover investigation of some of these things.
00:08:39.020 You mean actually joining a cult or?
00:08:40.800 No, not joining, but...
00:08:42.780 Showing up physically and...
00:08:44.140 Showing up physically.
00:08:45.420 Many of these things are...
00:08:47.540 None of these cults ever would call themselves cults.
00:08:50.380 I mean, many of them are simply workshops, sort of new agey style workshops.
00:08:54.160 But you start seeing these parallels in the dynamics.
00:08:56.420 Do you want to name any names?
00:08:57.980 Do I know these groups?
00:08:58.820 I might prefer not to at the moment.
00:09:00.460 We'll see if we get there.
00:09:01.220 Okay.
00:09:02.440 You have a former girlfriend who's still in one?
00:09:04.560 No.
00:09:04.960 But I did actually, one of the interesting things is the way that people that I met in those cults who eventually left and later talked about their experience and the confusion that you face.
00:09:16.520 And I know this is an interest you've had.
00:09:18.080 The confusion that you face when you've gotten many benefits from a cult.
00:09:23.840 You've actually deprogrammed, let's say, early childhood traumas or identities that you didn't know you were holding or different ways of seeing reality that they helped you, you know, get away from.
00:09:34.780 And you get these incredible benefits and you feel more free, but then you also realize that was all part of this larger persuasive game to get you to spend a lot of money on classes or courses or these kinds of things.
00:09:45.000 And so there's this confusion that I think people experience in knowing that they got all these benefits, but then also felt manipulated.
00:09:53.180 And they don't know in the sort of mind's natural black and white thinking how to reconcile those two facts.
00:09:58.780 I actually think there's something parallel there with technology.
00:10:01.200 Because, for example, in my previous work on this, a lot of people expect you, if you're criticizing how technology is designed, if you might say something like, oh, you're saying Facebook's bad, but look, I get all these benefits from Facebook.
00:10:12.660 Look at all these great things it does for me.
00:10:14.280 And it's because people's minds can't hold on to both truths: that we do derive lots of value from Facebook, and that there are many manipulative design techniques across all these products that are not really on your team to help you live your life.
00:10:30.200 And that distinction is very interesting when you start getting into what ethical persuasion is.
00:10:36.420 Yeah, it is a bit of a paradox because you can get tremendous benefit from things that are either not well-intentioned or just objectively bad for you or not optimal.
00:10:48.920 The ultimate case is you hear from all these people who survived cancer and cancer was the most important thing that ever happened to them.
00:10:56.460 So a train wreck can be good for you on some level because your response to it can be good for you.
00:11:02.100 You can become stronger in all kinds of ways, even by being mistreated by people.
00:11:07.860 But it seems to me that you can always argue that there's probably a better way to get those gains.
00:11:14.800 Frankly, to connect this with your work on The Moral Landscape: if you're a designer at Facebook or at Google, because of how frequently people turn to their phone, you're essentially scheduling these little blocks of people's time.
00:11:29.140 If I immediately notify you for every Snapchat message, and Snapchat is one of the most abusive, most manipulative of the technology products, when you see a message from a friend in that moment, urgently, that will cause a lot of people to swipe over and not just see that message, but then get sucked into all the other stuff that they've been hiding for you.
00:11:50.480 Right. And that's all very deliberate. And so if you think of it as, let's say you're a designer at Google and you want to be ethical and you're steering people towards these different timelines, you're steering people towards schedule A in which these events will happen or schedule B in which these other events will happen.
00:12:04.520 You know, back to your point, should I schedule something that you might find really challenging or difficult, but that later you'll feel is incredibly valuable?
00:12:12.960 Do I take into account the peak end effect where people will have a peak of an experience and an end? Do I take a lot of their time or a little bit of their time? Should the goal be to minimize how much time people spend on the screen?
00:12:23.580 What is the value of screen time and what are people doing that's lasting and fulfilling? And when are you steering people as a designer towards choices that are more shallow or empty?
00:12:34.200 Yeah.
00:12:34.520 You're clearly concerned about time, as we all should be. It's the one non-renewable resource. It's the one thing we can't possibly get back any of, no matter what other resources we marshal.
00:12:46.820 And it's clear that our technology, especially smartphone-based technology, is just a kind of bottomless sink of time and attention.
00:12:58.020 I guess there's the other element that we're going to want to talk about, which is the consequence of bad information or superficial information and just what it's doing to our minds.
00:13:09.800 I mean, the fake news phenomenon being of topical interest. But just the quality of what we're paying attention to is crucial. But the automaticity of this process, the addictiveness of this process, the fact that we're being hooked and we're not aware of how calculated this intrusion into our lives is.
00:13:29.140 So this is the thing that's missing is that people don't realize, because there's the most common narrative, and we hear this all the time, that technology is neutral and it's just up to us to choose how we want to use it.
00:13:39.680 And if it happens, if people do fake news or if people start wasting all their time, that that's just people's responsibility.
00:13:45.860 What this misses is that because of the attention economy, which is every basically business, whether it's a meditation app or the New York Times or Facebook or Netflix or YouTube, you're all competing for attention.
00:13:58.860 The way you win is by getting someone's attention and by getting it again tomorrow and by extending it for as long as possible.
00:14:05.580 So it becomes this arms race for getting attention. And the best way to get attention is to know how people's minds work so that you can basically push some buttons and get them to not just come, but then to stay as long as possible.
00:14:17.980 So there are design techniques like making a product more like a slot machine that has a variable schedule reward.
00:14:24.440 So, you know, for example, I know you use Twitter, you know, when you land on Twitter, notice that there's that extra variable time delay between like one and three seconds before that little number shows up.
00:14:34.560 You hit return and the page loads. There's this extra delay.
00:14:39.020 I haven't noticed that.
00:14:39.880 Yeah. Hold your breath. And then there's a little number that shows up for the notifications.
00:14:43.480 And that delay makes it like a slot machine. When you load the page, it's as if you're pulling a lever, and you're waiting, and you don't know how many there are going to be.
00:14:51.840 Are there going to be 500 because of some big tweet storm, or is there going to be... Doesn't it always say 99?
00:14:57.460 Well, not everyone is Sam Harris and has so many.
00:15:00.480 No, no. But I mean, isn't that always the maximum? It never says 500, right?
00:15:04.180 You know, I don't know, because again, I'm not you. I don't have as many followers.
00:15:08.260 Well, I think I can attest to that. I mean, mine is always at 99, so it's no longer salient to me.
00:15:13.180 Well, right. Which actually speaks to how addictive variable rewards work, which is the point is it has to be a variable reward.
00:15:18.120 So the idea that I push a lever or pull a lever and sometimes I get, you know, two and sometimes I get nothing and sometimes I get, you know, 20.
00:15:26.720 And this is the same thing with email.
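To make the variable-reward pattern described above concrete, here is a minimal Python sketch. It only dramatizes the two ingredients Tristan mentions, an unpredictable delay and an unpredictable notification count; the function name, numbers, and distributions are invented for illustration and are not any product's actual code.

```python
import random
from typing import Tuple

def simulated_badge_reveal(variable: bool = True) -> Tuple[float, int]:
    """Toy model of the 'pull-to-refresh as slot machine' idea:
    a short unpredictable delay, then an unpredictable payout.
    All numbers are hypothetical and chosen only for illustration."""
    delay_seconds = random.uniform(1.0, 3.0) if variable else 2.0
    count = random.choice([0, 0, 1, 1, 3, 12, 47]) if variable else 2
    return delay_seconds, count

# A few "pulls of the lever": the payout varies every time, which is the
# variable-reward schedule associated with persistent checking behavior.
for _ in range(5):
    delay, count = simulated_badge_reveal()
    print(f"badge appears after {delay:.1f}s with {count} notifications")
```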
00:15:28.520 Well, let's talk about what is the interest of the company, because I think most people are only dimly aware.
00:15:35.620 I mean, they're certainly aware that these companies make money off of ads very often.
00:15:41.040 They sell your data. So your your attention is their resource.
00:15:46.060 Yep. But take an example. I mean, so something like Twitter can't seemingly can't figure out how to make money yet.
00:15:51.320 But Facebook doesn't have that problem. Let's take the clearest case.
00:15:54.360 What is Facebook's interest in you as a user?
00:15:59.660 Well, obviously, there are many sources of revenue, but it all comes down to the same thing, whether it's data or everything else.
00:16:06.980 It comes down to advertising and time because of the link that more of your attention or more of your time equals more money.
00:16:15.440 They have an infinite appetite in getting more of your time.
00:16:19.680 So time on your newsfeed.
00:16:21.420 And this is literally what they want.
00:16:23.120 That's right. And this is literally how the metrics and the dashboards look.
00:16:25.780 I mean, they measure what is the current sort of distribution of time on site.
00:16:30.060 Time on site and seven-day actives are the currency of the tech industry.
00:16:34.680 And so the only other industry that measures users that way is sort of drug dealers, right, where you have the number of active users who log in every single day.
00:16:43.680 So that combined with time on site are the key principal metrics.
00:16:47.600 And the whole goal is to maximize time on site.
00:16:50.640 So Netflix wants to maximize how much time you spend there.
00:16:53.020 YouTube wants to maximize time on site.
00:16:54.960 They recently celebrated people watching more than a billion hours a month.
00:16:58.720 And that was a goal and not because there's anyone who's evil or who, you know, wants to steal people's time.
00:17:05.500 But because of the business model of advertising, there is simply no limit on how much attention that they would like from people.
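As a rough illustration of the two metrics just named, seven-day actives and time on site, here is a minimal Python sketch over a hypothetical session log. The field names and records are assumptions made up for the example, not any company's real schema or data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical session log: (user_id, session_start, seconds_on_site).
sessions = [
    ("alice", datetime(2017, 4, 10, 8, 0), 300),
    ("alice", datetime(2017, 4, 12, 22, 15), 1200),
    ("bob",   datetime(2017, 4, 13, 9, 30), 90),
]

def seven_day_actives(sessions, as_of):
    """Users with at least one session in the trailing seven days."""
    cutoff = as_of - timedelta(days=7)
    return {user for user, start, _ in sessions if cutoff <= start <= as_of}

def time_on_site(sessions):
    """Total seconds per user, the quantity an advertising business model pushes to maximize."""
    totals = defaultdict(int)
    for user, _, seconds in sessions:
        totals[user] += seconds
    return dict(totals)

now = datetime(2017, 4, 14)
print("7-day actives:", seven_day_actives(sessions, now))
print("time on site:", time_on_site(sessions))
```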
00:17:12.260 Well, they must be concerned about the rate at which you click through to their ads or are they not?
00:17:18.460 They can be concerned about that.
00:17:19.620 And ad rates are depreciating.
00:17:22.100 But because they can make money simply by showing you the thing, and there is some link between showing it to you and you clicking, you can imagine that with more and more targeted things, you are seeing things that are profitable.
00:17:32.720 And there's always going to be someone willing to pay for that space.
00:17:36.240 But this problem means that as this starts to saturate, because we only have so much time to even hold on to your position in the attention economy, what do you do?
00:17:44.580 You have to ratchet up how persuasive you are.
00:17:46.760 So here's a concrete example.
00:17:49.060 If you're YouTube, you need to add autoplay of the next video.
00:17:53.840 They didn't use to do that until the last year.
00:17:55.440 I always find that incredibly annoying.
00:17:56.940 Yep.
00:17:57.280 I wonder what percentage of people find that annoying.
00:17:59.440 Is it conceivable that that is still a good business decision for them, even if 99% of people hate that feature?
00:18:08.420 Well, it's the whole exit, voice, or loyalty thing.
00:18:10.720 If people don't find it so annoying that they're going to stop using YouTube, because the defense, of course, is...
00:18:15.280 There's no way they're going to stop using YouTube.
00:18:17.040 Of course not.
00:18:17.640 And that's what these companies often hide behind this notion that if you don't like it, you can stop using the product.
00:18:23.140 But while they're saying that, I mean, they have teams of thousands of engineers whose job is to deploy these techniques I learned at the Persuasive Technology Lab to get you to spend as much time as possible.
00:18:32.420 But just with that one example, let's say YouTube adds autoplay the next video.
00:18:36.580 So they just add that feature.
00:18:38.400 And let's say that increases your average watch time on the site every day by 5%.
00:18:43.880 So now they're eating up 5% more of this limited attention market share.
00:18:49.460 So now Facebook's sitting there saying, well, shoot, we can't let this go, you know, to dry.
00:18:54.740 So we've got to actually add autoplay videos to our newsfeed.
00:18:58.100 So instead of waiting for you to scroll and then click play on the video, they automatically play the video.
00:19:03.820 They didn't always used to do that.
00:19:05.120 Yeah, it's another feature I hate.
00:19:06.540 Yep.
00:19:06.760 And the reason, though, that they're doing that, what people miss about this is it's not by accident.
00:19:10.440 The web and all of these tools will continue to evolve to be more engaging and to take more time because that is the business model.
00:19:17.800 And so you end up in this arms race for essentially who's a better magician, who's a better persuader, who knows these backdoors in people's minds as a way of getting people to spend more time.
00:19:27.280 Now, do you see this as intrinsically linked to the advertising model of revenue?
00:19:33.420 Or would this also be a problem if it was a subscription model?
00:19:38.260 It's a problem in both cases, but advertising exacerbates the problem.
00:19:42.040 So you're actually right that, for example, Netflix also maximizes time on site.
00:19:48.400 What I heard from someone through some back channels was that the reason they have to do this is they found that if they don't maximize, for example, they have this auto countdown watching the next episode.
00:19:59.280 Right.
00:19:59.560 So they don't have to do that.
00:20:00.760 Why are they doing that?
00:20:01.720 Strangely, I like that feature.
00:20:02.840 Try to figure that out, psychologists among you.
00:20:05.640 Well, and this is where it gets down to what is ethical persuasion, because that's one persuasive transaction where they are persuading you to watch the next video.
00:20:13.320 But in that case, you're happy about it.
00:20:15.160 I guess the reason why I'm happy about it there is that it is at least nine times out of 10.
00:20:20.560 It is, by definition, something I want to watch because it's in the same series as the series I'm already watching, right?
00:20:26.480 Whereas YouTube is showing me just some random thing that they think is analogous to the thing I just watched.
00:20:31.480 And then when you're talking about Facebook, or I guess I've seen this feature on embeds in news stories like in the Atlantic or Vanity Fair, the moment you bring the video into the frame of the browser, it'll start playing.
00:20:44.060 I just find that annoying, especially if your goal is to read the text rather than watch the video.
00:20:48.320 Yep. But again, because of the game theory of it, when one news website evolves that strategy, you can think of these as kind of organisms that are mutating new persuasive strategies that either work or don't at holding on to people's attention.
00:21:00.520 And so you have some neutral playing field, and one guy mutates this strategy on the news website of autoplaying that video when you land.
00:21:06.940 Let's say it's CNN.
00:21:07.680 So now the other news websites, if they want to compete with that, they have to, and assuming that CNN has enough market share that that makes a difference, the other ones have to start trending in that direction.
00:21:17.960 And this is why the internet has moved from being this neutral feeling resource where you're kind of just accessing things to feeling like there's this gravitational wormhole suck kind of quality that pulls you in.
00:21:30.540 And this is what I think is so important.
00:21:31.740 You asked, you know, how much of this is due to advertising and how much of it is due to the hyper competition for attention.
00:21:37.980 It's both.
00:21:38.900 One is we have to be able to decouple the link between how much attention we get from you and how much money we make.
00:21:46.720 And we actually did the same thing, you know, for example, in energy markets, where it used to be that the energy companies made more money the more energy you used.
00:21:55.540 And so therefore they have an incentive.
00:21:56.820 They want you to please leave the lights on.
00:21:58.760 Please leave the faucet on.
00:21:59.840 We are happy.
00:22:00.620 We're making so much more money that way.
00:22:02.540 But of course, that was a perverse incentive.
00:22:03.960 And so this new regulatory commission got established that basically decoupled, it was called decoupling, the link between how much energy you use and how much money they make.
00:22:17.240 Well, and there's some ads online, I can't even figure out how they're working or why they're there.
00:22:23.500 There are these horrible ads at the bottom of even the most reputable websites like the Atlantic, you'll have these ads.
00:22:31.140 I think usually they're framed with, you know, from around the web and it'll be an ad like, you won't believe what these child celebrities look like today.
00:22:40.840 Yeah, Taboola and Outbrain, there's a whole actual kind of market of companies that specifically provide these related links at the bottom of news websites.
00:22:49.260 But I mean, they're so tawdry and awful.
00:22:51.500 I mean, you can go from just, you know, reading literally the best long form journalism and hit just one garish ad after another.
00:23:02.600 But the thing that mystifies me is when you click through to these things, I can't see that it ever lands at a product that anyone who was reading that article would conceivably buy.
00:23:13.240 I mean, you're just going down the sinkhole into something horrible.
00:23:17.520 Everything looks like a scam.
00:23:18.760 It all comes down to money, though.
00:23:20.680 The reason why, so I actually know a lot about this because the company, the way I arrived at Google was they bought our little startup company for our talent.
00:23:27.800 And we didn't do what this sort of market of websites did.
00:23:31.620 But we were almost being pushed by publishers who used our technology to do that.
00:23:36.340 So one of the reasons I'm so sensitive to this time on site stuff is because I had a little company called Apture, which provided little in-depth background pieces of information without making you leave news websites.
00:23:48.300 So you'd be on The Economist and it would talk about Sam Harris and you'd say, who's Sam Harris?
00:23:52.760 You'd highlight it and we'd give you sort of a multimedia background or thing and you could interactively explore and go deeper.
00:23:58.320 And the reason we sold this, the reason why Economist wanted it on their website is because it increased time on site.
00:24:05.180 And so I was left in this dilemma where the thing that I got up to do in the morning as a founder was, let's try to help people understand things and learn about things.
00:24:13.060 But then the actual metric was, is this increasing our time on site or not?
00:24:17.760 And publishers would push us to either increase revenue or increase time on site.
00:24:22.280 And so the reason that The Economist and all these other even reputable websites have these bucket of links at the bottom is because they actually make more money from Taboola and Outbrain and a few others.
00:24:32.060 Now, time on site seems somewhat insidious as a standard, except if you imagine that the content is intrinsically good.
00:24:43.140 Now, I'm someone who's slowly but surely building a meditation app, right?
00:24:47.520 So now time on my app will be time spent practicing meditation.
00:24:52.980 And so insofar as I think that's an intrinsically good thing for someone to be doing, anything I do in the design of the app so as to make that more attractive to do, and in the best case, irresistible to do, right?
00:25:08.720 I mean, the truth is, I would like an app in my life that got me to do something that is occasionally hard to do, but I know is worth doing and good for me to do.
00:25:18.840 Rather than waste my time on Twitter, something like meditation, something like exercise, eating more wisely.
00:25:25.920 I don't know how that can be measured in terms of time, but there are certain kinds of manipulations, speaking personally, of my mind that I would happily sign up for, right?
00:25:36.200 So how do you think about that?
00:25:37.320 Absolutely.
00:25:37.740 So this is a great example.
00:25:38.880 So because of the attention economy constantly ratcheting up these persuasive tricks, the price of entry for, say, a new meditation app is you're going to have to try and find a way to sweeten that front door so that that competes with the other front doors that are on someone's screen at the moment when they wake up in the morning.
00:25:57.600 And of course, you know, as much as I know, and I think many of us don't like to do this, it's like the Twitter and the Facebook and the email ones are just so compelling first thing in the morning, even if that's not what we'd like to be doing.
00:26:08.560 And so because all of these different apps are neutrally competing on the same playing field for morning attention, and not for something specific like helping Sam wake up best in the morning, your meditation app, and many meditation apps I personally know, have to provide these notifications.
00:26:26.840 So they start realizing, oh, shoot, Facebook and Twitter are notifying people first thing in the morning to get their attention.
00:26:32.380 So if we're going to stand a chance to get in the game, we have to start notifying people.
00:26:36.400 Right. And then everyone starts, again, amping up in the arms race, and it's this classic race to the bottom.
00:26:42.520 You don't end up with, you know, a screen you want to wake up to in the morning at all.
00:26:46.140 It's not good for anybody, but it all came from this need to basically get there first, to race up.
00:26:53.560 And so wouldn't we want to change the structure of what you're competing for?
00:26:56.540 So it's not just attention at all cost.
00:26:58.100 So, yeah, so you have called for what I think you've called a Hippocratic oath for software designers.
00:27:04.680 You know, first, do no harm.
00:27:06.820 What do you think designers should be doing differently now?
00:27:11.840 Well, I think of it less as the Hippocratic oath.
00:27:14.020 That's the thing that got captured in the Atlantic article.
00:27:16.700 But a different way to think about it is that the attention economy is like this city.
00:27:22.920 You know, essentially, Apple and Google and Facebook are the urban planners of this city that a billion people live inside of.
00:27:28.080 And we all live inside of it, like a billion people live inside of this attention city.
00:27:33.260 And in that city, it's designed entirely for commerce.
00:27:37.120 It's maximizing basically attention at all costs.
00:27:40.120 And that was fine when we first got started.
00:27:43.300 But now this is a city that people live inside of.
00:27:47.300 I mean, the amount of time people spend on their phone, they wake up with them, they go to sleep with them, they check them 150 times a day.
00:27:52.920 That's actually a real figure, too, right?
00:27:54.220 150 times a day is a real figure, for sure, yeah.
00:27:56.580 And so now what we'd want to do is organize that city, almost like, you know, Jane Jacobs created this sort of livable cities movement and said, you know, there are things that make a great city great.
00:28:08.660 There are things that make a city livable.
00:28:10.560 You know, she pointed out Eyes on the Street, you know, Stoops in New York.
00:28:14.360 She was talking about Greenwich Village.
00:28:16.000 These are things that make a neighborhood feel different, feel more homey, livable, safe.
00:28:21.400 These are values people have about what makes a good urban planned city.
00:28:26.000 There is no set of values to design this city for attention.
00:28:31.140 So far, it's been this Wild West, let each app compete on the same playing field to get attention at all costs.
00:28:38.580 So when you ask me, what should app designers do?
00:28:41.800 I'm saying it's actually a deeper thing.
00:28:43.740 That's like saying, what should the casinos who are all building stuff in the city do differently?
00:28:47.900 If a casino is there and the only way for it to even be there is to do all the same manipulative stuff that the other casinos are doing, it's going to go out of business if it doesn't do that.
00:28:57.220 So the better question to ask is, how would we reorganize the city by talking to the urban planners, by talking to Apple, Google, and Facebook to change the basic design?
00:29:07.020 So let's say there are zones.
00:29:08.540 And one of the zones in the attention economy city would be the morning habits zone.
00:29:13.020 So now you just get things competing for what's the best way to help people wake up in the morning, which could also include the phone being off, right?
00:29:20.720 That could be part of how the phone, the option of the phone being off for a period of time and telling your friends that you're not up until 10 in the morning or whatever could be one of the things competing for the morning part of your life in the life zone there.
00:29:33.300 And that would be a better strategy than trying to change meditation app designers to take a Hippocratic Oath to be more responsible when the whole game is just not set up for them to succeed.
00:29:44.680 Well, to come back to that question, because it's of personal interest to me, because I do want to design this app in a way that seems ethically impeccable.
00:29:55.880 If the thing you're directing people to is something that you think is intrinsically good, and forget about all the competition for mindshare that exists that you spoke about, it's just hard to do anyway.
00:30:09.000 I mean, people are reluctant to do it.
00:30:10.500 That's why I think an app would be valuable, and I think the existing apps are valuable.
00:30:15.620 So if you think that any time on app is time well spent, which I don't think Facebook can say, I don't think Twitter can say, but I think Headspace can say that.
00:30:27.860 Whether or not that's true, someone else can decide.
00:30:31.320 But I think without any sense of personal hypocrisy, I think they feel that if you're using their app more, that's good for you, right?
00:30:38.700 Because they think that it's intrinsically good to meditate.
00:30:41.220 And I'm sure any exercise app, you know, or the health app or whatever it is, I'm sure that they all feel the same way about that.
00:30:47.540 They're probably right.
00:30:48.760 Take that case, and then let's move on to a case where everyone's motives are more mercenary, and where time on the app means more money for the company, which isn't necessarily the case for some other apps.
00:31:02.260 When time on the app is intrinsically good, why not try to get people's attention any way you can?
00:31:09.960 Right. Well, so this is where the question of metrics is really important, because in an ideal world, the thing that each app would be measuring would align with the thing that each person using the app actually wants.
00:31:23.200 So time well spent would mean, in the case of a meditation app, asking the user, I mean, I'm not saying the app would do this, but if you were to think about it, a user would say, okay, in my life, what would be time well spent for me in the morning waking up?
00:31:36.120 And then imagine that whatever the answer to that question is, should be the rankings in the app store, rewarding the apps that are best at that.
00:31:44.780 So that, again, is more the systemic answer: that systems like the app stores and the ranking functions that run, say, Google search or the Facebook newsfeed would want to sort things by what helps people the most, not what's got the most time.
00:32:02.220 And the measure of that would be the evaluation of the user. I mean, there's some questionnaire based rating. Is this working for you?
00:32:11.580 Yeah. And in fact, we've done some initial work with this. Actually, there's an app called Moment on iOS. Moment tracks how much time you spend in different apps; you send it a screenshot of your battery page on the iPhone, and it just captures all that data. And they actually partnered with Time Well Spent to ask people: which apps do you find are most time well spent, which apps are you most happy about the time you spent in once you can finally see all the time you spent in them, and which apps do you most regret?
00:32:39.820 And we have the data back that people regret the time that they spend in Facebook, Instagram, Snapchat, and WeChat the most. And so far, the current rankings for most time well spent are things like MyFitnessPal and Podcasts, and there's a bunch of other ones that I forgot.
00:32:58.080 The irony is that being ranked first in regret is probably as accurate a measure as any of the success of your app.
00:33:08.060 Yeah, exactly. And this is why the economy isn't ranking things or aligning things with what we actually want. I mean, if you think about it, everything is a choice architecture, and you're sitting there as a human being picking from a menu, and currently the menu sorts things by what gets the most downloads, the most sales, what gets most talked about, the things that most manipulate your mind.
00:33:29.900 And so the whole economy has become this, if you assume marketing is as persuasive as it is on a bigger level, the economy reflects what's best at manipulating people's psychology, not what's actually best in terms of delivered benefits in people's lives.
00:33:42.320 And so if you think about this as a deeper systemic thing about if you would want, how would you want the economy to work, you'd want it to rank things so that the easiest thing to reach for would be the things that people found to be most time well spent in their lives, for whatever category of life choice that they're making at that moment.
00:34:00.760 In terms of making choices easier or hard, because you can't escape, you know, in every single moment there is a menu, and some choices are easy to make, and some choices are hard to make.
00:34:09.420 It seems to me you run into a problem which behavioral economists know quite well, and this is something that Danny Kahneman has spoken a lot about, that there's a difference between the experiencing self moment-to-moment and the remembered self.
00:34:23.220 So when you're giving someone a questionnaire, asking them whether their time on all these apps and websites was well spent, you are talking to the remembered self.
00:34:33.980 And Danny and I once argued about this, how to reconcile these two different testimonies, but
00:34:39.420 at minimum you can say that they're reliably different, so that if you were experience sampling people along the way, you know, for every 100 minutes on Facebook, every 10 minutes you were saying, how happy are you right now, you would get one measure.
00:34:54.680 If at the end of the day you ask them, how good a use of your time was that to be on Facebook for 100 minutes, you would get a different measure.
00:35:01.760 Sometimes they're the same, but they're very often different, and the question is who to trust.
00:35:07.740 Where are the data that you're going to use to assess whether people are spending their time well?
00:35:13.300 Well, I mean, the problem right now is that all of the metrics just relate to the current present self version, right?
00:35:20.720 Everything is only measuring what gets most clicked or what gets most shared.
00:35:25.460 So back to fake news, just because something is shared the most doesn't mean it's the most true.
00:35:31.660 Just because something gets clicked the most doesn't mean it's the best.
00:35:34.680 Just because something is talked about the most doesn't mean that it's real or true, right?
00:35:39.420 The second that Facebook took away its human editorial team for Facebook Trends, and they fired that whole team, it was just an AI picking what the most popular news stories are.
00:35:51.540 Within 24 hours, it was gamed, and the top story was a false story about Megyn Kelly and Fox News.
00:35:56.520 And so right now, getting into AI about all of these topics, AIs essentially have a pair of eyes or sensors that are trying to pick from these impulsive or immediate signals, and it doesn't have a way of being in the loop or in conversation with our more reflective selves.
00:36:13.060 It can only talk to our present-in-the-moment selves.
00:36:16.020 And so you can imagine some kind of weird dystopian future where the entire world is only listening to your present-in-the-moment feelings and thoughts, which are easily gameable by persuasion.
00:36:25.700 Although there is still the question of how to reconcile being pleasantly engaged moment by moment in an activity with saying, at the end of it, I kind of regret spending my time that way.
00:36:42.380 There are certain things that are captivating where you're hooked for a reason, right?
00:36:46.940 You know, whether it's a video game or whether you're eating french fries or popcorn or something that is just perfectly salted so that you just can't stop, you're binging on something because in that moment it feels good, and then retrospectively, very often you regret that use of time.
00:37:05.920 Well, so one frame of this is this sort of shallow versus deep sense.
00:37:09.380 What you're getting at here is a sense that something can either be full but empty, which we don't really have words for in the English language, or full and fulfilling.
00:37:18.200 Things can be very engaging or pleasurable but not fulfilling.
00:37:24.700 Yes, and even more specifically regretted.
00:37:27.000 And then there's the set of choices that you can make for a timeline if you're, again, scheduling someone else's life for them, as people at Google and Facebook do every day,
00:37:34.160 you know, where you can schedule a choice that is full and fulfilling.
00:37:38.820 Now, does that mean that we should never put choices on the menu that are full but you regret?
00:37:43.920 Like, should we never do that for Google or for Facebook?
00:37:46.960 That's one frame, but let me actually flip it around and make it, I think, even more philosophically interesting.
00:37:50.980 Let's say that in the future, YouTube is even better at knowing exactly what every bone in your body has been meaning to watch,
00:37:58.680 like the professor or lecture that you've been told is the best lecture in the world,
00:38:03.120 or just think about what every bone in your body tells you, in fact, would be full and fulfilling for you.
00:38:07.900 And let's imagine this future deep mind-powered version of YouTube is actually putting those perfect choices next on the menu.
00:38:15.660 So now it's autoplaying the perfect next thing that is also full and fulfilling.
00:38:21.260 There's still something about the way the screen is steering your choices that are not about being in alignment with the life you want to live,
00:38:30.160 because it's not in alignment with the time dimension now.
00:38:33.280 So now it's sort of blowing open or blowing past boundaries.
00:38:36.500 You have to bring your own boundaries.
00:38:37.920 Right.
00:38:38.540 You have to resist the perfect.
00:38:40.780 You have to resist the perfect.
00:38:41.980 Now, should that be...
00:38:43.620 And by the way, because of this arms race, that is where we're trending to.
00:38:47.320 People don't understand this.
00:38:48.300 The whole point of attention, the attention economy, because of this need to maximize attention,
00:38:52.280 that's where YouTube will be in the future.
00:38:56.000 And so wouldn't you instead say, I want Netflix's goal to basically optimize for whatever is time well spent for me,
00:39:04.040 which might be, let's say for me, watching one really good movie a week that I've been really meaning to watch.
00:39:09.680 And that's because I'm defining that.
00:39:11.180 It's in conversation with me about what I reflectively would say is time well spent.
00:39:14.660 And it's not trying to just say you should maximize as much as possible.
00:39:18.400 And for that relationship to work, the economy would have to be an economy of loyal relationships,
00:39:24.020 meaning I would have to recognize as a consumer that even though I only watch one movie a week,
00:39:28.940 that's enough to justify my relationship with Netflix.
00:39:32.200 Because they found in this case that if they don't maximize time on site,
00:39:36.140 people actually end up canceling their subscription over time.
00:39:39.220 And so that's why they're still trapped in the same arms race.
00:39:41.500 Right. And what concerns you most in this space?
00:39:44.800 Is it social media more than anything else?
00:39:47.140 Or is everything that's grabbing attention engaged in the same arms race, and of kind of equal concern to you?
00:39:54.540 Well, as a systems person, it's really the system.
00:39:58.780 It's the attention economy.
00:40:00.720 It's the race for attention itself that concerns me.
00:40:03.660 Because one is people in the tech industry appear to me very often as being blind
00:40:09.360 to what that race costs us.
00:40:12.360 You know, if one, let's, I mean, for example, the fake news stuff.
00:40:16.220 Instead of going to fake news, let's call it fake sensationalism.
00:40:19.920 You know, the news feed is trying to figure out what people click the most.
00:40:23.480 And if one news site evolves the strategy of outrage,
00:40:27.740 outrage is a way better persuasive strategy at getting you to click.
00:40:32.300 Right.
00:40:32.380 And so the news feed, without even having any person at the top of it, any captain of the ship saying,
00:40:38.000 oh, I know what's going to be really good for people is outrage, or that'll get us more attention.
00:40:41.600 It just discovers this as an invisible trait that starts showing up in the AI.
00:40:45.600 So it starts steering people towards news stories that generate outrage.
00:40:49.480 And that's literally where, like, the news feeds have gone in the last three months.
00:40:54.240 This is where we are.
00:40:55.120 True or fake.
00:40:56.000 It's an outrage machine.
00:40:57.540 And then the question is, how much is that outrage?
00:40:59.840 I mean, if you thought about it, in the world, is there any lack of things that would generate outrage?
00:41:05.080 I mean, there's an infinite supply of news today, and there was even 10 years ago, that would generate outrage.
00:41:11.640 And if we had the perfect AI 10 years ago, we could have also delivered you a day full of outrage.
00:41:18.040 That's a funny title.
00:41:19.580 A day full of outrage.
00:41:20.560 How easy would that be to market?
00:41:22.540 A day full of outrage.
00:41:24.000 Nobody thinks they want that, but we're all acting like that's exactly what we want.
00:41:28.200 Well, and I think this is where the language gets interesting, because when we talk about what we want, we talk about what we click.
00:41:33.980 But in the moment right before you click, I mean, I'm kind of a meditator, too.
00:41:37.880 It's like I notice that what's going on for me right before I click is not, as you know from free will, like, how much is that a conscious choice?
00:41:44.780 What's really going on phenomenologically in that moment right before the click?
00:41:48.500 None of your conscious choices are conscious choices.
00:41:52.860 Right.
00:41:53.120 You're the last to know why you're doing the thing you're about to do, and you're very often misinformed about it.
00:41:59.600 We can set up experiments where you'll reliably do the thing for reasons that you, when you're forced to articulate them, are completely wrong about.
00:42:07.280 Absolutely. And even moreover, people, again, when they're about to click on something, don't realize there's a thousand people on the other side of the screen whose job it was to get you to click on that, because that's what Facebook and Snapchat and YouTube are all for.
00:42:22.220 So it's not even a neutral moment.
00:42:23.520 But do you think that fact alone would change people's behavior if you could make that transparent?
00:42:29.980 It just seems it would be instructive for most people to see the full stream of causes that engineered that moment for them.
00:42:38.060 Well, one thing, I've got some friends in San Francisco who were talking about this, that people don't realize, especially when you start applying some kind of normativity and saying, you know, the newsfeed's really not good.
00:42:46.880 We need to rank it a different way.
00:42:48.100 And they say, whoa, whoa, whoa, whoa, whoa.
00:42:49.480 Who are you to say what's good for people?
00:42:51.900 And I always say this is status quo bias.
00:42:55.180 People are thinking that somehow the current thing we have is set up to be best for people.
00:42:59.620 It's not.
00:43:00.580 It's best for engagement.
00:43:01.740 If you were to give it a name, if Google has PageRank, Facebook is engagement rank.
00:43:05.780 Now, let's say, let's take it all the way to the end.
00:43:08.500 Let's say you could switch modes as a user, and you could actually switch Facebook to addiction rank.
00:43:14.400 Facebook actually has a version of newsfeed that I'm sure it could deploy called, you know, let's just actually tweak the variables so that whatever,
00:43:21.900 let's show people the things that will addict them the most.
00:43:24.600 Or we have outrage rank, which will show you the things that will outrage you the most.
00:43:27.920 Or we have NPR rank, which actually shows you the most boring, long comment threads, where you have, like, these, you know, long, in-depth conversations.
00:43:36.860 So that your whole newsfeed is these long, deep, threaded conversations.
00:43:40.440 Or you could have the Bill O'Reilly mode, where you get these, as something I know you care about, these sort of attack dog style comment threads where people are yelling at each other.
00:43:48.000 You can imagine that the newsfeed could be ranked in any one of these ways.
00:43:51.020 Actually, this form of choice is already implemented on Flickr, where, when you look for images, you can choose relevant or interesting.
00:44:00.020 So you could have that same drop-down menu for any of these other media.
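The different "ranks" listed above amount to swapping the objective function a feed is sorted by. Here is a minimal, purely hypothetical Python sketch of that idea; the posts, signal names, and scoring formulas are all invented for illustration and do not describe Facebook's or Flickr's actual systems.

```python
from typing import Callable, Dict, List

# Toy feed items with made-up signals; the numbers exist only to make sorting concrete.
posts: List[Dict] = [
    {"title": "Long policy thread",     "clicks": 40,  "outrage": 0.1, "depth": 0.9, "self_reported_value": 0.8},
    {"title": "Celebrity meltdown",     "clicks": 900, "outrage": 0.9, "depth": 0.1, "self_reported_value": 0.2},
    {"title": "Friend's dinner invite", "clicks": 12,  "outrage": 0.0, "depth": 0.6, "self_reported_value": 0.9},
]

# Each "mode" from the conversation becomes just a different scoring function.
modes: Dict[str, Callable[[Dict], float]] = {
    "engagement_rank": lambda p: p["clicks"],                 # what feeds optimize for today
    "outrage_rank":    lambda p: p["outrage"] * p["clicks"],  # the dystopian toggle
    "npr_rank":        lambda p: p["depth"],                  # long, in-depth threads
    "time_well_spent": lambda p: p["self_reported_value"],    # users' reflective ratings
}

def rank_feed(mode: str) -> List[str]:
    """Sort the same items under whichever objective the user (or designer) picks."""
    score = modes[mode]
    return [p["title"] for p in sorted(posts, key=score, reverse=True)]

for mode in modes:
    print(mode, "->", rank_feed(mode))
```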
00:44:04.980 And this is your point of, like, people don't see transparently what the goals of the designers who put that choice in front of you are.
00:44:11.940 Right.
00:44:12.200 So the first thing would be to reveal that there is a goal.
00:44:15.800 It's not a neutral product.
00:44:17.220 It's not just something for you to use.
00:44:18.840 You can, obviously, with enough effort, you know, use Facebook for all sorts of things.
00:44:23.200 But the point is, the default sort of compass or North Star on the GPS that is Facebook is not steering your life towards,
00:44:32.020 hey, help me have the dinner party, you know, that I want to have.
00:44:34.220 Or help me get together with my friends on Tuesday.
00:44:37.500 Or help me make sure I'm not feeling lonely on a Tuesday night.
00:44:41.060 But there is, it seems to me, a necessary kind of paternalism here that we just have to accept.
00:44:49.660 Because it seems true that we're living in a world where no one or virtually no one would consciously choose the outrage tab.
00:44:57.760 Right.
00:44:57.980 Like, basically, I want to be as outraged as possible today.
00:45:00.960 Show me everything in my news feed that's going to piss me off.
00:45:03.720 Nor the addiction tab.
00:45:04.560 Yeah.
00:45:04.900 Nor the superficial uses of attention tab.
00:45:08.160 You know, just...
00:45:08.640 Cat videos.
00:45:09.240 All day long.
00:45:09.900 Just give me the Kardashians all day long and I'll regret it later.
00:45:14.180 So no one would choose that.
00:45:15.600 And yet, we are effectively choosing that by virtue of what proves to be clickable in the attention economy.
00:45:22.180 In service of the greater goal of advertising.
00:45:24.660 Again, I think that goal wasn't unaccepted.
00:45:27.080 In fact, it's completely pleasant.
00:45:28.500 If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
00:45:34.800 Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.
00:45:46.180 The Making Sense podcast is ad-free and relies entirely on listener support.
00:45:51.600 And you can subscribe now at SamHarris.org.