The Art of Manliness - November 09, 2016


#250: The Art of Strategy


Episode Stats

Length

53 minutes

Words per Minute

164.9

Word Count

8,824

Sentence Count

439

Misogynist Sentences

5

Hate Speech Sentences

5


Summary

Whether you're a businessman, a statesman, or a general, you're strategizing on a daily basis. So how can you do it better? Well, my guest today will provide some insights. His name is Barry Nalebuff, and he's the co-author of the book, The Art of Strategy: A Game Theorist's Guide to Success in Business and Life. And on the show today, Barry and I discuss how game theory can help you make better strategic decisions in all sorts of situations. For example, we explore why threatening to punish your child's sibling for bad behavior might be a more effective strategy than threatening the child himself. We'll discuss what Donald Trump can teach us about the promise and perils of injecting randomness into your strategy. And we also talk about how you can employ game theory against yourself to lose weight or even quit smoking.


Transcript

00:00:00.000 Brett McKay here and welcome to another edition of the Art of Manliness podcast. Whether you're
00:00:18.700 a businessman, a statesman, a general, or a parent, you're strategizing on a daily basis.
00:00:24.460 So how can you do it better? Well, my guest today will provide some insights. His name
00:00:27.740 is Barry Nalebuff. He's a game theory expert and the author of the book, The Art of Strategy,
00:00:32.240 A Game Theorist's Guide to Success in Business and Life. And on the show today, Barry and
00:00:36.600 I discuss how game theory can help you make better strategic decisions in all sorts of
00:00:40.700 situations. For example, we explore why threatening to punish your child's sibling for bad behavior
00:00:46.040 might be a more effective strategy than threatening to punish the child himself. I know that sounds
00:00:50.640 Machiavellian, but we'll explain the reasoning behind that. We'll discuss what Donald Trump
00:00:54.780 can teach us about the promise and perils of injecting randomness into your strategy.
00:00:58.920 We also talk about how you can employ game theory against yourself to lose weight or even quit
00:01:03.380 smoking. After the show's over, check out the show notes at aom.is slash game theory.
00:01:17.420 Barry Nalebuff, welcome to the show.
00:01:19.700 Thanks for inviting me.
00:01:20.780 So you're the co-author of a book called The Art of Strategy. It's about strategic thinking,
00:01:25.920 particularly game theory. It's a topic I've long been interested in. But before we get into the
00:01:30.580 specifics of game theory, what it is, let's talk about strategy broadly. How do you and your co-author
00:01:36.620 define strategic thinking in your book? I mean, what is strategy really?
00:01:41.900 Strategy is different from decision-making. And the reason is that there are other people's
00:01:48.800 decisions that end up mattering. So when a lumberjack cuts down a tree or an engineer builds
00:01:53.580 a bridge, that bridge isn't responding, isn't thinking. The tree isn't a strategic player.
00:01:59.560 But when you make decisions in the real world, the success of your actions depends on how other
00:02:04.900 people will respond. And so that interactive aspect of the decision-making is what makes for
00:02:11.160 strategy and game theory.
00:02:12.680 Okay. And I mean, I can understand why business people or military strategists need to understand
00:02:18.080 strategic thinking or game theory, but why is it important for even laypeople? Like just people
00:02:22.740 who are moms and dads, husbands, wives? Why is it important for them to understand strategy?
00:02:27.740 I think everyone is interacting with the decisions you make. Certainly kids, whether or not they want to
00:02:34.100 eat something or not eat something or stay up late,
00:02:36.980 or how one divides up chores in a household, people are interacting with each other. And you don't
00:02:45.440 make decisions in a vacuum. In physics, they say that for every action, there's a reaction equal and
00:02:51.680 opposite. But in game theory, that reaction can be changed. It can be influenced. And since we don't
00:02:58.000 act in isolation, we better figure out how other people are going to respond to what we're doing.
00:03:03.000 Right. But the thing is, I think strategy has a PR problem, right? Ever since the ancient
00:03:08.760 Greeks, you know, Odysseus was the wily one, and his strategic thinking was often looked down upon as
00:03:14.840 sort of, you know, unmanly or wily. And, you know, when we think of strategy, we think of Machiavelli and being
00:03:19.820 manipulative. Is that what strategy is? Or is that, can strategy turn into that? Or can strategy actually
00:03:24.340 be benevolent?
00:03:26.460 Well, another one of my books, co-authored with Adam Brandenburger, is called Co-Opetition.
00:03:31.020 And it's about competing and cooperating at the same time. And so you need to understand strategy
00:03:37.000 for how to compete more effectively, but you also need to understand it for how to cooperate more
00:03:43.080 effectively.
00:03:44.440 Okay. So let's get into, you know, what makes up strategic thinking. You focus on game theory.
00:03:49.600 And I think a lot of people might have heard of game theory if they've seen A Beautiful Mind
00:03:52.820 about John Nash. But what is game theory? And what's the history of its development?
00:03:58.740 Sure. Well, game theory was created by a brilliant polymath at Princeton named John
00:04:06.320 von Neumann. And of course, John Nash was also at Princeton. The field is less than 100 years old, so it's,
00:04:14.860 relatively speaking, a pretty new science. And initially, it started out thinking about everything
00:04:19.980 from how one would hide and find submarines in warfare, to now anything from how to raise
00:04:28.320 kids, to bid in auctions, to find smart compensation contracts for executives. I thought it might be
00:04:35.500 worth playing a little game with you.
00:04:37.400 Sure.
00:04:37.820 That could illustrate what's going on. But it depends, actually, on the extent to which you've
00:04:46.220 read the book.
00:04:47.160 Correct.
00:04:48.040 Then I can't really do it.
00:04:50.180 Oh, no.
00:04:50.920 No.
00:04:51.100 It'd be ruined as a result.
00:04:53.760 Well, maybe we can link to something, an online game that people can play online if they haven't
00:04:59.940 done it. Well, I mean, okay, so it started off primarily math driven. But as I read your book,
00:05:04.400 it seems like game theory has developed in something more interdisciplinary. Is that correct?
00:05:08.880 Well, since political science, sociology, law, all of that requires thinking about interactions,
00:05:17.460 it does actually cross many disciplines. Yes, indeed.
00:05:20.740 Yeah. It seems like a lot of behavioral science is influencing it, psychology as well,
00:05:25.380 cognitive science.
00:05:26.500 Well, remember, if you think that economics is supposed to be a social science, as opposed
00:05:32.680 to asocial, we're supposed to understand how other people interact with us. And in that
00:05:37.660 sense, we have to take them as they are, not as you wish they would be. And so, in that
00:05:44.040 sense, we certainly don't have a "behavioral game theory," for the simple reason that that would
00:05:49.860 be redundant: how other people behave is intrinsic and central to any discussion of
00:05:57.420 game theory.
00:05:58.480 Okay. And I'm sure we'll get into some games people might be familiar with, but
00:06:03.140 let's start getting to the nitty-gritty here. So, you talk in the book about the
00:06:09.020 first step to take when you find yourself in a strategic game. So, let's start there: how do you know you're
00:06:14.220 in a strategic game? Like, is it just whenever there's someone else or other people in a decision-
00:06:19.420 making process? Is that a strategic game?
00:06:22.040 If you're acting with other people and they can react to what you're doing, or their actions
00:06:28.380 influence your success, then you're pretty much in a game. So, to give you a couple recent
00:06:34.420 public policy examples, much of the debate about the ACA or Obamacare actually is really
00:06:40.900 a game theory discussion. So, whether or not you require people to buy healthcare, well,
00:06:49.720 if you don't require healthy people to buy healthcare, then the only people who will end
00:06:55.520 up buying it are those who have pre-existing conditions, who aren't healthy. That means the
00:07:01.240 premiums are going to have to be very high. That means that only the even sicker people will end up
00:07:06.540 buying healthcare, which means the premiums will have to be higher still. And the end result is
00:07:11.780 you'll get what's called a death spiral, and nobody will end up being able to afford healthcare.
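(For readers who want to see the death-spiral mechanics, here is a minimal Python sketch. The population of expected health costs and the 10% pricing overhead are invented for illustration, not taken from the episode.)

```python
# Toy adverse-selection "death spiral": insurance is priced at the
# average cost of whoever remains in the pool, plus a 10% overhead.
costs = [500, 1000, 2000, 4000, 8000, 16000]   # invented expected costs

pool = list(costs)
premium = 1.1 * sum(pool) / len(pool)
for rnd in range(1, 6):
    # Anyone whose expected cost is below the premium drops coverage.
    pool = [c for c in pool if c >= premium]
    if not pool:
        print(f"round {rnd}: nobody left -- the market has unraveled")
        break
    premium = 1.1 * sum(pool) / len(pool)
    print(f"round {rnd}: {len(pool)} insured, premium ${premium:,.0f}")
```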
00:07:17.260 And so, the idea of understanding the interaction between who will buy and what the effective premium
00:07:24.440 is would be a classic example of what George Akerlof won a Nobel Prize for, something called the
00:07:31.260 market for lemons. And so, I mean, game theory, games can get very complex or very simple. I mean,
00:07:36.600 a simpler one would be just negotiating what time your child's going to go to bed, right? That's a very
00:07:41.580 simple one, but the Obamacare instance, that's very complex. There's a lot of different people
00:07:46.360 involved, a lot of different factors. So, you say that whenever you find yourself in a strategic
00:07:52.620 game, when there's decisions being made that involve other people, the first thing to do is figure
00:07:57.560 out what kind of game it is, and it's either simultaneous or sequential. What are the differences
00:08:02.800 between the two, and how will your strategy change based on what kind of game it is?
00:08:08.840 So, let me just go back for a second. I don't think the game with your kid about bedtime is so simple,
00:08:14.140 because remember, it's not just one night. This is a classic repeated game, and you might decide that
00:08:20.000 today it's not worth fighting it, but you need to be tough to have a reputation, because otherwise,
00:08:24.740 in the future, you won't have any credibility. And so, most games are neither purely sequential nor
00:08:35.500 simultaneous. They're mixtures of both. They're not single shot. They go on again and again, so they're
00:08:41.480 repeated. In a sequential move game, it's much like checkers. I make a move, you make a move,
00:08:48.500 I make a move, and we alternate making moves. So, when I'm making a move, I have to think about how
00:08:54.680 you're going to respond. When your response is taken into account, it's thinking about what I'm
00:08:59.460 going to do in response to your response, and so on. In contrast, a simultaneous move game doesn't
00:09:07.420 really require us to move at the exact same moment, but it means I have to make a move without
00:09:13.380 knowing exactly what it is you've done at the time I'm making my decision. So, a very simple example
00:09:22.420 of that would be, I am placing a bid in an auction, and I'm bidding without knowing what your bid is.
00:09:30.280 Another example could be, quite soon, I'm voting. And when I place my vote, I don't know what the other
00:09:36.620 people have done in their voting booth. And so, when I'm thinking about, do I want to support the
00:09:41.360 candidate who I really like, or make a protest vote, I don't really know what other people's
00:09:48.280 decisions have been made at the same time. So, it sounds like with simultaneous games,
00:09:52.940 there's a lot more uncertainty. With sequential games, there's a bit more certainty than simultaneous.
00:09:57.840 Exactly. Note, again, just in voting, it's not that we're literally all voting at the exact same
00:10:02.560 moment. But when I'm voting, since I don't know what you've done, it's as if we're moving at the
00:10:08.600 same time. And you're absolutely right that in sequential move games, it's much, much easier
00:10:13.780 to solve, because I know everything in terms of what you've done.
00:10:19.380 Right. So, with sequential games, you can look forward and start reasoning backwards,
00:10:25.820 I think is what you say in the book.
00:10:27.960 Exactly. So, the nice thing about sequential move games is we know how to solve them.
00:10:31.660 Yeah. And essentially, you can play out every possible scenario, and you can figure out what
00:10:37.500 is the best way of playing the game. And so, that's really easy to do in tic-tac-toe, which
00:10:44.980 is why nobody plays tic-tac-toe once they're above seven, because you can figure out going
00:10:49.920 to the center means we're always going to get a tie. In contrast, for a while, it was thought
00:10:55.660 that chess was too hard to solve, or Go was too hard to solve. And so, even though, in theory,
00:11:02.660 there was an optimal way of playing it, a way of guaranteeing either a victory or a tie,
00:11:10.840 since nobody knew what that was, we could still enjoy playing the game. Pretty soon, I'd say that
00:11:17.820 that's not going to be the case.
00:11:20.160 Right. Because computers will allow them to map out all the sequences, possible sequences.
00:11:25.660 Right. I think the top six or seven best chess players in the world are all now computer
00:11:30.020 programs.
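(As an illustration of "look forward and reason backward": the sketch below solves a tiny sequential game by backward induction, working from the final payoffs back to the first move. The game tree, a small entry game, is invented for this example and is not from the episode.)

```python
# Backward induction on a tiny two-player sequential game.
# Leaves hold a payoff pair (player 0, player 1); internal nodes
# record whose move it is and the options available.

def solve(node):
    """Return (payoffs, plan) for the subgame rooted at this node."""
    if "payoffs" in node:                  # leaf: nothing left to decide
        return node["payoffs"], []
    mover = node["player"]
    best = None
    for move, child in node["moves"].items():
        payoffs, plan = solve(child)       # reason backward from the end
        if best is None or payoffs[mover] > best[0][mover]:
            best = (payoffs, [move] + plan)
    return best

# Invented entry game: player 0 chooses to enter a market or stay out;
# if it enters, player 1 (the incumbent) chooses to fight or accommodate.
game = {
    "player": 0,
    "moves": {
        "enter": {
            "player": 1,
            "moves": {
                "fight":       {"payoffs": (-1, -1)},
                "accommodate": {"payoffs": (2, 1)},
            },
        },
        "stay out": {"payoffs": (0, 3)},
    },
}

payoffs, plan = solve(game)
print(plan, payoffs)   # ['enter', 'accommodate'] (2, 1)
```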
00:11:31.280 That's crazy. So, I mean, so I think we can all intuitively do this, you know, sequential,
00:11:37.060 you know, forecasting, right? You know, looking forward to reason backward, when things are
00:11:40.740 pretty simple. But, I mean, some of these sequential games can get really complex. I mean, chess is a
00:11:44.720 perfect example. There are millions upon millions of different sequences. So, how do you, as a game,
00:11:52.080 you know, as a game theorist, track those sequences and then figure out which sequence will probably
00:11:57.540 be the one that will play out in the real world?
00:12:01.340 Well, when we didn't have computers, you used heuristics. You say, I think that owning certain
00:12:09.580 parts of the board, certain positions are stronger than others. Certain pieces are worth more than other
00:12:14.980 pieces in terms of power. And so, you look for simple rules that are usually right. Maybe they're
00:12:22.420 not always right. In other cases, you can do simulation. In other cases, you base this on
00:12:29.720 experience. Depends a little bit on how important the game is and how often it's going to be played
00:12:36.560 in terms of how much you want to go and figure out how to solve it. A classic example, though,
00:12:43.800 sometimes a failure to understand the right strategy occurs in sports, when teams are thinking
00:12:52.440 about when to go for a two-point play rather than a one-point play after a touchdown;
00:12:59.940 sometimes they fail to look forward and reason backward. So, if you're down by two touchdowns
00:13:06.160 with not that much time left to go and you score one touchdown and make the extra point,
00:13:13.940 there are cases where the coach has then gone for the two-point play on the second touchdown
00:13:20.280 with just a few moments left to go so as to win rather than tie the game. And it turns out that that's
00:13:27.160 a fine strategy. You might argue it's worth taking that risk. But if you thought that's what you'd want
00:13:32.880 to do, then you should have gone for the two-point play on the penultimate touchdown. The reason being
00:13:38.580 that you have to make both a one-point and a two-point play. It doesn't really matter which order you make
00:13:43.680 them in. But if you miss the two-point play on the first shot, then you have another chance to make a
00:13:48.880 two-point play on the second one and still get a tie. Ah, okay. That makes sense. I didn't think about it
00:13:54.500 that way. I've had those moments where my coach decided to go for it later on and we ended up
00:14:06.180 losing. So, staying with sequential games, are there any examples, like real-life examples,
00:14:06.180 you know, I'm talking about, you know, parent to child or business to business where there's
00:14:10.440 sequential games? It seems like the examples we've been talking about are very, you know,
00:14:13.540 they're games, like literal games, tic-tac-toe, football, chess. Any examples where there's less
00:14:20.040 structure, but there's still a sequential game involved? Well, I'd say if you think about
00:14:25.840 appointing a Supreme Court nominee, the president goes and suggests a candidate. The Senate then,
00:14:33.720 in theory, advises and confirms. And so, sure, there are discussions and movements ahead of time,
00:14:42.600 but essentially the Senate doesn't get to, well, they can try and change this and preempt it by
00:14:50.720 saying, unless you pick X, we're not going to accept anyone. But essentially, the president moves
00:14:58.540 first and then the Senate moves second. In other cases, Congress and the Senate will pass a bill and
00:15:05.900 then the president decides whether to sign it or to veto it. And if vetoed, the Congress decides
00:15:13.360 whether to override the veto. So, it's still obviously a simplification, but various laws have
00:15:22.460 created a structure which puts some sequentiality into the moves. Right. But there's still some
00:15:27.920 simultaneous things going on. As you said earlier, games are usually a mixture of both,
00:15:32.020 simultaneous and sequential. Absolutely. But I'd say in this case, there's still a predominant
00:15:37.840 aspect of sequentiality. Okay. So, let's move into specific games that I think people might have
00:15:43.320 heard of or have experience with that highlight some insights into game theory. One
00:15:50.020 of them, I think a lot of people have heard of, is the Prisoner's Dilemma. For those who aren't
00:15:54.440 familiar with it, can you briefly explain what the Prisoner's Dilemma is and then what insights
00:15:58.560 about game theory does it provide us? I think there's a sense in which people think
00:16:05.500 the Prisoner's Dilemma and game theory are synonymous, which is unfortunate because game
00:16:10.880 theory is a lot more than the Prisoner's Dilemma. But to the extent that anybody has watched a crime
00:16:16.720 thriller or the wonderful English TV show Golden Balls, what you have is a situation where both
00:16:26.820 individuals, each individual, has an incentive to cheat or to confess no matter what the other side
00:16:33.360 does. So, here you're interviewing two prisoners in separate rooms. A crime has been committed. If
00:16:40.000 neither prisoner confesses, then they both get off. Or, actually, they don't
00:16:46.520 quite get off; they each get a light sentence. If one confesses while the other keeps mum, the one who
00:16:52.680 confesses gets off and the other one gets a very tough sentence. While if they both confess, then they
00:17:00.160 each get a medium sentence. And so, the idea is that if one side keeps mum, then I can get off by
00:17:07.320 confessing. And if the other side confesses, then I really had better confess, because otherwise I'm
00:17:12.520 going to get a really severe sentence. And so, since no matter what I think the other side is going to
00:17:19.760 do, I have an incentive to confess. But of course, when both confess, they're both worse off than if
00:17:25.920 neither one does. And this is this paradox, because in some sense, when we each act in our individual
00:17:32.220 interest, the end result is bad for us in a collective sense.
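(To put rough numbers on the dilemma: a minimal sketch with invented sentence lengths, expressed as negative payoffs, verifying that confessing beats keeping mum no matter what the other prisoner does.)

```python
# Prisoner's dilemma with invented sentence lengths, written as
# negative payoffs (years lost). payoff[(mine, theirs)] = (me, them).
payoff = {
    ("mum", "mum"):         (-1, -1),    # both keep mum: light sentences
    ("mum", "confess"):     (-10, 0),    # I keep mum, they confess
    ("confess", "mum"):     (0, -10),    # I confess, they keep mum
    ("confess", "confess"): (-6, -6),    # both confess: medium sentences
}

# Confessing is a dominant strategy: better for me whatever they do.
for theirs in ("mum", "confess"):
    mum     = payoff[("mum", theirs)][0]
    confess = payoff[("confess", theirs)][0]
    print(f"if they play {theirs:7}: confess {confess} vs mum {mum}")
    assert confess > mum
```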
00:17:37.260 Right. And are there other instances? I mean, this obviously
00:17:41.200 happens whenever you're separating the prisoners and trying to negotiate confessions. But
00:17:45.460 any other, like, real-life examples where you see prisoner's dilemmas play out?
00:17:50.100 Well, I think there's the tragedy of the commons, which is a multi-person version of this. And so,
00:17:58.920 if you think about global warming or air pollution, it's in my interest to not change my lifestyle,
00:18:07.640 to continue driving cars or flying planes or to support industry. And each of us does this.
00:18:20.100 And the end result is that we end up with global warming and we're all worse off. And so,
00:18:25.400 we need to somehow collectively agree to work and cut back carbon emissions as opposed to do this on
00:18:32.200 an individual basis. Now, the advantage that we have here is that we can actually talk to each other
00:18:38.460 and monitor what the other side is doing. So, if we had to make these decisions in isolation and not
00:18:44.040 do treaties, then we'd find ourselves in the multi-person version of the prisoner's dilemma. And the result
00:18:49.700 would be disastrous. And so, the good thing is we don't let prisoners talk to each other and make
00:18:54.880 a pact, but we do let countries do that. And that's why we need to actually do it via treaties
00:19:00.800 and alliances as opposed to counting on people acting in their own self-interest.
00:19:08.000 Right. So, it sounds like if you find yourself in a prisoner's dilemma situation, the thing to do is open up
00:19:13.460 communication. That's how you avoid those scenarios.
00:19:15.920 Exactly. And of course, sometimes the goal is to put other people in a prisoner's dilemma and prevent
00:19:22.140 them from being able to communicate. So, you've hit on a key point, which is sometimes the best way
00:19:28.660 to play a game is to change the game. That if the game isn't working out for you, don't accept it the
00:19:35.100 way it is.
00:19:35.600 And how would you, I mean, so what's an example of changing the game so you can make things better?
00:19:45.220 You can add more players to the game. You can say, okay, if in fact I discover that you snitched,
00:19:55.900 or you discover I snitched, and we go to prison, people will beat up snitches. And so, it may look
00:20:01.100 like it's a good idea to do the confession, but actually the game isn't over yet. And that we,
00:20:07.860 in fact, want to make sure that there are other players who will end up punishing us for doing
00:20:16.840 the strategy that at the short run seems to be in our interest.
00:20:21.540 Okay. So, you extend the game, make it longer. Yeah. I mean, I guess
00:20:26.340 that ties into the tit-for-tat approach that one game theorist developed back,
00:20:32.260 you know, a couple decades ago, in one of these sort of prisoner's dilemma type games where there were
00:20:36.960 multiple prisoner's dilemma games in a row. So, like, you know, you would confess one time, and the other
00:20:42.160 guy would know what you did, and then he would retaliate
00:20:47.240 for, you know, sticking it to you. And at the time, they thought that this tit-for-tat approach
00:20:53.380 was a good way to solve the prisoner's dilemma. But you argue in the book that it's actually not
00:20:58.600 that great of an approach. So, it's probably not the case that we're going to play tit-for-tat
00:21:03.100 with prisoners because they would have to be serious recidivists to be doing this tens or
00:21:09.960 hundreds of times in a row. More likely, the prisoner's dilemma exists when companies are trying
00:21:19.080 to find ways to circumvent competition and to come up with, say, implicit, sometimes even explicit,
00:21:26.240 collusion. So, firms, let's take airlines as a case, might want to keep prices high.
00:21:32.800 And so, the question is, I want to go and do a little bit of a price cut and steal some market share
00:21:40.480 from you. And it may not be in my rival's interest who has a larger fraction of that particular route
00:21:49.520 or has more to lose by coming down and matching me. But if I understand that the person is going to do
00:21:57.280 that, they will do a tit-for-tat strategy. And if I come down, they're going to come down.
00:22:01.280 Then, I don't get the gain from doing any type of price cut. They're going to punish me. And as a result,
00:22:11.780 I will learn that this type of cheating, and cheating, by the way, here is, in some sense, cheating on the
00:22:17.920 collusion. So, it's cheating on the cheating, if you want. It doesn't actually pay off. Now, the problem with
00:22:27.140 simple, mechanical responses to when you think somebody else is cheating is that every now and
00:22:33.340 then, you're going to make a mistake. And you're going to think somebody cheated even when they
00:22:36.740 didn't. And so, you're going to punish them. And then what's going to happen is they're going to
00:22:40.840 punish you for punishing them, and then you're going to punish them for punishing you for punishing
00:22:44.500 them. And you're going to get yourself into one of these spirals, perhaps a little bit like what
00:22:50.540 we see in the Mideast, where it's hard to even figure out who started it. But now, we're just
00:22:56.980 in this endless cycle of retaliation. And how do we ever get out of it?
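(The feud dynamic is easy to reproduce. In this toy simulation, two tit-for-tat players occasionally misperceive each other's moves; the 5% error rate is arbitrary. One slip is enough to start the back-and-forth punishment cycle described above.)

```python
import random

random.seed(1)   # reproducible run

def tit_for_tat(seen):
    """Cooperate first; afterwards, copy the opponent's last observed move."""
    return seen[-1] if seen else "C"

def perceived(move, error=0.05):
    """Occasionally a move is misread (or misplayed) as its opposite."""
    if random.random() < error:
        return "D" if move == "C" else "C"
    return move

a_sees, b_sees = [], []        # each side's (possibly wrong) view of the other
for _ in range(30):
    a = tit_for_tat(a_sees)    # A copies the last thing it saw B do
    b = tit_for_tat(b_sees)    # B copies the last thing it saw A do
    a_sees.append(perceived(b))
    b_sees.append(perceived(a))

# Once a single "D" slips in, tit-for-tat echoes it back and forth.
print("A saw:", "".join(a_sees))
print("B saw:", "".join(b_sees))
```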
00:23:02.340 Right. This is interesting. With the airline thing, I mean, there's laws in place where companies
00:23:08.200 can't explicitly collude, right? They can't get together in a sort of a cabal and say, all right,
00:23:13.740 here's the price we're going to set the tickets at, so all of us can benefit from it. So, because
00:23:18.960 they can't do that, they have to do these sort of implicit collusion. Okay, well, if you're going
00:23:23.280 to raise the price, I'll raise the price, and it kind of evens out. Well, it's a little bit of,
00:23:28.520 I see my rival raise price. Now, I can do two things. I can take advantage of that and get some
00:23:35.180 extra share, or I can match the price. And while I don't want to match the price, I want to give my
00:23:39.760 rival incentive to have taken this action and encourage them to keep the price high. And so, even
00:23:46.760 without talking to the other side, I can figure out that this might be in my long run interests.
00:23:55.120 People have been talking about the effects of greater concentration in industry, and that
00:24:02.540 leading to higher profits and perhaps higher prices. But what often they've missed is something
00:24:09.900 called common ownership. And two of my colleagues here, Florian Ederer and Fiona Scott Morton,
00:24:17.120 have been working on this. And one of the things they've discovered is that most companies,
00:24:23.800 most competitors have the same owner. So, Vanguard or Fidelity own huge fractions of all the different
00:24:32.420 airlines, or all the different pharmaceutical companies. And when you own A and its rival, B,
00:24:39.800 you know, you say, well, wait a second. Guys, I don't think this is such a good idea for you to go
00:24:45.340 and keep on cutting prices or try and steal share from each other or adding capacity. You know, just
00:24:50.740 lay off a little bit here. And so, essentially, now we have to worry not just about firms colluding with
00:24:57.780 each other, but the person who owns both of the firms, encouraging them to not really compete
00:25:03.680 vigorously with each other.
00:25:05.520 So, another type of game that you mentioned in the book that's sort of a bit different from
00:25:08.920 the Prisoner's Dilemma is what you call a confidence game. What is a confidence game,
00:25:15.540 and how does it differ from a Prisoner's Dilemma?
00:25:17.480 You know, I think Maria Konnikova has written a great book about this, and she talks about how it
00:25:26.120 is that con artists end up fooling people. And this is some wonderful applications of game theory.
00:25:34.140 In particular, if you're ever wondering why it is that spam, those phishing exercises
00:25:43.140 trying to go and get you to send lots of money to Nigeria or someplace else,
00:25:48.120 is full of spelling errors. You can say, well, okay, guys, you know, come on, run it through a spell
00:25:54.540 checker. I mean, how stupid do you have to be? And the answer is, they don't want to waste
00:26:03.960 their time with people who aren't gullible. And so, they do things that are particularly bad.
00:26:10.720 Because if, in fact, you can't spot the super obvious nature of the spam, then that's saying,
00:26:21.860 okay, I've got a real stupid fish here. And I've hooked a great one, and I can go after this person.
00:26:28.940 Whereas, if they find people who are particularly sophisticated, later on, those folks will catch on,
00:26:35.320 and they will have wasted a lot of their time. So, it's an interesting point that they make the letters
00:26:42.740 intentionally simplistic, unrealistic, riddled with spelling errors, because they're trying to find the
00:26:52.600 stupidest fish in the sea.
00:26:54.660 Right. So, it's sort of like they're signaling. They're putting out a signal to find out the signals
00:27:00.280 of their potential victims.
00:27:02.600 Exactly. It's the latter point that's the key, is that they're looking for their victims to signal
00:27:07.480 that they're not paying attention, that they're gullible. And if they don't have spelling errors,
00:27:14.820 then the other side can't really signal their gullibility.
00:27:19.820 All right. So, these spam guys from Nigeria, they're pretty smart.
00:27:22.420 Exactly, they're smart to act stupid.
00:27:27.380 Right. Right. So, let's talk about, I think, something that people might have heard before
00:27:33.400 because of popular culture. I think A Beautiful Mind might have helped. But like, this idea of
00:27:37.480 the Nash equilibrium. I've heard it before, over and over, and I never quite understood until I read
00:27:43.220 your book. But for those who aren't familiar, what is a Nash equilibrium in game theory? Does it
00:27:49.280 happen in sequential games or does it happen in simultaneous games or both?
00:27:54.400 The concept of a Nash equilibrium was developed to help us understand what will happen, or the resting
00:28:02.380 point in a simultaneous move game. And so, the challenge is, what do we do in a world where
00:28:11.700 I think, that you think, that I think, and ad infinitum, will happen? I don't get to see what
00:28:19.940 you're doing. You don't get to see what I'm doing. And so, what move is it that I want to make
00:28:24.700 in a world where I have to anticipate what you're doing when you're anticipating what I'm going to do?
00:28:31.680 And that seems like it's an infinitely recursive logic. And it's not clear how you ever cut through
00:28:36.400 that knot. And so, the brilliant insight of John Nash is, well, is there a set of strategies or moves
00:28:44.360 such that if I'm doing A and you're doing B, and I think you're doing B, and I think you think I'm
00:28:52.440 doing A, then I still actually want to do A. So, that is, if you've correctly anticipated what I'm
00:28:57.940 going to do, and I've correctly anticipated what you're going to do, neither of us wants to change
00:29:04.220 what we're doing. And that is an attractive candidate for how a game will be played when
00:29:13.660 we can't actually see what the other person has done. So, it sounds like the goal is to get to a
00:29:19.580 Nash equilibrium when you're strategizing. No. No? Okay. Right. It is not a goal. In particular,
00:29:29.300 in the Prisoner's Dilemma, the Nash equilibrium is that both confess. So, that's not a good outcome.
00:29:34.220 So, if we want to predict how a game might be played, then a Nash equilibrium is a good starting
00:29:41.180 place for what players might end up doing. But it is not necessarily desirable as an outcome.
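(Mechanically, the Nash property is just "no player gains by a unilateral deviation," which can be brute-forced for small games. A sketch, reusing the illustrative prisoner's dilemma payoffs from earlier:)

```python
from itertools import product

# Same illustrative prisoner's dilemma payoffs as above.
moves = ["mum", "confess"]
payoff = {
    ("mum", "mum"):         (-1, -1),
    ("mum", "confess"):     (-10, 0),
    ("confess", "mum"):     (0, -10),
    ("confess", "confess"): (-6, -6),
}

def is_nash(r, c):
    """Nash: neither player gains by deviating unilaterally."""
    row_ok = all(payoff[(r, c)][0] >= payoff[(d, c)][0] for d in moves)
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, d)][1] for d in moves)
    return row_ok and col_ok

print([rc for rc in product(moves, moves) if is_nash(*rc)])
# [('confess', 'confess')] -- the resting point, even though
# ('mum', 'mum') would leave both players better off.
```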
00:29:50.660 Okay. So, as you work through the Nash equilibrium or trying to figure out what the Nash equilibrium
00:29:57.220 is, you're going to find, I guess, what you call dominant strategies in the mix. And then,
00:30:03.780 is that the thing you should take? Is the dominant strategy for you?
00:30:06.440 Well, if there's a dominant strategy, then life is easy. Because it says, whatever I think the other
00:30:13.120 person is doing doesn't matter. It's always the case that I want to do A. A is better than any other
00:30:19.960 strategy. And so, I don't have to consider what the other person is doing. And that allows me a real
00:30:24.520 shortcut through this Gordian knot. The challenge, of course, is that there aren't that many games
00:30:34.760 where there really is a dominant strategy. And so, we have to, more often than not,
00:30:45.160 refer back to a Nash equilibrium to get a better sense of how we should play the game.
00:30:49.300 Okay. So, this idea, when you're strategizing, doing a simultaneous strategy, like you said,
00:30:58.080 you're doing this recursive thing in your head. I'm thinking A, and I think my opponent or
00:31:03.740 competitor is thinking that I'm thinking A. And if he's thinking that, then I'm going to do this.
00:31:09.920 It sounds like it would be good to inject randomness, right? So, you can throw people
00:31:15.300 off. Well, again, it depends. Is my goal to coordinate with you or to get an advantage over
00:31:23.000 you? So, here's, I'll give you two versions of a simultaneous move game. Here's one. You and I
00:31:32.600 both have to pick a number. And if we pick the same number, then we both get that amount of money
00:31:39.040 paid for by a third party. And that number, let's say, has to be between one and ten. It's an integer.
00:31:47.460 So, if we both pick four, we both get four. If we both pick five, we both get five. If you pick four
00:31:55.140 and I pick six, we both get zero. So, yeah, we both pick ten, if we're cooperating.
00:32:00.580 Okay. Well, now you've ruined it because you gave me a sense of what it is you're going to do before
00:32:05.720 we played. Oh, okay. Darn it. But that's okay. So, one of the things this game illustrates is that
00:31:12.920 there are a lot of Nash equilibria. If I'm going to pick six, what is it that you're going to do in
00:32:19.000 this game? What do you want to do? Well, I'd pick six, too, if I knew that. Exactly. Now, you might say,
00:32:24.740 okay, well, six, six is not such a great outcome. We could both pick ten. I got it. But you have to
00:32:35.020 be sufficiently confident that I'm going to pick ten in order for you to pick ten. And in fact,
00:32:43.360 this is a game where there's multiple Nash equilibria. You might decide, well, it's not that
00:32:48.560 hard to pick between them. Because isn't it obvious that everyone should pick ten? Okay. But
00:32:54.760 actually, this game helps explain a lot of why we see some countries developing faster than others.
00:33:02.220 So, in much of the world, there's corruption. And corruption creates problems with police,
00:33:08.500 with doing business. And you can say, you know, I think a world in which there's no corruption
00:33:14.660 is a better world. But if I think you're going to be corrupt, then I have to be corrupt. And if you
00:33:20.320 think I'm going to be corrupt, then you're going to be corrupt. And so, in such a case, we end up both
00:33:26.840 picking two, if you'd like, and we each get two rather than ten and ten. And so, if you're scared
00:33:33.100 that I'm going to pick two, even if you want to pick ten and you know I want to pick ten, if I'm scared
00:33:40.720 that you think I'm scared, then you might pick two because you think I'm going to pick two.
00:33:45.400 And we both know there's a better answer. But neither of us
00:33:48.500 has the confidence that we're willing to go there.
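(The same brute-force check applied to the number-matching game described above shows the problem: every matching pair is an equilibrium, good and bad alike. A sketch:)

```python
# The number-matching game: pick an integer 1..10; if we match,
# a third party pays each of us that number; otherwise we get zero.
nums = range(1, 11)

def pay(mine, theirs):
    return mine if mine == theirs else 0

equilibria = [
    (a, b)
    for a in nums for b in nums
    # neither player can gain by unilaterally switching numbers
    if all(pay(a, b) >= pay(d, b) for d in nums)
    and all(pay(b, a) >= pay(d, a) for d in nums)
]
print(equilibria)
# [(1, 1), (2, 2), ..., (10, 10)]: ten equilibria. (10, 10) is best,
# but (2, 2) is just as self-fulfilling if each side expects a 2 --
# the corruption trap in miniature.
```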
00:33:51.820 Okay. Well, and so that's one example of a game. What's another
00:33:54.660 example? You said there was another example of a game?
00:33:58.800 Well, so the other version is, if we
00:34:00.700 both pick different numbers,
00:34:03.680 now
00:34:05.000 we get a reward.
00:34:06.600 Well, now it's a little harder.
00:34:10.960 What number do you pick between one and ten?
00:34:13.660 So, how is it determined who gets the
00:34:15.840 money? Is it the person who picks the highest?
00:34:18.240 Well, a third party is going to pay
00:34:20.740 us both money, unless we hit a tie.
00:34:23.240 Okay. Yeah.
00:34:26.240 And
00:34:26.600 now you can think about this a little bit as
00:34:28.500 commuting.
00:34:31.240 If we both
00:34:32.620 leave at the same time, there's traffic.
00:34:34.060 And is it, do I leave early
00:34:37.180 and you leave late? Do you leave late
00:34:38.700 and I leave early?
00:34:41.260 How is it that we
00:34:42.440 end up uncoordinating our behavior?
00:34:45.820 And we both agree
00:34:46.640 that we want to be uncoordinated, but it's not quite
00:34:48.460 so clear how it is we go about doing
00:34:50.420 that.
00:34:52.380 Interesting.
00:34:53.220 So, I mean, I guess
00:34:54.420 you would have to, there would have to
00:34:56.580 again be communication
00:34:57.560 if you want to over... Well, if there was
00:35:00.120 communication, then it'd be easy. Just like
00:35:02.200 when you said, hey, I'm going to pick ten.
00:35:03.480 Right.
00:35:05.760 But here,
00:35:07.240 you know,
00:35:09.020 in other cases,
00:35:11.540 if you
00:35:13.700 don't allow
00:35:15.620 communication,
00:35:17.300 then the idea of coordination
00:35:19.560 or discoordination without
00:35:21.500 communication is what
00:35:23.440 becomes the challenge.
00:35:25.720 So, you might say, look,
00:35:27.600 the best thing to do in that case is just
00:35:29.520 pick any number at random, between one and
00:35:31.520 ten.
00:35:31.680 And that actually
00:35:35.320 would be a Nash equilibrium in this
00:35:37.280 particular game.
00:35:39.220 Interesting.
00:35:40.240 And I guess
00:35:41.340 you talk, highlight other
00:35:43.140 examples where
00:35:44.020 randomness would be the best
00:35:45.500 approach.
00:35:45.960 You talk about
00:35:46.840 in soccer,
00:35:48.760 in the penalty
00:35:49.480 kick situation,
00:35:51.020 where,
00:35:51.580 you know,
00:35:52.420 this kicker can either kick
00:35:53.880 right or left,
00:35:54.600 and the goalkeeper has to decide
00:35:56.820 which way he's going to go.
00:35:58.140 Because he doesn't have enough time
00:35:59.120 to see where the ball is going.
00:36:00.240 He has to, like,
00:36:00.720 make that decision as soon as the guy
00:36:02.120 kicks the ball.
00:36:04.340 So,
00:36:04.720 you are, I guess,
00:36:05.700 in the book,
00:36:06.020 you say that it's in the interest of
00:36:07.660 both the kicker and the goalkeeper
00:36:09.300 just to randomize which way they're
00:36:10.680 going to go.
00:36:12.040 Yeah, well,
00:36:12.580 if, in fact,
00:36:13.560 either side could predict
00:36:14.820 what the other side is going to do,
00:36:16.880 then they'd have a big strategic
00:36:18.060 advantage.
00:36:18.520 If I know
00:36:19.260 that you're always going to kick
00:36:20.840 to the right side,
00:36:22.040 then I'm going to want to
00:36:23.100 jump to the right side
00:36:24.220 to try and prevent the shot.
00:36:26.140 And, of course,
00:36:26.660 if you know that I'm jumping to the
00:36:27.760 right side,
00:36:28.080 then you're going to want to
00:36:28.780 kick to the left.
00:36:30.400 So,
00:36:31.660 each side is trying to
00:36:33.480 get a sense of what the other
00:36:35.600 one is doing,
00:36:36.280 and this is this notion
00:36:37.800 of being unpredictable.
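(A sketch of the penalty-kick arithmetic, with made-up scoring probabilities: each side randomizes just enough to leave the other indifferent between its options, which is the mixed-strategy equilibrium.)

```python
# score[kick][dive] = chance the shot goes in (all numbers invented).
aL, bL = 0.58, 0.95   # kick Left vs keeper diving Left / Right
aR, bR = 0.93, 0.70   # kick Right vs keeper diving Left / Right

# Keeper dives left with probability q chosen so the kicker scores
# equally often either way (otherwise one kick could be exploited):
#   aL*q + bL*(1-q) = aR*q + bR*(1-q)
q = (bR - bL) / (aL - bL - aR + bR)

# Kicker aims left with probability p that leaves the keeper with no
# better dive:  aL*p + aR*(1-p) = bL*p + bR*(1-p)
p = (bR - aR) / (aL - aR - bL + bR)

rate = aL * q + bL * (1 - q)   # the kicker's scoring chance either way
print(f"keeper dives left {q:.3f}, kicker aims left {p:.3f}, scores {rate:.3f}")
# keeper dives left 0.417, kicker aims left 0.383, scores 0.796
```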
00:36:40.420 Now,
00:36:40.880 what I find interesting
00:36:42.140 in the current presidential
00:36:44.560 election
00:36:45.020 is that
00:36:46.320 that discussion
00:36:47.480 of whether or not
00:36:48.300 you always want to
00:36:48.920 keep people guessing
00:36:50.320 is a strategy
00:36:52.000 that Donald Trump
00:36:52.740 talks about a lot.
00:36:54.420 And sometimes
00:36:55.400 it's the right strategy
00:36:56.420 and sometimes not.
00:36:58.340 So,
00:36:58.560 if it's the case
00:37:00.560 that I'm thinking about
00:37:01.420 doing a surprise
00:37:02.340 military attack,
00:37:03.840 yeah,
00:37:04.140 then I don't want people
00:37:05.040 to know
00:37:05.580 which day I'm going,
00:37:07.640 where,
00:37:08.000 am I going to land
00:37:08.760 at Dunkirk?
00:37:09.840 Am I going to land
00:37:10.620 in Normandy?
00:37:11.980 I don't want the enemy
00:37:13.560 to have a heads up
00:37:15.500 in terms of where
00:37:16.180 my troops are landing.
00:37:17.800 On the other hand,
00:37:19.600 if it's something like,
00:37:21.560 what will my response be
00:37:23.180 if Russia attacks
00:37:25.360 a NATO country
00:37:26.300 or if China
00:37:27.480 attacks Taiwan,
00:37:28.560 I want the other side
00:37:30.320 to know
00:37:30.820 without any randomness
00:37:32.320 that NATO
00:37:34.500 will respond.
00:37:36.200 And we don't want
00:37:37.520 people to be guessing
00:37:38.300 in that case.
00:37:39.320 We want to turn this
00:37:40.300 into a sequential move game
00:37:41.540 where they can imagine
00:37:43.240 and know
00:37:44.740 with great certainty
00:37:45.660 rather than have to guess
00:37:46.760 what the response is
00:37:48.180 because if they think
00:37:48.940 NATO is going to respond
00:37:49.940 or U.S. will respond
00:37:51.220 to attack of Taiwan,
00:37:52.700 then they won't go
00:37:53.700 and initiate the conflict.
00:37:54.880 So you have to strategize
00:37:57.140 about your strategy.
00:38:00.160 There's an English expression,
00:38:01.640 different horses
00:38:02.260 for different courses.
00:38:03.700 And that it's not the case
00:38:05.200 that playing a random strategy
00:38:06.560 is always a good idea.
00:38:08.340 Sometimes you want to keep
00:38:09.400 the other side guessing
00:38:10.140 what you're going to do.
00:38:11.260 And other times,
00:38:12.320 exactly the reverse.
00:38:13.980 You want to make sure
00:38:14.660 they don't guess,
00:38:15.920 that they know
00:38:16.660 and can anticipate
00:38:17.480 with precision
00:38:18.420 how you will respond
00:38:20.040 to any action.
00:38:20.800 Another example you gave
00:38:22.620 I thought was interesting
00:38:23.220 about randomness
00:38:24.080 being effective
00:38:24.940 is with parking tickets.
00:38:30.040 So you just kind of,
00:38:30.980 you randomly enforce
00:38:32.080 who gets a parking ticket
00:38:33.460 and then that kind of
00:38:34.580 keeps people on the lookout.
00:38:36.240 Like, I better not do that
00:38:37.160 because there's a chance
00:38:38.120 that I could get a ticket.
00:38:40.740 But if people knew
00:38:41.640 in advance,
00:38:42.160 like, okay,
00:38:42.600 if they broke,
00:38:43.240 if they were parking
00:38:44.080 when they shouldn't be parking,
00:38:45.240 they knew they'd get a ticket,
00:38:46.100 then they'd comply;
00:38:47.460 or if they knew there were times
00:38:48.660 when they wouldn't get a ticket,
00:38:49.540 they would just
00:38:50.000 break the law all the time.
00:38:52.900 Well, here's the thing.
00:38:54.620 If we said we're only,
00:38:56.320 if, think about
00:38:56.900 enforcing parking tickets
00:38:58.460 a lot like the IRS
00:38:59.440 enforcing tax returns.
00:39:01.660 And if we said
00:39:02.380 we're only going to audit
00:39:03.200 people this year
00:39:04.180 whose name begins with D,
00:39:07.420 then 25 out of the 26 letters
00:39:09.720 would say,
00:39:10.040 okay,
00:39:10.900 this is the year
00:39:11.420 I can get away with cheating.
00:39:13.940 And so,
00:39:15.160 instead you want to say,
00:39:16.960 look,
00:39:17.340 we don't have the resources
00:39:18.820 to audit every parked car
00:39:21.480 or every tax return.
00:39:23.440 But what we're going to do
00:39:24.600 is keep you guessing
00:39:26.340 as to which ones
00:39:27.660 we're going to look at.
00:39:28.940 And if we find
00:39:30.060 that you have made a mistake,
00:39:32.040 then we're not just going to ask you
00:39:33.300 to pay the cost
00:39:34.260 of the parking meter,
00:39:35.300 the 50 cents
00:39:36.440 that you would have put in.
00:39:37.600 We're actually going to stick you
00:39:38.600 with a $25 fine.
00:39:39.800 And if we find
00:39:42.080 that you cheated
00:39:42.560 on your tax returns,
00:39:43.480 we're not just going to ask you
00:39:44.680 for the amount of money
00:39:45.800 that you should have paid.
00:39:47.100 We're going to impose
00:39:47.740 severe penalties on you.
00:39:49.440 And so,
00:39:49.920 we reduce the probability
00:39:51.260 of getting caught,
00:39:53.080 but we increase the penalties
00:39:54.340 and we keep people guessing
00:39:56.920 by picking a random audit
00:39:59.340 or random enforcement strategy.
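(A back-of-the-envelope sketch of that trade-off; the 50-cent meter and $25 fine echo the conversation, while the audit probabilities are invented. Cheating stops paying once probability times penalty exceeds the money saved.)

```python
# Expected cost of skipping a parking meter under random enforcement.
meter_fee = 0.50    # what honest parking would have cost
fine      = 25.00   # penalty if an audit catches you

for audit_prob in (1.00, 0.25, 0.05, 0.01):
    expected_penalty = audit_prob * fine
    verdict = "deterred" if expected_penalty > meter_fee else "worth cheating"
    print(f"audit rate {audit_prob:4.0%}: expected penalty "
          f"${expected_penalty:5.2f} -> {verdict}")

# A 5% audit rate still deters ($1.25 expected > $0.50 saved), as long
# as drivers can't predict which cars get checked; at 1% the $25 fine
# is no longer enough.
```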
00:40:01.760 So, I mean,
00:40:01.960 how do you determine
00:40:02.640 when a random strategy is best
00:40:05.300 or when it's not best?
00:40:06.380 I mean,
00:40:06.540 how do you sit down and think?
00:40:08.160 You look at the situation,
00:40:10.140 you have to decide,
00:40:11.100 okay,
00:40:11.320 should I inject some randomness here
00:40:12.980 or maybe I should go
00:40:13.880 make it a sequential game?
00:40:15.060 How do you make that decision?
00:40:17.200 Well,
00:40:17.720 it is actually,
00:40:18.700 in the case of taxes,
00:40:19.860 a little bit of sequential game.
00:40:22.740 The issue is,
00:40:24.180 in terms of randomness,
00:40:25.080 do I care
00:40:26.180 if the other side
00:40:27.180 knows what I'm doing?
00:40:29.540 Okay?
00:40:30.060 So,
00:40:30.880 if I'm playing
00:40:31.740 to shoot to the right
00:40:32.760 and I announce
00:40:35.280 that ahead of time,
00:40:36.140 would that be a good
00:40:38.200 or a bad thing?
00:40:39.620 And in the case of soccer,
00:40:40.520 it would be a bad thing.
00:40:41.200 If I said it ahead of time,
00:40:42.900 who it is
00:40:43.440 that I was going to audit,
00:40:44.660 would that be a good
00:40:45.780 or a bad thing?
00:40:47.320 And the answer is,
00:40:49.060 it's a bad thing
00:40:49.940 because then I would be auditing
00:40:51.520 exactly the wrong people.
00:40:54.220 And so,
00:40:54.880 if I move
00:40:56.280 in a way
00:40:57.960 that I don't want
00:40:59.060 other people
00:40:59.800 to anticipate
00:41:00.980 what it is
00:41:01.780 that I'm doing,
00:41:02.720 to know what I'm doing,
00:41:03.740 that's typically the case
00:41:05.620 where a random strategy
00:41:06.920 is a good idea.
00:41:08.640 Okay.
00:41:09.620 That makes sense.
00:41:10.920 All right,
00:41:11.060 so we've been talking
00:41:11.720 about game theory
00:41:13.500 whenever other people
00:41:15.020 are involved.
00:41:15.980 But you have this
00:41:16.400 interesting section
00:41:17.160 in your book
00:41:17.620 about using game theory
00:41:18.920 against yourself
00:41:20.040 for self-improvement,
00:41:22.220 personal improvement.
00:41:23.660 So,
00:41:24.160 how can we
00:41:25.680 apply game theory
00:41:26.480 toward ourselves
00:41:27.880 when there's just
00:41:28.420 one of us?
00:41:29.920 Well,
00:41:30.280 the thing is,
00:41:30.740 there isn't just one of us.
00:41:31.740 So,
00:41:32.560 the game we're playing
00:41:33.480 is against this person
00:41:34.700 who we might call
00:41:35.440 our future self.
00:41:37.360 And we have all sorts
00:41:38.140 of aspirations
00:41:38.740 for that future person.
00:41:40.560 That person's going
00:41:41.060 to exercise more,
00:41:42.040 eat less,
00:41:43.720 not smoke,
00:41:45.200 be nicer.
00:41:47.460 And,
00:41:48.340 while we would like
00:41:49.280 this other person
00:41:50.080 to do all those things,
00:41:51.960 when we become
00:41:52.780 that person in the future,
00:41:54.100 we may decide,
00:41:55.000 you know,
00:41:55.380 I'm going to have
00:41:56.020 another cookie.
00:41:57.480 No,
00:41:58.020 I'm a little tired today.
00:41:59.300 No room to exercise.
00:42:00.940 I can delay
00:42:02.060 my quitting smoking
00:42:03.840 for another week.
00:42:06.800 And so,
00:42:07.300 what we want to do
00:42:08.520 is create a game
00:42:09.660 where we change
00:42:10.520 the payoffs
00:42:11.180 for this future person
00:42:13.480 while today
00:42:14.960 we have some ability
00:42:16.080 to do that.
00:42:17.760 And
00:42:18.060 that can be anything.
00:42:19.840 There's a gentleman
00:42:21.400 who I think
00:42:21.940 put signs up
00:42:22.820 all over town
00:42:23.500 which said,
00:42:24.660 if you find me
00:42:25.720 eating any pies
00:42:26.960 or desserts,
00:42:28.200 you can collect
00:42:29.260 a thousand dollars.
00:42:30.940 And so,
00:42:32.600 the idea is
00:42:33.720 that
00:42:34.100 now he's turned
00:42:36.400 all of his neighbors
00:42:37.640 into enforcers
00:42:39.740 for him
00:42:40.040 and while he might
00:42:40.660 like to have that pie,
00:42:42.440 now he knows
00:42:43.660 that it'll cost him
00:42:44.320 a thousand dollars
00:42:45.060 and that's not worth it.
00:42:47.500 And so,
00:42:48.140 two colleagues
00:42:48.920 of mine here
00:42:49.600 at Yale,
00:42:51.460 Dean Karlan
00:42:52.040 and Ian Ayres,
00:42:53.240 started a website
00:42:54.160 called
00:42:54.660 stick,
00:42:55.160 S-T-I-C-K-K
00:42:57.160 dot com
00:42:58.560 that's also run
00:43:00.040 by one of my
00:43:00.600 former students,
00:43:01.420 Jordan Goldberg
00:43:01.980 and
00:43:03.280 in this website
00:43:05.940 it allows you
00:43:07.260 to make commitments
00:43:08.100 against yourself
00:43:09.440 or contracts
00:43:10.320 against yourself
00:43:11.040 where you say,
00:43:12.620 if I do X
00:43:13.800 then I will have
00:43:15.240 to pay something
00:43:16.500 to other people
00:43:17.540 as a consequence
00:43:18.880 and therefore
00:43:20.200 I'm not going to want
00:43:21.420 to go ahead
00:43:22.120 and do it.
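(A toy sketch of what a stickK-style contract does to the payoffs; all the utility numbers are invented. Today's self can't control the future self directly, but it can rewrite the game that self will face.)

```python
# The future self weighs the pie on the spot; today's self can only
# change that calculation by attaching a stake in advance.
pie_pleasure    = 10      # future self's immediate enjoyment (toy units)
health_cost     = 4       # how heavily the future self weighs the downside
stake           = 1000    # dollars forfeited under the commitment contract
pain_per_dollar = 0.05    # utility lost per dollar paid out

def future_self_eats_pie(committed):
    net = pie_pleasure - health_cost
    if committed:
        net -= stake * pain_per_dollar   # the contract rewrites the payoff
    return net > 0

print("without contract:", future_self_eats_pie(False))   # True: eats the pie
print("with contract:   ", future_self_eats_pie(True))    # False: not worth $1,000
```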
00:43:23.100 I've used stick
00:43:25.460 in the past
00:43:26.320 for myself
00:43:26.920 when I had
00:43:28.060 writing deadlines
00:43:28.860 and I needed
00:43:30.280 to get it done.
00:43:31.340 What I think
00:43:31.560 is interesting
00:43:31.920 about stick
00:43:32.320 is that you can
00:43:32.880 set it up
00:43:33.360 to where
00:43:33.800 you can have
00:43:34.640 the money
00:43:35.000 go to
00:43:35.520 what they call
00:43:36.080 an anti-charity.
00:43:37.780 Exactly.
00:43:38.320 For an organization
00:43:38.960 because I mean
00:43:39.360 you could say,
00:43:39.840 okay,
00:43:40.160 donate the money
00:43:40.860 to the American
00:43:41.360 Heart Association
00:43:42.180 like you won't
00:43:43.240 feel that bad
00:43:44.160 if your $500...
00:43:45.540 I was going to do it
00:43:46.120 anyway, so that's not...
00:43:46.960 So I have to give it
00:43:47.680 to the John Birch Society.
00:43:48.780 Right, exactly.
00:43:49.800 It's some organization
00:43:50.600 you absolutely despise.
00:43:53.240 Detest.
00:43:54.100 Yeah.
00:43:54.580 Yeah, so that's
00:43:55.380 Game Theory in Action.
00:43:56.540 That was a result
00:43:57.500 of Game Theory,
00:43:58.120 that website.
00:43:58.940 Yeah.
00:44:00.240 Absolutely.
00:44:01.420 You can either
00:44:02.520 give a small amount
00:44:04.180 of money
00:44:04.520 to a charity
00:44:05.600 you despise
00:44:06.340 or a large amount
00:44:07.660 of money
00:44:08.060 much more
00:44:09.100 than you would have liked
00:44:10.020 to somebody
00:44:11.120 who you feel neutral about
00:44:12.680 or even like.
00:44:13.420 So yeah,
00:44:13.900 I may like
00:44:14.420 the American Cancer Society
00:44:15.420 but not enough
00:44:16.280 to give them
00:44:16.620 $100,000.
00:44:17.160 Right.
00:44:19.520 And so
00:44:20.080 you may say
00:44:21.020 giving $1,000
00:44:22.360 to the John Birch Society
00:44:23.580 and $100,000
00:44:24.640 to the American Cancer Society
00:44:26.080 sort of are both
00:44:27.160 equally painful to me
00:44:28.280 if you'd like.
00:44:29.260 Right.
00:44:30.440 That's interesting.
00:44:30.980 So yeah,
00:44:31.300 I've used it before
00:44:32.680 and I think we've talked
00:44:33.260 about it
00:44:33.640 on the podcast before,
00:44:34.580 and if any of you
00:44:35.700 have had trouble
00:44:37.320 with a goal,
00:44:38.100 it is extremely effective.
00:44:41.340 I've had home runs
00:44:42.540 with it.
00:44:42.760 I've never had a problem
00:44:43.600 having to pay up
00:44:45.380 my money
00:44:45.800 because I always get
00:44:46.760 the thing done.
00:44:47.060 In my own life
00:44:47.860 before stickK existed
00:44:50.180 I was teaching
00:44:51.900 a large class
00:44:52.820 and I showed up
00:44:53.920 the first day
00:44:54.380 with a scale
00:44:55.000 and I told my students
00:44:57.120 that if I didn't lose
00:44:58.360 15 pounds
00:44:59.060 during the semester
00:44:59.740 I would teach
00:45:00.240 my last class
00:45:00.940 in a Speedo.
00:45:03.940 And I
00:45:04.820 offered my students
00:45:08.400 the same ability
00:45:09.220 if they wanted:
00:45:09.840 they'd weigh themselves,
00:45:10.580 and if they didn't lose the weight, they would have
00:45:12.160 to come to the last class
00:45:13.260 in a Speedo.
00:45:14.260 And I think
00:45:16.700 about 15
00:45:17.240 took me up
00:45:17.740 on this
00:45:18.120 and when my dean
00:45:20.200 heard about it
00:45:20.880 he was none too pleased.
00:45:21.980 He said,
00:45:22.240 you know,
00:45:22.500 Barry,
00:45:23.040 you don't understand
00:45:24.120 being a professor
00:45:24.740 here at Yale
00:45:25.420 it's just not appropriate
00:45:27.320 it's just not done
00:45:28.500 I would be
00:45:30.580 very upset
00:45:31.400 if you taught
00:45:32.520 your last class
00:45:33.260 in a Speedo.
00:45:34.900 And my response
00:45:35.960 was,
00:45:36.620 yeah,
00:45:37.180 me too
00:45:37.640 and that's why
00:45:38.680 it's not going
00:45:39.160 to happen.
00:45:40.860 And of course
00:45:41.620 I lost the weight.
00:45:42.340 So
00:45:44.220 that's
00:45:45.840 the idea:
00:45:47.280 committing yourself
00:45:48.280 to do something
00:45:48.780 you really wouldn't like.
00:45:49.660 And so I
00:45:50.260 worked with
00:45:51.580 ABC Primetime
00:45:52.560 and did a TV show
00:45:54.900 where
00:45:55.180 we took photographs
00:45:56.860 of people
00:45:57.540 wearing Speedos
00:45:58.700 before they lost weight
00:46:00.860 and these were people
00:46:02.100 who really needed
00:46:02.980 to lose weight
00:46:03.520 and these photographs
00:46:04.300 of them in a Speedo
00:46:05.200 were none too attractive
00:46:06.880 and the deal was
00:46:08.580 if they didn't lose
00:46:09.460 15 pounds
00:46:10.300 over the next
00:46:10.920 six weeks
00:46:11.520 those photographs
00:46:13.740 would be posted
00:46:14.340 online
00:46:15.060 on TV
00:46:15.760 and
00:46:17.120 as one woman
00:46:18.260 said
00:46:18.820 you know
00:46:19.940 I know that
00:46:20.540 being obese
00:46:21.340 will lead to
00:46:22.480 heart attacks
00:46:23.380 diabetes
00:46:24.040 stroke
00:46:24.640 and death
00:46:25.200 but
00:46:26.360 that hasn't been
00:46:27.100 enough to get me
00:46:27.740 to lose weight
00:46:28.320 having my ex-boyfriend
00:46:30.520 see me hanging out
00:46:31.340 of a bikini
00:46:31.840 that's the motivation
00:46:33.480 I need
00:46:33.960 and let's be clear
00:46:35.760 we didn't trick people
00:46:36.720 into doing this
00:46:37.460 they voluntarily
00:46:38.760 signed up
00:46:39.540 and they were
00:46:40.020 happy to do it
00:46:40.920 because they knew
00:46:42.120 that doing this
00:46:42.940 would actually
00:46:43.800 provide them
00:46:44.660 the deadline
00:46:46.240 and the incentive
00:46:46.820 they needed
00:46:47.380 to get started
00:46:48.440 and to lose weight
00:46:49.300 but you talk about
00:46:51.420 in the book though
00:46:52.000 it lost its effectiveness
00:46:54.380 that game
00:46:55.160 because
00:46:56.000 someone didn't
00:46:57.460 lose the weight
00:46:58.080 and there was
00:46:58.880 a lawsuit
00:46:59.320 involved
00:46:59.700 you can't let
00:47:00.740 that image
00:47:01.420 be shown
00:47:02.600 and they didn't
00:47:03.560 not quite
00:47:04.340 one person
00:47:06.520 didn't lose
00:47:08.440 the weight
00:47:08.800 she actually
00:47:09.920 went down
00:47:10.300 several dress sizes
00:47:11.460 and I think
00:47:12.800 she gained some muscle
00:47:12.800 because she was
00:47:13.440 really working out
00:47:14.240 and so
00:47:14.580 really looked better
00:47:15.660 and the wimps
00:47:18.180 at ABC
00:47:20.160 decided
00:47:21.680 they didn't
00:47:22.360 want to show
00:47:23.220 the photographs
00:47:24.140 because they were
00:47:25.500 afraid of a lawsuit
00:47:26.380 and so
00:47:28.860 as a result
00:47:29.580 now when they
00:47:31.560 tried to redo
00:47:32.360 the series
00:47:32.960 people
00:47:34.980 would say
00:47:35.800 well
00:47:36.100 you know
00:47:36.480 okay
00:47:36.740 if I don't
00:47:37.120 lose the weight
00:47:37.640 all I have to do
00:47:38.800 is threaten
00:47:39.180 a lawsuit
00:47:39.620 and
00:47:41.340 I'll get off
00:47:43.260 the hook
00:47:43.580 and so
00:47:44.720 it's just not
00:47:45.280 the threat
00:47:46.060 no longer
00:47:46.540 really exists
00:47:47.220 Right. So what makes that work is the threat. That's one aspect of strategizing: using threats.
00:47:54.900 Well, it's a threat, or a promise in this particular case. I mean, it's a sense of, we reach this agreement, and if you lose the weight, we don't show it, and if you don't lose the weight, we will show it. And while, if you're going to do this only once, it's nice to let somebody off the hook, because it doesn't really matter, if you ever thought about doing it again, then once you let them off the hook, you have no credibility in this.
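To make the repeated-game logic concrete, here is a minimal sketch in Python. The payoff numbers (4 and 10) are illustrative assumptions, not figures from the episode: a participant diets only while the threat is believed, and one act of forgiveness flips both the belief and the behavior.

# A minimal sketch of the one-shot vs. repeated logic, with made-up
# payoff numbers (illustrative assumptions, not figures from the episode).

EFFORT_COST = 4    # disutility of six weeks of dieting
PHOTO_SHAME = 10   # disutility of the Speedo photo being aired

def best_response(threat_credible: bool) -> str:
    """A participant diets only if shirking hurts more than dieting."""
    shirk_pain = PHOTO_SHAME if threat_credible else 0
    return "diet" if shirk_pain > EFFORT_COST else "shirk"

# First run: no one has been let off the hook yet, so the threat is believed.
print("first season:", best_response(threat_credible=True))    # diet

# After one participant is let off the hook, everyone reasons "I can just
# threaten a lawsuit," so the threat is no longer believed.
print("second season:", best_response(threat_credible=False))  # shirk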
00:48:18.380 Well, that leads me to, I think, a nice segue to my next question, about using game theory as a parent with your children to influence good behavior. Because as you said, there's that tendency, if you're talking to your child about something and you let them off the hook once, they might get it in their head, well, if mom and dad did it this one time, well, maybe if I do the same thing again, they'll do it again.
00:48:45.020 So there's a great book on this called Parentonomics, by Joshua Gans, which applies game theory to child rearing. And you might decide the following is an evil, sinister version of parenting, but let's say that you've got two kids, and you know, as a parent, you really believe that you can't hit a kid, that corporal punishment just doesn't go anymore, and the kid knows it, and so is willing to take advantage of you. Well, so the parent says to the older kid, look, if you misbehave, I'm going to punish your sibling. And the punishment might be, whether it be going to bed early, or you can't go see a movie, or something. Well, I'll tell you, one sibling has no compunction about hitting or doing things to annoy the other sibling, much more so than any parent can do. And so if you want to make a credible threat to one kid that if they really misbehave, there's going to be a serious consequence to it, the idea of punishing the other one in response might be a much more effective threat.
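As a minimal sketch of that credibility logic (the follow-through rules below are illustrative assumptions, not anything from the episode), backward induction says a threat deters only if the kid, looking ahead, believes the consequence will actually land:

# A minimal sketch of the credibility logic via backward induction
# (the follow-through rules here are illustrative assumptions).

def parent_follows_through(misbehaved: bool) -> bool:
    # The parent has sworn off harsh punishment, and the kid knows it,
    # so the threatened consequence never actually lands.
    return False

def sibling_follows_through(misbehaved: bool) -> bool:
    # Punish the sibling instead: the sibling retaliates against the
    # misbehaving kid with no compunction, so the consequence is automatic.
    return misbehaved

def kid_chooses(consequence_lands) -> str:
    # Looking ahead, the kid misbehaves exactly when misbehaving
    # would go unpunished.
    return "behaves" if consequence_lands(misbehaved=True) else "misbehaves"

print("direct threat: kid", kid_chooses(parent_follows_through))    # misbehaves
print("sibling threat: kid", kid_chooses(sibling_follows_through))  # behaves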
00:50:04.660 Okay, that's interesting. I'm going to, I might try that on my five- and three-year-old. We'll see.
00:50:12.920 It's wildly unfair, and that's actually part of the whole point, is that it's unfair.
00:50:19.040 All right. Well, any other ways that parents can use game theory to, you know, influence good behavior? Like get your kid to read more, or say please or thank you. I mean, just anything.
00:50:30.500 Well, the one that, I don't know how much game theory you'll think this is, but this was my strategy with our kids: to give them an opportunity to have a list of three foods that they didn't have to eat. And they could change that list after any meal, but not before a meal. And so this led them to think about strategies, in the sense of, well, dad doesn't like Brussels sprouts, so even if I don't like Brussels sprouts, I probably don't have to put them on the list, because they're not going to be served very often. And then it's, how intensely do I dislike this, and how often is something going to be served? And so they have control, in the sense that if they really don't like something, they can put it on the list, but they have to choose which items, they can't have an infinite list. And that seemed to eliminate most fights that we had over food.
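The calculation the kids are implicitly doing has a neat algorithmic form. Here is a minimal sketch (the foods and numbers are made up for illustration): with only three veto slots, you ban the items with the highest expected pain, that is, dislike intensity times how often the dish shows up.

# A minimal sketch of the kids' veto calculation (foods and numbers
# are made up for illustration): with three slots, veto the items with
# the highest expected pain = dislike intensity x serving frequency.

foods = {
    # name: (dislike intensity 0-10, expected servings per month)
    "brussels sprouts": (9, 0.5),  # dad hates them too, so rarely served
    "liver":            (10, 1),
    "broccoli":         (6, 6),
    "fish sticks":      (5, 4),
    "lima beans":       (7, 2),
}

def veto_list(foods: dict, slots: int = 3) -> list[str]:
    """Pick the `slots` foods with the highest dislike-times-frequency."""
    ranked = sorted(foods, key=lambda name: foods[name][0] * foods[name][1],
                    reverse=True)
    return ranked[:slots]

print(veto_list(foods))  # ['broccoli', 'fish sticks', 'lima beans']
# Brussels sprouts miss the cut (9 x 0.5 = 4.5): intensely disliked,
# but dad's own tastes keep them off the table anyway.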
00:51:30.220 Okay, that's interesting too. I'm going to try that one out too with all my kiddos. Well, Barry, this has been a really interesting conversation. Where can people learn more about your book and your work?
00:51:39.440 If they go to barrynailbuff.com, there's a whole list of books that I've co-authored, on everything from innovation, to competing and cooperating, to problem solving, to business strategy, to startups. My most recent book is done in graphic form, telling the start-up story of Honest Tea. And then I've just done a free online course on negotiation, which takes game theory and applies it to how you can become a better negotiator. And that's at Coursera, C-O-U-R-S-E-R-A dot org. And if you want the whole link, it's coursera.org/learn/negotiation.
00:52:32.760 Perfect. Well, Barry Nailbuff, thank you so much for your time. It's been a pleasure.
00:52:37.700 Thank you for having me.
00:52:40.340 My guest today was Barry Nailbuff. He's the author of the book The Art of Strategy. It's available on amazon.com and bookstores everywhere. Also check out his website, barrynailbuff.com, for more information about his work.
00:52:57.080 Well, that wraps up another edition of the Art of Manliness podcast. For more manly tips and advice, make sure to check out the Art of Manliness website at artofmanliness.com. Our show is edited by Creative Audio Lab here in Tulsa, Oklahoma. If you have any audio editing needs or any audio production needs, check them out at creativeaudiolab.com. And we appreciate your reviews and your support. And until next time, this is Brett McKay, telling you to stay manly.