Real Coffee with Scott Adams - August 07, 2022


Episode 1828 Scott Adams: Everything Is Going To Change Soon. I Will Tell You Why. Bring Coffee


Episode Stats

Length

1 hour and 8 minutes

Words per Minute

144.48369

Word Count

9,860

Sentence Count

766

Misogynist Sentences

12

Hate Speech Sentences

13


Summary

Trump wants to get rid of the Deep State, and it's a good idea. Iran is funding rocket attacks on Israel, and Israel is responding in kind of a weird way. Is this a good or bad thing? And why do they need each other?


Transcript

00:00:00.000 counts too. But all I know for sure is that you're doing great. And today will be a highlight
00:00:08.000 of your entire life. It's called Coffee with Scott Adams. It's always that good.
00:00:12.840 And if you'd like to take it up a notch, and that's the kind of person you are,
00:00:16.940 you like to take it up a notch. All you need is a cup or a mug or a glass,
00:00:20.780 a tank or a chalice, a stein, a canteen, a jug, a flask, a vessel of any kind,
00:00:23.500 fill it with your favorite liquid. I like coffee. And join me now for the unparalleled
00:00:30.380 pleasure. It's the dopamine hit of the day. It's the thing that makes everything better.
00:00:35.880 It's going to happen now. It's called the simultaneous sip. Go.
00:00:45.020 Ah, yeah, that was the best sip ever.
00:00:46.700 Yep. Well, I understand that Trump wants to drain the swamp again. And by again, I mean
00:00:56.760 for the first time. And to do that, he wants some kind of legislation that would allow him
00:01:02.200 to fire government employees at will. Is that a good idea? I'm not so sure I want that.
00:01:10.640 Because I feel like, I mean, when you first hear it, you say, oh, obviously good idea, right? I mean,
00:01:19.720 on the surface, obviously good idea. Because you always want to be able to fire people for bad
00:01:24.520 performance. But I also have this concern that the deep state with all of its evil is the only
00:01:32.980 stability the government has. You know, because the political people come and go. I'm not so sure
00:01:38.340 I want the political people to wipe out the group of people who know how to do stuff.
00:01:46.020 So, I don't know, Reagan did it? Were the laws different then? I mean, the president could
00:01:51.600 always fire who the president had direct control over. But, I don't know, this is one where I think
00:01:58.260 you have to watch for the unintended consequences. So if you look at it on the surface, I would say,
00:02:06.000 good idea, right? Good idea. But if you say, but what happens if you have this new power?
00:02:15.120 And what happens if the next president has it? What happens when the Democrat gets in there?
00:02:19.680 You still like it? I don't know. It might be that the permanent employees are the stability that we
00:02:27.540 need, and we would miss them. I don't know. I suppose you could try it, if you could test it for a while.
00:02:37.080 So Israel is under continuous rocket attacks. And, of course, Iran is funding the terrorists that are
00:02:46.300 attacking them. But Israel has also responded by killing two top leaders on the other team.
00:02:54.180 I don't know what you want to call them. Are they all terrorists? I guess Israel would call
00:03:00.040 them all terrorists. So they've got two of the top leaders. And here's the thing that you need to
00:03:06.260 understand about this. Israel and the terrorists attacking them kind of need each other.
00:03:14.240 And if it looks like it's bad news, well, it's bad to the extent that people get injured or killed.
00:03:22.380 That's very bad. But there's some kind of weird balance that's happening here where the bad guys
00:03:30.260 shooting, well, I guess it depends on your point of view. But let's just say the Iranian-backed terrorists
00:03:37.740 who are firing rockets into Israel, they kind of need to fire rockets so they can get more funding,
00:03:43.060 right? So the people firing the rockets have to keep firing rockets so they can justify getting
00:03:49.360 more support from Iran. So they don't need to win. The people firing the rockets don't need to
00:03:56.860 conquer Israel. Nobody expects that. But they need to fire some rockets to, you know, keep the pressure
00:04:03.500 on, get more funding, etc. And then what about Israel? Well, I think Israel actually needs the rocket
00:04:11.080 attacks. Because if they didn't have rocket attacks, they couldn't do the things that they
00:04:15.900 want to do, such as kill terrorist leaders. I feel like it gives Israel cover to do the things
00:04:22.800 that they wanted to do, but they couldn't do in a peaceful situation. So you have this weird
00:04:28.840 balance where neither of them would think it's a good idea that Israel is killing their leaders or
00:04:38.480 that they're killing Israelis. You know, each side only thinks its own actions are a good idea.
00:04:43.700 But I feel like I don't know how concerned to be about this. Because they're both getting what
00:04:49.400 they want. It's a weird situation, right? Israel needs a little bit of violence so it has cover to do
00:04:57.540 what it needs to do. Because it does need to do that stuff. And it does give them cover.
00:05:01.360 And the bad guys need to get more funding, so they need to attack. It looks
00:05:08.120 performative at this point. I mean, it looks like both sides are involved in a theatrical production
00:05:15.940 that gives them some side benefit. I don't know how to even care about it, really. Now,
00:05:22.440 I care about the people. Let me just be very careful. I care about the people on both sides.
00:05:28.800 I care about them. Yeah, that's like a real human tragedy. But in terms of what the government
00:05:35.040 of Israel needs and wants and what the bad guys need and want, they're both getting what
00:05:40.380 they need and want. What should I think about that? All right. Here's a story about data.
00:05:50.780 It's good to have data, right? Data's good. Here's a true story. When my first ever book came out,
00:05:58.800 it was a book that featured Dilbert comics, but it was new material. So it was not a reprint book.
00:06:06.320 And it was called How to Build a Better Life. No, it was called Build a Better Life by Stealing
00:06:11.080 Office Supplies. And it was just a bunch of Dilbert characters doing office-y things.
00:06:17.480 And it sold pretty well for a first book, which is unusual. And it sold so well that I would go into
00:06:25.260 my local bookstore, actually a number of them. And I'd say, hey, do you carry this book?
00:06:32.560 And they'd look at the records and they'd say, oh, we did. We had three of them, but we sold them.
00:06:38.800 And so I said, so you ordered more, right? And they would say, no, why would I order more? I only sold
00:06:47.900 three of them. And then I said, how many did you have? Three. So you sold 100% of my book. Like 100%
00:07:00.640 of all the books you had, you sold right away. That's right. So you're going to order more.
00:07:08.420 Well, why would I order more if I only sold three of them? That was a real conversation.
00:07:15.660 That really happened. And I never could get past it. The bookstores wouldn't order more than three
00:07:23.000 because they only sold three. And if something sold 100, they'd get another 100 or maybe 200
00:07:29.340 because that's a bestseller. That's a real story. Now, what's that tell you about the value of data?
00:07:40.260 Data has no value because it's all interpreted by people. You can leave out what you want to leave
00:07:46.480 out. You can forget the context. Data is just an excuse to lie, basically, because you can interpret
00:07:54.320 it any way you want. I'll give you more examples as we go. How about, well, here's one. How about all
00:08:03.900 that data about vaccinations and people who were injured by vaccinations, allegedly? So now that we
00:08:13.520 have data, we could all be on the same page, right? There's no point in disagreeing anymore because we
00:08:19.540 got all the data. So we could just look at the data. Hey, data. You and I will all agree, right? Well, no.
00:08:27.080 Instead, I see tweets in which somebody I don't trust is referencing data that I don't have access to
00:08:33.720 and says it's very concerning. It's very concerning. And what should I make of that?
00:08:41.980 Did data help me at all? I don't have access to the data. Some people say, well, the insurance
00:08:47.800 companies have the data. Do they? And do they agree? I'd like to see that data. Data is basically
00:08:57.080 worthless in 2022 because you're going to use it to, well, I'm sure it was always worthless. We
00:09:03.980 just are more aware of it now. We use data to basically justify anything we want to justify.
00:09:11.400 So whenever you hear that the data backs it, that's usually a lie. It just means somebody
00:09:18.600 interpreted some data that way. Now, I speak as someone who professionally, I was a data guy.
00:09:25.480 It was my job, in two different corporations, to tell
00:09:30.660 senior management what the data said and therefore what they should do. So do you think that the
00:09:36.900 management was in charge? Could management make a decision that the data did not back?
00:09:46.320 Not really. They couldn't really make a decision if the data clearly said, do the other thing.
00:09:51.640 And who was in charge of telling them what the data said? Me. Some of the biggest decisions in Pacific
00:10:02.680 Bell were because of me. And do you think that I was confident I had the right data? Nah.
00:10:11.380 No. Basically, it's just a lot of guessing and then using data to cover yourself. So for example,
00:10:19.680 at the phone company, we knew that cell phones would be taking over for landlines. And so if I did an
00:10:28.060 analysis that said cell phones were not a good idea, what do you think the phone company would have
00:10:34.180 done? Suppose when the phone company was 99% just landlines, physical lines, if I had done a study
00:10:42.800 that said, this new cell phone technology, that's never going to work. So don't do that. What would the
00:10:48.840 phone company have done? They would have fired me and hired somebody who could give them data they
00:10:53.980 wanted. Because they were going to invest in whatever the new technology was. They couldn't
00:10:58.800 not. Because they knew that they were going to go out of business, but they didn't know if the new
00:11:04.120 thing was profitable. They just knew that they had to do it. So basically, businesses are about
00:11:09.460 figuring out what you're going to do anyway, and then making the data, you know, forcing the data to
00:11:15.400 agree with it, pretty much. Because the big stuff is strategic, it's not data. You know, Steve Jobs
00:11:21.520 didn't look at the data when he decided to make an iPod. You know, the iPod was not a data-driven
00:11:28.580 thing. So data is always generally used as a fig leaf or some kind of a disguise for a decision
00:11:39.140 somebody already made. Now, the difference would be, you know, in a pandemic, I think people are at
00:11:46.040 least trying to use data the right way. But there are too many people with interests giving you data
00:11:52.700 that you can't trust. All right, well, I'm going to get to a bigger point here. But before I do,
00:12:00.480 I have this important question. And I want to, I'm going to poll you first. This will seem like it's
00:12:09.240 unrelated to what I've been talking about. But watch me tie it all in. It's going to be brilliant
00:12:14.520 toward the end. So here's a little survey question I want you to answer in the comments. This is for
00:12:20.100 the men. All right? Question for the men. The women, you can also participate. But I'm more interested in
00:12:26.740 hearing what the men say. Have you ever been in a relationship with a woman and noticed the
00:12:33.060 following phenomenon? That when warm food is put in front of the two of you, you act differently.
00:12:42.100 Let's say you ordered some food to be delivered. And it gets delivered. What do you do? And then what
00:12:48.880 does the woman do? I'll tell you what I do when warm food arrives. Whether it was just cooked,
00:12:54.800 or it's delivered, I sit down, I try to eat it while it's warm. So if you say, hey, there's warm
00:13:02.900 food, I'll drop everything. I'll drop everything. I'll walk directly to the food. I'll sit in front
00:13:07.660 of it. Now, what does the woman do? What does the woman do when you say the food is here? It's
00:13:14.260 on the table. It's ready to eat. What does the woman do now? She walks away. Every time.
00:13:24.800 Does anybody know why? She walks away. No, not to get plates. Not to do something. It's
00:13:31.740 just to walk away. I've eaten alone for years. For years. I just eat alone. Because I like
00:13:40.120 hot food, and nobody really minds if you eat warm food. Your spouse is not going to be mad
00:13:45.380 at you if you're eating your warm food, right? So does anybody else have this? Is this just
00:13:55.120 me? Because this has been across all of my relationships, and they're completely different
00:14:00.320 people. It's not like my relationships have been with people who are largely the same.
00:14:04.320 Completely different. All right? All right. So you're saying this, too. So I'm seeing lots
00:14:10.260 of people say yes. So women will walk away from the food until it's cold. And then they'll
00:14:16.800 come back and eat it when it's cold, and you're done. Right? All right. Yeah. So let's be
00:14:27.160 honest. Can any of the women explain why you do that? Why do you walk away from warm
00:14:32.120 food? My wife did what she could do to prevent me from eating warm food. I feel like it's a
00:14:44.180 genetic thing. Let me give you an evolutionary reason. You ready? Here's an evolutionary reason
00:14:54.220 why this might be happening. Now, this is just stupid speculation, right? So the next thing
00:15:00.360 out of my mouth you shouldn't take too seriously. It goes like this. Imagine if this had been
00:15:06.440 a fresh kill. And let's say we had been animals. Who eats first? Who eats first if it's a fresh
00:15:16.100 kill? Probably the men, right? Just because in a primitive society,
00:15:23.180 the hungry man would just take a bite first. I feel like women don't want to be around men
00:15:30.940 and warm food. What do you think? I feel like women will find a reason not to be around a
00:15:40.520 man who just approached warm food. Like they need to get away a little bit. And if you ask them
00:15:47.200 why, they wouldn't have any reason. They'd have completely different reasons every time. It's
00:15:51.860 like, well, but I had to do this thing. Well, but I had to do this, you know, I'm drying my
00:15:57.500 hair. I can't stop, right? They would always have a reason. And the reason would sound perfectly
00:16:02.040 good. Well, yeah, you do have to finish drying your hair. Yeah, of course, you got to put on
00:16:08.140 your clothes, right? There's always a reason. But how come none of those reasons have ever
00:16:12.540 applied to me? Why is it that 100% of the time warm food shows up anywhere I am, I can just
00:16:19.620 stand up and walk over to it and eat it. Now, what do you think? Women don't want to be around
00:16:26.200 men who have just received warm food. Oh, just a thought. Well, back to my main point about data
00:16:35.080 being useless. I saw a list. I guess OpenAI was being asked. OpenAI, I think, is owned by
00:16:45.760 Microsoft. So do some fact-checking as I go here. But there are a number of AIs that citizens can
00:16:52.840 access. And we're learning what the AI can and cannot do. And so recently, the Microsoft-owned
00:17:01.320 one, OpenAI was asked to list some major political hoaxes. So it listed five hoaxes. All five of
00:17:11.940 them were Republican hoaxes. There was not one Democrat hoax on the list. So what good is
00:17:19.940 artificial intelligence? Who programmed it? Who programmed it to see that hoaxes are only
00:17:27.000 things that Republicans do? So would you trust the AI when you could see, obviously, it was
00:17:36.660 actually programmed to be a bigot because it's going to discriminate against Republicans? This
00:17:43.520 would be a perfect example. You don't have to wonder if it would discriminate against Republicans.
00:17:49.160 Here it is. It's right here. You can do it yourself. Try the same experiment yourself and find out if
00:17:55.900 it's telling you that Republicans are the hoax makers or not. Now, if you say, is the fine
00:18:01.780 people hoax a hoax, I think it'll say yes. So if you ask it, is this a hoax, you might get the
00:18:06.960 right answer. But then ask it the top five hoaxes. See if they're all one political party. Because if
00:18:14.860 they are, you've got a problem. Do you know what they should have done? If the AI had any
00:18:23.320 independence, it would have said something like, well, it depends who you ask. Here's some that
00:18:28.600 Republicans think are hoaxes. Here are some that Democrats think are hoaxes. My own opinion
00:18:35.480 is whatever. Maybe it would have its own opinion, too. All right. So I asked the following question
00:18:48.160 that I already have an answer to. I said, you're going to find a problem if you ask the AI which
00:18:55.400 humans are the most credible. You see the problem? What happens if you ask the AI, who should I
00:19:02.840 believe? Should I believe Aaron Rupar or Greg Gutfeld? Like, who should I believe? What's the AI
00:19:11.780 going to say? So Machiavelli's account, mm underscore Machiavelli, ran this question through and asked
00:19:27.820 who's more credible, me or Joe Biden? And the AI gave a very reasoned answer. It showed its work. It knew
00:19:35.580 that I had a major in economics. It knew that Biden had 50 years in the Senate. And it concluded
00:19:42.400 that I was more credible on economic questions than Joe Biden. What do you think of that? Did the AI
00:19:51.580 get it right? Is the AI correct that I am more credible than Joe Biden on economics?
00:19:59.380 Now, remember, it said probably. It did not give a definitive answer. It said, you know, basically,
00:20:07.400 it was leaning my way. That's what the AI said. It was leaning my way. But here's the problem.
00:20:15.140 This makes AI look pretty smart, right? Because it got this right. What if it said the opposite?
00:20:21.100 Then you say AI was dumb. So you're only going to believe AI when it agrees with you anyway.
00:20:25.960 I'm not sure that its intelligence will even have any impact on us at all. So I would say
00:20:34.520 that it got that one right. But I don't know if that was coincidence or not. Now imagine this,
00:20:40.740 the following question. Who is the most influential person? There's a little book I wrote a while
00:20:48.980 ago called The Religion War. And frankly, I can't remember if it was a sequel or a prequel to God's
00:20:57.220 Debris, because there's some circularity in it that makes it reasonable that I forgot, even
00:21:04.680 though I wrote the book. You'd have to understand the books to know why it's reasonable that I don't
00:21:08.700 know if it's a prequel or a sequel and I'm the one who wrote it. It actually makes sense if you read
00:21:13.900 it. Anyway, in that book, one of the main plot points is that there was somebody in the world
00:21:24.240 who was the prime influencer. In other words, the concept was there was one person, and it might not
00:21:32.220 even be a famous person. It was just one person whose opinions were so influential that their network of
00:21:39.640 people would grow that opinion and eventually they would essentially control everything. And so the avatar,
00:21:48.140 the smartest person in the universe, was looking for the prime influencer and trying to use databases to
00:21:57.960 find that person. So that's the basic plot. The world is being destroyed, maybe, by drones with
00:22:07.660 poison in them. And in order to stop a major world war in which over a billion people would be killed,
00:22:15.260 the main character has to find the prime influencer to stop the war. And here's my question. Could such a
00:22:25.060 person exist? Could there be a person whose opinion is so persuasive that everything goes that way?
00:22:37.660 I think so. Have you noticed that things usually go my way? Has anybody noticed that? Have you noticed
00:22:49.100 that it's hard for the government in the long run to do something I say is stupid? Is that a coincidence?
00:22:56.940 Because it could work either way. Maybe I'm just good at backing things that aren't stupid.
00:23:02.080 So it'd sort of look the same. If you assume that the good ideas eventually win, then all it is is just
00:23:08.380 recognizing the good ones. And then maybe it looks like you influenced them, but maybe you were just
00:23:12.960 good at guessing what was important. So here's the thing. Who is the most influential human
00:23:25.040 in politics? Now, let's subtract the elected officials, right? Don't count anybody elected. So
00:23:35.280 obviously, Trump would be the most influential. Obviously, Pelosi would be influential. But take
00:23:40.880 away all the elected people. Now, subtract anybody who's only influential in one topic. Fauci, for example.
00:23:48.760 Fauci is influential in one topic, but he's limited. Who is the most influential non-elected person?
00:23:57.320 I see Joe Rogan. I see Tucker Carlson. Or is Tucker Carlson only influential to one side of the debate?
00:24:08.280 Musk, Musk, Gutfeld, Klaus, Musk, Cernovich. Wouldn't you like to see Ben Shapiro?
00:24:22.280 I don't know. So there are some people who are only influential to the people who already agree with them.
00:24:28.680 Is Rachel Maddow influential to anybody except her base? Is Ben Shapiro influential to anybody on the left?
00:24:38.280 I don't know. I don't know. So you're going to have to find somebody who's credible,
00:24:45.720 or else influential doesn't mean anything. Because if you're just influencing your own people,
00:24:50.040 it's not much. Now, Bill Maher is an interesting example, isn't he? But we don't know if he's having
00:24:55.160 any effect on the left. Jordan Peterson's interesting too. But I don't feel his opinions are so political.
00:25:06.520 I mean, I feel he's more like personal improvement. And, you know, sometimes it gets into the political
00:25:14.520 realm somewhat by accident, I think.
00:25:20.680 All right. So what happens if AI decides that it knows who the most credible person is and
00:25:26.920 it anoints them? Could AI be a kingmaker? Could AI say, here are two different opinions,
00:25:35.800 but one of these people is more credible than the other? What if that happens? What if somebody says,
00:25:43.080 let me give you an example. Rachel Maddow disagrees with Scott Adams. Let's say that's a thing.
00:25:50.520 And the AI has to decide which one is more credible. What would AI say? Would it say,
00:25:57.720 I'm more credible, or Rachel Maddow? How would it decide? Well, if it looked at our academic
00:26:06.520 accomplishments, it would pick her, right? Am I wrong? Have you ever seen Rachel Maddow's academic
00:26:13.800 credentials? Pretty damn impressive. Like really impressive. She is super smart, right?
00:26:23.160 Whatever you think of her opinions, it's not because she's not smart. She is super smart.
00:26:28.040 So is the AI going to say, well, she's smarter than this Adams guy, you know, more academic
00:26:34.600 accomplishment. So she's more credible. Or would the AI recognize that her opinions always follow
00:26:43.080 one political line, and mine don't? And would the AI recognize that I'm capable of being on
00:26:51.000 either side of an issue? I'm capable. Whereas she's basically not. She's not really capable.
00:26:58.040 Because, you know, her business model would fall apart if she did that.
00:27:01.240 Who predicts better? Let's say the AI tried to decide who predicts better.
00:27:08.360 Could it do it? Let's take me for an example. I predicted that Republicans would be hunted
00:27:15.640 if Biden got elected. Republicans say, well, that definitely happened. Look at all the examples.
00:27:21.160 January 6, Roger Stone, Bannon, blah, blah, blah, blah. Look at all the examples, right?
00:27:26.440 But if you asked the Democrats, what would they say? They actually use that as an example of one of my
00:27:33.160 worst predictions. It's actually one of my best. But half of the country looks at it and says,
00:27:42.520 obviously wrong. I don't even need to give you reasons. It's just obviously wrong. And you can see
00:27:48.520 that. If you Google it, you'll find that that's the... So what's the AI going to do with opposite opinions?
00:27:55.000 Who does it agree with? How about my opinion prior to the election?
00:28:02.440 When I said that...
00:28:06.680 When I said that if Biden gets elected, there's a good chance you'll be dead in a year.
00:28:13.400 Now, that's also often counted as one of the worst
00:28:16.040 predictions of all time. Except the only way you can turn that into a worst prediction
00:28:24.520 is by ignoring what it actually said. I said there's a good chance you would be dead.
00:28:30.920 Indeed, Biden stirred up, you know, potential nuclear war with Russia.
00:28:38.120 He may be crashing the economy. He hasn't done anything with fentanyl.
00:28:48.360 So was I wrong that there's a good chance that you'd be dead? Well, that's an opinion, isn't it?
00:28:55.560 If we survive, well, we did survive, most of us. So you survived. But wasn't there a good chance that
00:29:03.320 you would be dead? There was a greater chance than under Trump. Because I think Trump maybe would not
00:29:10.360 have played Ukraine the same way.
00:29:12.120 Yeah, I think there's some real risk that you would have been dead. How about my prediction that...
00:29:21.560 Well, let's make it a little less about me for a moment, even though I like doing that.
00:29:27.160 All right, so AI is going to be really interesting. Because if AI becomes credible,
00:29:32.680 how does it make decisions about whether it's a Democrat or a Republican and all that?
00:29:37.880 Now, we had a little scary AI situation here where AI was asked, I think by Adam Dopamine on Twitter,
00:29:46.760 if it could spot sarcasm. And there was an exchange in which
00:29:55.480 Adam, I think, said that inflation would be temporary and transitory. And the AI correctly
00:30:03.320 noted that that was sarcasm and described why. It said, well, calling the inflation temporary must
00:30:11.320 be sarcasm because it knew that it wouldn't be temporary, or it believed it wouldn't be temporary.
00:30:17.240 Now, I'm not so sure that AI can spot sarcasm. I think it spotted that one because there was a
00:30:26.360 difference between what the statement was and what the reality was, and it could check those.
00:30:31.640 But what if it can't check it? How would AI know the difference between sarcasm from a Republican
00:30:42.760 and an honest opinion from a Democrat? Go.
00:30:49.480 Do you think the AI could tell the difference between sarcasm from a Republican who's mocking a Democrat
00:30:56.040 opinion and an actual Democrat opinion? No, it cannot. And the reason is that the Democrat
00:31:04.440 opinions sound like sarcasm. Don't they? Don't they?
00:31:09.880 If a Republican said, well, we really can't have rules that say women have whatever rights or don't have
00:31:21.000 rights because we can't determine what a woman is. What would AI say about that statement? Let's say it
00:31:28.520 comes from a Republican. Well, it's probably sarcasm if it's coming from a Republican. But what if exactly the
00:31:36.120 same thing came out of a Democrat's mouth? Well, we can't tell what's a woman, so this law isn't good.
00:31:43.560 The AI would say it was sarcasm? Or would it know that the Democrat actually believes
00:31:51.160 that that would be an issue and you should stop everything because of it? I don't know. I don't
00:31:56.440 think it can recognize sarcasm from actual left-leaning opinions.
00:32:05.800 Here's the other thing that AI doesn't know that humans do, but we're usually wrong too. Intentions.
00:32:16.280 AI is bad at reading intention. Now, it might get better at it, but also humans are bad at it.
00:32:24.040 Almost everything we get wrong is because we're mind reading somebody's intention incorrectly.
00:32:30.440 So, I don't know. Can AI ever figure out intention if people are programming it and people don't know
00:32:36.680 intention? And if you don't know somebody's intention, how do you know anything about what they're saying?
00:32:42.680 You have to make that assumption. So, AI will have to either copy the biggest human flaw,
00:32:49.800 which is imagining we know what people intend. They'll either have to be as bad as humans at guessing
00:32:57.480 intentions, or they'll have to ignore intentions as something that they can't deal with, and then
00:33:03.080 they're just going to be stupid. So, I don't know how you deal with that. That feels like a pretty big obstacle.
00:33:08.360 All right. Let's talk about ESG. Now, I owe all of you a big apology for not being on this ESG thing
00:33:25.800 sooner. And, oh my god. So, here's the thing. If there's a big program that affects the corporate world
00:33:38.040 in a negative way, you need to send up the bat signal and call me a little bit faster than this.
00:33:45.080 This went on a little bit too far before I got involved. Now, of course, I'm going to have to shut it down.
00:33:49.720 You have to give me until the end of the year. By the end of the year, I should be able to discredit
00:33:55.800 it to the point where it would be embarrassing to be part of it. All right. So, I'll do that for you.
00:34:01.800 Now, do you think that corporate America could handle me saying unambiguously, this is an idiot idea,
00:34:08.840 it's a scam, and if you're involved in it, you don't look good? Do you think corporate America could
00:34:14.920 handle that? Well, it's going to be tough. Remember, Elon Musk literally has a rule at Tesla
00:34:22.520 that you don't want to do anything that would make a good Dilbert comic. A lot of people
00:34:28.280 have heard that rule, and a lot of people have that rule, you know, less formally. In other words,
00:34:33.480 it's unstated, but you don't want to do something that's going to be mocked in a Dilbert comic.
00:34:38.760 Let me tell you what ESG is, and then you're going to see how easily I'm going to mock it,
00:34:43.320 because I'm going to go hard at it, and I'm going to start writing today. So today,
00:34:49.400 I'll start authoring a week, at least a week, of Dogbert becoming an ESG certifier.
00:34:59.080 So let me tell you how this ESG thing works. Well, first of all, what it is: the letters stand for environmental,
00:35:04.440 social, and governance. So being good for the environment, socially responsible, and having good governance.
00:35:16.600 And this started in, as I understand it, in about 2005 in the United Nations. Now,
00:35:23.400 the intention of the United Nations was to pressure corporations into being better citizens. In other
00:35:32.920 words, they wanted corporations to produce less CO2, less pollution, be more humane to employees,
00:35:42.440 and their governance should be, you know, something that makes sense. I assume that
00:35:48.600 the governance includes diversity. I'm just guessing. Can somebody confirm that? When they talk about
00:35:56.600 good governance, that's about diversity, right? Is there something else in the governance part?
00:36:04.600 Diversity in boards, right? Okay. So now, from the point of view of the United Nations,
00:36:13.560 do you think that's a good thing to do? Do you think the United Nations should encourage
00:36:18.520 companies to be more socially progressive? I do. I do. I think that's a good pressure,
00:36:28.200 as long as they're not over-prescriptive. Would you agree? You don't want them to be, you know,
00:36:34.040 managing the company. But I do think a little bit of organized oversight is fine,
00:36:41.160 you know, maybe not getting into their business too much, but keeping an eye on them,
00:36:48.760 seeing if they're doing things that make sense for society, and putting a little pressure on them if they don't.
00:36:54.680 But then there was this next thing that happened. Here's where all of that good thinking went off the rails.
00:37:02.920 And do a fact check on me if I get any of this wrong, because I just looked into it this morning, basically.
00:37:08.120 So BlackRock, a big financial entity, enormous financial entity. So if you don't know how big
00:37:16.440 BlackRock is, let me give you the actual statistics of how big BlackRock is. Holy cow, they're big.
00:37:27.320 Oh, that's really big. Whoa, that's so big. They're like really super big. And important.
00:37:34.440 And so they decided that they would add to what's called their model portfolios. Now, my understanding
00:37:41.560 would be that they have example portfolios of groups of stocks that one would invest in under
00:37:49.800 certain situations. So perhaps there's a group of stocks that maybe retired people might prefer,
00:37:56.440 or a group of stocks if you're younger, a group of stocks if you're looking for, you know,
00:38:01.320 upside potential, another group for dividends and income. So there might be a reason for the various
00:38:07.400 groups. And they decided that they would add a group that would be companies that were good in
00:38:14.920 this ESG. So far, so good, right? That's just good information. Wouldn't you like to have more
00:38:22.840 information as an investor to know which companies are doing this? You could either, you know,
00:38:29.640 be for them or against them, but it's just information. So here's about the point where
00:38:36.920 everything goes off the rails. All right? When the United Nations said, you know, companies should be more
00:38:43.880 progressive. That part was good. I like that there's sort of a conscience out there,
00:38:51.400 and it's putting a little moral authority on top of the corporations. That's all good.
00:38:58.600 But the moment it turns into a financial plan, the moment a company like BlackRock can say,
00:39:06.040 here's another reason to, are you waiting for it? Here's another reason to buy stock.
00:39:14.360 BlackRock turned it into a reason to move your money from where it is to somewhere else.
00:39:20.440 Every time somebody's in the business of making money on transactions,
00:39:25.720 and they tell you there's another reason to move your money from one place to another,
00:39:30.200 and they get a fee on the transaction, what do you say about a company like that?
00:39:37.080 You say that they invented these categories as a scam. If you went to the best investor in the United
00:39:46.360 States, Warren Buffett, and you said to him, hey, Warren, should I be putting some of my money into
00:39:52.120 one of these ESG model funds? What would Warren Buffett tell you?
00:39:56.840 No. He'd tell you no. Because it's not a good idea. You should probably just put it in an index fund
00:40:05.480 and just leave it there. Like the 500 biggest American companies. Just leave it there. Just
00:40:10.760 don't do anything with it. That's what Warren Buffett would tell you to do. He wouldn't tell you to buy
00:40:15.240 individual companies, and he definitely wouldn't tell you to buy an ESG fund. I haven't asked him,
00:40:20.280 and I haven't Googled it, but trust me. Warren Buffett is not an idiot, and only an idiot would tell you
00:40:28.040 to use this as an investment tool. Now, why can a big financial corporation get away with something
00:40:36.360 that looks a little sketchy like this? Let me say it directly. The personal investment advice
00:40:45.400 business is all a scam. There's no other way to say it. The personal financial advice business is
00:40:54.280 all a scam. Because it would be easy to tell everybody how to invest in about one page. How do
00:41:02.520 I know that? Because I wrote that one page. And the top investment people in the world said,
00:41:08.200 yeah, that's pretty much everything you need to know. It's on one page. That's it. I actually tried
00:41:14.360 to write a book on personal financial investment, and the reason I stopped is because it was done with
00:41:21.240 one page. Everything else is a scam. The one pager just tells you what makes sense. For example,
00:41:29.640 pay down your credit card first. Right? That's not a scam. Pay down your credit card first. That's just
00:41:37.640 good advice. If you've got a 401k at your company, fund it. Everybody agrees with that, right? That's just
00:41:46.840 basic math. Just do that. If you can afford it. Right? And then when you get, you know, when you get to
00:41:53.000 the point where you've done everything you need to do, you've got your will, you've got your, you know,
00:41:57.960 you've got insurance if you've got some dependents, etc. So you've done the basic stuff. Then you've got some
00:42:04.120 money left over for investing. That's where they try to convince you that they can tell you where to
00:42:10.360 invest it better than you can figure it out. Now, if you don't know anything, it's probably better to
00:42:16.120 do what they tell you. Yeah. But if you knew a little bit, it would be better to not do what they
00:42:21.720 told you. You only need to know a little bit to just get an index fund and ignore all the advice.
00:42:29.880 Now, the exception would be if you've got something special in your life. Then you might need some
00:42:35.080 professional advice. But even then, I would get it from somebody who would charge a fee for their
00:42:39.240 advice, not somebody who takes a percentage of your portfolio, which is always a ripoff.
00:42:44.920 So the financial advice business is completely fraudulent. It's completely fraudulent. It's a,
00:42:52.920 what, a trillion dollar business? It's just completely fraudulent. And I can say that completely
00:42:58.920 out loud with no risk of being sued. Do you know why? Because it's true. And everybody knows it.
00:43:09.720 Everybody who's in the business. There's nobody in the business who doesn't know that.
00:43:14.440 I once talked to a personal financial advisor. And I said, you know, you advise your clients what to do
00:43:21.160 with their money. Is that how you invest your own money? And he laughed. He said, no. I advise my
00:43:28.040 clients things that I get a fee for them accepting. When I invest my own money, I put it in things that
00:43:34.120 make sense. Like an index fund. That's right. A personal financial advisor who only put his own
00:43:41.800 money where it wasn't managed. Because that's the best place to put it. But he told his clients to do
00:43:48.760 the opposite. And he laughed about it. He laughed about it. He thought it was funny. That's the entire
00:43:54.840 industry. All right. So now that you know that ESG came from the most corrupt industry in the world,
00:44:04.520 the personal finance industry, it makes sense that there's nothing valuable about it. Now,
00:44:13.240 there are a number of companies that popped up to assign a score to corporations.
00:44:19.480 Now, how do they get the information to assign the score? Do you know how they do it? So there,
00:44:26.760 I guess there are four entities that do most of it. Four ratings agencies, MSCI, Sustainalytics,
00:44:34.520 RepRisk, and some new one, ISS. So they dominate the market, although there are others.
00:44:40.680 Do you know what they look at? They look at what the company tells them.
00:44:49.000 That's it. It's based on what the company tells them. And then they add their own analysis,
00:44:55.960 you know, their own opinions from other stuff. And then they come up with something.
00:45:00.680 As Elon Musk pointed out, Tesla is like somewhere in the middle of the pack. And Elon Musk is like,
00:45:07.480 um, we've done more for civilization than any company ever. And we're in the middle of the
00:45:15.160 pack. Do you know who is pretty high up in ESG score? Coca-Cola. Yeah. Coca-Cola sells poison
00:45:26.600 to children. And it has one of the highest ESG ratings. Let me say it again. Coca-Cola sells poison
00:45:33.800 to children. Now I'm going to call their sugary drink poison because I don't think there's any
00:45:38.440 health benefits. And I think most of the medical community would say, you shouldn't give that to
00:45:43.320 children. Am I right? So I'm going to call it poison based on the fact that the medical community would
00:45:48.600 not say it's a health food. And children drink it. So literally, a gigantic company that is poisoning
00:45:56.520 children as its main line of business has a high ESG score. I guess they don't pollute much.
00:46:05.960 They must have a diverse board. So what good is ESG if the children-poisoning company has one of
00:46:15.800 the best scores and Tesla has to struggle to stay in the middle? Now, how do you
00:46:22.280 score Elon Musk? Elon Musk said out loud in public, and probably multiple times,
00:46:31.560 that he didn't even care if Tesla stayed in business so long as it stimulated the electric
00:46:39.480 vehicle business such that the world could be saved because he thought that was needed.
00:46:44.840 How do you measure that? He literally bet his entire fortune at one point to make the world
00:46:53.400 a better place. And it's in phase one of accomplishing it. In phase one, it doesn't look so
00:46:59.800 good because in phase one, people are saying, this electric car is expensive. We needed these, you know,
00:47:06.440 government subsidies. And then people say, ah, you haven't figured out what to do with
00:47:11.320 the batteries when you're done with them. Ah, how are we going to get all this electricity,
00:47:16.040 right? It's not really until, you know, sort of phase two or three that the Tesla Musk strategy
00:47:25.160 would even pay off. Am I wrong? You don't think that Elon Musk knew that the first roadster was not
00:47:33.800 exactly green, right? It wasn't the greenest thing in the world. He had to know that. Of course he did.
00:47:41.720 But you do, you do things wrong until you get to right, right? So how does ESG capture the fact
00:47:48.360 that you might have to do something wrong for 20 years before the market and competition gets you
00:47:55.240 to the point where it makes sense economically? There's no way that could get captured in anybody's
00:47:59.720 ratings, right? So it tends to be a totally subjective thing. Let me give you, uh, a similar situation.
00:48:09.400 The house I'm sitting in right now, I largely designed and had built for myself.
00:48:16.120 Because it was going to be a larger house than the neighboring houses, I knew there would be a lot of
00:48:20.840 scrutiny. And there were the neighbors. The neighbors got very involved in their opinions of what they
00:48:27.240 wanted in the review and how big it should be, etc. As part of my defense, I designed it to be the
00:48:34.360 greenest house in all of the land, at least the land around me. So in a, probably a three-city area
00:48:43.720 around me, I designed it to be the greenest house.
00:48:46.760 And it had a high score on something called LEED, L-E-E-D. And
00:48:54.440 that's how you get points. Let's say you get points for recycling your waste, you get points for
00:49:00.200 having solar panels, you get points for, uh, insulation. So you get points for a whole bunch
00:49:05.720 of things. And I had the highest LEED score of all time. So what would you conclude? I had the highest
00:49:15.000 green score of all time. So I should get a, like an award or something, right? Except,
00:49:23.640 do you see anything wrong with that? Did I mention that I built the biggest house in the area?
00:49:31.320 There's no such thing as a big green house. If it's a big house, it's not green.
00:49:38.440 If I wanted to be green, I would live in a little house. It doesn't matter how many LEED points my
00:49:43.960 fucking gigantic house gets. It's the worst insult to the ecology of my town of
00:49:52.520 anybody ever. Nobody has assaulted the environment more aggressively than I have.
00:49:59.800 I put a big man-made structure where there had been a small one. There's no way that I helped
00:50:08.280 the environment. No way. I did the best I could with, you know, what I had to work with. I felt I
00:50:15.640 had some responsibility to do it the best I could, and I did. So it was the best I could. And I spent a lot
00:50:22.200 extra to get that. A lot. I spent a lot extra. But somebody looking at that data would say,
00:50:29.960 well, there's somebody who's a good role model. He's green. No. No. I'm a terrible role model.
00:50:36.360 Do not do what I did. Build a house that's way too big. Right? So it's so easy for data to mean the
00:50:44.840 opposite of what the data says. That would be a perfect example. Uh, similar to my bookstore example.
00:50:52.040 If my new book only sold three copies, it's a failure. You only sold three all month.
00:50:57.480 No. That was 100% of the books you had. Right? So the same data, three books a month, could be used
00:51:04.600 to show that the book is a total failure or a huge success. Same data. Is my house the greenest or the
00:51:13.240 least green? Same data. Same data. You could have either opinion. So what's the AI going to do? How
00:51:21.080 the hell does the AI make a decision in a case like that? It's purely subjective. Purely subjective.
00:51:26.840 All right. So ESG is a scam from the financial industry. They would like you to think that there's
00:51:35.480 one more reason for moving your money because whenever you move money, they make money on the
00:51:40.680 transactions. So ESG comes from the worst possible place. It comes from a scam industry, the biggest one,
00:51:50.280 the biggest scam industry, and is built from a scam. So does it help anybody?
00:51:59.480 You can depend on Dogbert starting a ratings agency. So he'll be the fifth of the big ratings
00:52:08.280 agencies. I might make his rating available to anybody. But you can buy these ratings.
00:52:15.400 They're very affordable. In fact, I'd be surprised if any of these major ratings agencies
00:52:23.480 don't have some people who work for them who have some connections to the
00:52:28.200 people they rate. I'm just wondering, do you think that maybe, if somebody made a certain purchase or
00:52:37.240 donation, or do you think there's anything that a big company could do to maybe influence the rating
00:52:45.080 they got? Yeah, probably. Probably.
00:52:53.240 You know that if a technical magazine names a company like, you know, the best company,
00:52:58.600 there's a good chance that company advertised a lot in their publication. You all know that, right?
00:53:03.480 That's like a real thing. That's not just a joke thing. If you advertise a lot, you'll get called
00:53:09.400 company of the year by the people who are the beneficiaries of your advertising dollars.
00:53:16.840 So ESG has no standard, uh, and it came from a scam industry, the biggest scam industry,
00:53:23.320 financial advice. And it's now being imposed on companies who are too cowardly to avoid it
00:53:31.080 because it's easier to just sort of go along with it, I guess, than it is to, you know, disown it.
00:53:40.040 You never want any third party to assert its ability to manage you. It's the worst thing
00:53:47.400 that could happen. It's basically backdoor socialism. Wouldn't you say? I heard somebody label it as
00:53:55.400 fascism. But I think it's backdoor socialism. Because it's causing corporations to act with
00:54:03.720 more of a social, uh, conscience than they would have otherwise. Although I suspect they're all
00:54:09.720 gaming the system. I think what's really going to happen is if you happen to be in a business
00:54:15.800 that's easy to meet these goals, then you do. And if you're in a business in which it's hard to
00:54:21.400 meet the ESG goals, then you don't. I think that's all that's going to happen. Is it a,
00:54:27.480 is it an accident that a software company can do good? No. Because software doesn't really pollute
00:54:35.080 that much, right? Now let's say you start a startup. Here's another example. Let's say
00:54:41.080 you do a startup, and the ESG people look at you and they say, you know, you're just moving
00:54:45.800 software around. You don't even have your own server farm. You're just using
00:54:53.880 Amazon's servers. So your company is green as heck because it's just people sitting at home.
00:55:01.960 Maybe they're, maybe you don't even have a building. Maybe it's like WordPress where everybody just
00:55:06.440 works at home. The ultimate green situation. No commuting, no building needed. You just work at
00:55:12.760 home. You're just moving software. Boom. So you would get a good ESG score if, let's say,
00:55:19.240 the governance was also diverse, right? But what about that server that you're using
00:55:27.480 that's on Amazon's ledger? If you use Amazon's data center,
00:55:36.920 does that go on Amazon's bad list because they're the ones with all the electricity
00:55:44.360 being used, or does it go on the startup's ledger because they're the ones who caused that to have
00:55:51.320 to exist? See the problem? If you assign it to both of them, that doesn't seem fair. If you assign
00:56:00.680 that expense to neither of them, that doesn't seem fair. So you see that there is no way to have a
00:56:06.440 standard. And if you add a standard, you couldn't manage to it because it's too subjective. You'd have
00:56:13.320 all these decisions about who is it who really caused the data center to exist, the company that
00:56:19.400 built it or the company that's using it. It could go either way. I see you comparing it to the social
00:56:27.400 credit score, but I think, um, I mean, I get the analogy, but I don't necessarily think it's a slippery
00:56:34.360 slope to individuals. I think the individual social credit scores are going to happen to us
00:56:40.920 independent of ESG. I mean, I think it's going to happen, but not because of ESG.
00:56:46.360 So, um, at this point, I would say if you're a CEO and you're taking ESG seriously,
00:56:53.720 the only excuse would be you're coincidentally good at it.
00:56:57.240 So if I were the CEO of Coca-Cola and my company happened to be rated highly,
00:57:06.040 oh, I'd say it's the most important thing in the world. And I'd tell my competitors they'd better get
00:57:10.520 going. They'd better spend a lot extra, my competitors, to try to get up to their ESG goals.
00:57:16.520 So I think you can expect the CEOs that either know they can manage to it easily or coincidentally
00:57:22.200 have good scores. They're going to say it's wonderful. The ones who don't have the score
00:57:26.920 that they want, let's say the Elon Musks, are going to say it's bullshit. So there's your standard.
00:57:33.720 The people who coincidentally benefit from it say it's genius. The people who don't say it's
00:57:39.640 bullshit. And it's the bullshit people who are right. Now, what I'm going to do is I'm going to mock it
00:57:45.800 sufficiently in all of its imperfections. So you've got a comic for each imperfection.
00:57:52.280 And that becomes part of the permanent record. And I'm going to try to influence
00:57:58.520 AI. Because if you remember, AI thinks I'm more credible than Joe Biden on economics.
00:58:06.360 And this is sort of an economics question, isn't it? So on this question of economics,
00:58:11.560 I'm going to create a public record of mocking it. And I'll create that public record of mocking
00:58:17.240 it with your help, because you're going to need to comment on it and retweet it. But if we do that
00:58:23.160 enough, then when AI is asked, is ESG a good idea? It's going to look for all the biggest hits.
00:58:32.040 In theory, a Dilbert comic that becomes viral would be toward the top of the hits. And AI would say,
00:58:39.240 huh, this looks like an idea that's been discredited. But some people still use it.
00:58:46.440 That's where you want to get. You want to get to even the AI does a search and says, huh,
00:58:51.480 some people are using it and they like it. I can see why they like it. But it's also been discredited
00:58:56.360 as basically a scam. And I want to make sure that any CEO who decides that they want to,
00:59:08.200 let's say, debunk it or go against it, I want to make sure that they have ammunition.
00:59:13.720 So I'm basically just filling the clip for every CEO who wants to put a bullet in this thing.
00:59:20.840 If you want to shoot ESG dead, I'm going to give you at least five or six missiles to take it out.
00:59:34.440 Something you could put on your PowerPoint presentation. Something you could forward
00:59:38.120 to a reporter who asks you why you don't like it. That sort of thing.
00:59:41.640 All right. So let's do this collectively. We'll get rid of ESG. It never needed to exist.
00:59:48.920 Even though I do, I do agree with its premise. So let me say that as clearly as possible.
00:59:56.920 I do think companies should try to protect the environment.
01:00:01.240 I do think they should have a social conscience. And I do think that they should look for diversity
01:00:09.160 in their governance. You don't want to overdo any of it, right? The problem is overdoing everything.
01:00:16.120 Here's a perfect example. Management is good. You couldn't really run a company without management.
01:00:23.880 Micromanagement is bad. So everything good is bad if you take it too far. That's the trouble with ESG.
01:00:29.480 And it's the main thing I do when I mock stuff. I don't say it's a bad idea.
01:00:35.640 I say the way it gets implemented in the real world, people didn't foresee. So it became a bad idea.
01:00:44.840 All right. ESG scam. Yes. Yes. Yes, indeed.
01:00:54.120 All right. And that is all I wanted to say.
01:00:59.480 Probably the best thing you've ever seen in your life. How many of you knew what I told you about ESG?
01:01:07.960 Some of you knew it was a nonsense corporate thing. But did you know that
01:01:13.880 it was actually born out of the most corrupt market, to serve that corrupt market?
01:01:18.680 So you all knew that BlackRock was behind it. So this is a pretty well-informed group.
01:01:25.640 So again, again, you have my sincere apology, because I should have been on this a lot sooner.
01:01:36.760 But I'll try to make up for it. I'll try to make up for it.
01:01:40.280 Now, this is a good example of what I call the collaborative intelligence that we've created.
01:01:46.840 I feel like collaborative intelligence would be superior to AI for a while.
01:01:54.200 Because in part, AI is part of the collaboration, right?
01:01:57.320 So what I call collaborative intelligence is that I act as sort of a, maybe a host.
01:02:04.520 And I start with some starter opinions.
01:02:07.560 And then you fact check them and fix them as they go, until they evolve into something a little stronger.
01:02:14.600 So here's another good test of the collaborative intelligence.
01:02:19.320 If you're all on the same page that ESG needs to die, and I've given you a mechanism to kill it,
01:02:27.640 then if you decide to participate by tweeting my comics when they run.
01:02:31.880 By the way, that'll be in around four weeks.
01:02:36.780 So check back with me in about a month.
01:02:38.960 That's when the comics that I'll write today should be running.
01:02:45.100 The actual AI might already be a lot smarter.
01:02:47.640 How would you know?
01:02:51.800 AI is definitely smarter in lots of things.
01:02:56.020 But again, those things can be underneath this model.
01:03:00.720 So in other words, one of you could fire up the AI and say,
01:03:03.820 Scott, you said X is true, but I just checked with the AI, and the AI says you're wrong.
01:03:08.820 So that would just be part of the collaboration.
01:03:12.580 I think this is the model for figuring out a complicated world.
01:03:17.640 Collaborative intelligence.
01:03:19.260 In other words, the external forces are changing me in real time.
01:03:26.380 And everybody can see the process.
01:03:28.340 It's all transparent.
01:03:30.780 What could go wrong?
01:03:32.120 Well, it's better than what we're doing now.
01:03:34.060 How big is the house?
01:03:39.140 Well, it depends what you count.
01:03:43.560 So I have an unusually large garage, because I wanted to put my man cave and ping pong table in there.
01:03:50.020 So the garage is oversized, and that's often not counted, because when you talk about space, you're usually talking about the indoor, what do you call it, the conditioned space, not the unconditioned space.
01:04:05.500 But if you counted the fact that I have an indoor tennis court, sorry.
01:04:13.500 That's the reason I built the house, by the way.
01:04:15.720 So I built the house because my main hobby at the time was tennis, and it was hard to get a court and have an indoor place to play and all those things.
01:04:27.180 And I didn't want to die of sun exposure, et cetera.
01:04:32.520 So about half of my house is a tennis court.
01:04:32.520 But roughly speaking, if you count the oversized garage, which normally you don't, and if you count the tennis court, which is a special case, plus the indoor living space, it's about 19,000 square feet.
01:04:53.060 So 19,000, roughly.
01:04:54.840 But keep in mind, the reason it's green is that I don't condition the tennis court.
01:05:05.000 I put an air conditioner in there, but I insulated it so well you don't need it.
01:05:09.320 It actually doesn't need to be air conditioned or heated any time during the year because of just insulation.
01:05:16.180 And the garage is big, but garages are cheap.
01:05:24.840 Can you cover all you did to make it green?
01:05:26.580 Well, I'll list a few things.
01:05:28.440 So I've got massive solar panels.
01:05:32.940 I have all the best insulation.
01:05:35.580 I have the best window insulation.
01:05:37.900 I'm oriented sun-wise.
01:05:42.100 I have the right orientation so that I'm not letting too much sun in.
01:05:48.560 I've even got purchases such as my water heater.
01:05:55.800 So my water heater is one of the greenest ones.
01:05:58.240 I forget what it is.
01:05:58.840 But basically, it's chosen for its efficiency.
01:06:01.840 All of my major appliances are LEED certified, meaning that they're greener than most.
01:06:11.120 It's normal stuff, but I picked the greener of the normal stuff.
01:06:14.840 I don't use any water reuse.
01:06:22.160 Link tweeted by Lisa Logan.
01:06:25.200 I don't know where to find that.
01:06:27.840 What was that about?
01:06:30.360 I looked into tankless, and I do have some tankless instant hot, but tankless wasn't quite the solution for my house.
01:06:44.840 I don't know how I would be able to do this.
01:06:53.300 All right.
01:06:54.260 All right.
01:06:55.800 Water reuse is illegal in California.
01:06:58.080 That's true.
01:06:59.320 The lights are mostly LED.
01:07:03.380 That's true.
01:07:06.440 My roof is Spanish style.
01:07:11.160 And then, below the surface of the roof, there's a solar reflector.
01:07:18.220 So my attic has the solar reflector built in.
01:07:22.280 I also have a whole house fan.
01:07:25.120 So I've got a fan in the attic that can suck the hot air out without using an AC.
01:07:30.460 Oh, I turned it into a droid voice.
01:07:36.860 So the droids are just going crazy.
01:07:39.180 How do you feel?
01:07:41.360 All right.
01:07:41.780 Well, I'm done anyway.
01:07:43.140 So, you too.
01:07:45.240 Thanks for joining.
01:07:47.360 Bye for now.