The Joe Rogan Experience - March 05, 2019


Joe Rogan Experience #1258 - Jack Dorsey, Vijaya Gadde & Tim Pool


Episode Stats

Length: 3 hours and 25 minutes
Words per Minute: 186.75555
Word Count: 38,288
Sentence Count: 2,736
Misogynist Sentences: 25
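A quick arithmetic sanity check on the stats above, as a minimal sketch (the exact runtime in seconds is inferred from the listed words-per-minute, not stated on the page): words per minute is just word count divided by duration in minutes.

```python
# Sanity-check the listed stats: words per minute = word count / minutes.
word_count = 38_288
stated_wpm = 186.75555

# Duration implied by the stated WPM: ~205.02 minutes, which rounds to the
# listed "3 hours and 25 minutes".
implied_minutes = word_count / stated_wpm
print(f"implied duration: {implied_minutes:.2f} min")  # 205.02

# Going the other way from the rounded runtime of 205 minutes:
print(f"WPM at exactly 205 min: {word_count / (3 * 60 + 25):.2f}")  # 186.77
```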


Summary

Jack Dorsey, CEO of Twitter and Square, returns alongside Vijaya Gadde, who leads Twitter's trust and safety, legal, and public policy teams, joined by journalist Tim Pool. Recorded in Los Angeles, CA. After criticism that the first Dorsey episode didn't press hard enough, this conversation digs into Twitter's content moderation: how reporting, appeals, and human review work; mass reporting and algorithmic enforcement; the hateful conduct policy, including the rules against targeted misgendering and deadnaming; doxing and the Covington incident; the permanent suspensions of Milo Yiannopoulos and Alex Jones; allegations of left-leaning bias; misinformation; and trans athletes in women's sports. Joe also discloses up front that Cash App, a Square product, sponsors the podcast and supports Justin Wren's Fight for the Forgotten charity.


Transcript

00:00:00.000 Five, four, three, dos, uno.
00:00:09.000 Come on, TriCaster.
00:00:12.000 Live?
00:00:13.000 Yes.
00:00:13.000 All right.
00:00:14.000 We're live, ladies and gentlemen.
00:00:15.000 To my left, Tim.
00:00:17.000 Tim Pool.
00:00:18.000 Everybody knows and loves him.
00:00:20.000 Vijaya.
00:00:20.000 How do I pronounce your last name?
00:00:22.000 Vija.
00:00:22.000 Vija.
00:00:23.000 Not Vija.
00:00:24.000 Vija.
00:00:24.000 Vija.
00:00:25.000 Gadde.
00:00:26.000 Gadde.
00:00:26.000 And your position at Twitter is?
00:00:29.000 I lead trust and safety, legal, and public policy.
00:00:32.000 That's a lot.
00:00:33.000 That's a lot.
00:00:34.000 And Jack Dorsey, ladies and gentlemen.
00:00:37.000 First of all, thank you everybody for doing this.
00:00:39.000 Appreciate it.
00:00:40.000 Thank you.
00:00:41.000 All of a sudden there's tension in the room.
00:00:44.000 We're all loosey-goosey just a few minutes ago.
00:00:46.000 There's no tension.
00:00:47.000 Now everyone's like, oh, this is really happening.
00:00:49.000 Here we go.
00:00:50.000 Before we get started, we should say, because there were some things that people wanted to have us talk about.
00:00:58.000 One, that the Cash App is one of the sponsors of the podcast.
00:01:01.000 It's been a sponsor for a long time.
00:01:03.000 And also a giant supporter of my good friend Justin Wren's Fight for the Forgotten Charity, Building Wells for the Pygmies in the Congo.
00:01:09.000 This is very important to me, and I'm very happy that you guys are a part of that, and you are connected to that.
00:01:16.000 I mean, it's easy for someone to say that doesn't have an influence on the way we discuss things, but it doesn't.
00:01:22.000 So if it does, I don't know what to tell you.
00:01:25.000 I'm going to mention, too, just because I don't want people to come out and freak out later.
00:01:28.000 I actually have like 80 shares in Square, which isn't really that much.
00:01:32.000 But it's something.
00:01:34.000 It is.
00:01:34.000 It is.
00:01:34.000 So I don't want people to think, you know, whatever.
00:01:37.000 You're the CEO of Square, I think, right?
00:01:38.000 Yep.
00:01:39.000 Yeah, there you go.
00:01:40.000 We're on the cash app.
00:01:42.000 And the reason why we decided to come together is we had, I thought, a great conversation last time, but there's a lot of people that were upset that there were some issues that we didn't discuss or didn't discuss in depth enough or they felt that I didn't press you enough.
00:01:55.000 I talked to Tim because, you know, Tim and I have talked before and he made a video about it and I felt like his criticism was very valid.
00:02:03.000 So we got on the phone and we talked about it and I knew immediately within the first few minutes of the conversation that he was far more educated about this than I was.
00:02:11.000 So I said, would you be willing to do a podcast and perhaps do a podcast with Jack?
00:02:16.000 And he said, absolutely.
00:02:17.000 So we did a podcast together.
00:02:18.000 It was really well received.
00:02:20.000 People felt like we covered a lot of the issues that they felt like I didn't bring up.
00:02:24.000 And so then Jack and I discussed it and we said, well, let's bring Tim on and then have Vijaya on as well.
00:02:31.000 I said that right?
00:02:32.000 Yep.
00:02:32.000 It's a hard one.
00:02:33.000 Sorry, I'll get it right.
00:02:34.000 I promise.
00:02:35.000 So we're here.
00:02:37.000 We're here.
00:02:38.000 Today, do you know who Sean Baker is?
00:02:41.000 He's a doctor who's a prominent proponent of the carnivore diet.
00:02:46.000 His account was frozen today.
00:02:50.000 I just sent it to you, Jamie.
00:02:52.000 It's just a screenshot.
00:02:53.000 Yeah.
00:02:53.000 His account was frozen today because of an image that he had, because he's a proponent of the carnivore diet.
00:02:59.000 There's a lot of people that believe that this elimination diet is very healthy for you and it's known to cure a lot of autoimmune issues with certain people, but some people ideologically oppose it because they think it's bad for the environment or you shouldn't eat meat or whatever the reasons are.
00:03:13.000 This is huge in the Bitcoin community.
00:03:15.000 Yes.
00:03:16.000 Well, for a lot of people that have autoimmune issues, particularly psoriasis and arthritis, it's a lifesaver.
00:03:23.000 It's crazy.
00:03:25.000 Essentially, it's an autoimmune issue.
00:03:28.000 So, because he has a photo of a lion in his header eating what looks like a wildebeest or something like that, his account was locked for violating the rules against graphic violence or adult content in profile images.
00:03:42.000 That seems a little silly.
00:03:45.000 And I wanted to just mention that right away.
00:03:47.000 Now, whose decision is something like that?
00:03:50.000 Like, who decides to lock a guy's account out because it has a nature image of, you know, natural predatory behavior?
00:03:57.000 On this particular case, it's probably an algorithm that detected it and made some sort of an assessment.
00:04:02.000 But as a general rule, how we operate as a company is we rely on people to report information to us.
00:04:08.000 So if you look at any tweet, you can kind of pull down on the caret on the right and you can say report the tweet, and then you have a bunch of categories you can choose from of what you want to report.
00:04:17.000 I think this one in particular, though, is probably an algorithm.
00:04:20.000 So does he have the option to protest that or to ask someone to review it?
00:04:27.000 Absolutely.
00:04:27.000 And I'm guessing that people are already reviewing it, but there's a choice to appeal any action, and that would go to a human to make sure that it is actually a violation of the rules, or in this case, if it's not, then it would be removed.
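For what that report-and-appeal flow looks like mechanically, here is a minimal sketch. Every name in it is hypothetical, and it is an outside reading of the process described above, not Twitter's actual system: an automated flag (or a user report) can lock an account, and an appeal routes the case to a human who either upholds or reverses the action.

```python
from dataclasses import dataclass

@dataclass
class ModerationCase:
    account: str
    reason: str   # e.g. "graphic media in a profile image"
    source: str   # "algorithm" or "user_report"
    locked: bool = True

def appeal(case: ModerationCase, human_finds_violation: bool) -> ModerationCase:
    """Any enforcement action can be appealed; a human makes the final call."""
    if not human_finds_violation:
        case.locked = False  # not actually a violation, so the action is reversed
    return case

# The account-lock example from the conversation, as this flow would handle it
# (the account name is made up):
case = ModerationCase("carnivore_doc", "graphic media in a profile image", "algorithm")
case = appeal(case, human_finds_violation=False)
print(case.locked)  # False -- lock lifted on human review
```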
00:04:40.000 Is that a violation of the rules?
00:04:41.000 That image?
00:04:42.000 I don't think so.
00:04:43.000 I don't think that that would be what we're trying to capture in terms of graphic images in an avatar.
00:04:47.000 It's more about violence towards humans, unless it was some sort of content depicting cruelty to animals or something like that.
00:04:54.000 But this seems not the intention of the rule.
00:04:56.000 Does this highlight a flaw in the system, in that people can target an individual?
00:05:05.000 Because with him, he's a doctor and a proponent of this carnivore diet, but he's also ruthless in his condemnation and mocking of vegans.
00:05:16.000 He does it all the time.
00:05:17.000 And so then they get upset at him, and they can target posts and just report them en masse, and when they do that, then this becomes an issue.
00:05:26.000 I think this does reveal part of the challenges that we face as a global platform at scale.
00:05:34.000 I don't know what happened in this case.
00:05:36.000 Sorry, it's hard for me to talk about it.
00:05:37.000 But what I would say is that it doesn't really matter if one person reports it or 10,000 people report it.
00:05:43.000 We're going to review the reports and we're going to make an assessment.
00:05:46.000 And we're never going to kick someone off the platform finally and forever without a person taking a look and making sure that it's an actual violation of the rules.
00:05:55.000 Right.
00:05:57.000 But the mob reporting behavior does happen.
00:05:59.000 Yeah, it does.
00:06:00.000 It happens across the spectrum.
00:06:02.000 I'd have to assume it's going to be one direction.
00:06:04.000 I can't imagine he would target vegans, but vegans would target him, right?
00:06:08.000 Well, he might.
00:06:09.000 I mean, he doesn't.
00:06:10.000 Is he the kind of guy who's going to want to report vegans and get them banned from Twitter?
00:06:13.000 Or is he going to want to make fun of them?
00:06:14.000 He's going to make fun of them.
00:06:15.000 They're going to target him to try and get him removed by exploiting the system that you guys have.
00:06:19.000 It may not be him, though.
00:06:20.000 It could also be his followers.
00:06:21.000 It's a really complicated world out there, so the motivations of why people mob report are different, and it's not always under someone's control.
00:06:30.000 It could even be other carnivore diet proponents who are just jerks that don't like him because he's getting all the love.
00:06:36.000 People are weird.
00:06:38.000 Yeah, that's true.
00:06:39.000 The idea, though, is that it does kind of highlight a bit of a flaw in that it's good that someone can – because you might see something awful, someone doxing someone or something like that, and then you can take that and report it, and then people can see it and get rid of it and minimize the damage that's done.
00:06:56.000 There's another big problem here in that is the carnivore diet legitimately healthy?
00:07:00.000 Is it a threat to your health?
00:07:01.000 And if it is, what is Twitter's responsibility in controlling that information?
00:07:07.000 So just to clarify, my opinion is if he wants to be a proponent for the carnivore diet, let him.
00:07:12.000 But you've got people on YouTube who are being deranked for certain beliefs about certain health issues that I don't agree with.
00:07:18.000 And so one of the risks then is we're coming towards a position where people think some ideas are better than others.
00:07:25.000 Therefore, as a company, we're going to restrict access to certain information.
00:07:27.000 You mean like anti-vax people?
00:07:29.000 Exactly, right.
00:07:30.000 So I guess what I'm trying to say is, would you guys restrict someone from sharing false information about vaccines that could get someone hurt?
00:07:39.000 That is not a violation of Twitter's rules.
00:07:41.000 No.
00:07:42.000 I think, I mean, I'd be interested to hear your ideas around this, but our perspective right now is around this concept of variety of perspectives.
00:07:52.000 Like, are we encouraging more echo chambers and filter bubbles, or are we at least showing people other information that might be counter to what they see?
00:08:01.000 And there's a bunch of research that would suggest that that further emboldens their views.
00:08:05.000 There's also research that would suggest that it at least gives them a consideration about what they currently believe.
00:08:13.000 Given the dynamics of our network being completely public, we're not organized around communities, we're not organized around topics, we have a little bit more freedom to show more of the spectrum of any one particular issue.
00:08:28.000 And I think that's how we would approach it from the start.
00:08:32.000 That said, we haven't really dealt much with misinformation more broadly across, like, these sorts of topics.
00:08:40.000 We've focused our efforts on elections and, well, mainly elections right now.
00:08:46.000 You know, YouTube is a different animal.
00:08:48.000 You know, YouTube, someone can really convince you that the earth is flat if you're gullible and you watch a 45-minute YouTube video.
00:08:54.000 You know, it's kind of a different thing.
00:08:56.000 But I wanted to just kind of get into that statement you made about misinformation and whether or not you'll police it.
00:09:01.000 I think that the tough part of this is really, and I'd love to have a discussion about this, is do you really want corporations to police what's true and not true?
00:09:10.000 Absolutely not.
00:09:10.000 That's a really, really tough position.
00:09:12.000 But you guys do that.
00:09:13.000 We try not to do that.
00:09:14.000 We don't want to do that.
00:09:15.000 But you do in your rules.
00:09:17.000 But the places that we focus on is where we think that people are going to be harmed by this in a direct and tangible way that we feel a responsibility to correct.
00:09:25.000 When you say in your rules, Tim, what do you mean by that?
00:09:28.000 Naming and misgendering.
00:09:29.000 Dead naming and misgendering.
00:09:31.000 That's a specific ideology that's unique to a very small faction of people in this world that you guys actually ban people for.
00:09:36.000 So the way I think of it is it's behavior-based.
00:09:40.000 And I know you think of it as content, and we can disagree on this point.
00:09:43.000 But this is about...
00:09:45.000 Why are you doing this to a trans person?
00:09:47.000 Why are you calling them by this name when they've chosen to go by a different name?
00:09:50.000 Or why are you outing them in some way?
00:09:52.000 What is your intent and purpose behind that?
00:09:55.000 I don't mean to interrupt, but in the interest of clarity, I want to explain what dead naming means.
00:10:01.000 Right, right.
00:10:01.000 So why don't you go ahead?
00:10:02.000 So a transgender individual changes their name when they transition.
00:10:06.000 A dead name would be their birth name or the name they went by before the transition.
00:10:10.000 Because my mom's probably going, what?
00:10:12.000 I'm ready for the text.
00:10:13.000 What's a deadname?
00:10:14.000 And I will clarify, too, your rules specifically say targeted misgendering and deadnaming, I believe is correct, right?
00:10:20.000 So years ago, we passed a policy that we call our hateful conduct policy, and that prohibits targeting or attacking someone based on their belonging in any number of groups.
00:10:40.000 But you can't deadname someone,
00:10:55.000 but you can call them stupid.
00:10:58.000 Generally.
00:10:59.000 I mean, if you created an account that only was there to call the same person stupid 5,000 times, we'd probably view that as a, you know...
00:11:07.000 Targeted harassment.
00:11:08.000 Targeted harassment.
00:11:09.000 Right.
00:11:09.000 It's a function of behavior.
00:11:12.000 Because people with our system can do this in massive velocity.
00:11:15.000 Which would ultimately silence you from the platform or just say, like, I give up.
00:11:19.000 I don't want to deal with this thing.
00:11:21.000 I'm out.
00:11:22.000 So we can just get into all of the big examples.
00:11:25.000 I mean, starting with me...
00:11:27.000 I'd love to, Tim.
00:11:27.000 But can we just take a step back and try to level set what we're trying to do with our policies?
00:11:32.000 Because I think it's worth doing that.
00:11:34.000 So as a high level, I personally, and this is my job to run the policy team, I believe that everyone has a voice and should be able to use it.
00:11:43.000 And I want them to be able to use it online.
00:11:45.000 Now where we draw a line is when people use their voice and use their platform to abuse and harass other people to silence them.
00:11:53.000 Because I think that that's what we've seen over the years is a number of people who have been silenced online because of the abuse and harassment they've received and they either stop talking or they leave the platform in its entirety.
00:12:03.000 If you look at free expression and free speech laws around the world, they're not absolute.
00:12:07.000 They're not absolute.
00:12:08.000 There's always limitations on what you can say and it's when you're starting to endanger other people.
00:12:12.000 So my question then is, when I was physically threatened on Twitter, you guys refused to take down the tweet.
00:12:18.000 And I showed up in Berkeley and someone physically threatened me because they were encouraged to.
00:12:22.000 When I was in Venezuela, I was physically threatened by a high-profile individual, 10,000 people tweeting at me.
00:12:27.000 You guys do nothing.
00:12:28.000 Right, so I guess there's the obvious question of why does it always feel like your policies are going one direction politically?
00:12:34.000 You say it's about behavior, you said it several times already, but I've already, I've got tons of examples of that not being the case.
00:12:40.000 And you will always be able to find those examples.
00:12:42.000 Yeah, examples where you guys were alerted multiple times and did nothing, like when Antifa doxed a bunch of law enforcement agents, some of the tweets were removed, but since September, this tweet is still live with a list of private phone numbers, addresses, yet Kathy Griffin...
00:12:58.000 She's fine.
00:12:59.000 The guy who threatened the lives of these kids in Covington and said, lock them in the school and burn it down, you did nothing.
00:13:04.000 I mean, he got suspended.
00:13:05.000 I'd take his tweets down.
00:13:06.000 Was he banned for threatening the lives of kids?
00:13:08.000 Absolutely not.
00:13:09.000 So, again, we have, and I'm happy to talk about all these details.
00:13:13.000 We have our policies that are meant to protect people.
00:13:16.000 And they're meant to enable free expression as long as you're not trying to silence somebody else.
00:13:20.000 Now, we take a variety of different enforcement mechanisms around that.
00:13:24.000 Sometimes you get warned.
00:13:25.000 Sometimes your tweet is forced to be deleted.
00:13:27.000 It's a very rare occasion where we will outright suspend someone without any sort of warning or any sort of ability to understand what happened.
00:13:36.000 What did you guys do with Kathy Griffin when she was saying she wanted the names of those young kids wearing the MAGA hats at the Covington High School kids?
00:13:43.000 Yeah, that's a great example, Joe.
00:13:45.000 So in that particular case, you know, our doxing policy really focuses on posting private information, which we don't consider names to be private.
00:13:52.000 We consider your home address, your home phone number, your mobile phone number, those types of things to be private.
00:13:58.000 So in that particular case, we took what I think now is probably a very literal interpretation of our policy and said that that was not a doxing incident.
00:14:07.000 Do you think that was an error?
00:14:08.000 I think that it was short-sighted.
00:14:09.000 And given the context of what was going on there, that if I was doing this all over again, I would probably ask my team to look at that through the lens of what was the purpose behind that tweet?
00:14:19.000 And if the purpose was, in fact, to identify these kids to either dox them or abuse and harass them, which it probably was, then we should be taking a more expansive view of that policy and including that type of content.
00:14:30.000 Especially considering the fact they're minors.
00:14:32.000 I mean, I would think that right away that would be the approach.
00:14:35.000 So this is a trial and error, sort of learn and move on with new information sort of a deal.
00:14:41.000 Absolutely.
00:14:41.000 We're going to learn.
00:14:42.000 We're going to make a ton of mistakes.
00:14:43.000 We're trying to do this with hundreds of millions of accounts all around the world, numerous languages.
00:14:49.000 We're going to make mistakes.
00:14:50.000 Even if we get better, there will always be mistakes.
00:14:52.000 But we're hoping to learn from those and to make ourselves better and to catch cases like Tim's or others where we clearly may have made an error.
00:15:00.000 And I'm open to having those discussions.
00:15:02.000 I'm sorry, Tim, I'm familiar with your specific cases, but I'd love to follow up with you.
00:15:07.000 Do you want to pull it up?
00:15:08.000 Do you want to see the tweet?
00:15:10.000 We definitely can pull that up.
00:15:11.000 So it's bit.ly slash antifatweet, all lowercase.
00:15:16.000 This is also an evolution in prioritization as well.
00:15:20.000 One of the things we've come to recently is we do need to prioritize these efforts, both in terms of policy, enforcement, how we're thinking about evolving them.
00:15:30.000 One of the things that we want to focus on as number one is physical safety.
00:15:34.000 And this leads you immediately to something like doxing.
00:15:37.000 And right now, the only way we take action on a doxing case is if it's reported or not.
00:15:44.000 What we want to move to is to be able to recognize those in real time, at least in the English language, recognize those in real time through our machine learning algorithms, and take the action before it has to be reported.
00:15:55.000 So we're focused purely right now on going after doxing cases with our algorithms so that we can be proactive.
00:16:03.000 That also requires a much more rigorous appeals process to correct us when we're wrong.
00:16:09.000 But we think it's tightly scoped enough.
00:16:12.000 It impacts the most important thing, which is someone's physical safety.
00:16:15.000 Once we learn from that, we can really look at the biggest issue with our system right now is all the burden is placed upon the victim.
00:16:23.000 So we only act based on reports.
00:16:25.000 We don't have a lot of enforcement, especially with more of the takedowns that are run through machine learning and deep learning algorithms.
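What Jack describes is a shift from reactive enforcement (act only on reports) to proactive enforcement (a model flags likely doxing before anyone reports it), with a stronger appeals process as the correction mechanism. A minimal sketch of that shape, with a keyword check standing in for the real classifier and a made-up threshold:

```python
def doxing_score(tweet_text: str) -> float:
    """Stand-in for the ML model described above; returns a pseudo-probability."""
    # A real system would be a trained classifier; keywords are only a placeholder.
    signals = ("home address", "phone number", "lives at")
    return 1.0 if any(s in tweet_text.lower() for s in signals) else 0.0

def on_new_tweet(tweet_text: str) -> str:
    # Proactive path: act before any report comes in, then let appeals
    # (reviewed by a human) correct the model's mistakes.
    if doxing_score(tweet_text) > 0.9:
        return "hide_and_queue_for_human_review"
    return "no_action"

print(on_new_tweet("his home address is 123 ..."))  # hide_and_queue_for_human_review
print(on_new_tweet("great match last night"))       # no_action
```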
00:16:36.000 But if something is reported, a human does review it eventually, or are there a series of reports that you never get to?
00:16:42.000 There's probably reports we don't.
00:16:43.000 I mean, we prioritize the queue based on severity, and the thing that will mark severity is something like physical safety or private information or whatnot.
00:16:51.000 So generally, we try to get through everything, but we have to prioritize that queue even coming in.
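Mechanically, "prioritize the queue based on severity" is a priority queue. A minimal sketch; the severity ranks are my own illustration of the ordering she names (physical safety and private information first), not Twitter's actual values:

```python
import heapq

# Lower rank = reviewed first, mirroring the ordering described above.
SEVERITY = {"physical_safety": 0, "private_info": 1, "harassment": 2, "other": 3}

queue: list[tuple[int, int, str]] = []
arrival = 0  # tie-breaker: equal-severity reports keep arrival order

def file_report(category: str, tweet_id: str) -> None:
    global arrival
    heapq.heappush(queue, (SEVERITY.get(category, 3), arrival, tweet_id))
    arrival += 1

file_report("other", "tweet_1")
file_report("physical_safety", "tweet_2")
file_report("private_info", "tweet_3")

while queue:
    _, _, tweet_id = heapq.heappop(queue)
    print("review", tweet_id)  # tweet_2, then tweet_3, then tweet_1
```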
00:16:57.000 So if someone threatened the lives of someone else, would you ban that account?
00:17:01.000 Would you tell them?
00:17:02.000 Like, let's say someone tweeted three times.
00:17:05.000 Kill these people, I want them dead.
00:17:07.000 Three times.
00:17:08.000 Yes, that's a violation.
00:17:10.000 You didn't ban him, though.
00:17:11.000 Let's pull that up, Jamie.
00:17:13.000 I don't necessarily want to give out specific usernames because then people just point the finger at me and say, I'm getting these people banned.
00:17:21.000 During Covington, this guy said multiple times he wanted his followers to go and kill these kids.
00:17:26.000 And we have to look at that, but we also have to look in the context.
00:17:29.000 Because we also have, I think we talked about this a little bit in the last podcast, but we have gamers on the platform who are saying exactly that to their friends that they're going to meet in the game tonight.
00:17:40.000 And without the context of that relationship, without the context of the conversation that we're having, we would take the exact same action on them incorrectly.
00:17:48.000 Yeah, absolutely.
00:17:49.000 That I understand.
00:18:10.000 So I do know that some of these accounts got locked.
00:18:13.000 A Disney producer was doing that?
00:18:14.000 Well, I'll clarify.
00:18:15.000 Fact check me on that, but that's basically the conversation that was had.
00:18:19.000 There's a guy at Disney, he posted a picture from Fargo of someone being tossed in a wood chipper, and he says, I want all these MAGA kids done like this.
00:18:26.000 You had another guy who specifically said, lock them in the school, burn it down, said a bunch of disparaging things, and then said, if you see them, fire on them.
00:18:34.000 And he tweeted that more than once.
00:18:35.000 And those accounts were, those tweets were taken down.
00:18:37.000 Those were violations of our rules.
00:18:39.000 I'm pretty sure it's actually illegal to do that, right?
00:18:41.000 To tell any individual to commit a felony is a crime, right?
00:18:48.000 Well, incitement of violence is certainly a crime in many places.
00:18:51.000 I just have to wonder, I understand the context issue.
00:18:54.000 But this is what I talk about.
00:18:57.000 Tim, those accounts were actioned.
00:19:00.000 They may not have been actioned the way you wanted to, but the tweets were forced to be deleted, and the account took a penalty for that.
00:19:09.000 What kind of a penalty?
00:19:10.000 Well, again, as I said earlier, Joe, we don't usually automatically suspend accounts with one violation because we want people to learn.
00:19:18.000 We want people to understand what they did wrong and give them an opportunity not to do it again.
00:19:22.000 And it's a big thing to kick someone off the platform.
00:19:25.000 And I take that very, very seriously.
00:19:26.000 So I want to make sure that when someone violates our rules, they understand what happened and they're given an opportunity to get back on the platform and change their behavior.
00:19:36.000 And so in many of these cases, what happens is we will force someone to acknowledge that their tweet violated our rules, force them to delete that tweet before they can get back on the platform.
00:19:46.000 And in many cases, if they do it again, we give them a timeout, which is like seven days, and we say, look, you've done it again.
00:19:52.000 It's a temporary suspension.
00:19:54.000 Timeout, you're a mom.
00:19:56.000 I'm totally a mom, exactly.
00:19:58.000 And if you do it again, then you're done.
00:20:01.000 So it's kind of like, you know, three strikes.
00:20:03.000 Sort of like baseball.
00:20:05.000 And so in some of these cases that Tim's referencing, I have to imagine, because these tweets were deleted.
00:20:09.000 They are violations of our rules.
00:20:11.000 People are upset that the account came back again and was allowed to say other things, but we did take action on those tweets.
00:20:17.000 They were violations of our rules.
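The enforcement ladder described here (forced deletion with acknowledgment, then a roughly seven-day timeout, then permanent suspension) behaves like a simple escalation table. A minimal sketch under those stated rules; the function and constant names are hypothetical:

```python
# Escalation as described: strike 1 forces a deletion and acknowledgment,
# strike 2 is a ~7-day timeout, strike 3 is permanent suspension.
LADDER = [
    "acknowledge_and_delete_tweet",
    "temporary_suspension_7_days",
    "permanent_suspension",
]

def action_for(prior_strikes: int) -> str:
    """Pick the penalty for a newly confirmed violation."""
    return LADDER[min(prior_strikes, len(LADDER) - 1)]

for prior in range(4):
    print(prior, "->", action_for(prior))
# 0 -> acknowledge_and_delete_tweet
# 1 -> temporary_suspension_7_days
# 2 -> permanent_suspension
# 3 -> permanent_suspension
```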
00:20:18.000 And then you have people like Milo, who is mean to a person, and you banned him permanently.
00:20:23.000 There's a little more to that.
00:20:24.000 Actually, Tim, let's talk about it.
00:20:26.000 I'm happy to talk about Milo, and I actually brought the tweets.
00:20:29.000 So let's preface that by saying the point I want to make sure is clear is that you had somebody who actively called for the death of people.
00:20:35.000 I understand the context issue.
00:20:36.000 Maybe he's talking about video games.
00:20:38.000 Context and scale.
00:20:39.000 So this is a verified user.
00:21:00.000 The action taken against him is, delete the tweets, you get a suspension, you get a timeout.
00:21:04.000 Then you have people like Alex Jones, who berated a CNN reporter, permanently banned.
00:21:08.000 You get Milo Yiannopoulos, he was mean, permanently banned.
00:21:10.000 But that's your impression.
00:21:11.000 That's not what happened.
00:21:12.000 And I'm here to talk about the details, if you want to.
00:21:15.000 Yeah, let's do this one at a time.
00:21:17.000 Let's start with Milo.
00:21:18.000 So what was the details of Milo?
00:21:19.000 So Milo had a number of tweets that violated our rules going back to 2014, but I'm going to talk about the final three in this three strikes concept.
00:21:29.000 He claimed to be a BuzzFeed reporter in his bio, and he's a verified account, so that is impersonation.
00:21:36.000 I'm not sure why he did that.
00:21:38.000 He did do that.
00:21:39.000 Well, BuzzFeed's a left-wing thing, so he was doing parody.
00:21:42.000 Potentially, but our parody rules are very specific that if you have an account that is a parody account, you need to say that it is a parody account so you don't confuse people.
00:21:50.000 Everybody who knows Milo would know that he's not a BuzzFeed reporter.
00:21:54.000 But people who don't know Milo will look at that verified account and say, hey.
00:21:58.000 But he wasn't verified after a while.
00:22:00.000 You removed his verification.
00:22:01.000 Because he violated our rules around verification.
00:22:03.000 So the verification was removed because of the BuzzFeed thing?
00:22:08.000 I believe so.
00:22:09.000 I can confirm that, but I believe so.
00:22:11.000 He also doxed someone.
00:22:13.000 He posted private information about an individual.
00:22:17.000 So that was the second one.
00:22:18.000 He tweeted to somebody else, which we viewed as a threat.
00:22:27.000 Really?
00:22:29.000 That seems like he's saying your mom should have swallowed you.
00:22:34.000 You know what I'm saying?
00:22:36.000 He's like, you're a mistake.
00:22:38.000 I don't think that's a threat.
00:22:39.000 I understand why reasonable people would have different impressions of this.
00:22:43.000 I'm just going through and telling you what they are just so we can have all the facts on the table and then we can debate them.
00:22:47.000 And then the last one, we found a bunch of things that he posted that we viewed as incitement of abuse against Leslie Jones.
00:22:55.000 So there's a bunch of them, but the one that I like to look at, which really convinced me, is he posted two doctored tweets that were supposedly by Leslie Jones.
00:23:05.000 They were fake tweets.
00:23:06.000 The first one said, And then the second one said, the goddamn slur for a Jewish person, at Sony ain't paid me yet,
00:23:22.000 damn Bix nude better pay up.
00:23:25.000 So this was just a fake tweet that someone had photoshopped?
00:23:28.000 They were two.
00:23:29.000 Two fake tweets.
00:23:30.000 Two fake tweets.
00:23:30.000 And we know they were faked because we could still tell from the software that they were faked.
00:23:36.000 You can't always tell.
00:23:37.000 So...
00:23:38.000 It is possible that he didn't know they were faked.
00:23:41.000 It's possible.
00:23:42.000 That someone sent it to him and he didn't do his due diligence in looking it up.
00:23:46.000 It is possible, but it was pointed out to him that they were fake because...
00:23:49.000 And he left it on.
00:23:50.000 And not only did he leave it on, he said, don't tell me some mischievous internet rascal made them up!
00:23:57.000 So this in the context of a bunch of other things he was saying towards Leslie Jones on Twitter...
00:24:03.000 I and my team felt that this was taken as a whole incitement of harassment against her.
00:24:09.000 Wasn't there another issue with multiple accounts that were connected to him?
00:24:14.000 There were a bunch of other issues on the background, but these are the three primary things that we looked at in terms of...
00:24:21.000 But the other things that were in the background, weren't they multiple accounts that were connected to him?
00:24:27.000 Like...
00:24:28.000 I'm not sure about that, Joe.
00:24:30.000 I think it was more that we found him to be engaging in coordinated behavior and inciting people to attack Leslie Jones.
00:24:38.000 Now, with a case like him, no, I'm just going to be honest.
00:24:41.000 When I'm listening to those...
00:24:43.000 Or listening to you read those tweets out, they don't sound that bad.
00:24:46.000 And they certainly don't sound as bad as calling for the death of a child who's wearing a MAGA hat and throwing them into a wood chipper.
00:24:51.000 The fact that that guy's still out there tweeting, and yet Milo's not.
00:24:56.000 Milo's initial, the whole thing stemmed from, other than the BuzzFeed thing, stemmed from his legitimate criticism of a film.
00:25:04.000 And he's, you know, he's a satirist.
00:25:07.000 He was mocking this film.
00:25:08.000 The doxing incident wasn't related to the film.
00:25:11.000 I hope we all agree that doxing is something that Twitter should take action on.
00:25:17.000 And it can threaten people in real life.
00:25:19.000 And I take an enormous amount of responsibility for that because I fear daily for the things that are happening on the platform that are translating into the real world.
00:25:29.000 So Milo is a contentious figure, and there's certainly things you can pull up that I wouldn't agree with anything he did there.
00:25:34.000 I think those are horrible.
00:25:35.000 I think Joe brought up some really good points, but what about Chuck Johnson?
00:25:39.000 Why was Chuck Johnson banned?
00:25:41.000 I don't know.
00:26:06.000 So you have...
00:26:07.000 And again, maybe there's some hidden context there.
00:26:09.000 I don't know.
00:26:09.000 But on the surface...
00:26:11.000 The concern is that this is always leaning towards the left.
00:26:15.000 Oh, it absolutely is.
00:26:16.000 And I'm not even getting started.
00:26:18.000 Yeah.
00:26:19.000 I can understand why you feel that way.
00:26:21.000 I don't think that's true.
00:26:22.000 I think we look at each individual instance of violations of our rules and try to make the best case that we can.
00:26:27.000 And I'm not trying...
00:26:28.000 And I do think, Joe, just to say, I do think we've failed in a couple of ways.
00:26:33.000 And I want to admit that.
00:26:34.000 Okay.
00:26:34.000 Number one, we haven't done enough education about what our rules are.
00:26:38.000 Because a lot of people violate our rules and they don't even know it.
00:26:40.000 Like, some of the statistics that we've looked at, like, for a lot of first-time users of the platform, if they violate the rule once, almost two-thirds of them never violate the rules again.
00:26:49.000 So we're not talking about, like, a bunch of people accidentally.
00:26:51.000 Like, if they know what the rules are, most people can avoid it.
00:26:54.000 And most people, when they feel the sting of a violation, they go, okay, I don't want to lose my rights to post.
00:27:00.000 Exactly.
00:27:00.000 And they're able to do it.
00:27:01.000 So we have a lot of work to do in education so people really understand what the rules are in the first place.
00:27:06.000 The other thing we have to do to address these allegations that we're doing this from a biased perspective is to be really clear about what types of behavior are caught by our rules and what types are not.
00:27:17.000 And to be transparent within the product.
00:27:19.000 So when a particular tweet is found to be in violation of our rules, being very, very clear, like this tweet was found to be in violation of this particular rule.
00:27:27.000 And that's all work that we're doing.
00:27:28.000 Because we think the combination of education and transparency is really important, particularly for an open platform like Twitter.
00:27:34.000 It's just part of who we are, and we have to build it into the product.
00:27:37.000 I appreciate that your particular thoughts, though, on those examples that he described, when he's talking about someone saying they should throw these children into a wood chipper versus Chuck Johnson saying he should take this guy – he wants to prepare a dossier to take this guy out, or how did he say it?
00:27:52.000 He said something like, I'm going to take out DeRay McKesson with – he said, I'm preparing to take out – something like that, I can't remember.
00:27:58.000 Preparing to take him out.
00:27:59.000 I can understand how – So it could be misconstrued as he was trying to assassinate him.
00:28:03.000 You could misconstrue with that.
00:28:04.000 Not a direct threat.
00:28:05.000 But the other one's a direct threat.
00:28:07.000 One guy is banned for life.
00:28:08.000 The other guy is still posting.
00:28:10.000 I'm happy to follow up.
00:28:11.000 I just don't have all the Chuck Johnson.
00:28:12.000 It's not about one thing, as I said.
00:28:14.000 It's about a pattern and practice of violating our rules.
00:28:17.000 And we don't want to kick someone off for one thing.
00:28:19.000 But if there's a pattern and practice like there was from Milo, we are going to have to take action at some point because we can't sit back and let people be abused and harassed and silenced on the platform.
00:28:28.000 Well, so one really important thing that needs to be stated is that Twitter, by definition, is a biased platform in favor of the left, period.
00:28:35.000 It's not a question.
00:28:36.000 I understand you might have your own interpretation, but it's very simple.
00:28:40.000 Conservatives do not agree with you on the definition of misgendering.
00:28:42.000 If you have a rule in place that specifically adheres to the left ideology, you, by default, are enforcing rules from a biased perspective.
00:28:49.000 Well, Tim, there are a lot of people on the left who don't agree with how we're doing our job either.
00:28:53.000 For sure.
00:28:54.000 And those people think that we don't take enough action on abuse and harassment, and we let far too much behavior go.
00:28:59.000 But that's a radical example, though.
00:29:01.000 I mean, what he's talking about, I mean, in terms of generalities, in general, things lean far more left.
00:29:07.000 Would you agree to that?
00:29:08.000 I don't know what that means.
00:29:09.000 But in this particular case, it's how the speech is being used.
00:29:12.000 This is a new vector of attack that people have felt that I don't want to be on this platform anymore because I'm being harassed and abused and I need to get the hell out.
00:29:21.000 Will people harass and abuse me all day and night?
00:29:22.000 You don't do anything about that.
00:29:26.000 My notification is permanently locked at 99. You have it worse than I do.
00:29:29.000 I mean, you get substantially more followers.
00:29:30.000 And I don't click the notification tab anymore because it's basically just harassment.
00:29:36.000 So this is a really funny anecdote.
00:29:37.000 I was covering a story in Berkeley and someone said, if you see him, attack him.
00:29:43.000 I'm paraphrasing.
00:29:44.000 They said basically to swing at me, take my stuff, steal from me.
00:29:48.000 And Twitter told me after review it was not a violation of their policy.
00:29:52.000 Somebody made an allusion to me being a homosexual, and I reported that: instantly gone.
00:29:57.000 So for me, I'm like, well, of course.
00:30:00.000 Of course Twitter is going to enforce the social justice aspect of their policy immediately, in my opinion, probably because you guys have PR constraints and you're probably nervous about that.
00:30:09.000 But when someone actually threatens me with a crime and incites their followers to do it, nothing got done.
00:30:13.000 And I'm not the only one who feels that way.
00:30:15.000 Well, Tim, that's a mistake.
00:30:16.000 If someone...
00:30:17.000 Acts in that manner and threatens to hurt you.
00:30:20.000 That's a violation of our rules.
00:30:22.000 Maybe there was a mistake there and I'm happy to go and correct that and we can do it offline so we don't fear any sort of reprisal against you.
00:30:28.000 But that's a mistake.
00:30:29.000 That's not an agenda on my part or in the team's part.
00:30:32.000 We don't have any PR constraints.
00:30:36.000 So why did you ban Alex Jones?
00:30:38.000 We can get into that.
00:30:39.000 You want to get into that?
00:30:40.000 Absolutely.
00:30:40.000 Are you ready for Alex Jones?
00:30:41.000 Sure.
00:30:42.000 All right.
00:30:42.000 Oh, I've been ready for Alex Jones.
00:30:45.000 Well, let me say this.
00:30:48.000 The reason I bring him up is that Oliver Darcy, one of the lead reporters covering Alex Jones and his content, said on CNN that it was only after media pressure did these social networks take action.
00:30:58.000 So that's why I bring him up specifically because it sort of implies you are under PR constraints to get rid of him.
00:31:03.000 I think if you look at the PR that Twitter went through in that incident, it wouldn't be that we looked good in it.
00:31:08.000 And that's not at all why we took action on this.
00:31:11.000 You have to look at the full context on the spectrum here.
00:31:13.000 Because one of the things that happened over a weekend is what Alex mentioned on your podcast with him.
00:31:22.000 He was removed from the iTunes podcast directory.
00:31:26.000 That was the linchpin for him because it drove all the traffic to...
00:31:32.000 What he said, basically zero.
00:31:35.000 Immediately after that, we saw our peer companies, Facebook, Spotify, YouTube, also take action.
00:31:42.000 We did not.
00:31:43.000 We did not because when we looked at our service and we looked at the reports on our service, we did not find anything in violation of our rules.
00:31:53.000 Then we got into a situation where suddenly a bunch of people were reporting content on our platform, including CNN, who wrote an article about all the things that might violate our rules that we looked into.
00:32:08.000 And we gave him one of the warnings.
00:32:12.000 And then we can get into the actual details.
00:32:14.000 But we did not follow.
00:32:16.000 We resisted just being like a domino with our peers because it wasn't consistent with our rules and the contract we put before our customers.
00:32:26.000 So what was it that made you ban him?
00:32:29.000 So there were three separate incidents that came to our attention after the fact that were reported to us by different users.
00:32:36.000 There was a video that was uploaded that showed a child being violently thrown to the ground and crying.
00:32:41.000 So that was the first one.
00:32:43.000 The second one was a video that we viewed as incitement of violence.
00:32:48.000 I can read it to you.
00:32:49.000 It's a little bit of a transcript.
00:32:51.000 Sure.
00:32:52.000 But now it's time to act on the enemy before they do a false flag.
00:32:55.000 I know the Justice Department's crippled a bunch of followers and cowards, but there's groups, there's grand juries, there's you called for it.
00:33:01.000 It's time politically, economically, and judiciously, and legally and criminally to move against these people.
00:33:06.000 It's got to be done now.
00:33:08.000 Get together the people you know aren't traitors, aren't cowards, aren't helping their frickin' bets, hedging their frickin' bets like all these other assholes do, and let's go, let's do it.
00:33:16.000 So people need to have their, and then there's a bunch of other stuff, but at the end, so people need to have their battle rifles ready and everything ready at their bedsides, and you've got to be ready because the media is so disciplined in their deception.
00:33:29.000 So you're saying that this is a call to violence against the media?
00:33:32.000 That's what it sounded like to us at the time.
00:33:34.000 And there have been a number of incidents of violence against the media.
00:33:37.000 And again, I take my responsibility for what happens on the platform and how that translates off-platform very seriously.
00:33:43.000 And that felt like it was an incitement to violence.
00:33:45.000 So if he only tweeted the incitement to violence, he would have been fine?
00:33:48.000 If he only posted that transcript saying, get your battlefield rifles ready, you wouldn't have deleted his account?
00:33:55.000 Again, context matters to him.
00:33:56.000 It's not about one thing.
00:33:58.000 So we'd have to look at the entire context of what's going on.
00:34:00.000 So I'm asking, was that egregious enough for you to say, that alone?
00:34:03.000 That wasn't the final.
00:34:05.000 That was just number two.
00:34:06.000 Right, right.
00:34:06.000 So then I guess the question is, what was the video context of the kid being thrown to the ground?
00:34:10.000 Was it newsworthy?
00:34:11.000 We obviously didn't think so, and depicting violence against a child is not something that we would allow on the platform.
00:34:16.000 Even if it's news content.
00:34:18.000 There are certain types of situations where if you were reporting on, you know, war zone and things that might be happening, we would put an interstitial on that type of content that's graphic or violent, but we didn't feel that that was the context here.
00:34:31.000 Well, there's a video that's been going around that was going around a few...
00:34:35.000 Four or five weeks ago, the one where the girls were yelling at that big giant guy and the guy punched that girl in the face and she was like 11 years old.
00:34:41.000 I saw that multiple times on Twitter.
00:34:44.000 That was one of the most violent things I've ever seen.
00:34:46.000 This giant man punched this 11-year-old girl in the face.
00:34:50.000 And was that removed from Twitter?
00:34:53.000 I don't know.
00:34:53.000 I would have to go see if anyone reported it to us.
00:34:56.000 I think one of the issues here, too, is...
00:34:59.000 Do you want me to get to the third one?
00:35:00.000 Yeah, please do.
00:35:01.000 So the third strike that we looked at was a verbal altercation that Alex got into with a journalist, and in that altercation, which was uploaded to Twitter, there were a number of statements, using words like eyes of a rat, even more evil-looking person,
00:35:17.000 he's just scum, you're a virus to America and freedom, smelling like a possum that climbed out of the rear end of a dead cow.
00:35:23.000 You look like a possum that got caught doing some really, really nasty stuff in my view.
00:35:27.000 So it was a bunch of verbal...
00:35:29.000 That's enough?
00:35:30.000 Really?
00:35:30.000 That's hilarious.
00:35:31.000 ...pattern and practice, but it was a verbal altercation that was posted on our platform.
00:35:36.000 So we took the totality of this, having been warned that we have rules against abuse and harassment of individuals.
00:35:42.000 We saw this pattern in practice, one strike, two strikes, three strikes, and we made a decision to permanently suspend.
00:35:48.000 And so that last one was on Periscope, is that what it was, that he broadcast through?
00:35:53.000 I think it was originally on Periscope, but it was also reposted from multiple related accounts onto Twitter.
00:36:01.000 So we can agree with you when you say these things like, you know, Alex said this sounds like a threat.
00:36:06.000 He was berating this person saying awful things.
00:36:09.000 But ultimately, your judgment is the context.
00:36:12.000 You say we have to pay attention to the context.
00:36:13.000 We're just trusting that you made the right decision.
00:36:16.000 Well, I'm giving you as much facts as I can give you here.
00:36:20.000 And I think that this is a real...
00:36:22.000 Hard part of content moderation at scale on global platforms.
00:36:26.000 It's not easy, and I don't think Jack or I would tell you that it's easy.
00:36:28.000 It's a preposterous volume that you guys have to deal with, and that's one of the things that I wanted to get into with Jack when I first had him on, because when my thought, and I wasn't as concerned about the censorship as many people were, my main concern was, what is it like to start this thing that's kind of for fun,
00:36:47.000 and then all of a sudden it becomes the premier platform for free speech on the planet Earth?
00:36:53.000 It is that, but it's also a platform that's used to abuse and harass a lot of people and used in ways that none of us want it to be used, but nonetheless it happens.
00:37:02.000 And I think it's an enormously complicated challenge.
00:37:12.000 It doesn't scale.
00:37:13.000 It doesn't scale.
00:37:34.000 He keeps going after Alex Jones.
00:37:35.000 He keeps digging through his history.
00:37:37.000 Then he goes on TV and says, we got him banned.
00:37:40.000 Then Alex Jones confronts him in a very aggressive and mean way, and that's your justification for, or I should say, I inverted the timeline.
00:37:46.000 Basically, you have someone who's relentlessly digging through stuff, insulting you, calling you names, sifting through your history, trying to find anything they can to get you terminated, going on TV even, writing numerous stories.
00:37:58.000 You confront them and say, you're evil, and you say a bunch of really awful mean things.
00:38:03.000 And then you ban him.
00:38:19.000 The conservatives, to an extent, probably will try and mass flag people on the left.
00:38:24.000 But from an ideological standpoint, you have the actual, you know, whatever people want to call it, sect of identitarian left that believe free speech is a problem, that have literally shown up in Berkeley burning free speech signs.
00:38:36.000 And then you have conservatives who are tweeting mean things.
00:38:38.000 And the conservatives are less likely, I think it's fair to point out, less likely to try and get someone else banned because they like playing off them.
00:38:46.000 And the left is targeting them.
00:38:48.000 So you end up having disproportionate- I feel like there are a lot of assumptions in what you're saying.
00:38:52.000 And I don't know what basis you're saying those things.
00:38:54.000 I mean, you have conservatives demanding free speech and you have liberals.
00:38:57.000 I shouldn't say liberals.
00:38:58.000 You have what people refer to as the regressive left calling for the restrictions on speech.
00:39:04.000 I don't know what those terms mean, to be honest with you.
00:39:07.000 We have people on all sides of the spectrum who believe in free speech, and I believe that to be the case.
00:39:11.000 So your platform restricts speech?
00:39:15.000 Our platform promotes speech unless people violate our rules.
00:39:19.000 And in a specific direction?
00:39:21.000 In any direction.
00:39:22.000 But again, I don't want to say his name, but the guy who calls for death gets a suspension, and the guy who insinuates death gets a permanent ban.
00:39:28.000 But Tim, you're misinterpreting what I'm saying, and I feel like you're doing it deliberately.
00:39:32.000 It's not about one particular thing.
00:39:34.000 It's about a pattern and practice of violating our rules.
00:39:35.000 And you have a pattern and practice of banning only one faction of people.
00:39:38.000 I don't agree with that.
00:39:39.000 Quillette recently published an article where they looked at 22 high-profile bannings from 2015 and found 21 of them were only on one side of the cultural debate.
00:39:47.000 But I don't look at the political spectrum of people when I'm looking at their tweets.
00:39:50.000 Right, you have a bias.
00:39:51.000 I don't know who they are.
00:39:52.000 You're biased, and you're targeting specific individuals because your rules support this perspective.
00:39:57.000 No, I don't agree with that.
00:39:58.000 Can you be clear, though, in what rules support that perspective?
00:40:01.000 Specifically, the easiest one is misgendering, because that's so clearly ideological.
00:40:06.000 If you ask a conservative...
00:40:08.000 What is misgendering?
00:40:26.000 I have a rule against the abuse and harassment of trans people on our platform.
00:40:30.000 Can we just give context in the background as to why that is?
00:40:47.000 How that translates into real-world harm.
00:40:50.000 And they give us feedback.
00:40:51.000 And they tell us, like, you should consider different types of rules, different types of perspectives, different...
00:40:56.000 Like, for example, when we try to enforce hateful conduct in our hateful conduct policy in a particular country, we are not going to know all the slur words that are used to target people of a particular race or a particular religion.
00:41:08.000 So we're going to rely on building out a team of experts all around the world who are going to help us enforce our rules.
00:41:15.000 So in the particular case of misgendering, I'm just trying to pull up some of the studies that we looked at, but we looked at the American Association of Pediatrics and looked at the number of transgender youths that were committing suicide.
00:41:29.000 It's an astronomical, I'm sorry, I can't find it right now in front of me.
00:41:32.000 It's a really, really high statistic that's like 10 times what the normal suicide rate is.
00:41:35.000 Of normal teenagers.
00:41:37.000 And we looked at the causes of what that was happening.
00:41:39.000 And a lot of it was not just violence towards those individuals, but it was bullying behavior.
00:41:44.000 And what were those bullying behaviors that were contributing to that?
00:41:48.000 And that's why we made this rule.
00:41:50.000 Because we thought, and we believe, that those types of behaviors were happening on our platform, and we wanted to stop it.
00:41:58.000 Now there are exceptions to this rule.
00:41:59.000 We don't, and this is all, this isn't about like public figures, and there's always going to be public figures that you're going to want to talk about, and that's fine.
00:42:07.000 But this is about, are you doing something with the intention of abusing and harassing a trans person on the platform?
00:42:12.000 And are they viewing it that way and reporting it to us so that we take action?
00:42:17.000 So I will just state, I actually agree with the rule.
00:42:21.000 From my point of view, I agree that bullying and harassing trans people is entirely wrong.
00:42:26.000 I disagree with it.
00:42:27.000 But I just want to make sure it's clear to everybody who's listening.
00:42:29.000 My point is simply that Ben Shapiro went on a talk show and absolutely refused.
00:42:34.000 And that's his schtick.
00:42:35.000 And he's one of the biggest podcasts in the world.
00:42:38.000 So if you have all of his millions upon millions of followers who are looking at this rule saying this goes against My view of the world, and it's literally 60-plus million in this country, you do have a rule that's ideologically bent.
00:42:49.000 And it's true.
00:42:50.000 You did the research.
00:42:51.000 You believe this.
00:42:53.000 Well, then you have Ben Shapiro, who did his research and doesn't believe it.
00:42:56.000 And I relied on the American Association of Pediatrics and Human Rights Council and other...
00:43:02.000 And I'm sure he has his sources, too, for when he gives his statements.
00:43:05.000 The point is...
00:43:05.000 But I just wonder if they have that context.
00:43:09.000 And that's where we have also failed, as well as just explaining the why behind a lot of our policy and reasons.
00:43:16.000 I would agree, and I think it's fine.
00:43:18.000 You did research and you found this to be true.
00:43:20.000 But we can't simply say maybe Ben Shapiro and the other conservatives who feel this way don't know.
00:43:27.000 The point I'm trying to make is simply whether you justify it or not is not the point.
00:43:31.000 The point is you do.
00:43:32.000 You do have this rule.
00:43:33.000 That rule is at odds with conservatives, period.
00:43:35.000 Well, I think that you're generalizing, but I think it is really important, as Jack said, to the why behind these things.
00:43:42.000 The why is to protect people from abuse and harassment on our platform.
00:43:46.000 I understand, but you essentially created a protected class, if this is the case, because despite these studies and what these studies are showing...
00:43:55.000 There's a gigantic suicide rate amongst trans people, period.
00:44:00.000 It's 40%.
00:44:01.000 It's outrageously large.
00:44:04.000 Now, whether that is because of gender dysphoria, whether it's because of the complications from sexual transition surgery, whether it's because of bullying, whether it's because of this awful feeling of being born in the wrong gender, all that is yet to be determined.
00:44:20.000 The fact is, they've shown that there's a large number of trans people that are committing suicide.
00:44:25.000 I don't necessarily think that that makes sense in terms of someone, from a perspective like Ben Shapiro's, saying that if you are biologically female, if you are born with a double X chromosome,
00:44:41.000 you will never be XY. If he says that, that's a violation of your policy.
00:44:48.000 And you're creating a protected class.
00:44:52.000 To be fair, targeted.
00:44:54.000 If he wants to express that opinion, he is fully entitled to express that opinion.
00:45:01.000 If he's doing it in a manner that's targeted at an individual, repeatedly, and saying that, that's where the intent and the behavior comes in.
00:45:09.000 You know what's going on with Martina Navratilova right now?
00:45:11.000 Martina Navratilova.
00:45:13.000 Why can't I say her last name?
00:45:14.000 Yeah, I don't know.
00:45:16.000 I don't think I've ever said it.
00:45:17.000 Martina Navratilova.
00:45:19.000 Is it Navratilova or Navratalova?
00:45:21.000 That doesn't sound right.
00:45:22.000 Epic, world-class, legendary tennis player, who happens to be a lesbian, is being harassed because she says that she doesn't believe that trans women, meaning someone who is biologically male who transitions to female, should be able to compete in sports against biological females.
00:45:41.000 This is something that I agree with.
00:45:42.000 This is something I have personally experienced a tremendous amount of harassment over, because I stood up when there was a trans woman who was fighting biological females in mixed martial arts fights and destroying these women.
00:45:55.000 And I was saying, just watch this and tell me this doesn't look crazy to you.
00:46:00.000 Well, my point is, You should be able to express yourself.
00:46:07.000 And if you say that you believe someone is biologically male, even though they identify as a female, that's a perspective that should be valid.
00:46:17.000 First of all, it's biologically correct.
00:46:22.000 So we have a problem in that if your standards and your policies are not biologically accurate, then you're dealing with an ideological policy.
00:46:37.000 I don't want to target trans people.
00:46:41.000 I don't want to harass them.
00:46:42.000 I'll call anybody whatever they want.
00:46:44.000 If you want to change your name to a woman's name and identify as a woman, I'm 100% cool with that.
00:46:51.000 By saying, I don't think that you should be able to compete as a woman, this opens me up for harassment.
00:46:58.000 Now, I never reported any of it.
00:46:59.000 I just don't pay attention to it.
00:47:01.000 But going into, like, Meghan Murphy, for instance, right?
00:47:04.000 You can call that targeted harassment.
00:47:06.000 If Meghan Murphy, who is – for those that don't know, she's a radical feminist who refuses to use the transgender pronouns – if she's in an argument with a trans person over whether or not they should be allowed in sports or in biologically female spaces, and she refuses to use their pronoun because of her ideology,
00:47:23.000 you'll ban her.
00:47:24.000 Again, it depends on the context on the platform.
00:47:26.000 And it's also not banned permanently.
00:47:30.000 Like, you get warnings.
00:47:31.000 Well, she was banned permanently, but let's be clear about what happened.
00:47:35.000 But she was warned multiple times.
00:47:37.000 What did she actually do?
00:47:39.000 My understanding, and I don't have the tweet by tweet the way that I did for the others, but my understanding is that she was warned multiple times for misgendering an individual that she was in an argument with, and this individual is actually bringing a lawsuit against her in Canada as well.
00:47:54.000 So you have an argument between two people, and you have a rule that enforces only one side of the ideology, and you've banned only one of those people.
00:48:02.000 We have a rule that attempts to address what we have perceived to be instances of abuse and harassment.
00:48:09.000 I completely agree.
00:48:10.000 It's your ideology.
00:48:11.000 Right, but it is an ideology, right?
00:48:12.000 If she's saying a man is never a woman, if that's what she's saying, and then biologically she's correct, we obviously have a debate here.
00:48:19.000 This is not a clear-cut...
00:48:21.000 This is not something like you can say, water is wet, you know, this is dry.
00:48:25.000 This is not like something you can prove.
00:48:27.000 This is something where you have to acknowledge that there's an understanding that if someone is a trans person, we all agree to consider them a woman, and to think of them as a woman, to...
00:49:00.000 If you're taking those viewpoints and you're targeting them at a specific person in a way that reflects your intent to abuse and harass them.
00:49:07.000 What if it's in the context of the conversation?
00:49:09.000 What if she's saying that, I don't think that trans women should be allowed in these female spaces to make decisions for women?
00:49:14.000 And then this person's arguing and she says, a woman is biologically female.
00:49:19.000 You are never going to be a woman.
00:49:21.000 She responded with, men aren't women though.
00:49:23.000 And that was her first, in the series of events, that's what got her the suspension and the warning process.
00:49:28.000 That was one of many tweets that was part of providing context, and that was actually the second...
00:49:34.000 ...strike, is my understanding.
00:49:36.000 But why is that a strike?
00:49:36.000 Yeah, why is that a strike?
00:49:38.000 But again, it's the context of, I don't have all the tweets in front of me.
00:49:42.000 There were like 10 or 12 tweets going back and forth.
00:49:44.000 And my understanding is that in the context of all of those, she was misgendering a particular person.
00:49:50.000 Not that she was holding a belief or statement.
00:49:51.000 It was a public figure, though, wasn't it?
00:49:52.000 I don't know.
00:49:53.000 It was.
00:49:54.000 So you're having an individual who is debating a high-profile individual in her community, and she's expressing her ideology versus hers, and you have opted to ban one of those ideologies.
00:50:04.000 And it's within the context of this conversation.
00:50:06.000 This is what is being debated, whether or not someone is, in fact, a woman when they were born a male.
00:50:13.000 I understand that this is controversial.
00:50:15.000 I do.
00:50:16.000 Especially to a radical feminist.
00:50:17.000 I understand why people would not agree with the rule.
00:50:21.000 But that being said, it is a rule on our platform.
00:50:23.000 And once you're warned about the rule, to repeatedly post the same content...
00:52:28.000 is also going to be a violation of our rules.
00:50:30.000 Right, but the rule, this seems like a good example of an ideologically based rule.
00:50:36.000 If she's saying that, a man is never a woman, though, that is not, in that context, harassment.
00:50:44.000 That is a very specific opinion that she has that happens to be biologically accurate.
00:50:50.000 Now, I don't...
00:50:51.000 I don't agree with targeted harassment of anybody.
00:50:55.000 Whether it's targeted harassment of trans people or straight people or whatever.
00:50:59.000 I don't agree with it.
00:51:01.000 I don't think you should do it.
00:51:02.000 It's not something I want to do.
00:51:03.000 But in this context, what she's saying is not just her expression, but it's accurate.
00:51:11.000 I think an important point is if I tweeted to you, Joe, Joe, you are not a hamster, that's clearly not a violation of the rules.
00:51:19.000 But if I identify as a hamster?
00:51:21.000 Well, no, it wouldn't be.
00:51:23.000 Because I know people who have specifically begun using insults of animals to avoid getting kicked off the platform for breaking the rules.
00:51:29.000 Certain individuals who have been suspended now use certain small woodland creatures in place of slurs, so they're not really insulting you, and it's fine.
00:51:36.000 But there are people who consider themselves trans species.
00:51:39.000 Now, I'm not trying to belittle the trans community by any means.
00:51:42.000 I'm just trying to point out that you have a specific rule for one set of people.
00:51:45.000 So there are people who have general body dysphoria.
00:51:48.000 You don't have rules on that.
00:51:49.000 There are people who have actually amputated their own arms.
00:51:51.000 You don't have rules on that.
00:51:52.000 You have a very specific rule set.
00:51:55.000 And more importantly, in the context of a targeted conversation, I can say a whole bunch of things that would never be considered a rule break, but that one is, which is ideologically driven.
00:52:08.000 Thank you for the feedback.
00:52:09.000 I mean, we're, again, always learning and trying to understand different people's perspectives.
00:52:14.000 And all I'll say is that our intent is not to police ideology.
00:52:19.000 Our intent is to police behaviors that we view as abuse and harassment.
00:52:23.000 And I hear your point of view, and it's something that I'll definitely discuss with my team.
00:52:27.000 And even in this case, it wasn't just going against this particular rule, but also things that were more ban-evasive as well, including taking a screenshot of the original tweet, reposting it, which is against our Terms of Service.
00:52:44.000 Well, that sounds like a protest.
00:52:45.000 It sounded like a protest against your rule.
00:52:47.000 I understand you could ban them for it.
00:52:49.000 But people can protest any of our rules.
00:52:50.000 We can't let them do that.
00:52:53.000 No, no, no, I agree.
00:52:53.000 They can protest any of them.
00:52:55.000 I understand what you're saying, but I just want to make sure I point out she was clearly doing it as an effort to push back on what she viewed as an ideologically driven rule.
00:53:01.000 Well, the problem is this is a real debate in...
00:53:04.000 The LGBT community.
00:53:07.000 This is a debate where there's a division, and there's a division between people that think that trans women are invading biological female spaces and making decisions that don't benefit these biological females, cisgender, whatever you want to call them.
00:53:22.000 This is an actual debate, and it's a debate amongst progressive people, amongst left-wing people, and it's a debate amongst liberals.
00:53:29.000 This is, I mean, I would imagine the vast majority of people in the LGBT community are, in fact, on the left.
00:53:37.000 And this is one example of that.
00:53:39.000 So you have a protected class that's having an argument with a woman who feels like there's an ideological bent to this conversation that is not only not accurate, but not fair.
00:53:52.000 And she feels like it's not fair for biological women.
00:53:54.000 The same as Martina.
00:53:56.000 Well, I'll take this to its logical conclusion.
00:53:59.000 I got sent a screenshot from somebody, and maybe it's faked.
00:54:01.000 I think it was real.
00:54:02.000 They were having an argument with someone on Twitter and responded with, dude, comma, you don't know, blah, blah, blah.
00:54:08.000 And they got a suspension and a lockout and had to delete the tweet because the individual, using a cartoon avatar, with the name apparently Sam…
00:54:41.000 Yeah, it's tricky, but in this case of Meghan Murphy, that's her name, right?
00:54:46.000 Yeah.
00:54:47.000 That doesn't make any sense to me.
00:54:48.000 That seems like she should be allowed to express herself.
00:54:52.000 She's not being mean by saying a man is never a woman.
00:54:57.000 This is a perspective that...
00:54:59.000 That is scientifically accurate.
00:55:01.000 That's part of the problem.
00:55:03.000 I just don't want to beat a dead horse.
00:55:06.000 It's a really important thing to go over all the nuances of this particular subject because I think that one in particular highlights this idea of where the problems lie in having a protected class.
00:55:20.000 And I think we should be compassionate.
00:55:23.000 We have a lot of protected classes.
00:55:24.000 Gender, race, nationality.
00:55:26.000 These are the protected classes.
00:55:27.000 But it's not for white people.
00:55:29.000 When you say gender or race...
00:55:32.000 It protects all protected categories.
00:55:34.000 So you can't attack someone for their belonging to a particular race or a particular religion.
00:55:38.000 But you can mock white people ad nauseum.
00:55:41.000 It's not a problem.
00:55:42.000 It doesn't get removed.
00:55:44.000 I'm not talking about mocking.
00:55:45.000 I'm talking about abusing and harassing somebody.
00:55:47.000 But I mean, if you mock a black person in the same way, it would be considered targeted racism.
00:55:54.000 Again, it's about targeted harassment on the platform.
00:55:57.000 What is targeted harassment?
00:56:01.000 What is racism?
00:56:06.000 There's this progressive perspective of racism that it's only possible if you're from a more powerful class.
00:56:12.000 It's only punching down.
00:56:14.000 That's the only racism.
00:56:15.000 I don't think that makes any sense.
00:56:16.000 I think racism is looking at someone from whatever race and deciding that they are, in fact, less, or less worthy, or less valuable, whatever it is.
00:56:28.000 That takes place across the platform against white people.
00:56:33.000 Now I'm not saying white people need to be protected.
00:56:35.000 I know it's easier being a white person in America.
00:56:37.000 It's a fact.
00:56:38.000 But it's hypocritical.
00:56:40.000 To have a policy that distinguishes that you can make fun of white people all day long, but if you decide to make fun of Asian folks or, you know, fill in the blank, that is racist.
00:56:51.000 But making fun of white people isn't, and it doesn't get removed.
00:56:54.000 There are tons of tools.
00:56:55.000 How about Sarah Jeong from the New York Times?
00:56:57.000 Well, I can actually explain that one.
00:56:59.000 Please do.
00:57:01.000 My understanding is that you guys started banning people officially under these policies around 2015, and all the tweets she made were prior to that, and so you didn't enforce against the old tweets.
00:57:08.000 Yeah, so our hateful conduct policy, Joe, just to be clear, is across the board, meaning it doesn't just protect women.
00:57:15.000 It protects men and women.
00:57:16.000 It protects all races.
00:57:17.000 It doesn't matter.
00:57:18.000 And this is how the law is set up in the United States, right?
00:57:20.000 You can't discriminate against white men.
00:57:22.000 You can't discriminate against...
00:57:24.000 Black men.
00:57:25.000 Like, those are the laws, right?
00:57:27.000 Like, that's the structure it is.
00:57:28.000 It doesn't take into consideration power dynamics.
00:57:31.000 So if someone says something about white people and mocks white people on Twitter, what do you do about that?
00:57:37.000 If it's targeted harassment.
00:57:38.000 Targeted.
00:57:38.000 At a person.
00:57:39.000 So just white people in general.
00:57:41.000 If you say something about white people in general, that's not an issue?
00:57:43.000 Well, I mean, we focus on targeted harassment, which is behavior that is targeted against an individual who belongs to that class.
00:57:49.000 Okay.
00:57:49.000 Because if you try to police every opinion that people have about different races or religions, like, obviously, that's a very different story.
00:57:56.000 So this is about if you target that to somebody who belongs to that class, and that's reported to us, that is a violation of our rules.
00:58:03.000 And so in the Sarah Jeong case...
00:58:06.000 We did see many tweets of that nature that were focused on people who are white or men.
00:58:13.000 And our rules in this area came into effect in 2015, which was our hateful conduct policy.
00:58:19.000 And a lot of those tweets were from a time period where those rules weren't in effect.
00:58:23.000 And in her defense, she was actually supposedly responding to people that have – you don't believe that?
00:58:29.000 Oh, come on.
00:58:29.000 Over three years – And she's tweeting blanket statements about white people.
00:58:33.000 She holds a grudge.
00:58:33.000 Sure, sure.
00:58:34.000 So I will say too, obviously I've done a ton of- Can I just finish on my one point?
00:58:39.000 So in that case, there were tweets from before the rules went into effect and tweets from after the rules went into effect.
00:58:43.000 And we did take action on the tweets from after the rules went into effect.
00:58:47.000 She's also pretty young.
00:58:48.000 So I want to point, yeah, she's in her 20s.
00:58:52.000 Yeah, so we're talking about something that might have happened eight years ago, right?
00:58:56.000 Right.
00:58:57.000 It was like 2011 to 2013. But I do want to point this out.
00:59:00.000 Before coming on, I obviously did a decent amount of research.
00:59:03.000 I searched for slurs against white people, black people, Latinos, and I found copious just tons and tons of them.
00:59:10.000 Now, they don't go back.
00:59:11.000 Most of what I found didn't go back too far because it does seem like you guys are doing your best, but there's a lot.
00:59:16.000 And it targets white people, black people, Jewish people.
00:59:18.000 It's everywhere.
00:59:19.000 And I can understand that.
00:59:20.000 You guys, you've got hundreds of millions.
00:59:22.000 But let's try another subject.
00:59:25.000 Just to address that point, and I think Jack talked about this a little bit, like this is where right now we have a system that relies on people to report it to us, which is a huge burden on people.
00:59:35.000 And especially if you happen to be a high-profile person, and Tim, you would understand this, you're not going to sit there and report every tweet.
00:59:42.000 And Joe, you'll understand this.
00:59:43.000 I ignore it.
00:59:44.000 Like, it's not worth your time.
00:59:45.000 You're not going to go through tweet by tweet as people respond to you and report it.
00:59:48.000 People tell us this all the time.
00:59:50.000 So this is where we have to start getting better at identifying when this is happening and taking action on it without waiting for somebody to tell us it's happening.
00:59:57.000 But using an algorithm, though, do you not miss context?
01:00:01.000 I mean, it seems to me that there's a lot of people that say things in humor, you know.
01:00:05.000 Or slurs within particular communities, which is perfectly reasonable.
01:00:09.000 Right, right.
01:00:10.000 So, yes, there is a danger of the algorithms missing context, and that's why we really want to go carefully into this, and this is why we've scoped it down, first and foremost, to doxing.
01:00:20.000 Which is, at least, first, it hits our number one goal of protecting physical safety.
01:00:25.000 Like, making sure that nothing done online will impact someone's physical safety offline, on our platform, in this case.
01:00:32.000 The second is that there are patterns around doxing that are much easier to...
01:00:39.000 There are exceptions, of course, because you could dox a representative's public office phone number and email address, and the algorithm might catch that, not have the context that this is a U.S. representative and this information is already public.
01:00:57.000 So essentially, it highlights how insanely difficult it is to monitor all of these posts.
01:01:04.000 And then, what is the volume?
01:01:06.000 What are we dealing with?
01:01:07.000 How many posts do you guys get a day?
01:01:09.000 Hundreds of millions of posts a day.
01:01:12.000 And how many human beings are manually reviewing any of these things?
01:01:17.000 I don't have that number.
01:01:19.000 A lot.
01:01:19.000 A lot.
01:01:20.000 Thousands?
01:01:20.000 Hundreds of thousands?
01:01:21.000 How many employees do you guys have?
01:01:22.000 We have 4,000 employees around the world.
01:01:25.000 That's it?
01:01:26.000 Yeah.
01:01:26.000 We have 4,000 employees.
01:01:29.000 That's crazy, though, but stop and think about that.
01:01:30.000 4,000 people that are monitoring hundreds of millions of tweets?
01:01:34.000 No, no, no.
01:01:34.000 We have a small team who's monitoring tweets, and some of them are employed by us.
01:01:40.000 Some of them are contractors throughout the world.
01:01:42.000 So 4,000 employees total?
01:01:44.000 4,000 employees who are engineers, who are designers, who are lawyers, policy experts.
01:01:48.000 So the number of people actually monitoring tweets is probably less than 1,000?
01:01:52.000 Well, the reason we don't give out specific numbers is we need to scale these dynamically.
01:01:58.000 If we see a particular event within a country, we might hire 100 more people on contract to deal with it.
01:02:06.000 Whereas they may not be full-time and with us the entire time.
01:02:09.000 And would they have the ability to take down tweets?
01:02:11.000 They have the, so as we get reports, it goes into a queue, and those are ranked by severity.
01:02:18.000 And then we have people who look at our rules and look at the tweets and look at the behavior and the context around it.
01:02:23.000 And they have the ability to go down that enforcement spectrum that Vijaya talked about.
01:02:29.000 One, making people log in, read why it's a violation, and delete the tweet.
01:02:34.000 Two, temporary suspensions.
01:02:36.000 And finally, a permanent suspension, which is the absolute last resort, which we ultimately do not want to do.
01:02:43.000 We want to make sure that our rules are also guided towards incentivizing more healthy conversation and more participation.
01:02:51.000 So let me ask you, the rules you have are not based in US law, right?
01:02:56.000 US law doesn't recognize restrictions on hate speech.
01:02:58.000 It's considered free speech.
01:02:59.000 So if you want to stand in a street corner and yell the craziest things in the world, you're allowed to.
01:03:02.000 On your platform, Twitter, you're not allowed to.
01:03:05.000 So even in that sense alone, your rules do have an ideology behind them.
01:03:09.000 I don't completely disagree.
01:03:10.000 I think, you know, I don't want harassment.
01:03:12.000 But the reason I bring this up is getting into the discussion about democratic health of a nation.
01:03:18.000 So I think it can't be disputed at this point that Twitter is extremely powerful in influencing elections.
01:03:25.000 I'm pretty sure you guys published recently a bunch of tweets from foreign actors that were trying to meddle in elections.
01:03:30.000 So even you as a company recognize that foreign entities are trying to manipulate people using this platform.
01:03:35.000 So there's a few things I want to ask beyond this, but...
01:03:39.000 Wouldn't it be important then to just – at a certain point, Twitter becomes so powerful in influencing elections and giving access to even the president's tweets that you should allow people to use the platform under the norms of U.S. law.
01:03:52.000 First Amendment, free speech, right to expression on the platform.
01:03:56.000 This is becoming too much of a – it's becoming too powerful in how our elections are taking place.
01:04:02.000 So even if – You are saying, well, hate speech is our rule, and a lot of people agree with it.
01:04:06.000 If at any point one person disagrees, there's still an American who has a right of access to the public discourse, and you've essentially monopolized that – not completely, but for the most part.
01:04:17.000 So isn't there some responsibility on you to guarantee it, to a certain extent, lest regulation happen, right?
01:04:23.000 Like, look, if you recognize foreign governments are manipulating our elections, then shouldn't you guarantee the right to an American to access this platform to be involved in the electoral process?
01:04:35.000 I'm not sure I see the tie between those things, but I will address one of your points, which is we're a platform that serves the world.
01:04:43.000 So we're global.
01:04:44.000 75% of the users of Twitter are outside of the United States.
01:04:49.000 So we don't apply laws of just one country when we're thinking about it.
01:04:55.000 We think about how do you have a global standard that can meet the threshold of as many countries as possible, because we want all the people in the world to be able to participate, and also to meet elections like the Indian election coming up as well.
01:05:08.000 Right.
01:05:09.000 And my understanding is you were also accused of being biased against conservatives in India recently.
01:05:14.000 There was a report on that, as well as you holding up a sign that said something offensive about Brahmins.
01:05:20.000 So in that sense, even in other countries, you're accused of the same things that you're being accused of by American conservatives.
01:05:28.000 I think that the situations are very, very different.
01:05:31.000 And I don't think that the ideologies in play are the same at all.
01:05:36.000 Can we clarify that?
01:05:38.000 Because I'm not aware of the case.
01:05:40.000 I'm not sure what you're talking about, but we did have our Vice President of Public Policy testify in front of Indian Parliament a couple weeks ago, and they were really focused on election integrity and safety and abuse and harassment of women and political figures and the likes.
01:05:55.000 So my concern, I guess, is I recognize you're a company that serves the world, but as an American, I have a concern that the democracy I live in, the democratic republic, I'm sorry, and the democratic functions are healthy.
01:06:06.000 One of the biggest threats is Russia, Iran, China.
01:06:10.000 They're trying to meddle in our elections using your platform, and it's effective, so much so that you've actually come out and removed many people.
01:06:16.000 Covington was apparently started by an account based in Brazil.
01:06:19.000 The Covington scandal where this fake news goes viral.
01:06:21.000 It was reported by CNN that it was a dummy account.
01:06:25.000 They were trying to prop it up and they were pushing out this out of context information.
01:06:28.000 So they do this.
01:06:29.000 They use your platform to do it.
01:06:31.000 You've now got a platform that is so powerful in our American discourse that foreign governments are using it as weapons against us.
01:06:39.000 And you've taken a stance against the laws of the United States.
01:06:43.000 I don't mean like against like you're breaking the law.
01:06:45.000 I mean you have rules that go beyond the scope of the U.S. which will restrict American citizens from being able to participate.
01:06:50.000 Meanwhile, foreign actors are free to do so so long as they play by your rules.
01:06:54.000 So our elections are being threatened.
01:06:56.000 By the fact that if there's an American citizen who says, I do not believe in your misgendering policy, and you ban them, that person has been removed from public discourse on Twitter.
01:07:04.000 Right, but they don't get banned for saying they don't agree with it.
01:07:07.000 They get banned for specifically violating it by targeting an individual.
01:07:11.000 Let's say in protest, an individual repeatedly says, no, I refuse to use your pronouns, like in Meghan Murphy's case.
01:07:17.000 She's Canadian, so I don't want to use her specifically.
01:07:19.000 The point I'm trying to make is, at a certain level, there are going to be American citizens who have been removed from this public discourse, which has become so absurdly powerful that foreign governments weaponize it, because you have different rules than the American government has.
01:07:32.000 Just to be clear, my understanding, and I'm not an expert on all the platforms, is that foreign governments use multiple, multiple different ways to interfere in elections.
01:07:41.000 It is not limited to our platform, nor is it limited to social media.
01:07:44.000 But the president is on Twitter.
01:07:46.000 The president is on a lot of different platforms, as is the White House.
01:07:49.000 I think it's fair to point out the media coverage of his Twitter account is insane, and they run news stories every time he tweets.
01:07:55.000 That's certainly true.
01:07:56.000 Absolutely undeniable.
01:07:57.000 I'm just pointing out that there are a number of different avenues for this, and individuals have choices in how they use the platform.
01:08:03.000 Yeah, he might have other platforms, but he uses Twitter almost exclusively.
01:08:07.000 And what I'm trying to bring up is that if Twitter refuses to acknowledge this problem, you are facing regulation.
01:08:12.000 I don't know if you care about that, but at a certain point...
01:08:15.000 Sorry, which problem?
01:08:38.000 So there might be someone who says, I refuse to live by any other means than what the Supreme Court has set down.
01:08:43.000 That means I have a right to hate speech.
01:08:45.000 You will ban them.
01:08:45.000 That means your platform is so powerful, it's being used to manipulate elections, and you have rules that are not recognized by the government to remove American citizens from that discourse.
01:08:55.000 So as a private platform, you've become too powerful to not be regulated if you refuse to allow people free speech.
01:09:03.000 But I'm trying to pick apart the connection.
01:09:06.000 I think...
01:09:08.000 So, yes, we do have an issue with foreign entities and misinformation.
01:09:16.000 And this is an extremely complicated issue, which we're just beginning to understand and grasp and take action on.
01:09:26.000 I don't think that issue is solved purely by not being more aggressive on something else that is taking people off the platform entirely as well, which is abuse and harassment.
01:09:40.000 It's a cost-benefit analysis, ultimately, and our rules are designed, again...
01:09:46.000 You know, they don't always manifest this way in the outcomes, but what we're trying to drive is opportunity for every single person to be able to speak freely on the platform.
01:09:57.000 That's absolutely not true.
01:10:00.000 You don't allow hate speech, so free speech is not on your platform.
01:10:03.000 I said enable everyone, create the opportunity for everyone to speak on our service.
01:10:12.000 Unless it's hate speech, right?
01:10:14.000 And in part of that, the recognition that we're taking action on is that when some people encounter particular conduct...
01:10:37.000 And we have particular outcomes to make sure that those opportunities are as large as possible.
01:10:41.000 Let's separate the first.
01:10:42.000 The point I made about foreign governments was just to explain the power that your platform holds and how it can be weaponized.
01:10:49.000 We'll separate that now.
01:10:50.000 When Antifa shows up to Berkeley and bashes a guy over there with a bike lock, that is suppressing his speech.
01:10:56.000 That's an act of physical violence.
01:10:57.000 However, when Antifa links hands and blocks a door so that no one can go to an event, that is also legally allowed.
01:11:41.000 That means American citizens who are abiding by all of the laws of our country are being restricted from engaging in public discourse because you've monopolized it.
01:11:48.000 Can I counter that though?
01:11:49.000 Because these foreign governments are restricted by the same rules.
01:11:52.000 So if they violate those same rules, they will be removed.
01:11:55.000 So if they play within those rules, they can participate in the discourse even if they are just trying to manipulate our elections.
01:12:01.000 On the other hand, if the people that are on the platform... I see what you're saying.
01:12:28.000 We can see that at a certain point, Twitter is slowly gaining, in my opinion, too much control over American discourse, from your personal ideology, based on what you've researched and what you think is right.
01:12:41.000 If Twitter, and again, this is my opinion, I'm not a lawmaker, but I would have to assume if Twitter refuses to say, in the United States, you are allowed to say what is legally acceptable, period, then lawmakers' only choice will be to enforce regulation on your company.
01:12:57.000 Actually, Tim, I spent quite a bit of time talking to lawmakers as part of my role.
01:13:01.000 I head up public policy.
01:13:02.000 I spent a lot of time in D.C. I want to say that Jack and I have both spent a lot of time in D.C. And I think from the perspective of lawmakers, they, across the spectrum, are also in favor of policing abuse and harassment online and bullying online.
01:13:20.000 Well, it's true.
01:13:20.000 Those are things that people care about because they affect their children, and they affect their communities, and they affect individuals.
01:13:27.000 And so, as a private American business, we can have different standards than what an American government-owned corporation or the American government would have to institute.
01:13:40.000 Those are two different things.
01:13:42.000 And I understand your point about the influence, and I'm not denying that.
01:13:46.000 Certainly, Twitter is an influential platform.
01:13:48.000 But, like anything, whether it's the American law or the rules of Twitter or the rules of Facebook or rules of any platform, there are rules.
01:13:56.000 And those rules have to be followed.
01:13:57.000 So it is your choice whether to follow those rules and to continue to participate in a civic dialogue, or it is your choice to not do that.
01:14:04.000 Absolutely.
01:14:05.000 You've monopolized public discourse to an extreme degree, and you say, my way or the highway.
01:14:10.000 We haven't monopolized it.
01:14:12.000 There are many different avenues for people to continue to have a voice.
01:14:15.000 There are many different platforms that offer that.
01:14:18.000 We are a largely influential one.
01:14:20.000 I'm not trying to take away from that, and we're a very important one.
01:14:22.000 You don't need to be the most important.
01:14:23.000 It's just that you are extremely important.
01:14:26.000 And that's a compliment.
01:14:27.000 Twitter has become extremely powerful.
01:14:29.000 But at a certain point, you should not have the right to control what people are allowed to say.
01:14:34.000 Look, I'm a social liberal.
01:14:36.000 I think we should regulate you guys because you are unelected officials running your system the way you see fit against the wishes of a democratic republic.
01:14:44.000 And there are people who disagree with you who are being excised from public discourse because of your ideology.
01:14:48.000 That terrifies me.
01:14:49.000 And we can take it one step further.
01:14:51.000 So Tim, just so I understand, so are you suggesting that we don't have any policies around abuse and harassment on the platform?
01:14:59.000 I'm trying to understand what it is you're saying because I'm not sure I'm following you.
01:15:03.000 So you don't think we should have any rules about abuse and harassment?
01:15:06.000 So even the threats that you received that you mentioned that we didn't...
01:15:10.000 But you mentioned a number of threats that you received, and you were quite frustrated that we hadn't taken action on them.
01:15:14.000 You think we shouldn't have rules that- Well, I'm frustrated because of the hypocrisy, when I see the flow go in one direction.
01:15:22.000 And then what I see are Republican politicians, who in my opinion are just too ignorant to understand what the hell's going on around them.
01:15:27.000 And I see people burning signs that say free speech.
01:15:30.000 I see you openly saying, we recognize the power of our platform, and we're not going to abide by American norms.
01:15:37.000 I see the manipulation of Twitter in violation of our elections.
01:15:40.000 I see Democratic operatives in Alabama waging a false flag campaign using fake Russian accounts.
01:15:46.000 And the guy who runs that company has not been banned from your platform.
01:15:50.000 Even after it's been written by the New York Times, he was doing this.
01:15:53.000 So we know that not only are people manipulating your platform, you have rules that remove honest American citizens with bad opinions who have a right to engage in public discourse.
01:16:03.000 And it's like you recognize it, but you like having the power?
01:16:06.000 I'm not quite sure at what point— So just to get back to my point, so you believe that Twitter should not have any rules about abuse and harassment or any sort of hate speech on the platform?
01:16:15.000 That's your position?
01:16:16.000 Well, that's extremely reductive.
01:16:19.000 I don't know.
01:16:19.000 That may be too simplistic.
01:16:20.000 The point I'm trying to make is— But that is a point you're trying to make.
01:16:23.000 You're asking us to comply with the U.S. law that would criminalize potential speech and put people in jail for it, and you're asking us to enforce those standards.
01:16:32.000 Well, I mean, if you incite death, that's a crime.
01:16:36.000 You can go to jail for that.
01:16:37.000 So at the very least, you could...
01:16:39.000 When you have people on your platform who've committed a crime and you don't ban them, I say, well, that's really weird.
01:16:43.000 And then when you have people on your platform who say a bad, naughty word, you do ban them.
01:16:46.000 I say, well, that's really weird.
01:16:48.000 I mean, I've seen people get banned for tweeting an N to you.
01:16:51.000 I understand what they're trying to do when they tweet letters at you, Jack.
01:16:53.000 But they get suspended for it, and they get a threat.
01:16:56.000 Let's talk about learn to code.
01:16:58.000 What do you mean by that?
01:17:00.000 I haven't seen that.
01:17:01.000 What are they trying to do?
01:17:03.000 There are people who know that they can tweet a single letter, and the next person knows what letter they need to tweet.
01:17:07.000 You see what I'm saying?
01:17:08.000 So you'll see, you know, one user will say N, the next user will put an I, the next user will put a G. Yes.
01:17:13.000 And so they get suspended for doing so.
01:17:16.000 And these are the people who are trying to push the buttons on the rules, right?
01:17:20.000 They get suspended for that?
01:17:21.000 Absolutely.
01:17:24.000 But here's the thing.
01:17:25.000 I think your team understands what they're doing.
01:17:29.000 However, you get into really dangerous territory if someone accidentally tweets an N and you assume they're trying to engage in a harassment campaign, which is why I said let's talk about learn to code.
01:17:38.000 But we do look at coordination of accounts.
01:17:41.000 Do you do that through direct messages?
01:17:45.000 I don't know about direct messages.
01:17:46.000 Do you read direct messages?
01:17:48.000 We don't read direct messages.
01:17:49.000 We don't read them unless someone reports a direct message to us that they have received.
01:17:53.000 And so you read their direct message that they send to you?
01:17:56.000 So if you have a direct message and someone says something terrible and then you receive a death threat and you report that to us, then we would read it because you've reported it to us.
01:18:05.000 Does anyone in the company have access to direct messages other than that?
01:18:10.000 Only in the context, again, of reviewing reports.
01:18:13.000 Other than that, they're not accessible?
01:18:15.000 Not to my knowledge.
01:18:17.000 I don't know what you mean.
01:18:18.000 We're not reading them.
01:18:20.000 Is it possible that someone could go into Tim's direct messages and just read his direct messages?
01:18:25.000 I don't think so.
01:18:26.000 So if Tim writes an N, and I write an I, and Jamie writes a G, can you go into our direct messages and say, hey, let's fuck with Jack, and we're going to write this stuff out, and we're going to do it, and let's see if they ban us.
01:18:39.000 You can't read that?
01:18:40.000 I don't think so.
01:18:41.000 So if that's the case, how would you know if there was a concerted effort?
01:18:45.000 I think what he's saying is, like, if we do see that train of replies, then that is coordination.
01:18:52.000 You know what people are doing, right?
01:18:53.000 Sure, but how do you prove it?
01:18:55.000 Well, I think beyond the N, like, you know, the first person with the letter, you can't prove he did it, but everybody else, you kind of can.
01:19:02.000 But I don't think we would...
01:19:04.000 Well, look, I can say this.
01:19:06.000 I've been sent numerous screenshots from people.
01:19:08.000 Screenshots can be fake.
01:19:08.000 I recognize that.
01:19:09.000 But I have seen people actually tweet, and then I've seen the tweet follow-up.
01:19:13.000 Right after one letter, then?
01:19:15.000 Yeah, someone tweeted at you.
01:19:16.000 Someone decently high-profile, like a big YouTuber, tweeted an N at you and then got like a 12-hour suspension.
01:19:21.000 I understand.
01:19:22.000 But let's talk about learn to code, right?
01:19:24.000 And why are people being suspended for tweeting hashtag learn to code?
01:19:28.000 Yep.
01:19:28.000 We did some research on this.
01:19:31.000 Yes, we did some research on this.
01:19:33.000 So there was a situation, I guess about a month ago or so, where a number of journalists were receiving a variety of tweets, some containing learn to code, some containing a bunch of other coded language that expressed wishes of harm.
01:19:52.000 These were thousands and thousands of tweets being directed at a handful of journalists.
01:19:56.000 And we did some research, and what we found was a number of the accounts that were engaging in this behavior, which is tweeting at the journalists with either Learn to Code or things like Day of the Rope and other coded language, were actually ban evasion accounts.
01:20:13.000 That means accounts that had been previously suspended.
01:20:15.000 And we also learned that there was a targeted campaign being organized off our platform to abuse and harass these journalists.
01:20:22.000 That's not true.
01:20:24.000 See, here's the thing.
01:20:25.000 An activist who works for NBC wrote that story and then lobbied you.
01:20:29.000 You issued an official statement, and then even the editor-in-chief of the Daily Caller got a suspension for tweeting Learn to Code at the Daily Show.
01:20:36.000 So I have never talked to anybody from NBC about this issue, so I'm not sure.
01:20:40.000 No, so they report it.
01:20:42.000 Don't misrepresent me.
01:20:43.000 They report it.
01:20:43.000 The narrative goes far and wide amongst your circles.
01:20:46.000 Then all of a sudden you're seeing high-profile conservatives tweeting a joke getting suspensions.
01:20:50.000 So again, some of these tweets actually contained death threats, wishes of harm, other coded language that we've seen to mean death to journalists.
01:21:02.000 So it wasn't about just the learn to code.
01:21:05.000 It was about the context that we were seeing.
01:21:07.000 That's just not true.
01:21:09.000 The editor-in-chief of the Daily Caller was suspended for tweeting nothing but hashtag learn to code.
01:21:14.000 So, Tim, can I finish what I was saying?
01:21:17.000 Yeah.
01:21:17.000 So we were looking at the context, and what was happening is there were journalists receiving hundreds of tweets.
01:21:22.000 Some had death threats, some had wishes of harm, some just learned to code.
01:21:25.000 And in that particular context, we made a decision.
01:21:28.000 We consider this type of behavior dogpiling, which is when...
01:21:31.000 All of a sudden individuals are getting tons and tons of tweets at them.
01:21:35.000 They feel very abused and harassed on the platform.
01:21:37.000 Can we pause this because this is super confusing for people who don't know the context.
01:21:41.000 The learn to code thing is in response to people saying that people that are losing their jobs, like coal miners and truck drivers and things like that, could learn to code.
01:21:50.000 It was almost like in jest initially.
01:21:54.000 Or if it wasn't in jest initially, it was so poorly thought out as a suggestion that people started mocking it, right?
01:22:00.000 Correct?
01:22:01.000 So the first stories that came out were simply like, can miners learn to code?
01:22:05.000 And the hashtag learn to code is just a meme.
01:22:09.000 It's not even necessarily a conservative one, though you will see more conservatives using it.
01:22:12.000 People are using it to mock how stupid the idea is of taking a person who's uneducated, who's in their 50s, who needs to learn some new form of vocation, and telling them, learn to code.
01:22:24.000 And so then other people, when they're losing their job or when something's happening, people would write, learn to code, because it's a meme.
01:22:30.000 Well, not even necessarily.
01:22:32.000 I would just characterize learn to code as a meme that represents the elitism of modern journalists and how they target certain communities with disdain.
01:22:41.000 So to make that point, there are people who have been suspended for tweeting something like, I'm not too happy with how BuzzFeed reported the story, hashtag learn to code.
01:22:49.000 Making the representation that these people are snooty elites who live in ivory towers.
01:22:54.000 But again...
01:22:57.000 This is a meme that has nothing to do with harassment, but some people might be harassing somebody and might tweet it.
01:23:03.000 Why would we expect to see – even still today, I'm still getting messages from people with screenshots saying I've been suspended for using a hashtag.
01:23:08.000 And the editor-in-chief of The Daily Caller, he quote-tweeted a video from The Daily Show with hashtag learn to code, and he got a suspension for it.
01:23:17.000 So why learn to code?
01:23:19.000 Why is that alone so egregious?
01:23:22.000 And I don't think it is so egregious.
01:23:24.000 So is it just something that got stuck in an algorithm?
01:23:26.000 No.
01:23:27.000 It was, again, a specific set of issues that we were seeing targeting a very specific set of journalists.
01:23:33.000 And it wasn't just the learn to code.
01:23:36.000 It was a couple of things going on.
01:23:38.000 A lot of the accounts tweeting learn to code were ban evaders, which means they've previously been suspended.
01:23:42.000 A lot of the accounts had other language in them, or tweets had other language like day of the brick, day of the rope, oven ready.
01:23:49.000 These are all coded meanings for violence against people.
01:23:53.000 And so people who are receiving this were receiving hundreds of these.
01:23:58.000 In what appeared to us to be a coordinated harassment campaign.
01:24:02.000 And so we were trying to understand the context of what was going on and take action on them.
01:24:06.000 Because again, I don't know, Joe, if you've ever been the target of a dogpiling event on Twitter, but it is not particularly fun.
01:24:23.000 Yeah, I understand.
01:24:51.000 I understand what you're saying.
01:24:58.000 But in and of itself, though, it still seems like there's alternative meanings to learn to code.
01:25:05.000 It still could be used, as Tim was saying, to mock elite snooty… Speak truth to power.
01:25:11.000 Yes, absolutely.
01:25:12.000 I agree with you.
01:25:13.000 So it's really about the context of what was happening in that situation and all those other things.
01:25:17.000 I think in a very different situation, we would not take action on that.
01:25:21.000 Okay.
01:25:22.000 But doesn't that seem like you're throwing a blanket over a very small issue?
01:25:28.000 Because Learn to Code in itself is very small.
01:25:32.000 The blanket is cast over racism.
01:25:34.000 The blanket is cast over all the other horrible things that are attached to it.
01:25:38.000 But the horrible things that are attached to it are the real issue.
01:25:40.000 This Learn to Code thing is kind of a legitimate protest against people saying that these miners should learn to code.
01:25:47.000 That's kind of preposterous.
01:25:48.000 Well, the first articles weren't mean.
01:25:50.000 It was just, Learn to Code kind of identified, you have these journalists who are so far removed from middle America that they think you can take a 50-year-old man who's never used a computer before and put him in a, you know.
01:26:00.000 The stories, I think, were legitimate.
01:26:01.000 Yes.
01:26:02.000 But the point more so is it was a meme.
01:26:04.000 The hashtag, the idea of Learn to Code condenses this idea, and it's easy to communicate, especially when you only have 280 characters, that there is a class of individual in this country.
01:26:14.000 I think you mentioned on, was it Sam Harris, that the left, these left liberal journalists only follow each other.
01:26:19.000 Yeah, in the run-up to the 2016 elections.
01:26:23.000 Yeah, and so, I mean, I still believe that to be true, and I've worked in these offices.
01:26:27.000 It has changed.
01:26:28.000 They've done the study again, the visualization, and now there is a lot more cross-pollination.
01:26:32.000 But what we saw is folks who were reporting on the left end of the spectrum mainly followed folks on the left, and folks on the right followed everyone.
01:26:41.000 What you were talking about earlier, that there's these bubbles.
01:26:44.000 There's bubbles, and we've helped create them and maintain them.
01:26:48.000 So here's what ends up happening, and this is one of the big problems that people have.
01:26:51.000 With this story particularly, you have a left-wing activist who works for NBC News.
01:26:57.000 I'm not accusing you of having read the article.
01:26:58.000 He spends like a day lobbying Twitter, saying, Guys, you have to do this.
01:27:04.000 You have to make these changes.
01:27:05.000 The next day he writes a story saying that 4chan is organizing these harassment campaigns and death threats.
01:27:10.000 And while 4chan was doing threads about it, you can't accuse 4chan simply of talking about it, because Reddit was talking about it too, as was Twitter.
01:27:17.000 So then the next day, after he published his article, now he's getting threats.
01:27:22.000 And then Twitter issues a statement saying, we will take action.
01:27:25.000 And to make matters worse, when John Levine, a writer for The Wrap, got a statement from one of your spokespeople saying, yes, we are banning people for saying learn to code.
01:27:35.000 A bunch of journalists came out and then lied,
01:27:37.000 I have no idea why, saying this is not true.
01:27:39.000 This is fake news.
01:27:40.000 Then a second statement was published by Twitter saying it's part of a harassment campaign.
01:27:44.000 And so then the mainstream narrative becomes, oh, they're only banning people who are part of a harassment campaign.
01:27:49.000 But you literally see legitimate high-profile individuals getting suspensions for joining in on a joke.
01:27:54.000 Oh, there are for sure probably mistakes in there.
01:27:56.000 I don't think that any of us are claiming that we got this 100% right.
01:27:59.000 And probably our team had a lack of context into what was actually happening as well.
01:28:03.000 And we would fully admit we probably were way too aggressive when we first saw this as well and made mistakes.
01:28:10.000 I hope this clarifies then.
01:28:12.000 You have situations like this where you can see – this journalist, I'm not going to name him, but he routinely has very left-wing – I don't want to use overtly esoteric words, but intersectional dogmatic points of view.
01:28:26.000 So this is – What does that mean?
01:28:27.000 So like intersectional feminism is considered like a small ideology.
01:28:32.000 People refer to these groups as the regressive left or the identitarian left.
01:28:36.000 These are basically people who hold views that a person is judged based on the color of their skin instead of the content of their character.
01:28:41.000 So you have the right-wing version, which is like the alt-right, the left-wing version, which is like...
01:28:47.000 Intersectional feminism is how it's simply referred to.
01:28:50.000 So you'll see people say things like – typically when they rag on white men or when they say like white feminism, these are signals that they hold these particular views.
01:28:59.000 And these views are becoming more pervasive.
01:29:00.000 So what ends up happening is you have a journalist who clearly holds these views.
01:29:03.000 I don't even want to call him a journalist.
01:29:05.000 He writes an extremely biased and out-of-context story.
01:29:08.000 Twitter takes action in response, seemingly in response.
01:29:11.000 Then we can look at what happens with Oliver Darcy at CNN. He says, you know, the people at CPAC, the conservatives, are gullible, eating red meat from grifters, among other disparaging comments about the right.
01:29:20.000 And he's the one who's primarily advocating for the removal of certain individuals who you then remove.
01:29:25.000 And then when Kathy Griffin calls for doxing, that's fine.
01:29:27.000 When this guy calls for the death of these kids, he gets a slap on the wrist.
01:29:32.000 And look, I understand the context matters, but grains of sand make a heap, and eventually you have all of these stories piling up, and people are asking you why it only flows in one direction.
01:29:40.000 Because I've got to be honest, I'd imagine that calling for the death of any individual three times is a bannable offense, even without a warning.
01:29:46.000 You just get rid of them.
01:29:48.000 But it didn't happen, right?
01:29:50.000 We see these, you know, people say men aren't women, though, and they get a suspension.
01:29:53.000 We see people say the editor-in-chief of The Daily Caller may be the best example.
01:29:57.000 Hashtag learn to code, quoting The Daily Show, and he gets a suspension.
01:30:01.000 Threatening death and inciting death is a suspension, too.
01:30:04.000 It feels like it's only going in one direction.
01:30:07.000 Yeah, I think we have a lot of work to do to explain more clearly when we're taking action and why, and certainly looking into any mistakes we may have made in those particular situations.
01:30:16.000 So would you guys agree that in tech, I think we can all agree this, I would hope you agree, tech tends to lean left.
01:30:25.000 Tech companies, Facebook, Twitter, Google.
01:30:28.000 I would be willing to bet that a conservative running a social network would not have a hate speech policy.
01:30:33.000 I mean, you look at Gab and you look at Minds, and Minds is not even right-wing.
01:30:37.000 Right, they're not right-wing at all.
01:30:38.000 They just staunchly support free speech.
01:30:41.000 And I don't think Gab is necessarily, I don't think the owner is necessarily right-wing either.
01:30:45.000 I don't know much about him.
01:30:46.000 I think he's like a libertarian.
01:30:48.000 I don't want to specify either.
01:30:51.000 I don't know enough.
01:30:52.000 But I know that when you read what they write, they're just staunchly committed to free speech.
01:30:59.000 But they will stop doxing.
01:31:01.000 They will do things to stop targeted harassment and doxing and things along those lines.
01:31:06.000 Sometimes slowly.
01:31:07.000 Yeah, admittedly.
01:31:09.000 But they just want an open platform.
01:31:12.000 My point is that I think a lot of people that are on the right feel disenfranchised by these platforms that they use on a daily basis.
01:31:21.000 I don't know what the percentages are in terms of the number of people that are conservative that use Twitter versus the number of people that are liberal, but I would imagine it's probably pretty close, isn't it?
01:31:33.000 I don't know.
01:31:34.000 The numbers?
01:31:35.000 I don't know because we don't ask people.
01:31:37.000 We'd have to infer all that based on what they're saying.
01:31:40.000 So let's not even go there.
01:31:42.000 But the people that run, whether it's Google or Twitter or Facebook, any of these platforms, YouTube for sure, lean powerfully toward the left.
01:31:55.000 Wouldn't we all agree to that?
01:31:57.000 We don't ask our employees, but my guess is that many employees at tech companies are probably liberal.
01:32:03.000 It's really fascinating.
01:32:04.000 But I also think, I mean, you point out all the companies you mentioned are in exactly the same region as well.
01:32:11.000 Yes.
01:32:11.000 And we do have the challenge of some monocultural thinking as well.
01:32:18.000 Yes.
01:32:20.000 I have said publicly that, yes, we will have more of a liberal bias within our company.
01:32:27.000 I said this to CNN. But that doesn't mean that we put that in our rules.
01:32:33.000 But hold on.
01:32:35.000 Because what I'm getting at is that at some point in time, things have to get down to a human being looking at and reviewing cases.
01:32:42.000 And if you guys are so left-wing in your...
01:32:48.000 Your staff and the area that you live in and all these things, things are almost naturally going to lean left.
01:32:56.000 Is that fair to say?
01:32:58.000 That would be true if we were purely looking at the content, but a lot of this agent work is based on the behaviors, all the things that we've been discussing in terms of the context around the actual content itself.
01:33:08.000 That's what the rules are.
01:33:10.000 Except the misgendering policy, right?
01:33:11.000 So your rules do reflect your bubble, right?
01:33:14.000 Go to middle America and go hang out at a conservative town.
01:33:17.000 They're not going to agree with you.
01:33:18.000 Your rules are based on your bubble in San Francisco or whatever city.
01:33:21.000 I'm from middle America.
01:33:22.000 I'm from St. Louis, Missouri.
01:33:24.000 And I hear the point.
01:33:26.000 I definitely hear the point in terms of us putting this rule forth.
01:33:29.000 But we have to balance it with the fact that people are being driven away from our platform.
01:33:34.000 And they may not agree with me on that, my folks from Missouri.
01:33:38.000 But I think they would see some valid argument in what we're trying to do to, again, increase the opportunity for as many people as possible to talk differently.
01:33:47.000 That's it.
01:33:48.000 It's not driving the outcomes that you're speaking to.
01:33:51.000 Where do you stop?
01:33:52.000 What community is and isn't deserving of protection?
01:33:55.000 Are conservatives not deserving of protection for their opinions?
01:33:57.000 But I wanted to focus on individuals and increasing the absolute number of people who have opportunity to speak on the platform in the first place.
01:34:05.000 So then do you need a rule for body dysphoria?
01:34:08.000 Do you need a rule for otherkin?
01:34:09.000 You see what I'm asking you?
01:34:11.000 I see what you're asking.
01:34:12.000 And this came from calls and research.
01:34:17.000 And there's disagreement as to whether this is the right outcome or not and this is the right policy.
01:34:24.000 And yes, our bias does influence looking in this direction.
01:34:28.000 And our bias does influence us putting a rule like this in place, but it is with the understanding of creating as much opportunity as possible for as many people to speak based on the actual data that we see of people leaving the platform because of experiences they have.
01:34:45.000 So why did your research stop there?
01:34:48.000 It hasn't stopped.
01:34:49.000 Our rules aren't set in something that just stops and doesn't evolve.
01:34:54.000 We're going to constantly question.
01:34:56.000 We're going to constantly get feedback from people on every end of the spectrum of any particular issue and make changes accordingly.
01:35:04.000 It doesn't stop.
01:35:05.000 And to your credit, I really do appreciate the fact that you're very open about that you have made mistakes and that you're continuing to learn and grow and that your company is reviewing these things and trying to figure out which way to go.
01:35:15.000 And I think we all need to pay attention to the fact that this is a completely new road.
01:35:19.000 This road did not exist 15 years ago.
01:35:22.000 There was nothing there.
01:35:23.000 That is a tremendous responsibility for any company, any group of human beings.
01:35:28.000 To be in control of public discourse on a scale unprecedented in human history.
01:35:35.000 And that's what we're dealing with here.
01:35:36.000 This is not a small thing.
01:35:38.000 And I know people that have been banned; to them, this is a matter of ideology, this is a matter of this, this is a matter of that.
01:35:44.000 There's a lot of debate going on here, and that's one of the reasons why I wanted to bring you on.
01:35:48.000 Because, Tim, because you know so much about so many of these cases, because you are a journalist, and you're very aware of the implications and all the problems that have been...
01:35:58.000 That maybe have slipped through my fingers.
01:36:00.000 So I do want to make one thing really clear, though.
01:36:03.000 I have a tremendous amount of respect and trust for you when you say you wanted to solve this problem simply because you're sitting here right now and these other companies aren't, right?
01:36:10.000 Jack, you went on Sam Harris.
01:36:12.000 You were on with Gad Saad.
01:36:13.000 And that says to me a good faith effort to try and figure out how to do things right.
01:36:18.000 So as much as I'll apologize for getting kind of angry and being emotional because...
01:36:22.000 I didn't say you're angry.
01:36:23.000 Look, we also haven't been great at explaining our intent.
01:36:28.000 And there's a few things going on.
01:36:31.000 One, as Joe indicated, centralized global policy at scale is almost impossible.
01:36:39.000 And we realize this.
01:36:43.000 Services have different answers to this.
01:36:44.000 Reddit has a community-based policy where each topic, each subreddit has its own policy, and there's some benefit to that.
01:36:52.000 So that's problem number one.
01:36:54.000 We know that this very binary, off-or-on platform approach isn't right, and it doesn't scale, and it ultimately goes against our key initiative of wanting to promote healthier conversation.
01:37:08.000 I just don't think that's what you're doing.
01:37:11.000 And I hear you.
01:37:12.000 I hear you.
01:37:12.000 But like, we're not done.
01:37:15.000 We're not done.
01:37:16.000 We're not finished with our work.
01:37:18.000 And we need to... the reason I'm going on all these podcasts and having these conversations, and ideally Vijaya's getting out there more often as well, is because we don't see enough and hear enough from her.
01:37:28.000 We need to have these conversations so we can learn.
01:37:30.000 We can get the feedback and also pay attention to where the technology is going.
01:37:35.000 Before the podcast, we talked a little bit about, and I talked about it on our previous podcast and also Sam's, I think?
01:38:04.000 And that is a reality that we need to pay attention to and really understand our value.
01:38:09.000 And I believe a lot of our value in the future, not today, again, we have a ton of work, is to take a strong stance of like we are going to be a company that given this entire corpus of conversation and content within the world, we're going to work to promote healthy public conversation.
01:38:27.000 That's what we want to do.
01:38:29.000 And if you disagree with it, you should be able to turn it off, and you should be able to access anything that you want, as you would with the internet.
01:38:38.000 But those are technologies that are just in the formative stages and presenting new opportunities to companies like ours.
01:38:45.000 And there's a ton of challenges with them and a ton of things that we've discussed over the past hour.
01:38:52.000 That it doesn't solve and maybe exacerbates, especially around things like election interference and some of the regulatory concerns that you're bringing.
01:39:01.000 So there's a few issues, right?
01:39:02.000 Your definition of what is or isn't healthy, right?
01:39:05.000 Yes, yes.
01:39:06.000 And we want that to be public.
01:39:08.000 We have four indicators right now that we're working on.
01:39:13.000 With an external lab; we want other labs to take it up, open source, make sure that people can comment on it, that people can help us define it.
01:39:20.000 We'll use that interpretation on our own algorithms and then push it.
01:39:24.000 But that has to be open.
01:39:26.000 That has to be transparent.
01:39:26.000 Are we there today?
01:39:27.000 Absolutely not.
01:39:28.000 We're not there.
01:39:29.000 This course of action to me looks like a Fahrenheit 451 future where everything is so offensive, everything must be restricted.
01:39:35.000 That's the path I see that you're on.
01:39:37.000 You want to have a healthy conversation.
01:39:39.000 You want to maximize the amount of people.
01:39:40.000 That means you've got to cut off all the tall grass and level everything out.
01:39:44.000 So if you've decided that this one rule needs to be enforced because certain things are offensive...
01:39:49.000 But can I explain what health at least means to us in this particular case?
01:39:53.000 So, like, we talked a little bit about this on the previous podcast, but, like, we have four indicators that we're trying to define and try to understand if there's actually something there.
01:40:04.000 One is shared attention.
01:40:07.000 Is a conversation generally shared around the same objects, or is it disparate?
01:40:11.000 So, like, as we're having a conversation, the four of us are having a conversation, are we all focused on the same thing?
01:40:18.000 Or is Joe on his phone, which you were earlier, or like, whatever is going on?
01:40:21.000 Because more shared attention will lead to healthier conversation.
01:40:26.000 Number two is shared reality.
01:40:28.000 Not whether something is factual, but are we sharing the same facts?
01:40:33.000 Is the earth round?
01:40:33.000 Is the world flat?
01:40:35.000 So we can tell what facts we are sharing and what facts we are not sharing, and what percentage of the conversation that covers.
01:40:44.000 So that's a second indicator.
01:40:47.000 Third is receptivity.
01:40:49.000 Are the participants receptive to debate and to civility and to expressing their opinion?
01:40:56.000 And even if it is something that might be hurtful, are people receptive to at least look at and be empathetic and look at what's behind that?
01:41:05.000 This is the one we have the most measurement around today.
01:41:09.000 We can determine and predict when someone might walk away from a Twitter conversation because they feel it's toxic.
01:41:14.000 I just ignore them all, basically.
01:41:16.000 And we see that in our data, right?
01:41:19.000 And there's some conversations that you get into and you persist.
01:41:23.000 And then finally is a variety of perspective.
01:41:27.000 Are we actually seeing the full spectrum of any topic that's being talked about?
01:41:32.000 And these are not meant to be taken as individual parts, but in unison, how they play together.
01:41:38.000 And we've written these out.
01:41:40.000 We haven't gotten far enough in actually defining what they look like and what they mean.
01:41:45.000 And we certainly haven't gotten good enough at understanding when we deploy a solution like being able to follow a hashtag.
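Jack's four indicators lend themselves to a concrete reading: each can be scored over a thread of tweets. A minimal sketch of that reading follows; the Tweet fields, scoring rules, and thresholds are all invented for illustration and are not Twitter's actual metrics.

```python
# Toy sketch of the four conversation-health indicators described above.
# Every data structure and scoring rule here is a hypothetical illustration,
# not Twitter's real measurement.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author: str
    topics: set = field(default_factory=set)   # what the tweet is about
    facts: set = field(default_factory=set)    # factual premises it relies on
    toxic: bool = False                        # e.g. a toxicity-model flag
    stance: str = "neutral"                    # crude perspective label

def shared_attention(thread):
    # 1. Shared attention: are tweets focused on the same objects?
    counts = Counter(t for tw in thread for t in tw.topics)
    return (counts.most_common(1)[0][1] / len(thread)) if counts else 0.0

def shared_reality(thread):
    # 2. Shared reality: do participants argue from the same facts?
    counts = Counter(f for tw in thread for f in tw.facts)
    common = {f for f, n in counts.items() if n > len(thread) / 2}
    with_facts = [tw for tw in thread if tw.facts]
    if not with_facts:
        return 1.0
    return sum(bool(tw.facts & common) for tw in with_facts) / len(with_facts)

def receptivity(thread):
    # 3. Receptivity: proxied as the share of non-toxic tweets, since
    #    perceived toxicity is what predicts people walking away.
    return sum(not tw.toxic for tw in thread) / len(thread)

def variety_of_perspective(thread):
    # 4. Variety of perspective: distinct stances relative to authors.
    return len({tw.stance for tw in thread}) / max(len({tw.author for tw in thread}), 1)

def health_report(thread):
    # The indicators are meant to be read together, not in isolation.
    return {
        "shared_attention": shared_attention(thread),
        "shared_reality": shared_reality(thread),
        "receptivity": receptivity(thread),
        "variety_of_perspective": variety_of_perspective(thread),
    }

thread = [
    Tweet("a", {"earth"}, {"earth is round"}, stance="round"),
    Tweet("b", {"earth"}, {"earth is flat"}, toxic=True, stance="flat"),
    Tweet("c", {"earth"}, {"earth is round"}, stance="round"),
]
print(health_report(thread))
```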
01:42:20.000 But in the context of a conversation, you recognize people will sometimes get heated with each other.
01:42:26.000 Is a healthy conversation when no one is being negative?
01:42:29.000 What if people are yelling at each other and being mean and insulting or misgendering them?
01:42:33.000 I think it's a question of what thresholds you allow.
01:42:38.000 And the more control we can give people to vary the spectrum on what they want to see, that feels right to me.
01:42:47.000 I mean, Joe, your Alex podcast did exactly this thing.
01:42:53.000 You're hosting a conversation.
01:42:54.000 You had both of your guests who started talking over each other.
01:43:00.000 You pause the conversation.
01:43:02.000 You said, let's not get combative.
01:43:03.000 Someone said, I'm not being combative.
01:43:05.000 You said, you're all talking over each other.
01:43:07.000 And there's a dynamic that the conversation then shifted to that got to some deeper points, right?
01:43:15.000 Could have just said, let that happen and let it go.
01:43:18.000 And that's fine, too.
01:43:19.000 It's up to whoever is responsible for viewing and experiencing that conversation.
01:43:24.000 And I agree with you.
01:43:26.000 It is completely far off from where we are today.
01:43:29.000 Not only have we had to address a lot of these issues that we're talking about at this table, but we've also had to turn the company around from a business standpoint.
01:43:39.000 We've had to fix all of our infrastructure that's over 10 years old.
01:43:42.000 We had to go through two layoffs because the company was too large.
01:43:46.000 So we have to prioritize our efforts.
01:43:48.000 And I don't know any other way to do this than be really specific about our intentions and our aspirations and the intent and the why behind our actions.
01:43:59.000 And not everyone's going to agree with it in the particular moment.
01:44:03.000 So I want to point this out before I make my next statement, though, just real quick.
01:44:07.000 It seems like the technology is moving faster than the culture.
01:44:09.000 So I do recognize you guys are between a rock and a hard place.
01:44:12.000 How do you get to a point where you can have that open source crypto blockchain technology that allows free and open speech?
01:44:55.000 They fall into that.
01:44:56.000 I totally get the point.
01:44:58.000 I'm hyper-aware of our action sending more and more things into the dark.
01:45:02.000 Well, this is something that I want to discuss.
01:45:04.000 This is really important in this vein of thinking.
01:45:07.000 What about Roads to Redemption?
01:45:09.000 What about someone like Meghan Murphy?
01:45:11.000 What about anyone?
01:45:12.000 Alex Jones, Milo.
01:45:14.000 Is it...
01:45:15.000 Can we find a path for people to get back to the platform?
01:45:21.000 For good or for bad, like it or not, there is one video platform that people give a shit about, and that's YouTube.
01:45:27.000 You get kicked off of YouTube, you're doomed.
01:45:29.000 That's just reality.
01:45:32.000 Vimeo is wonderful.
01:45:33.000 There's a lot of great video platforms out there.
01:45:35.000 They have a fucking tiny fraction of the views that YouTube does.
01:45:39.000 That's just reality.
01:45:40.000 The same thing can be said for Twitter.
01:45:42.000 Whether or not other platforms exist, that's inconsequential.
01:45:47.000 The vast majority of people are on Twitter.
01:45:50.000 The vast majority of people that are making posts about the news and breaking information, they do it on Twitter.
01:45:58.000 What can be set up and have you guys given consideration to some sort of a path to redemption?
01:46:05.000 There's redemption and there's rehabilitation.
01:46:09.000 We haven't done a great job at having a cohesive stance on rehabilitation and redemption.
01:46:19.000 We have it in parts.
01:46:19.000 So the whole focus behind the temporary suspensions is to at least give people pause and think about why and how they violated the particular rules that they signed up for when they came in through our terms of service.
01:46:38.000 Whether you agree with them or not, this is the agreement that we have with people.
01:46:41.000 You know, I'm just thinking this.
01:46:43.000 I'm sorry to interrupt you, but it would be kind of hilarious if you guys had an option, like a mode of Twitter, an angry mode.
01:46:50.000 Like, fuck, I'm angry right now, so I'm going to type some things, and it says, hey, dude, why don't you just think about this?
01:46:56.000 We're going to hold it for you in the queue.
01:46:59.000 People do that in their drafts.
01:47:02.000 I'm sure they do.
01:47:03.000 I'm sure they do, but it would be funny if you had an angry mode.
01:47:06.000 I notice you guys are using a lot of curse words and you're saying a lot of bad things.
01:47:10.000 We're going to put you in angry mode, so think about this.
01:47:13.000 So you have to make several clicks if you want to post this.
01:47:15.000 And there is research to suggest that people expressing that actually tends to minimize more violent physical conduct.
01:47:21.000 Oh, for sure.
01:47:22.000 Well, everyone says that with emails.
01:47:23.000 If it's the middle of the night and someone sends you an email that you find insulting, you type an email, go to sleep.
01:47:30.000 Wake up in the morning like, I'm going to say something nice.
01:47:33.000 That's how I wind up interacting with these people.
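Joe's "angry mode" is a joke, but the mechanism it implies is simple: screen a draft for heated language, hold it in a queue, and require an explicit extra step before it posts. A minimal sketch under those assumptions; the word list, cool-off window, and function names are hypothetical, and nothing like this is a confirmed Twitter feature.

```python
# Hypothetical "angry mode": hold heated drafts in a queue and require an
# explicit extra confirmation before posting. Word list, threshold, and
# cool-off window are illustrative assumptions only.
import time

ANGRY_WORDS = {"fuck", "idiot", "hate"}   # hypothetical hot-word list
COOL_OFF_SECONDS = 15 * 60                # hold drafts for 15 minutes

held_queue = []                           # (timestamp, draft) pairs

def looks_angry(draft: str) -> bool:
    return bool(set(draft.lower().split()) & ANGRY_WORDS)

def submit(draft: str, confirm_anyway: bool = False) -> str:
    # Heated drafts are held unless the user explicitly clicks through.
    if looks_angry(draft) and not confirm_anyway:
        held_queue.append((time.time(), draft))
        return "held: hey, why don't you think about this for a bit?"
    return f"posted: {draft}"

def cooled_drafts() -> list:
    # Drafts older than the cool-off window are released for a final decision.
    now = time.time()
    return [d for ts, d in held_queue if now - ts >= COOL_OFF_SECONDS]

print(submit("fuck this, I'm angry right now"))   # held in the queue
print(submit("have a nice day"))                  # posted immediately
```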
01:47:37.000 But what do you think can be done for people like, let's say, Meghan Murphy? Because of all of them, it's as easy to see her perspective as any.
01:47:47.000 What do you think could be done for her?
01:47:49.000 I think you're right.
01:47:51.000 I think that I would love to get to a point where we think of suspensions as temporary.
01:47:58.000 She's banned for life.
01:48:00.000 Right now, that's the only option that we've built into our rules, but we have every capability of changing that, and that's something that I want my team to focus on, is thinking about, as Jack said, not just coming back after some time-bound period, but also, like, what more can and should we be doing within the product itself?
01:48:16.000 Early on to educate people about the rules.
01:48:18.000 So one of the things that we're working on is a very, very simplified version of the Twitter rules.
01:48:23.000 That's two pages, not 20. I've made sure that my lawyers don't write it, and it's written in as plain English as we can manage.
01:48:30.000 We try to put examples in there.
01:48:32.000 And like really taking the time to educate people.
01:48:35.000 And I get people aren't always going to agree with those rules and we have to address that too.
01:48:39.000 But at least simplifying it and educating people so that they don't even get to that stage.
01:48:44.000 But once they do, understanding that there are going to be different contexts in people's lives, different times, they're going to say and do things that they may not agree with and they don't deserve to be permanently suspended forever.
01:48:56.000 Right.
01:48:57.000 From a platform like Twitter.
01:48:58.000 Agreed.
01:48:58.000 So how do you get to it?
01:48:59.000 So we, this is something that actually we just had a meeting on this earlier this week with our executive team and, you know, identifying kind of some of the principles by which we would want to think about, you know, time bounding suspension.
01:49:12.000 So it's work.
01:49:13.000 We have to do it and we're going to figure it out.
01:49:15.000 I'm not going to tell you it's coming out right away, but it's on our roadmap.
01:49:18.000 It's something we want to do.
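Mechanically, what Vijaya describes, moving from one permanent option to time-bound suspensions, reduces to storing an expiry alongside each suspension and checking it at enforcement time, with "permanent" as the no-expiry case. A minimal sketch of that idea; the class and field names are invented, not Twitter's schema.

```python
# Sketch of time-bound suspensions: a permanent ban is just a suspension
# with no expiry. All names and fields are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Suspension:
    user: str
    reason: str
    expires_at: Optional[datetime]  # None means permanent

    def active(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.utcnow()
        return self.expires_at is None or now < self.expires_at

def suspend(user: str, reason: str, days: Optional[int]) -> Suspension:
    expiry = None if days is None else datetime.utcnow() + timedelta(days=days)
    return Suspension(user, reason, expiry)

week_ban = suspend("someuser", "targeted harassment", days=7)
permanent = suspend("otheruser", "ban evasion", days=None)
print(week_ban.active())   # True for seven days, then False
print(permanent.active())  # True until the policy itself changes
```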
01:49:20.000 Why don't you set up a jury system?
01:49:23.000 When someone reports something, instead of you having to worry about it, there would be no accusation of bias if 100,000 users were randomly selected to determine it – because Periscope does this.
01:49:31.000 Yeah, Periscope does this.
01:49:34.000 Periscope does this?
01:49:34.000 Can you please explain that?
01:49:35.000 So Periscope has a content moderation jury.
01:49:41.000 So we flag, based on the machine learning algorithms, and in some cases reports, particular replies.
01:49:49.000 We send them to a small jury of folks to ask: is this against our terms of service, or is this something that you believe should be in the channel or not?
01:49:58.000 Do you sign up to be on the jury?
01:50:00.000 No, it's random.
01:50:02.000 So you randomly get chosen and you decide whether or not you want to participate?
01:50:05.000 Yep.
01:50:06.000 And it's good.
01:50:08.000 It has some flaws.
01:50:09.000 It has some gaming aspects to it as well.
01:50:11.000 But we do have a lot of experiments that we're testing and we want to build confidence and it's actually driving the outcomes that we think are useful.
01:50:23.000 And Periscope is a good playground for us across many regards.
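As described, the Periscope flow is: flag a reply via machine learning or reports, sample a small random set of users, let each accept or decline, and act on the votes. The sketch below illustrates that flow; the jury size and two-thirds threshold are assumptions, since the real parameters aren't given in this conversation.

```python
# Toy sketch of a content-moderation jury like the Periscope flow described
# above. Sample size and decision threshold are invented for illustration.
import random

def convene_jury(active_users, size=5):
    # Jurors are chosen at random from active users.
    return random.sample(active_users, min(size, len(active_users)))

def verdict(flagged_reply, jurors, vote_fn, threshold=2/3):
    # vote_fn returns True (violation), False (fine), or None (declined).
    votes = [vote_fn(j, flagged_reply) for j in jurors]
    cast = [v for v in votes if v is not None]
    if not cast:
        return "no quorum"
    return "violation" if sum(cast) / len(cast) >= threshold else "allowed"

# Simulated example: jurors who each vote "violation" 70% of the time.
users = [f"user{i}" for i in range(1000)]
jury = convene_jury(users)
print(verdict("some flagged reply", jury,
              vote_fn=lambda juror, reply: random.random() < 0.7))
```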
01:50:27.000 I think, ultimately, one of the greater philosophical challenges is that you are a massively powerful corporation.
01:50:32.000 You have international investors.
01:50:35.000 I believe there's a Saudi prince that owns, what, 6% of Twitter?
01:50:37.000 Is that true?
01:50:39.000 Well, we're a publicly traded corporation, so anybody can buy stock, but that doesn't mean they have influence on day-to-day operations.
01:50:46.000 Well, I think, depending on which political faction you ask, they'll say money is influence.
01:50:49.000 So I'm not going to say that the Saudi prince who invested in Twitter – because again, it's been a while since I've read these stories – is like showing up to your meetings and throwing his weight around.
01:50:57.000 But at a certain point – He's definitely not doing that.
01:50:59.000 But do I have to trust you?
01:51:02.000 This is a guy who's thrown in over a billion dollars, I think, into Twitter.
01:51:05.000 Twitter has influence on our elections.
01:51:07.000 Foreign government actors have stake in Twitter.
01:51:11.000 It worries me then when you base your rules and your personal decisions on an unelected group of people –
01:51:33.000 But here I am looking at both of you who have this tremendous power over whether or not someone can get elected.
01:51:37.000 You can choose to ban someone and tell me all day and night you have a reason for doing it.
01:51:41.000 I just have to trust you.
01:51:42.000 That's terrifying.
01:51:43.000 There's no proof.
01:51:44.000 There's no proof Alex Jones did any of these things other than the things he's posted, right?
01:51:47.000 I understand that.
01:51:47.000 That's actually what I was on the phone about.
01:51:49.000 Alex was texting me saying that he never did anything to endanger any child and that he was disputing what people were saying about a video of a child getting harmed.
01:51:58.000 And so do we just trust an unelected – I mean extremely wealthy individuals, Saudi princes.
01:52:06.000 It's a publicly traded company.
01:52:07.000 Who knows where the influence is coming from?
01:52:09.000 Your rules are based on a global policy.
01:52:11.000 And I'm sitting here watching, wow, these people who were never chosen for this position have too much power over my politics.
01:52:17.000 I think that that's why it's so important that we take the time to build transparency into what we're doing.
01:52:22.000 And that's part of what we're trying to do is not just in being here and talking to you guys, but also building it into the product itself.
01:52:30.000 I think one of the things that I've really loved about a new product launch is what we've done to disable any sort of ranking in the home timeline if you want, so you don't have to see our algorithms at play anymore.
01:52:41.000 These are the kinds of things that we're thinking about.
01:52:43.000 How do we give power back to the people using our service so that they can see what they want to see and they can participate the way they want to participate?
01:52:50.000 And this is long term, and I get that we're not there yet, but this is how we're thinking about it.
01:52:54.000 And you can imagine where that goes.
01:52:55.000 I mean, in just one switch and turning all the algorithms off.
01:53:00.000 What does that do?
01:53:01.000 What does that look like?
01:53:03.000 So these are the conversations that we're having in the company.
01:53:06.000 Whether they be good ideas or bad ideas, we haven't determined that just yet.
01:53:11.000 We definitely...
01:53:12.000 Look, I definitely understand the mistrust that people have in our company, in myself, in the corporate structure, in all the variables that are associated with it, including who chooses to buy on the public market, who chooses not to.
01:53:27.000 I get all of it, and I grew up on the Internet.
01:53:31.000 I'm a believer in the Internet principles, and I want to do everything in my power to make sure that we are consistent with those ideals.
01:53:38.000 At the same time, I want to do everything in my power to make sure that every single person has the opportunity to participate.
01:53:45.000 So let me ask you a question then.
01:53:47.000 For your policy as it pertains to, say, Saudi Arabia, right, do you enforce the same hate speech rules on Saudi Arabia?
01:53:53.000 Our rules are global.
01:53:55.000 We enforce them against everyone.
01:53:57.000 So even in countries where it's criminal to be LGBT, you will still ban someone for saying something disparaging to or saying something to that effect?
01:54:07.000 Let's say Saudi Arabia sentenced someone to death.
01:54:10.000 I don't want to call it Saudi Arabia specifically.
01:54:11.000 Let's call it Iran because I believe that's the big focus right now with the Trump administration.
01:54:15.000 Iran, it's my understanding, it's still punishable by death.
01:54:17.000 I could be wrong, but it is criminal.
01:54:20.000 If someone then directly targets one of these individuals, will you ban them?
01:54:24.000 I mean, do you guys function in Iran?
01:54:26.000 We're blocked in Iran.
01:54:27.000 Yeah, that's what I figured.
01:54:28.000 But there are some countries where, for instance, Michelle Malkin recently got really angry because she received notice that she violated blasphemy laws in Pakistan.
01:54:37.000 So you do follow some laws in some countries, but it's not a violation.
01:54:41.000 I guess the question I'm asking is, in Pakistan, it's very clearly a different culture.
01:54:45.000 They don't agree with your rules.
01:54:47.000 We do have a per-country takedown, meaning that content might be non-visible within that country, but visible throughout the rest of the world.
01:54:55.000 Just to add on to what Jack's saying, we actually are very, very transparent about this.
01:54:59.000 So we publish a transparency report every six months that details every single request that we get from every government around the world and the content that they ask us to remove, and we post that to an independent third-party site.
01:55:10.000 So you could go right now and look and see every single request that comes from the Pakistani government and what content they're trying to remove from Pakistan.
01:55:19.000 I've seen a lot of conservatives get angry about this, and it's kind of confusing.
01:55:22.000 I'm like, that's a really good thing.
01:55:24.000 I would want to know if Pakistan wanted to kill me.
01:55:26.000 Why are they angry?
01:55:27.000 Blasphemy laws, posting pictures of Muhammad.
01:55:31.000 Are they angry about our transparency report?
01:55:34.000 There's a perception that you sending that notice is like a threat against them for violating blasphemy laws, whereas it's very clearly just letting you know a government has taken action against you.
01:55:44.000 It's saying that the government has restricted access to that content in that country.
01:55:48.000 And the reason we tell users or tell people that that's happened is because a lot of them may want to file their own suit against the government, or a lot of them may be in danger if they happen to be under that particular government's jurisdiction, and they may want to take action to protect themselves if they know that the government is looking at the content in their accounts.
01:56:07.000 We send the notice to everybody.
01:56:08.000 We don't always know where you are or what country you live in.
01:56:11.000 And so we just send that notice to try to be as transparent as possible.
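The mechanism laid out above, a per-country takedown plus a notice to the author plus a public transparency report, can be pictured as one small data model: a withhold list keyed by country, a visibility check at read time, and a log of every government demand. A sketch under those assumptions; every name here is illustrative rather than Twitter's actual system.

```python
# Sketch of per-country withholding as described above: content is hidden
# only for viewers in the demanding country, the author is notified, and
# every demand is logged for a transparency report. Names are illustrative.
withheld = {}           # tweet_id -> set of ISO country codes
transparency_log = []   # entries destined for the published report

def notify(user, message):
    print(f"notice to {user}: {message}")

def withhold(tweet_id, author, country, legal_demand):
    withheld.setdefault(tweet_id, set()).add(country)
    transparency_log.append(
        {"tweet": tweet_id, "country": country, "demand": legal_demand}
    )
    # The author is notified so they can contest the demand or protect
    # themselves if they're under that government's jurisdiction.
    notify(author, f"{country} restricted access to {tweet_id}: {legal_demand}")

def visible_to(tweet_id, viewer_country):
    return viewer_country not in withheld.get(tweet_id, set())

withhold("t1", "some_author", "PK", "blasphemy-law notice")
print(visible_to("t1", "PK"))  # False: hidden inside Pakistan
print(visible_to("t1", "US"))  # True: visible everywhere else
```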
01:56:15.000 The main point I was trying to get to is...
01:56:18.000 Your policies support a community, but there may be laws in a certain country that do not support that community and find it criminal.
01:56:24.000 So your actions are now directly opposed to the culture of another country.
01:56:29.000 I guess the point I'm trying to make is that if you enforce your values, which are perceivably not even those of the majority of this country – if you consider yourselves more liberal-leaning than half of the United States – and you're enforcing those rules on the rest of the world that uses the service, it's sort of forcing other cultures to adhere to yours.
01:56:48.000 So a lot of our rules are based more in the UN Declaration than just purely the US. Doesn't the UN Declaration guarantee the right of all people, through any medium, to express their opinion?
01:56:59.000 It does.
01:57:01.000 And why ban hate speech?
01:57:02.000 It also has conditions around particular speech inciting violence and some of the aspects that we speak to as well.
01:57:11.000 And it protects specific categories, whether it's religion, race, gender, sexual orientation.
01:57:16.000 Those are also protected under the UN Covenant to protect human rights.
01:57:22.000 Ooh, look at that, a pause.
01:57:25.000 We've had a number of pauses.
01:57:27.000 I'm sure we have many more things to talk about.
01:57:29.000 Don't worry.
01:57:30.000 I've got a bunch of other things.
01:57:33.000 Here's the thing.
01:57:34.000 There's a bunch of other issues having to do with bias and censorship, and I feel like we've kind of beaten that horse relentlessly.
01:57:40.000 But I think that horse is good to beat.
01:57:42.000 And I think it's also good to address why the horse is being beaten and why it exists in the first place.
01:57:48.000 And I really want to say this again.
01:57:51.000 I really appreciate the fact that you guys are so open and that you're willing to come on here and talk about this because you don't have to.
01:57:58.000 This is your decision.
01:57:59.000 And especially you, Jack, after we had that first conversation and the blowback was so hard, you wanted to come and clarify this.
01:58:06.000 And I think this is...
01:58:08.000 So important to give people a true understanding of what your intentions are versus what perceptions are.
01:58:14.000 Appreciate it.
01:58:15.000 And thank you for hosting this again.
01:58:17.000 Look, I think it's also important that the company is not just me.
01:58:24.000 We have people in the company who are really good at this and are making some really tough decisions and having tough conversations and getting pushback and getting feedback.
01:58:35.000 And they have...
01:58:49.000 I don't know how to describe him.
01:58:51.000 He's a conservative personality, but he's very, very controversial for, like, fake news or something.
01:58:59.000 I don't know too much about him, so I don't want to accuse him of things because I don't know who he is.
01:59:02.000 But he was in something where he tried accusing Mueller of sexual assault, and it turned out to be just completely fake, ridiculous.
01:59:11.000 This is a gentleman that was in the USA Today article where he admitted that he had used tactics in the past to influence the election, and he will continue to do so using all of his channels.
01:59:24.000 Yes.
01:59:25.000 And so when we saw that report, our team looked at his account.
01:59:29.000 We noticed there were multiple accounts tied to his account, so fake accounts that he had created that were discussing political issues and pretending to be other people from other perspectives.
01:59:37.000 How did you find that out?
01:59:38.000 We would have phone numbers linking accounts together or email addresses, in some cases IP addresses, other types of metadata that are associated with accounts, so we can link those accounts together.
01:59:48.000 And having multiple accounts in and of itself is not a violation of our rules because some people have their work account, their personal account.
01:59:56.000 It's when you're deliberately pretending to be someone else and manipulating a conversation about a political issue.
02:00:01.000 And those are exactly the types of things that we saw the Russians do, for example, in the 2016 election.
02:00:07.000 So it was that playbook and that type of activity that we saw from Jacob Wohl.
02:00:12.000 And that's why his accounts were suspended.
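The attribution Vijaya describes is essentially clustering: treat accounts as nodes and shared phone numbers, email addresses, or IPs as edges, and a cluster containing a known operator becomes actionable. A toy sketch of that idea using union-find; the account records and field names are invented for illustration.

```python
# Toy sketch of linking accounts through shared metadata (phone, email,
# IP), as described above. Union-find groups accounts that share any
# identifier. All records and field names are invented.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

def link_accounts(accounts):
    # Accounts sharing a phone, email, or IP end up in one cluster.
    seen = {}   # identifier -> first account observed using it
    for acct in accounts:
        for ident in (acct["phone"], acct["email"], acct["ip"]):
            if ident is None:
                continue
            if ident in seen:
                union(acct["name"], seen[ident])
            else:
                seen[ident] = acct["name"]

accounts = [
    {"name": "operator",     "phone": "555-0100", "email": "a@x.com", "ip": "1.2.3.4"},
    {"name": "sock_puppet1", "phone": "555-0100", "email": "b@x.com", "ip": "5.6.7.8"},
    {"name": "sock_puppet2", "phone": None,       "email": "b@x.com", "ip": "9.9.9.9"},
    {"name": "unrelated",    "phone": "555-0199", "email": "c@x.com", "ip": "4.4.4.4"},
]
link_accounts(accounts)
print(find("sock_puppet2") == find("operator"))  # True: same cluster
print(find("unrelated") == find("operator"))     # False: separate
```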
02:00:14.000 Did you investigate Jonathan Morgan?
02:00:16.000 I don't know who that is.
02:00:18.000 Why?
02:00:19.000 That's the important question.
02:00:20.000 Why?
02:00:21.000 I don't know who that is.
02:00:23.000 It may be that someone at Twitter investigated him.
02:00:25.000 I personally don't know who that is.
02:00:27.000 So one of the issues that I think is really important to get to is you should know who he is.
02:00:32.000 He's more important than Jacob Wall is.
02:00:34.000 But for some reason you know about this conservative guy and not the Democrat who helped meddle in the Alabama election.
02:00:39.000 So Jonathan – There's a sheer volume that they have to pay attention to, in all fairness.
02:00:44.000 Right, right, right.
02:00:44.000 But it's about grains of sand making a heap in the flow of a direction, where we can see Jacob Wohl has said he's done this, so you're like, we're going to investigate, we ban him.
02:00:51.000 It was recently reported and covered by numerous outlets that a group called New Knowledge was meddling in the Alabama election by creating fake Russian accounts to manipulate national media into believing that Roy Moore was propped up by the Russians.
02:01:03.000 Facebook banned him, as well as four other people, but Twitter didn't.
02:01:06.000 He's still active.
02:01:07.000 I believe we did ban the accounts that were engaged in the behavior.
02:01:09.000 Yeah.
02:01:09.000 I do remember sending this to Vijaya and our team.
02:01:12.000 That's worse, though.
02:01:13.000 So you didn't ban the guy doing it, but you banned the people.
02:01:16.000 So in the case of Jacob Wohl, we were able to directly attribute, through email addresses and phone numbers, his direct connection to the accounts that were created to manipulate the election.
02:01:26.000 If we're not able to tie that direct connection on our platform, or law enforcement doesn't give us information to tie attribution, we won't take action.
02:01:35.000 And it's not because of political ideology, it's because we want to be damn sure before we take action on accounts.
02:01:39.000 So someone could use a VPN, perhaps, and maybe additional email accounts, and they could game the system in that way.
02:01:45.000 There are certainly sophisticated ways that people can do things to mask who they are and what accounts that they're controlling.
02:01:52.000 And just the internal conversation, Tim, just to provide more light into what happens.
02:01:56.000 Like, I got an email or a text from Vijaya one morning that said, we are going to permanently suspend this particular account.
02:02:05.000 And it's not a, you know, what do you think?
02:02:08.000 It's, we are going to do this.
02:02:10.000 And I then have an opportunity to ask questions.
02:02:13.000 I asked a question why.
02:02:14.000 She gave me a link back to the document of all the findings in USA Today.
02:02:20.000 We took the action.
02:02:21.000 I was on Twitter.
02:02:22.000 A bunch of people pointed me at this particular case, sent some of those tweets to her, what's going on.
02:02:29.000 So that's in the background.
02:02:32.000 Wouldn't you just terminate anybody associated with the company that was doing this?
02:02:35.000 I mean, keep in mind, too, at the time when this campaign was happening… On what basis?
02:02:39.000 He admitted to engaging in the operation, in a quote to New York Times, and you banned the accounts associated with it.
02:02:44.000 So if you know he's the one running the company, wouldn't you be like, okay, you're gone?
02:02:48.000 Do you want us to take every single newspaper's attribution of accounts?
02:02:52.000 Because what we were able to do in the Jacob Wohl situation was actually tie those accounts in our own systems.
02:02:57.000 Right.
02:02:58.000 That he actually controlled the accounts, not just take the word of a newspaper article.
02:03:01.000 You said you banned his accounts.
02:03:04.000 Yes.
02:03:04.000 And you know from his own statement and from his tweets that he was the one running the company.
02:03:10.000 Jacob Wohl.
02:03:10.000 No, no, no, no.
02:03:11.000 Jonathan Morgan.
02:03:12.000 Oh, sorry.
02:03:13.000 I'm getting confused about what we're talking about.
02:03:15.000 Jacob Wohl, it's announced in USA Today, he says, I'm doing this.
02:03:19.000 And you're like, okay, we can look at his account, we can see it, we get rid of him.
02:03:22.000 With new knowledge, you said you did take those accounts down.
02:03:25.000 I believe we were able to take down a certain cluster of accounts that we saw engaging in the behavior, but we weren't necessarily able to tie it back to one person controlling those accounts.
02:03:34.000 Even if they say they did it?
02:03:35.000 And this is where I get back.
02:03:36.000 We like to have some sort of attribution that's direct that we can see.
02:03:41.000 Would we just take any newspaper or any article at face value and just action them?
02:03:46.000 Would you have to contact him and get some sort of a statement from him in order to take down his account?
02:03:52.000 I mean, I don't think he would admit to manipulating Twitter if Twitter asked him.
02:03:56.000 But if you could get the fact that he communicated with a newspaper, right?
02:04:00.000 To clarify what they said, what they claimed to the New York Times was that it was a false flag.
02:04:06.000 New York Times said they reviewed internal documents that showed they admitted it was a false flag operation.
02:04:11.000 The guy who runs the company said, oh, his company does this.
02:04:15.000 He wasn't aware necessarily, but it was an experiment.
02:04:19.000 So he's been kind of, in my opinion, duplicitous – not straightforward. But at the time of this campaign, which he claims to have known about – He tweeted that it was real.
02:04:29.000 So during the Roy Moore campaign, he tweets, wow, look at the Russians.
02:04:32.000 Then it comes out later, his company is the one that did it.
02:04:35.000 So you're kind of like, oh, so this guy was propping up his own fake news, right?
02:04:39.000 Then when they get busted, he goes, oh, no, it's just my company doing an experiment.
02:04:43.000 But you tweeted it was real.
02:04:44.000 You use your verified Twitter account to push the fake narrative your company was pumping on this platform.
02:04:50.000 And so the point I want to make, I guess, is...
02:04:52.000 It sounds like we need to take a closer look at this one.
02:04:54.000 Ban him.
02:04:54.000 Bring back Morgan Murphy.
02:04:55.000 Well, Meghan Murphy.
02:04:57.000 Meghan Murphy.
02:04:58.000 Sorry.
02:04:58.000 Morgan Murphy's a friend of mine.
02:05:01.000 Sorry, Morgan.
02:05:03.000 So this is...
02:05:04.000 I haven't read the story.
02:05:05.000 It's been like two months since the story broke.
02:05:06.000 So I could have my...
02:05:07.000 I don't want to get sued and have my facts wrong.
02:05:10.000 But the reason I bring this up was not to accuse you of wrongdoing; it was to point out that...
02:05:15.000 I don't think that the people who work at Twitter are twirling their mustaches, laughing, pressing the ban button whenever they see a conservative.
02:05:22.000 I think it's just there's a bias that's unintentional that flows in one direction.
02:05:26.000 So you see the news about Jacob Wall, and I think there's a reason for it too.
02:05:29.000 There's a couple of reasons.
02:05:29.000 For one, your staff is likely – as you've mentioned – more likely to lean left and look at certain sources.
02:05:37.000 So you're going to hear about more things more often and take action on those things as opposed to the other side of the coin.
02:05:43.000 But we have to consider where the actions are taking place.
02:05:46.000 I'm speaking more broadly to the 4,000 people that we have as a company versus the deliberateness that we have on Vijaya's team, for instance.
02:05:53.000 I just mean when we look at a company-wide average of all of your employees and the direction they lean versus the news sources they're willing to read, you're going to see a flow in one direction, whether it's intentional or not.
02:06:02.000 And so I think the challenge is...
02:06:04.000 But we don't generally rely on news sources to find manipulation of our platform.
02:06:08.000 We're looking at what we're seeing, the signals we can see.
02:06:11.000 And once in a while, we will get tipped off to something.
02:06:14.000 But for the most part, when we're looking at manipulation, it's not like the New York Times can tell us what's going on on the platform.
02:06:20.000 We're the ones that have the metadata behind accounts.
02:06:22.000 We're the ones that can see patterns of behavior at scale.
02:06:24.000 But I hear your point.
02:06:25.000 I knew one name and I didn't know another name.
02:06:27.000 And it was because Vijaya said, you know, we're permanently banning this account.
02:06:31.000 And yes, we didn't have the same sort of findings in the other particular account, which I got feedback on, passed to her, and we didn't find what we needed to find.
02:06:42.000 But to be clear, the team had taken action on this stuff months ago when it actually had happened.
02:06:46.000 Got it.
02:06:46.000 Yeah.
02:06:47.000 I think, you know, a lot of what people assume is malintent is sometimes fake news.
02:06:53.000 You know, I think one of my biggest criticisms in terms of what's going on in our culture is the news system: like you pointed out, although it's changed, left-wing journalists only follow themselves.
02:07:02.000 That's my experience.
02:07:03.000 I've worked for these companies, and so they repeat these same narratives.
02:07:06.000 They don't get out of their bubble.
02:07:07.000 Even today, they're still in a bubble, and they're not seeing what's happening outside of it.
02:07:10.000 And then what happens is, you know, according to data – I think this is from Pew – most new journalism jobs are in blue districts.
02:07:18.000 So you've got people who only hear the same thing.
02:07:20.000 They only cover the same stories.
02:07:22.000 So if, you know, we hear about Jussie Smollett.
02:07:25.000 We hear about how the story goes wild.
02:07:27.000 But there's like 800 instances of Trump supporters wearing MAGA hats getting beaten, you know, throughout the past couple of years.
02:07:32.000 We had a guy wearing a Smash the Patriarchy and Chill shirt show up to a school in Eugene, Oregon with a gun and fire two rounds at a cop.
02:07:38.000 And those stories don't make the headlines.
02:07:40.000 So it's, you know, when the journalists are inherently in a bubble, I hear you.
02:07:55.000 I think our biggest issue and the thing that I want to fix the most is the fact that we create and sustain and maintain these echo chambers.
02:08:03.000 Yeah.
02:08:04.000 Well, you're rolling out that new feature that allows you to hide replies, right?
02:08:07.000 Yeah.
02:08:08.000 We're testing.
02:08:09.000 We're experimenting with an ability to enable people to have more control as you would expect a host over the conversation.
02:08:18.000 Like Facebook allows that.
02:08:19.000 Yeah, but I don't think they have the level of transparency that we want to put into it.
02:08:24.000 So we actually want to show whether a comment was moderated and then actually allow people to see...
02:08:31.000 So, both showing the action that this person moderated a particular comment, and then you can actually see the comment itself.
02:08:37.000 It's one click over, one tap over.
02:08:40.000 That's how we're thinking about it.
02:08:42.000 It might change in the future, but we can't do this without a level of transparency, because otherwise we minimize something Vijaya spoke to earlier, which is speaking truth to power, holding people to account.
02:08:53.000 Even things like the Fyre Festival, where, you know, you had these organizers who were deleting every single comment, moderating every single comment that called this thing a fraud, and don't go here.
02:09:05.000 We can't reliably and, like, just from a responsibility standpoint, ever create a future that enables more of that to happen.
02:09:14.000 And that's how we're thinking about even features like this.
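The feature being tested, as framed here, is hiding without deleting: a host can moderate a reply, but the moderation itself stays visible and the reply remains one tap away, so a Fyre-Festival-style scrub leaves a trail. A minimal sketch of that data model, with invented names.

```python
# Minimal sketch of transparent reply-hiding as described above: a host can
# hide a reply, but the action is shown and the reply stays one tap away.
# Structure and names are illustrative only.
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    text: str
    hidden_by_host: bool = False

def hide(reply):
    reply.hidden_by_host = True    # moderated, never deleted

def render_thread(replies):
    lines = []
    for r in replies:
        if r.hidden_by_host:
            # The act of moderation is itself visible to every reader.
            lines.append(f"[reply by {r.author} hidden by host - tap to view]")
        else:
            lines.append(f"{r.author}: {r.text}")
    return lines

thread = [Reply("fan", "can't wait!"), Reply("whistleblower", "this is a fraud")]
hide(thread[1])
print("\n".join(render_thread(thread)))
```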
02:09:17.000 I'm going to jump right off to a different train cart here.
02:09:19.000 Has law enforcement ever asked you to keep certain people on the platform even after they violated your rules?
02:09:27.000 Not that I'm aware.
02:09:29.000 So then this – to the next question pertaining to bias, you have the issue of Antifa versus the Proud Boys and Patriot Prayer.
02:09:36.000 And Twitter permanently excised anyone associated with the Proud Boys.
02:09:40.000 Antifa accounts who have broken the rules repeatedly, branded known cells that have been involved in violence, all still active.
02:09:47.000 Is there a reason – Well, with the Proud Boys, what we were able to do was actually look at documentation and announcements that the leaders of that organization had made and their use of violence in the real world.
02:10:00.000 So that was what we were focused on.
02:10:02.000 And subsequent to our decision, I believe the FBI also designated them.
02:10:06.000 That's not true.
02:10:06.000 It's not true?
02:10:07.000 That's not true.
02:10:07.000 No.
02:10:08.000 Okay.
02:10:08.000 No, that's not true.
02:10:09.000 The Proud Boys started out as a joke.
02:10:12.000 Gavin McInnes, Anthony Cumia, who was part of Opie and Anthony and now has his own show. It happened on his show because there was a guy that was on the show, and they made a joke about starting a gang based on him because he was a very effeminate guy, and they would call him the Proud Boys.
02:10:28.000 And they went into detail about how this thing evolved...
02:10:34.000 from a joke, where saying you could join the Proud Boys was just being silly, to people actually joining it, and then it becoming this thing to fight Antifa, and then becoming infested with white nationalists.
02:10:49.000 Well, in many ways it was, but it's been documented how it started and what it was, and it's been misrepresented as to why it was started.
02:11:00.000 I think there's some things that should be clarified about them, but Gavin has made a bunch of statements that cross the line.
02:11:06.000 He claims to be joking.
02:11:08.000 Well, he did on my podcast.
02:11:09.000 He was talking to me about Antifa, that when Antifa was blocking people like Ben Shapiro's speeches and things along those lines and stopping conservatives from speaking, you should just punch them in the face.
02:11:19.000 We're going to have to start kicking people's asses.
02:11:21.000 I was like, this is not just irresponsible, but foolish and short-sighted and just a dumb way to talk.
02:11:26.000 So then you have the Antifa groups that are engaging in the same thing.
02:11:30.000 The famous bike lock basher incident where a guy showed up, hit seven people over there with a bike lock.
02:11:38.000 I'm going to leave that out for the time being.
02:11:40.000 You have other groups like By Any Means Necessary.
02:11:44.000 You have in Portland, for instance, there are specific branded factions.
02:11:50.000 There's the tweet I mentioned earlier where they doxed ICE agents and they said, do whatever inspires you with this information.
02:11:57.000 And I mean, you're tagged in a million times.
02:11:58.000 I know you probably can't see it, but you can actually see that some of the tweets in the thread are removed.
02:12:02.000 But the main tweet itself, from an anti-fascist account linking to a website that's straight up saying, like, here are the private home details, phone numbers, and addresses of these law enforcement officers, has not been removed since September.
02:12:12.000 So what you end up seeing is, again, to point, I think one of the big problems in this country is the media, because it was reported that the FBI designated Proud Boys an extremist group.
02:12:22.000 But it was a misinterpretation – a sheriff wrote a draft saying the FBI considers them to be extremists.
02:12:29.000 The media then reported hearsay from the sheriff and the FBI came out and said, no, no, no, we never meant to do that.
02:12:34.000 That's not true.
02:12:35.000 We are just concerned about violence.
02:12:37.000 So the Proud Boys all get purged.
02:12:38.000 And again, I think Gavin is a different story, right?
02:12:40.000 If you want to go after the individuals who are associating with that group versus the guy who goes on his show and says outrageous things and goes on Joe's show.
02:12:46.000 But then you have Antifa.
02:12:48.000 What I mean by that is they have specific names, they sell merchandise, and they're the ones showing up throwing mortar shells into crowds.
02:12:56.000 They're the ones showing up with crowbars and bats and whacking people.
02:12:59.000 I was in Boston, and there was a rally where conservatives were planning on putting on a rally.
02:13:04.000 It was literally just libertarians and conservatives.
02:13:06.000 Antifa shows up with crowbars, bats, and balaclavas with weapons threatening them.
02:13:12.000 So I have to wonder if these people are allowed to organize in your platform.
02:13:16.000 Are you concerned about that?
02:13:18.000 Why aren't they being banned when they violate the rules?
02:13:20.000 Yeah, absolutely.
02:13:21.000 We're concerned about that.
02:13:21.000 Has the FBI designated them as a domestic terrorist organization?
02:13:25.000 I'm sorry.
02:13:25.000 Homeland Security in New Jersey has listed them under domestic terrorism.
02:13:28.000 So I understand there's a conundrum in that the general concept of anti-fascism is a loose term that means you oppose fascism.
02:13:35.000 But Antifa is now – they have a flag.
02:13:38.000 They've had a flag since Nazi Germany and the Soviet era, and they've brought it back.
02:13:43.000 There are specific groups that I'm not going to mention by name that have specific names, and they sell merchandise.
02:13:47.000 They've appeared in various news outlets.
02:13:49.000 They've expressed their desire to use violence to suppress speech.
02:13:52.000 Is it a centralized organization the same way that – I hear you on Proud Boys, but like, where they have tenets that are written out and there's a leader and like – Yeah.
02:14:27.000 I should point out that they decided to call for violence based on Antifa calling for violence.
02:14:33.000 Based on Antifa actually actively committing violence against conservative people, they were there to see different people speak.
02:14:39.000 Well, it partly started because in Berkeley, there was a Trump rally.
02:14:43.000 So actually, after Milo got chased out of the Berkeley, there was $100,000 in damages.
02:14:47.000 I mean, there's a video of some guy in all black cracking someone on the back who's on the ground looking like they're unconscious.
02:14:53.000 So these conservatives see this and they decide to hold a rally saying we won't back down.
02:14:57.000 They hold a rally in Berkeley and then Antifa shows up again.
02:15:00.000 I understand you can't figure out who these people are for the most part.
02:15:03.000 They're decentralized.
02:15:04.000 But then this incites an escalation.
02:15:08.000 You then get the rise of the based stick man, they called it.
02:15:10.000 This guy shows up in armor with a stick and he starts swinging back.
02:15:13.000 And now you have two factions forming.
02:15:15.000 So while I recognize it's much easier to ban a top-down group...
02:15:33.000 Yeah, absolutely.
02:15:36.000 Yeah, absolutely.
02:15:48.000 So I guess the question is, how come they don't get removed?
02:15:52.000 Well, in the past when we've looked at Antifa, we ran into this decentralization issue, which is that we weren't able to find the same type of information that we were able to find about the Proud Boys, which was centralized, leadership-based documentation of what they stand for.
02:16:07.000 But absolutely, I mean, it's something that we'll continue to look into.
02:16:09.000 And to the extent that they're using Twitter to organize any sort of offline violence, it's completely prohibited under our rules, and we would absolutely take action on that.
02:16:16.000 Can I ask you why Gavin was banned?
02:16:18.000 Was there a specific thing that he did or was it his association with the Proud Boys?
02:16:23.000 His association with the Proud Boys.
02:16:24.000 You know, he's abandoned that.
02:16:25.000 Not only that, he's disassociated himself from it and said that it completely got out of hand and he doesn't want to have anything to do with it.
02:16:31.000 Yeah, and I think this is a great, again, test case for how we think about getting people back on the platform.
02:16:37.000 Yeah, he's an interesting case, because he's really a provocateur, and he fancies himself sort of a punk rocker, and he likes stirring shit.
02:16:46.000 I mean, when he came on my show last time he was on, he was dressed up like Michael Douglas in Falling Down.
02:16:53.000 He did it on purpose.
02:16:54.000 He brought a briefcase and everything.
02:16:55.000 I'm like, what are you doing?
02:16:56.000 He's like, I'm Michael Douglas in Falling Down.
02:16:59.000 He's a showman in many ways, and he did not mean for this to go the way it went.
02:17:05.000 He thought it would be this sort of innocent, fun thing to be a part of, and then other people got involved in it.
02:17:16.000 The problem is they think that you're going to just hit people and it's going to solve a problem.
02:17:21.000 It just creates a much more comprehensive problem.
02:17:24.000 It's important to point out Gavin has said things way worse than Alex Jones ever did.
02:17:29.000 Whether you want to say it's a joke or not, he said things like choke them, punch them directly.
02:17:35.000 But I guess the primary reason for getting rid of them was that you thought the FBI had designated them an extremist group?
02:17:41.000 No, because we did it months in advance.
02:17:44.000 Oh, okay.
02:17:44.000 Yeah, I was just pointing that out.
02:17:45.000 So it was just his association with the Proud Boys?
02:17:48.000 I don't recall, and I would have to go back, and I don't want to misstate things.
02:17:52.000 I don't recall whether those statements that you're referring to of Gavin's were on Twitter.
02:17:56.000 So they weren't.
02:17:58.000 There's another thing – when it comes to the weaponization of rules against people – like, Gavin isn't creating a compilation of things he's ever said out of context and then sending it around to get himself banned.
02:18:08.000 Other people are doing that to him, activists who don't like him, and it's effective.
02:18:12.000 In fact, I would actually like to point out there's one particular user who has repeatedly made fake videos attacking one of your other high-profile conservatives so much so that he's had to file police reports, harassment complaints, and it just doesn't stop.
02:18:24.000 I guess I'll ask it this way.
02:18:26.000 If someone repeatedly makes videos of you out of context, fake audio, accusing you of doing things you've never done, at what point is that bannable?
02:18:34.000 Yeah, and if it's targeted harassment and we can establish that, we can act. It's just a really hard thing for us, determining whether something is fake or not.
02:18:40.000 Well, it's also when things are out of context.
02:18:42.000 You still have video of the person saying that.
02:18:44.000 I agree that it's out of context and it's disingenuous, but it's still the person saying it and you're making a compilation of some pre-existing audio or video.
02:18:55.000 So I think in the instance of Gavin, like, one of the things he said was, like, a call to violence, but he was talking about, like, it was in the context of talking about a dog and being scolded.
02:19:04.000 So he was like, hit him, just hit him, and then it's like, it turns out he's talking about a dog, like, doing something wrong.
02:19:09.000 And they take that and they snip it, and then it goes viral, and then everyone starts flagging, saying, you gotta ban this guy.
02:19:13.000 So again, I understand, like, you know.
02:19:15.000 But I guess the issue is, if people keep doing that to destroy someone's life...
02:19:19.000 So I think there's a bigger discussion, I think, both of you could probably shed some important light on, too, outside of Twitter.
02:19:25.000 This weaponization of content from platforms is being used to get people banned from their banking accounts.
02:19:31.000 We can talk about Patreon, for instance.
02:19:33.000 And again, this may just be something you could chime in on.
02:19:37.000 Patreon banned a man named Carl Benjamin, also known as Sargon of Akkad.
02:19:40.000 He's also banned from Twitter.
02:19:42.000 Do you know why he got banned from Twitter?
02:19:44.000 I can see.
02:19:49.000 That's an interesting one.
02:19:52.000 I do have some of the details here.
02:19:56.000 Do you want me to read them?
02:19:57.000 Yeah, please.
02:19:58.000 Okay.
02:20:00.000 Looks like it's going to be gross.
02:20:01.000 It's not stuff that I love saying, but I will say it.
02:20:05.000 Want Jack to say it?
02:20:06.000 I should make Jack say it.
02:20:08.000 He doesn't like cursing either.
02:20:11.000 Let's see.
02:20:11.000 I curse more than he does, so I guess I should say it.
02:20:14.000 First strike.
02:20:15.000 Fuck white people, kill all men, die cis scum, none of the above qualify as hate speech.
02:20:21.000 Wait a minute.
02:20:22.000 When was that?
02:20:23.000 I don't have the dates.
02:20:25.000 I'm sorry.
02:20:25.000 But he's a white guy.
02:20:27.000 I mean, obviously, he's joking around there.
02:20:29.000 He's saying fuck white people.
02:20:31.000 It also sounds like he's trying to make a point about your rules and how you enforce them, not actually.
02:20:34.000 Possibly.
02:20:34.000 Which is also exactly why he got kicked off of Patreon.
02:20:38.000 Exactly.
02:20:38.000 Well, I know he also posted a photo of interracial gay porn at some white nationalists to make them angry.
02:20:43.000 Yes!
02:20:44.000 Yeah, he's funny.
02:20:47.000 He's funny sometimes.
02:20:48.000 I can understand how posting that photo is an egregious violation of the rules, whether or not he was trying to insult some people.
02:20:56.000 That's a very good point, and I wanted to bring that up.
02:20:58.000 Is porn a violation of the rules?
02:21:02.000 Porn, generally?
02:21:03.000 No.
02:21:04.000 Good.
02:21:04.000 Really?
02:21:04.000 Good for you.
02:21:06.000 Because it happens in my feed all the time.
02:21:08.000 I follow a couple naughty girls, and occasionally they post pictures of themselves engaging in intercourse.
02:21:14.000 I'm like, yikes.
02:21:15.000 So then what are the other strikes for Sargon or Carl?
02:21:18.000 Let's see.
02:21:20.000 There was the use of a Jewish slur.
02:21:23.000 How do you use it?
02:21:25.000 To a person, you traitor, remainer, white genocide supporting, Islamophile, Jewish slur lover.
02:21:35.000 That should keep you going.
02:21:37.000 Hashtag Hitler was right.
02:21:38.000 But these aren't general opinions.
02:21:40.000 These are targeted.
02:21:41.000 These are targeted at somebody.
02:21:43.000 That sounds like he's making a joke.
02:21:47.000 In context, it sounds like the other one.
02:21:50.000 Like, in context, what he's saying, particularly the fact that he's a white guy, that doesn't sound like a racial slur at all.
02:21:56.000 I mean, he's saying fuck white people, and he is white.
02:21:58.000 In context, again, these are tied together.
02:22:00.000 I always knew that person was not to be trusted that fucking...
02:22:07.000 He's saying this about a very specific person.
02:22:10.000 He's trying to be very provocative.
02:22:12.000 And he's saying this about a specific Jewish person?
02:22:14.000 I don't know the race of this person.
02:22:16.000 I'm sorry.
02:22:17.000 Okay, but this is not parody.
02:22:20.000 This is not joking around.
02:22:21.000 We didn't view it that way.
02:22:22.000 I'm not trying to re-litigate all this.
02:22:25.000 I'm just telling you what they were.
02:22:27.000 I knew he had done things that were egregious violations of the rules because, plain and simple, I didn't bring him up to go through it and try to figure out if he – but it does sound like at least the first one was meant to be a critique of your – Potentially, but there are a bunch of others if you want to hear them.
02:22:41.000 Sure.
02:22:42.000 Keep it rolling.
02:22:43.000 This is, again, targeted.
02:22:44.000 This is how I know one day that I'll be throwing you from a helicopter.
02:22:47.000 You're the same kind of malignant cancer.
02:22:49.000 Don't forget it.
02:22:50.000 So it's not one thing or two things or three things.
02:22:53.000 This is like a bunch of them.
02:22:54.000 That's delusions of grandeur.
02:22:56.000 Imagine thinking you're going to throw someone from a helicopter.
02:22:57.000 Well, he doesn't really.
02:22:58.000 Get you in that helicopter.
02:22:59.000 But admittedly, he's on YouTube by the name of Sargon of Akkad.
02:23:04.000 He's a big account.
02:23:06.000 And I've criticized him for being overly mean in the past.
02:23:08.000 He definitely gets angry.
02:23:10.000 But he is very different now.
02:23:13.000 The reason I brought him up was not to...
02:23:14.000 He's very different now?
02:23:15.000 How so?
02:23:15.000 Well, a lot of the content he makes is much calmer.
02:23:17.000 He's less likely to insult someone directly.
02:23:19.000 He's probably recognizing that he's on his last straw.
02:23:22.000 Oh, definitely.
02:23:23.000 He's been kicked off of Twitter.
02:23:24.000 He's on YouTube.
02:23:24.000 He's probably going to mind his P's and Q's.
02:23:27.000 Oh, so the reason I brought him up again, but we'll move on, was that activists found a live stream from eight months ago.
02:23:35.000 I totally forgot why I was bringing this up because we've moved so far away from where we were.
02:23:39.000 But they pulled a clip from an hour and a half or whatever into a two-hour live stream on a small channel that only had 2,000 views, sent it to Patreon, and then Patreon said, yep, that's a violation, and banned him outright without warning.
02:23:52.000 Which, again, I understand is different from what you guys do.
02:23:54.000 You do suspensions first.
02:23:55.000 But I guess the reason I was bringing it up was to talk about a few things.
02:23:59.000 Why blocking isn't enough.
02:24:01.000 Why muting isn't enough.
02:24:02.000 And if you think that it's driving people off the platform, people post my tweets on Reddit.
02:24:07.000 I block them.
02:24:08.000 They use a dummy account, load up my tweet, post it to Reddit, and then spam me on Reddit.
02:24:12.000 So, you know, blocking and even leaving Twitter would never do anything.
02:24:16.000 Short of me shutting up, there's nothing you can do to protect me or anyone else.
02:24:20.000 Look, I mean, these are exactly the conversations we're having.
02:24:23.000 The reason why I don't think blocking and muting are enough is, one, I don't think we've made mute powerful enough.
02:24:30.000 It's spread all over the service.
02:24:33.000 You can use it, and then you've got to go find where you actually muted these people or their profile page, and that's just a disaster.
02:24:42.000 It just doesn't work the way it should – it should work the same way that follow works, which is just the inverse of that.
02:24:48.000 I noticed that now I get a notification that says you can't see this tweet because you muted this person.
02:24:52.000 Before, I would just see a weird reply and be like, oh, it's one of those.
02:24:56.000 Exactly.
02:24:56.000 So there's also all this infrastructure that we have to fix in order to pass those through in terms of what action you took or what action someone else took to be transparent about what's happening on the network.
02:25:08.000 The second...
02:25:10.000 The second thing, block is really interesting.
02:25:15.000 My own view is it's wholly unsatisfying because what you're doing is you're blocking someone.
02:25:23.000 They get notification that you've blocked them, which may embolden them even more, which causes...
02:25:38.000 Right.
02:25:40.000 Exactly.
02:25:52.000 If you're engaging in public discourse, you know, if I go out in the street and yell out my opinion, somebody could get in my face.
02:25:58.000 If I get off Twitter, because I'm sick of it.
02:26:00.000 I mean, look, you know, I'm sure you get it way worse than I do, especially as, you know, the high profile.
02:26:06.000 Probably getting it right now.
02:26:07.000 Yeah, absolutely.
02:26:07.000 Oh, me too.
02:26:08.000 God, I can only imagine things.
02:26:10.000 So the only thing I can do is, look, we're not on Twitter right now.
02:26:13.000 We're on Joe Rogan's podcast, and they're still going to target you on Twitter.
02:26:16.000 I guarantee we're all over Reddit.
02:26:18.000 The left is probably railing on me.
02:26:20.000 The right is railing on you guys.
02:26:22.000 So it seems like even if you try everything in your power to make Twitter healthier and better, it's not going to change anything.
02:26:28.000 I'm not sure about that.
02:26:29.000 I'm not sure about that because one of the things that I do think is that just – I'm not in favor of a lot of this heavy-handed banning and a lot of the things that have been going on, particularly a case like the Meghan Murphy case.
02:26:42.000 But what I think that we are doing is we're exploring the idea of civil discourse.
02:26:50.000 We're trying to figure out what's acceptable and what's not acceptable.
02:26:54.000 And you're communicating about this on a very large scale.
02:26:58.000 And it's putting that out there and then people are discussing it.
02:27:01.000 Whether they agree or disagree, whether they vehemently defend you or hate you, they're discussing this.
02:27:08.000 And I think this is how these things change.
02:27:11.000 And they change over long periods of time.
02:27:13.000 Think about words that were commonplace just a few years ago that you literally can't say anymore.
02:27:20.000 I mean, there's so many of them that were extremely commonplace or not even thought to be offensive 10 years ago that now you can get banned off of platforms for them.
02:27:31.000 But that's a good point to argue against banning people and to cease enforcing hate speech rules.
02:27:36.000 I agree with that as well.
02:27:38.000 I think it's both things.
02:27:39.000 Let me tell you something important.
02:27:41.000 I was in the UK at an event for a man named Count Dankula, who I don't know if you've heard of.
02:27:46.000 Oh, sure.
02:27:46.000 Yeah, Dankula is the guy who got charged and convicted of making a joke where he had his pug do a Nazi salute.
02:27:52.000 But I was there and I was arguing that a certain white nationalist had used racial slurs on YouTube.
02:27:58.000 He has.
02:27:59.000 I don't want to name him.
02:28:00.000 And some guy in the UK said, that's not true.
02:28:02.000 He's never done that.
02:28:03.000 And I said, you're crazy.
02:28:05.000 Let me pull it up.
02:28:06.000 Unfortunately, I don't know why, but when I did the Google search, nothing came up.
02:28:10.000 What I did notice was at the bottom of the page, it said due to UK law, certain things have been removed.
02:28:16.000 So I don't know if it's exactly why I couldn't pull up a video proving or tweets or anything because I think using these words gets stripped from the social platforms.
02:28:24.000 I could not prove to this man.
02:28:26.000 In that country.
02:28:27.000 In the UK that this man.
02:28:28.000 So you could use a VPN and get around that.
02:28:32.000 Yeah.
02:28:33.000 I mean at the time, I was just like trying to pull it up and I'm like, oh, that's weird.
02:28:35.000 So now you have someone who doesn't realize he's a fan of a bigot because the law has restricted the speech.
02:28:42.000 So there's a point to be made if you – I understand you want a healthy – like you want Twitter to grow.
02:28:47.000 You need it to grow.
02:28:48.000 The shareholders need it to grow.
02:28:50.000 The advertisers need to advertise.
02:28:51.000 So you've got all these restrictions.
02:28:53.000 But allowing people to say these awful things makes sure we stay away from them and it allows us to avoid certain people.
02:28:59.000 And isn't it important to know that these people hold these beliefs?
02:29:02.000 If you get rid of them, you know, someone could walk into a business and you wouldn't even know that they were a neo-Nazi.
02:29:07.000 But if they were high profile saying their things, you'd be like, that's the guy at home.
02:29:10.000 You're absolutely right.
02:29:11.000 This is like one of my favorite sayings is that sunlight is the best disinfectant.
02:29:15.000 And it's so, so, so true.
02:29:17.000 Like one of the biggest problems with censorship is the fact that you push people underground and you don't know what's going on.
02:29:24.000 And this is something I worry about.
02:29:26.000 It's not that I don't worry about it.
02:29:28.000 You ban people for these rules.
02:29:29.000 Because I also worry about driving people away from the platform and affecting their real lives.
02:29:34.000 So we're trying to find this right balance.
02:29:36.000 And I hear you.
02:29:37.000 You may not think we're drawing the lines in the right place.
02:29:40.000 And we get that feedback all the time.
02:29:41.000 And we're always trying to find the right places to do this.
02:29:44.000 I worry as much about the underground and being able to shine a light on these things as anything else.
02:29:51.000 Tim, I think it's a cost-benefit analysis, and we have to constantly rehash it and do it.
02:29:56.000 We have the technology we have today, and we are looking at technologies which open up the aperture even more.
02:30:04.000 And we all agree that a binary on or off is not the right answer and is not scalable.
02:30:13.000 We have started getting into nuance within our enforcement, and we've also started getting into nuance with the presentation of content.
02:30:23.000 So, you know, one path might have been for some of your replies for us to just remove those offensive replies completely.
02:30:33.000 We don't do that.
02:30:34.000 We hide it behind an interstitial.
02:30:36.000 To protect the original tweeter and also folks who don't want to see that.
02:30:41.000 They can still see everything, they just have to do one more tap.
02:30:44.000 So that's one solution, ranking is another solution, but as technology gets better and we get better at applying it, we have a lot more optionality, whereas we don't have that as much today.
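What Jack describes here – hiding a reply behind an interstitial or downranking it rather than removing it – amounts to a tiered presentation decision. A minimal sketch of that idea, where the function name, thresholds, and the upstream offensiveness score are all assumptions for illustration, not Twitter's actual system:

```python
# Illustrative sketch of tiered presentation: content is never deleted,
# only shown normally, downranked, or hidden behind an interstitial.
# The score is assumed to come from some upstream classifier (0.0-1.0);
# names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    offensiveness: float

def presentation_tier(reply: Reply) -> str:
    if reply.offensiveness >= 0.9:
        return "interstitial"  # visible, but only after one more tap
    if reply.offensiveness >= 0.5:
        return "downranked"    # visible, but lower in the conversation
    return "normal"

for r in [Reply("Great point!", 0.05),
          Reply("This take is garbage.", 0.6),
          Reply("(something targeted and abusive)", 0.95)]:
    print(presentation_tier(r), "-", r.text)
```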
02:30:55.000 I feel like, you know, I'm just going to reiterate an earlier point, though.
02:30:58.000 You know, if you recognize sunlight is the best disinfectant, it's like you're chasing after a goal that can never be met.
02:31:04.000 If you want to protect all speech but you start banning certain individuals – you want to increase the amount of healthy conversations, but you're banning some people.
02:31:11.000 Well, how long until this group is now offended by that group?
02:31:14.000 How long until you've banned everybody?
02:31:15.000 I hear you.
02:31:16.000 I don't believe a permanent ban promotes health.
02:31:18.000 I don't believe that, but we have to work with the technologies, tools, and conditions that we have today and evolve over time to where we can see examples like this woman at the Westboro Baptist Church who was using Twitter every single day to spread hate against the LGBTQA community.
02:31:43.000 And over time, we had, I think it was three or four folks on Twitter who would engage her every single day about what she was doing, and she actually left the church.
02:31:52.000 That's Megan Phelps.
02:31:53.000 She's on our podcast.
02:31:54.000 She's amazing.
02:31:55.000 And she's now pulling her family out of that as well.
02:31:58.000 And you could make the argument that if we banned that account early on, she would have never left the church.
02:32:04.000 I completely hear that.
02:32:05.000 We get it.
02:32:06.000 It's just...
02:32:08.000 I just want to make sure we're advancing the conversation too and not just going to go back.
02:32:11.000 So I'll just ask you this.
02:32:13.000 Have you considered allowing some of these people permanently banned back on with some restrictions?
02:32:17.000 Maybe you can only tweet twice per day.
02:32:19.000 Maybe you can't retweet or something to that effect.
02:32:21.000 I think we're very early in our thinking here, so we're open-minded to how to do this.
02:32:25.000 I think we agree philosophically that permanent bans are an extreme case scenario, and it shouldn't be one of our, you know, regularly used tools in our tool chest.
02:32:34.000 So how we do that, I think, is something that we're actively talking about today.
02:32:41.000 I think that would fix a lot of problems.
02:32:43.000 You think so?
02:32:44.000 Yes, I really do.
02:32:45.000 I'm just curious.
02:32:46.000 Are you thinking bans of a year, five years, ten years?
02:32:49.000 I'm just curious.
02:32:50.000 What is a reasonable ban in this kind of context?
02:32:54.000 Well, I think reasonably someone should have to state their case as to why they want to be unbanned.
02:32:58.000 Like, someone should have to have a well-measured, considerate response to what they did wrong.
02:33:05.000 Do they agree with what they did wrong?
02:33:07.000 Maybe perhaps saying why they don't think they did anything wrong.
02:33:11.000 And you could review it from there.
02:33:12.000 I think...
02:33:14.000 One of the challenges is we have the benefit in English common law of hundreds of years of precedent and developing new rules and figuring out what works and doesn't.
02:33:21.000 Twitter is very different.
02:33:22.000 So I think with the technology, I don't know if you need permanent bans or even suspensions at all.
02:33:27.000 You could literally just – I mean, locking someone's account is essentially suspending them.
02:33:31.000 But again, I wouldn't claim to know anything about the things you go through.
02:33:36.000 But what if you just restricted most of what they could say?
02:33:40.000 You blocked certain words in a certain dictionary.
02:33:42.000 If someone's been – if someone received –
02:33:44.000 But think about it this way.
02:33:46.000 Is it better that they're permanently banned?
02:33:47.000 No, it's not better, but it's not good either.
02:33:50.000 Well, no, no.
02:33:51.000 Think about it this way.
02:33:51.000 Instead of being suspended for 72 hours, you get a dictionary block from hate speech words.
02:33:56.000 Right?
02:33:57.000 Does that not make sense?
02:33:58.000 But people just use coded language.
02:34:00.000 This is what we see all the time.
02:34:01.000 Yeah, I don't think that's a good move.
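The "dictionary block" Tim floats, and the coded-language objection to it, fit in a few lines. This is only a sketch: the blocklist terms, the tokenizer, and the sample strings are invented, and the second example shows exactly the evasion Jack and Vijaya describe:

```python
# Naive word-filter sketch of a "dictionary block": reject tweets containing
# blocklisted terms instead of suspending the account. Placeholder terms only.
import re

BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholders for banned words

def violates_dictionary_block(tweet: str) -> bool:
    # Lowercase and tokenize; this catches only exact, literal uses.
    tokens = set(re.findall(r"[a-z0-9]+", tweet.lower()))
    return bool(tokens & BLOCKLIST)

print(violates_dictionary_block("you absolute slur1"))  # True: literal match
print(violates_dictionary_block("you absolute s!ur1"))  # False: coded spelling slips through
```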
02:34:02.000 What do you think about, perhaps, instead of...
02:34:06.000 Is it possible to have levels of Twitter, like a completely uncensored, unmoderated level of Twitter, and then have a rated R, and then have a PG-13?
02:34:18.000 I mean, I don't think that's a bad idea.
02:34:20.000 We have those levels in place today, but you don't really see them.
02:34:25.000 One, we have a not-safe-for-work switch, which you can turn on or off.
02:34:29.000 Oh, really?
02:34:30.000 A not-safe-for-work switch?
02:34:32.000 I think you have it off, Joe.
02:34:33.000 Do I? You think so?
02:34:35.000 Just based on other things you've said.
02:34:36.000 Based on what you're seeing, you have it off.
02:34:38.000 I don't even know it's there.
02:34:40.000 So we have that.
02:34:41.000 And then, as Vijay pointed out earlier, we have the timeline.
02:34:44.000 We started ranking the timeline about three years ago.
02:34:48.000 We enable people today to turn that off completely and see the reverse chron of everything they follow.
02:34:55.000 You can imagine a world where that switch has a lot more power over more of our algorithms throughout more of the surface areas.
02:35:02.000 You can imagine that.
02:35:03.000 So these are all the questions that are on the table.
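The switch Jack is describing – the same follows rendered either ranked or strictly reverse chronological – reduces to two sort orders over one set of tweets. A minimal sketch with assumed field names and an assumed engagement score:

```python
# Sketch of a ranked-vs-reverse-chron timeline toggle. The Tweet fields and
# the relevance score are assumptions for illustration, not Twitter's schema.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    posted_at: int   # unix timestamp
    score: float     # assumed output of a ranking model

def build_timeline(tweets, ranked=True):
    if ranked:
        return sorted(tweets, key=lambda t: t.score, reverse=True)
    # Reverse chron: newest first, ignoring the ranking model entirely.
    return sorted(tweets, key=lambda t: t.posted_at, reverse=True)

feed = [Tweet("a", 100, 0.2), Tweet("b", 90, 0.9), Tweet("c", 110, 0.5)]
print([t.author for t in build_timeline(feed, ranked=True)])   # ['b', 'c', 'a']
print([t.author for t in build_timeline(feed, ranked=False)])  # ['c', 'a', 'b']
```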
02:35:05.000 You asked about timeline, and this is a challenging one.
02:35:08.000 I don't know about timeline because first...
02:35:13.000 We've decided that our priority right now is going to be on proactively enforcing a lot of this content, specifically around anything that impacts physical safety, like doxing.
02:35:25.000 Right, but there are so many examples of you guys not doing that.
02:35:29.000 I know, but that's what we're fixing right now.
02:35:33.000 That's a prioritization.
02:35:34.000 Yeah, I think from your own personal perspective.
02:35:36.000 We think more in terms of milestones on the particular timeline.
02:35:39.000 We're going to move as fast as we can, but some of it's a function of our infrastructure, of the technology we have to bring to bear.
02:35:46.000 Do you guys have conversations about trying to shift the public perception of having this left-wing bias and maybe possibly addressing it?
02:35:54.000 Yeah.
02:35:54.000 All the time.
02:35:55.000 That's what they're doing right now, right?
02:35:56.000 Yeah.
02:35:56.000 I mean, I went on the Sean Hannity show.
02:36:00.000 How was that?
02:36:01.000 We brought ourselves before.
02:36:02.000 Did you bring a lot of sunlight?
02:36:03.000 It was short, and there weren't a lot of really tough questions, and that was the feedback as well.
02:36:10.000 I get it.
02:36:12.000 Look, again, I'm from Missouri.
02:36:15.000 My dad is a Republican.
02:36:16.000 He listened to Hannity.
02:36:17.000 He listened to Rush Limbaugh.
02:36:18.000 My mom was a Democrat.
02:36:20.000 And I feel extremely fortunate that I was able to first see that spectrum, but also feel safe enough to express my own point of view.
02:36:29.000 But when I go on someone like Hannity, I'm not talking to Hannity.
02:36:33.000 I'm talking to people like my dad who listen to him.
02:36:35.000 And I want to get across how we think.
02:36:38.000 And also that our thinking evolves.
02:36:40.000 And here's the challenges we're seeing.
02:36:42.000 And like, this is our intent.
02:36:44.000 This is what we're trying to protect.
02:36:45.000 And we're going to make some mistakes along the way.
02:36:48.000 And we're going to admit to them we...
02:36:50.000 Didn't admit to them in the past.
02:36:51.000 We've admitted to a lot more over the past three years.
02:36:55.000 But, you know, I don't know any other way to address some of these issues.
02:37:01.000 It all goes back to trust.
02:37:02.000 Like, one of our core operating principles is earning trust.
02:37:05.000 How do we earn more trust?
02:37:06.000 And, you know, there are people in the world who do not trust us at all.
02:37:11.000 And there are some people who trust us a little bit more.
02:37:14.000 But this is the thing that we want to measure.
02:37:15.000 This is the thing that we want to get better at.
02:37:17.000 I saw you had a conversation with, I think, Katie Herzog?
02:37:20.000 No, no, no.
02:37:21.000 Who was it?
02:37:22.000 That was the wrong person.
02:37:23.000 You had a Twitter conversation with Kara Swisher.
02:37:25.000 Wow.
02:37:26.000 Wrong person, but someone's got a shout-out.
02:37:28.000 And, you know, I see that the left goes at you in the opposite direction.
02:37:32.000 They want more.
02:37:33.000 They want more banning.
02:37:34.000 They want more, you know, restrictions.
02:37:36.000 And then the right is saying less, right?
02:37:39.000 So, I mean, in terms of solving the problem...
02:37:41.000 Can you tell us what that conversation was about?
02:37:44.000 Do you want to summarize?
02:37:45.000 Because the thing I was pointing out specifically was that you were being asked to do more in terms of controlling...
02:37:51.000 Well, it wasn't just more, but to be a lot more specific about what actions we've taken to promote more health on the platform.
02:37:56.000 Like, what products did we change?
02:37:58.000 What policies did we introduce in the past two years?
02:38:03.000 So she was asking questions.
02:38:05.000 Every question she asked, she wanted me to be a lot more specific.
02:38:09.000 And some of these things...
02:38:11.000 Have something that is very specific.
02:38:13.000 Some are directional right now because we have to prioritize the direction.
02:38:19.000 And I talked about, we've decided that physical safety is going to be a priority for us.
02:38:24.000 And to us, that means being a whole lot more proactive around things like doxing.
02:38:28.000 So two suggestions, I guess.
02:38:30.000 I'm not going to imply that you have unlimited funding, but we did mention the peer review.
02:38:34.000 We don't.
02:38:35.000 Right, right.
02:38:35.000 And you mentioned earlier layoffs and retraction.
02:38:39.000 Peer review, which we mentioned, but have you just considered opening an office, even a small one, for trust and safety in an area that's not predominantly blue so that at least you can have some pushback?
02:38:50.000 What does Learn to Code mean?
02:38:51.000 And then they could tell you.
02:38:52.000 Absolutely.
02:38:53.000 So that's great feedback.
02:38:54.000 And just so you know, the trust and safety team is also a global team, and the enforcement team is a global team.
02:38:59.000 So it's not like people from California who are looking at everything, making decisions.
02:39:03.000 They're global.
02:39:05.000 Now, I hear your point about who trains them and the materials they have and all that, and we have to think about that.
02:39:10.000 And that's one thing that Jack has really been pushing us to think about is how do we decentralize our workforce?
02:39:15.000 Out of San Francisco.
02:39:16.000 Out of San Francisco in particular.
02:39:18.000 So this is something he's very focused on.
02:39:20.000 What about publishing evidence of wrongdoing in a banning?
02:39:24.000 So when people say, you know, what did Alex Jones really do?
02:39:26.000 Maybe a lot of people didn't realize what you saw.
02:39:29.000 And again, it's an issue of trust.
02:39:30.000 Yeah, I love this, Tim.
02:39:32.000 I'm a lawyer, so by training.
02:39:34.000 We're thinking of doing something we call case studies, but essentially, like, this is our case law.
02:39:38.000 This is what we use.
02:39:39.000 And so high-profile cases, cases people ask us about, like to actually publish this so that we can go through, you know, tweet by tweet just like this.
02:39:48.000 Because I think a lot of people just don't understand and they don't believe us when we're saying these things.
02:39:53.000 So to put that out there so people can see.
02:39:55.000 And again, they may disagree with the calls that we're making, but we at least want them to see why we're making these calls.
02:40:00.000 I think.
02:40:01.000 And that I do want to do.
02:40:02.000 I want to at least start that by the end of this year.
02:40:05.000 So I think ultimately my main criticism stands, and I don't see a solution to it, in that Twitter is unelected and unaccountable, as far as I'm concerned, when it comes to public discourse.
02:40:15.000 You have rules that are very clearly at odds as we discussed.
02:40:19.000 I don't see a solution to that, and I think, in my opinion, we can have this kind of – like, we've toned things down.
02:40:24.000 We've had some interesting conversations but ultimately unless you're willing to allow people to just speak entirely freely – You are – we have an unelected group with a near monopoly on public discourse in many capacities and I understand it's not everything.
02:40:37.000 Reddit is big too and it's – what I see is you are going to dictate policy whether you realize it or not and that's going to terrify people and it's going to make violence happen.
02:40:46.000 It's going to make things worse.
02:40:50.000 I hate bringing up this example on the rule for misgendering because I'm actually – I understand it and I can agree with it to a certain extent.
02:40:57.000 I have nothing but respect for the trans community, but I also recognize we've seen an escalation in street violence.
02:41:03.000 We see a continually disenfranchised large faction of individuals in this country.
02:41:08.000 We then see only one of those factions banned.
02:41:10.000 We then see a massive multinational billion-dollar corporation with private and foreign investors.
02:41:15.000 And it looks to me like if foreign governments are trying to manipulate us, I don't see a direct solution to that problem, that you do have political views.
02:41:24.000 You do enforce them.
02:41:26.000 And that means that Americans who are abiding by American rule are being excised from political discourse, and that's the future.
02:41:30.000 That's it.
02:41:32.000 We do have views on the approach.
02:41:34.000 And again, we ground this in creating as much opportunity as possible for the largest number of people.
02:41:42.000 That's where it starts.
02:41:44.000 And where we are today will certainly evolve.
02:41:47.000 But that is what we are trying to base our rules and judgments on.
02:41:51.000 And I get that that's an ideology.
02:41:53.000 I completely understand it.
02:41:55.000 But we also have to...
02:41:59.000 We also have to be free to experiment with solutions and experiment with evolving policy and putting something out there that might look right at the time and evolving.
02:42:12.000 I'm not saying this is it, but we look to research, we look to our experience and data on the platform, and we make a call.
02:42:22.000 And if we get it wrong, we're going to admit it and we're going to evolve it.
02:42:28.000 But I guess, do you understand my point?
02:42:31.000 I understand the point.
02:42:32.000 That there are American citizens abiding by the law who have a right to speak and be involved in public discourse that you have decided aren't allowed to.
02:42:37.000 Yeah, and I think we've discussed, like, we don't see that as a win.
02:42:44.000 We see that as not promoting health, ultimately, over time.
02:42:47.000 But it's ultimately, what is your priority?
02:42:50.000 Do you have it prioritized in terms of what you guys would like to change?
02:42:54.000 Yeah.
02:42:55.000 I think Jack has said it a couple times, but the first thing we're going to do is prioritize people's physical safety because that's got to be understood.
02:43:02.000 You already have done that pretty much, right?
02:43:04.000 No.
02:43:04.000 You do that more?
02:43:05.000 We've prioritized it.
02:43:06.000 Okay.
02:43:06.000 We're doing the work.
02:43:08.000 I don't think companies like ours make the link enough between online and offline ramifications.
02:43:14.000 Right.
02:43:14.000 What's the main criticism?
02:43:16.000 What's the main criticism, you guys?
02:43:17.000 Is it censorship that you guys experience?
02:43:19.000 Is it censorship?
02:43:20.000 Is it banning?
02:43:21.000 It depends on who you ask.
02:43:23.000 Every single person has a different criticism, so I don't think there's a universal opinion.
02:43:28.000 I mean, you just painted the picture between the left and the spectrum is asking for more, and the right is asking for less.
02:43:36.000 That's very simplified just for this country, but...
02:43:40.000 At a high level, yeah.
02:43:41.000 That's consistent.
02:43:42.000 I mean, my opinion would be as much as I don't like a lot of what people say about me, what they do, the rules you've enforced on Twitter have done nothing to stop harassment towards me or anyone else.
02:43:52.000 I swear to God, my Twitter, I mean, my Reddit is probably 50 messages from various far-left and left-wing subreddits lying about me, calling me horrible names, quote-tweeting me, and these people are blocked.
02:44:04.000 Right?
02:44:04.000 And I never used to block people because I thought it was silly because they can get around it anyway, but I decided to at one point because out of sight, out of mind.
02:44:11.000 If they see my tweets less, they'll probably interact with me less, but they do this, and they lie about what I believe, they lie about what I stand for, and they're trying to destroy everything about me, and they do this to other people.
02:44:20.000 I recognize that.
02:44:21.000 So ultimately I say, well, what can you do?
02:44:22.000 It's going to happen on one of these platforms.
02:44:24.000 The internet is a thing.
02:44:25.000 As they say on the internet, welcome to the internet.
02:44:27.000 So to me, I see Twitter trying to enforce all these rules to maximize good, and all you end up doing is stripping people from the platform, putting them in dark corners of the web where they get worse, and then you don't actually solve the harassment problem.
02:44:39.000 Reddit is hardly a dark corner of the web, right?
02:44:41.000 Right.
02:44:41.000 No, I'm not talking – but there are dark corners of Reddit.
02:44:44.000 There are alternatives.
02:44:45.000 I mean the internet isn't going to go away and people have found alternatives.
02:44:49.000 And here's the other thing that's really disconcerting.
02:44:51.000 We can see a trend among all these different big Silicon Valley tech companies.
02:44:57.000 They hold a similar view to you guys.
02:44:59.000 They ban similar ideology and they're creating a parallel society.
02:45:03.000 You've got alternative social networks popping up that are taking the dregs of the mainstream and giving them a place to flourish, grow, make money.
02:45:10.000 Now we're seeing people be banned from MasterCard, banned from PayPal, even banned from Chase Bank because they all hold the same similar ideology to you.
02:45:18.000 In some capacities, I don't know exactly why Chase does it.
02:45:22.000 I assume it's because you'll get some activists who will lie.
02:45:25.000 Explain what you're talking about.
02:45:26.000 There have been a series of individuals banned from Chase Bank.
02:45:29.000 Their accounts have been?
02:45:48.000 You then have Joe Biggs, who previously worked with InfoWars.
02:45:51.000 I don't know much about this.
02:45:52.000 I didn't follow up.
02:45:53.000 But he tweeted out, Chase has shuttered my account.
02:45:55.000 And then you have the new chairman of the Proud Boys, Enrique – I forgot his last name – Tario or something.
02:46:01.000 And so – Sounds really white.
02:46:25.000 Oh, no, he's Afro-Cuban.
02:46:26.000 Now we're seeing people who have, like, you mentioned Westboro Baptist Church, and she's been deradicalized by being on the platform.
02:46:33.000 But now we have people who are being radicalized by being pushed into the dark corners, and they're building, and they're growing.
02:46:39.000 And they're growing because there's this idea that you can control this and you can't.
02:46:44.000 You know, I think you mentioned earlier that...
02:46:47.000 There are studies showing, and also counter-studies, but people exposed to each other is better.
02:46:51.000 I found something really interesting, because – whether or not people want to believe this – most, if not all, of my friends are on the left, and some of them are even, like, socialists, and they're absolutely terrified to talk, because they know they'll get attacked by the people who call for censorship and try to get them fired.
02:47:07.000 And when I talked to them, I was talking to a friend of mine in LA, and she said, is there a reason to vote for Trump?
02:47:14.000 And I explained a very simple thing about Trump supporters.
02:47:16.000 This was back in 2016. I said, oh, well, you've got a lot of people who are concerned about the free trade agreements sending jobs overseas.
02:47:22.000 So they don't know much about Trump, but they're going to vote for him because he supported that.
02:47:25.000 And so did Bernie.
02:47:26.000 And then the response is, really?
02:47:28.000 I didn't know that.
02:47:29.000 And so you have this ever-expanding narrative that Trump supporters are Nazis and the MAGA head is the KKK hood.
02:47:35.000 And a lot of this rhetoric emerges on Twitter.
02:47:37.000 But when a lot of these people start getting excised, then you can't actually meet these people and see that they're actually people, and they may be mean.
02:47:44.000 They may be mean people.
02:47:45.000 They may be awful people, but they're still people, and even if they have bad opinions, sometimes you actually, I think in most instances, you find they're regular people.
02:47:53.000 Well, there's a part of the problem of calling for censorship and banning people, in that it is sometimes effective: people don't want to be thought of as being racist, or in support of racism, or in support of nationalism, or any of these horrible things. So you feel like if you support these bannings, you support positive discourse and a good society and all these different things.
02:48:15.000 What you don't realize is what you're saying, is that this does create these dark corners of the web and these other social media platforms evolve and have far...
02:48:24.000 I mean, when you're talking about bubbles and about these groupthink bubbles, the worst kind of groupthink bubbles is a bunch of hateful people that get together and decide they've been persecuted.
02:48:37.000 Instead of, like we were talking about with Megan Phelps, having an opportunity to maybe reshape their views by having discourse with people.
02:48:45.000 Who choose to or not choose to engage with them.
02:49:05.000 [inaudible]
02:49:30.000 And it develops hate for the opposing viewpoint.
02:49:33.000 You start hating people that are progressive because these are the people that, like, you and I have talked about the Data & Society report that labeled us as alt-right adjacent or whatever.
02:49:41.000 And now more fake news coming out about it, right?
02:49:42.000 Well, it's ridiculous.
02:49:43.000 They connected us because you and I have talked to people that are on the right or far right, as if somehow or another we were secretly far right and there's this influence network of people together.
02:49:55.000 Well, it's a schizophrenic connection.
02:49:58.000 It's like one of those weird things where people draw a circle.
02:50:01.000 Oh, you talk to this guy and this guy talk to that guy.
02:50:03.000 Therefore, you know that guy.
02:50:04.000 So here's an expanded part of this problem.
02:50:07.000 So you're probably not familiar, but a group called Data & Society published a report – entirely fake – labeling 81 YouTube channels alt-right adjacent or whatever they want to call it.
02:50:16.000 The channels included Joe Rogan and me.
02:50:17.000 It's fake.
02:50:18.000 But you know what?
02:50:19.000 A couple dozen news outlets wrote about it as if it was fact.
02:50:22.000 You believe the Proud Boys were labeled by the FBI as extremists when they actually weren't.
02:50:26.000 It was a sheriff's report from someone not affiliated with the FBI, but they are activists within media who have an agenda, and we saw this with Learn to Code.
02:50:34.000 It was an NBC reporter who very clearly is left-wing identitarian.
02:50:39.000 [inaudible]
02:50:51.000 The snowflake won't blame itself for the avalanche.
02:50:53.000 You guys are doing what you think is right.
02:50:55.000 So is Facebook, YouTube, Patreon, all these platforms.
02:50:58.000 And it's all going to result in one thing.
02:51:00.000 It's going to result in groups like Patriot Prayer and the Proud Boys saying, I refuse to back down, showing up.
02:51:05.000 It's going to result in Antifa showing up.
02:51:06.000 It's going to result in more extremism.
02:51:08.000 You've got an Antifa account that published home addresses and phone numbers and hasn't been banned.
02:51:12.000 That's going to further show conservatives that the policing is asymmetrical, whether it is or isn't.
02:51:19.000 And I think the only outcome to this on the current course of action is like insurgency.
02:51:24.000 We've seen people planting bombs in Houston, try to blow up a statue.
02:51:28.000 We saw someone plant a bomb at a police station in Eugene, Oregon.
02:51:30.000 Two weeks before that, a guy wearing a smash-the-patriarchy shirt showed up with a gun and fired two rounds at a cop.
02:51:35.000 So, you know, so that happens.
02:51:37.000 Then a week later, they say you killed our comrade.
02:51:39.000 Then a week later, a bomb is planted.
02:51:40.000 I don't believe it's a coincidence.
02:51:41.000 Maybe it is.
02:51:43.000 I lived in New York.
02:51:44.000 I got out.
02:51:44.000 Too many people knew who I was.
02:51:46.000 And there was people sending me emails with threats.
02:51:49.000 And I'm like, this is escalating.
02:51:50.000 You know, we've seen for the past years with Trump, we've seen Breitbart has a list of 640 instances of Trump supporters being physically attacked or harassed in some way.
02:51:59.000 There was a story the other day about an 81 year old man who was attacked.
02:52:03.000 [inaudible]
02:52:14.000 Yeah, I mean, I don't think it's going to be the responsibility of any one company.
02:52:19.000 We have a desire, let me be clear, that we have a desire to promote health in public conversation.
02:52:24.000 And as we've said, like, I don't think over time a permanent ban promotes health.
02:52:32.000 I don't, but we have to get there.
02:52:37.000 And there are exceptions to the rule, of course, but like...
02:52:41.000 We just have work to do.
02:52:43.000 The benefit of conversations like this is we're talking about it more, but people will naturally call us out.
02:52:49.000 You've got to show it as well.
02:52:51.000 Do you fear regulation?
02:52:53.000 I don't fear regulation if we're talking about regulation in the...
02:52:58.000 Government intervention.
02:52:58.000 In the job of...
02:53:00.000 A regulator's job is to protect the individual and make sure that they level the playing field and they're not pushed by any particular special interests.
02:53:09.000 Like, companies like ours who might, you know...
02:53:17.000 I agree that we should have an agency that can help us protect the individual and level the playing field.
02:53:29.000 So I think oftentimes companies see themselves as reacting to regulation.
02:53:35.000 I think we need to take more of an education role.
02:53:38.000 So I don't fear it.
02:53:39.000 I want to make sure that we're educating regulators on what's possible, what we're seeing, and where we could go.
02:53:44.000 When you say educating regulators, that's initiating a regulation.
02:53:49.000 Not necessarily.
02:53:50.000 By educating regulators, who are these regulators?
02:53:54.000 These are folks who might be tasked with coming up with a proposal for particular legislation or laws to present to legislators.
02:54:05.000 So it's making sure that we are educating to the best of our ability.
02:54:10.000 This is what we are.
02:54:11.000 This is what we see.
02:54:12.000 This is where technology is going.
02:54:14.000 Do you think you can hold off regulation, though?
02:54:16.000 Do you think that by these approaches and by being proactive and by taking a stand and perhaps offering up a road to redemption to these people and making clear distinctions between what you're allowing and what you're not allowing, you can hold off regulation – or do you disagree with what he's saying about regulation?
02:54:32.000 No, I don't believe that should be our goal, is to hold off regulation.
02:54:34.000 I believe we should participate like any other citizen, whether it be a corporate citizen or an individual citizen, in helping to guide the right regulation.
02:54:45.000 So, are you familiar, and I could be wrong on this because it's been like 15 years since I've done this.
02:54:50.000 Are you familiar with the Clean Water Restoration Act at all?
02:54:52.000 I don't expect you to be.
02:54:53.000 It's a very specific thing.
02:54:54.000 So, it was at some point in like the early 70s.
02:54:58.000 There was a river in Ohio.
02:54:59.000 And again, I could be wrong.
02:55:00.000 It's been 15 years.
02:55:00.000 I used to work for an environmental organization.
02:55:02.000 It started on fire.
02:55:04.000 And what was typically told to us was that all of these different companies said we're doing the right thing.
02:55:09.000 But as I mentioned, the snowflake doesn't blame itself.
02:55:12.000 So over time, the river was so polluted it became sludge and lit on fire.
02:55:15.000 And so someone said, if all of these companies think they're doing the right thing, and they've all just contributed to this nightmare,
02:55:22.000 We need to tell them blanket regulation.
02:55:24.000 And so what I see with these companies like banking institutions, public discourse platforms, video distribution, I actually – I'm really worried about what regulation will look like because I think the government is going to screw everything up.
02:55:37.000 But I think there's going to be a recoil of – first, I think the Republicans – because I watched the testimony you had in Congress and I thought they had no idea what they were talking about nor did they care.
02:55:46.000 There was like a couple people who made good points, but for the most part, they were like, I don't know, whatever.
02:55:50.000 And they asked about Russia and stuff.
02:55:51.000 So they have no idea what's going on, but there will come a time when, you know, for instance, one of the great things they brought up was that by default, when someone in D.C. signs up, they see way more Democrats than Republicans.
02:56:01.000 Right?
02:56:01.000 You remember that when you testified?
02:56:03.000 Yeah.
02:56:31.000 So then it's going to escalate for me.
02:56:33.000 It's not going to stop with these conversations.
02:56:34.000 And so we've been having a lot of talks about this, particularly around algorithms.
02:56:38.000 And one of the things that we're really focused on is not just fairness and outcomes, but also explainability of algorithms.
02:56:43.000 And I know, Jack, you love this stuff, so I don't know if you want to talk a little bit about our work there.
02:56:47.000 Yeah, I mean, so there's two fields of research within artificial intelligence that are rather new, but I think really impactful for our industry.
02:56:54.000 One is fairness in ML. Fairness in what?
02:56:58.000 Fairness in machine learning and deep learning.
02:57:01.000 So looking at everything from what data set is fed to an algorithm, so like the training data set.
02:57:10.000 [inaudible]
02:57:30.000 The reality is a lot of this human judgment is moving to algorithms.
02:57:34.000 And the second issue with it moving to algorithms is algorithms today can't necessarily explain the decision-making criteria that they use.
02:57:42.000 So they can't explain in the way that you make a decision, you explain why you make that decision.
02:57:47.000 Algorithms today are not being programmed in such a way that they can even explain that.
02:57:50.000 You may wear an Apple Watch, for instance.
02:57:53.000 It might tell you to stand every now and then.
02:57:56.000 Right now, those algorithms can't explain why they're doing that.
02:58:00.000 That's a bad example because it does it every 50 minutes, but as we offload more and more of these decisions, both internally and also individually to watches and to cars and whatnot, there is no ability right now for that algorithm to actually go through and list out the criteria used to make that decision.
02:58:21.000 So this is another area that we'd like to get really good at if we want to continue to be transparent around our actions because a lot of these things are just black boxes and they're being built in that way because there's been no research into like, well, how do we get these algorithms to explain what their decision is?
02:58:38.000 That question hasn't been asked.
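One way to see the contrast Jack draws: with a simple linear model, every feature's contribution to a single decision can be listed out, which is exactly the accounting a deep black-box model can't give today. A toy sketch, with invented weights and feature names:

```python
# Explainability in miniature: a hand-weighted logistic scorer whose decision
# can be decomposed feature by feature. All weights and names are invented.
import math

WEIGHTS = {"reports_from_strangers": 1.2, "reply_rate": 0.8, "account_age_days": -0.01}
BIAS = -2.0

def score_and_explain(features):
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    # Sort by how strongly each feature pushed the decision.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return probability, reasons

prob, reasons = score_and_explain(
    {"reports_from_strangers": 3.0, "reply_rate": 2.0, "account_age_days": 30.0})
print(f"flag probability: {prob:.2f}")       # ~0.95 under these made-up weights
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")  # largest drivers listed first
```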
02:58:39.000 My fear is it's technology that you need to build, but the public discourse is there.
02:58:45.000 We know that foreign governments are doing this.
02:58:47.000 We know that democratic operatives in Alabama did this.
02:58:49.000 And so I imagine that with Donald Trump – he talked about an executive order for free speech on college campuses.
02:58:56.000 So the chattering is here.
02:58:58.000 Someone is going to take a sledgehammer to Twitter, to Facebook, to YouTube and just be like – Not understanding the technology behind it, not willing to give you the benefit of the doubt, and just saying, I don't care why you're doing it, we are mad.
02:59:10.000 You know what I mean?
02:59:11.000 Pass some bills and then it's over.
02:59:13.000 Again, clarifying, I think you guys are biased and I think what you're doing is dangerous, but I think that doesn't matter.
02:59:21.000 It doesn't matter what you think is right.
02:59:22.000 It matters that all of these companies are doing similar things and it's already terrifying people.
02:59:27.000 I mean, look, when I saw somebody got banned from their bank account, that's terrifying.
02:59:31.000 And PayPal's done this for a long time.
02:59:33.000 That seems like more egregious than being banned from any social media platform.
02:59:39.000 That seems to me to be worthy of a boycott.
02:59:44.000 Patreon issued a statement about a man, I believe his name is Robert Spencer, and they said MasterCard instructed us to ban him.
02:59:51.000 And you know what?
02:59:51.000 I'll say this too.
02:59:52.000 Me mentioning Chase, PayPal, MasterCard terrifies me.
02:59:56.000 I'm on the Joe Rogan podcast right now calling out these big companies in defiance.
03:00:00.000 And we've already seen...
03:00:01.000 I would like to know all the specifics of why they chose to do that.
03:00:04.000 And I would hope that they would release some sort of a statement explaining why they chose to do that.
03:00:07.000 Maybe there's something we don't know.
03:00:08.000 There was a reporter, and I could be getting this wrong because I didn't follow it very much, with Big League Politics, who said that after reporting on PayPal negatively, they banned him.
03:00:19.000 That's terrifying.
03:00:20.000 Just reporting on it in what way?
03:00:22.000 Like reporting on the Sargon of Akkad issue?
03:00:24.000 No, apparently he's a journalist.
03:00:25.000 He wrote about something bad PayPal did.
03:00:27.000 Big League politics is conservative.
03:00:28.000 And so all of a sudden he got a notification that they can't tell him why, but he's gone.
03:00:32.000 So I see these big tech monopolies.
03:00:34.000 I see YouTube, Facebook, Twitter.
03:00:35.000 I see PayPal, MasterCard.
03:00:36.000 And they're doing it.
03:00:38.000 And they all say they're doing the right thing.
03:00:39.000 But all of these little things they're doing are adding up to something nightmarish.
03:00:43.000 And some legislator is going to show up in a matter of time with a sledgehammer and just, he's going to whack your algorithm.
03:00:48.000 Well, it's really the same stupid logic I was talking about, where Gavin was saying punch people.
03:00:53.000 When you punch people, it doesn't end there.
03:00:55.000 Oh, yeah, they punch back.
03:00:56.000 Ban them.
03:00:56.000 It doesn't end there.
03:00:58.000 It doesn't end there.
03:00:58.000 You have to realize also...
03:01:01.000 Twitter is how old now?
03:01:03.000 11 years old?
03:01:03.000 12 years old?
03:01:04.000 13 years old?
03:01:04.000 13 years old.
03:01:06.000 Well, 13 years from now, what are the odds that there's not going to be something else just like it?
03:01:09.000 Pretty slim.
03:01:12.000 Depends on how we do.
03:01:15.000 Let's talk about the incestuous relationship that a lot of these journalists have in defending the policies you guys push.
03:01:21.000 A study was done, I talked about this last time, where they found 5% of the posts on Gab were hate speech, compared to Twitter's like 2.4%.
03:01:30.000 So it's a marginal increase, yet Gab is called the White Supremacy Network.
03:01:34.000 Of course.
03:01:34.000 You go on it, and yeah, absolutely, it exists.
03:01:37.000 They say that synagogue shooter, oh, he was a Gab user.
03:01:39.000 He was a Twitter user, too.
03:01:40.000 He posted on Twitter all the time.
03:01:41.000 So why the media is targeting, it's such a crazy, nightmarish reality.
03:01:46.000 It's a simplistic, reductive narrative.
03:01:48.000 When The Guardian, I believe it was the Daily Mail, called Count Dankula a Nazi hate criminal.
03:01:52.000 Yeah, I saw that.
03:01:53.000 Dude literally made a joke on YouTube, and he was arrested.
03:01:56.000 I thank God every day we have the First Amendment in this country.
03:01:59.000 Well, there's a cover of a newspaper that was called, because he got a new job somewhere.
03:02:02.000 He got fired.
03:02:03.000 For that?
03:02:04.000 He got kicked off the show.
03:02:05.000 Wow.
03:02:06.000 Yeah, so you have...
03:02:07.000 Because of trying to get his pug to do the salute.
03:02:08.000 Look, let me ask you another thing.
03:02:09.000 Do you guys take the advice of the Southern Poverty Law Center?
03:02:14.000 Do we take the advice of them?
03:02:15.000 So, it's widely circulated.
03:02:18.000 The SPLC lobbies various social platforms to ban certain people.
03:02:22.000 It's been reported they advised YouTube, as is the Anti-Defamation League.
03:02:25.000 Do you use them in your decision-making process, rule development?
03:02:29.000 We're very aware of flaws with certain of their research, and we're very careful about who we take advice from.
03:02:36.000 But do you take advice from them?
03:02:37.000 I think that they have certainly reached out to our team members, but there's certainly nothing definitive that we take from them.
03:02:42.000 We don't take action.
03:02:43.000 You never take an action based on information received from them?
03:02:46.000 No.
03:02:47.000 The reason I bring them up specifically is that they're cited often in the United States.
03:02:51.000 There's other groups like Hope Not Hate in the UK, and now they're all going to point their figurative guns at me for saying this.
03:02:56.000 But the Southern Poverty Law Center, Sam Harris.
03:03:15.000 Sam Harris was...
03:03:16.000 Didn't they lose a big lawsuit around this as well?
03:03:20.000 They settled.
03:03:21.000 So again, not to imply that you guys do use it, but I asked specifically because it's been reported other organizations do.
03:03:27.000 So we have activist organizations.
03:03:29.000 We have journalists that I can attest are absolutely activists because I've worked for Vice.
03:03:34.000 I worked for Fusion.
03:03:35.000 I was told...
03:03:37.000 Implicitly, not explicitly, to lie, to side with the audience, as it were.
03:03:41.000 I've seen the narratives they push, and I've had conversations with people that I'm going to keep relatively off the record.
03:03:48.000 Journalists who are terrified because they said the narrative is real.
03:03:52.000 One journalist in particular said that he had evidence of, essentially, he had reason to believe there was wrongdoing, but if he talks about it, he could lose his job.
03:04:02.000 And there was a journalist who reported to me that Data & Society admitted their report was incorrect.
03:04:08.000 And now you've got organizations lobbying for terminating Joe and I because of this stuff.
03:04:13.000 So this narrative persists.
03:04:15.000 Then you see all the actions I mentioned before and all the organizations saying we're doing the right thing.
03:04:20.000 And I got to say, like, we're living in a – I mean, I feel like we're looking at the doorway to the nightmare dystopia of – I just want to clarify.
03:04:28.000 I don't know if we're going around saying we're necessarily doing the right thing.
03:04:33.000 We're saying why we're doing what we're doing.
03:04:35.000 Right, right.
03:04:35.000 That's what we need to get better at.
03:04:37.000 And I don't want to hide behind what we believe is the right thing.
03:04:43.000 We have to clearly rationalize why we're making the decision we're making and more of that.
03:04:49.000 That, to me, is the prevention from this snowflake avalanche metaphor.
03:04:54.000 Right.
03:04:54.000 Well, but I think it's just obvious to point out – again, I said this before, we can have the calm conversation and I can understand you.
03:05:01.000 But from where I'm sitting, you hold a vastly different ideology than I do and you have substantially more power in controlling my government.
03:05:08.000 That terrifies me.
03:05:09.000 And what makes it worse is that, as it was reported, a Saudi prince owns a portion of that company.
03:05:15.000 So I'm sitting here like just a little American – I can't do anything to stop it.
03:05:18.000 I'm just watching this unaccountable machine churn away, and you're just one snowflake in that avalanche.
03:05:23.000 All these other companies are as well.
03:05:25.000 And I'm like, well, here we go.
03:05:26.000 This is going to be a ride.
03:05:26.000 The avalanche has started.
03:05:27.000 As Vijaya said, the Saudi prince doesn't have any influence.
03:05:30.000 But am I supposed to trust that?
03:05:32.000 That's the issue, right?
03:05:33.000 I'm not trying to insinuate he's showing up to your meetings and telling you what to do, but when someone dumps a billion dollars in your company...
03:05:38.000 I think it's silly to imply that they don't at least have some influence, but regardless.
03:05:42.000 And unlike the internet, within a company like ours, you don't necessarily see the protocol, you don't see the processes, and that is an area where we can do a lot better.
03:05:51.000 I guess, you know, beat it over the head a million times, beat the dead horse.
03:05:56.000 I think ultimately, yeah, I get what you're doing.
03:05:58.000 I think it's wrong.
03:05:59.000 I think it's terrifying.
03:06:00.000 And I think we're looking – we're on the avalanche already.
03:06:03.000 It's happened.
03:06:03.000 And we're heading down to this nightmare scenario of a future where it terrifies me when I see people who claim to be supporting liberal ideology burning signs that say free speech, threatening violence against other people.
03:06:13.000 You have these journalists who do the same thing.
03:06:15.000 They accuse everybody of being a Nazi, everybody of being a fascist, Joe Rogan for Christ.
03:06:19.000 That's a giant issue.
03:06:19.000 And you're like a socialist as far as I know.
03:06:22.000 You're like a UBI proponent.
03:06:24.000 I'm a socialist.
03:06:24.000 Well, I wouldn't necessarily say socialist.
03:06:25.000 I'm very liberal.
03:06:26.000 I'm being facetious.
03:06:27.000 I'm very liberal, except for Second Amendment.
03:06:29.000 That's probably the only thing that I disagree with a lot of liberals on.
03:06:32.000 And then you see what the media says about everybody.
03:06:34.000 You see how they call Jordan Peterson alt-right all day and night.
03:06:37.000 The alt-right hates him.
03:06:37.000 And this narrative is used to strip people of their income, to remove them from public discourse.
03:06:42.000 Well, it's foolish because ultimately, upon examination, like you were saying, sunlight is the best disinfectant.
03:06:49.000 Absolutely.
03:06:50.000 And upon examination, you realize that this is not true at all, and that these people look foolish, like the Data & Society article.
03:06:57.000 No, no.
03:06:57.000 All these organizations publish that as fact without looking at any data.
03:07:00.000 Maybe some did, but anybody...
03:07:02.000 Some, dozens.
03:07:03.000 But, yeah, maybe...
03:07:04.000 And no, no, no, they're still citing it.
03:07:05.000 We're talking about millions and millions of people.
03:07:06.000 Who are these people that are still citing it?
03:07:09.000 Guardian, Fast Company...
03:07:09.000 Well, clang the bell and start yelling, shame!
03:07:12.000 Because that's foolish.
03:07:14.000 We now have – and this makes things muddier is we now have a guy who's claiming that he didn't – you're going to love this.
03:07:21.000 There's a guy claiming that the 81 accounts listed on this thing as alt-right are no longer being recommended on YouTube.
03:07:29.000 I looked at the statistics for various people on this list, because first of all, my channel is doing great.
03:07:34.000 My recommendations are way up, as are yours.
03:07:36.000 A lot of people are growing, and I did a comparison like, subscribers are up, views are up, what's this guy claiming?
03:07:42.000 And apparently, I think?
03:08:09.000 I don't understand what you're saying.
03:08:11.000 So basically, there's a guy claiming that because of Data & Society, we have been stripped of recommendations on YouTube.
03:08:16.000 Well, I'll tell you one thing that is true, though.
03:08:18.000 We don't trend.
03:08:19.000 Like, Alex Jones was saying, like, the video we did got 9 million views, but it's not trending.
03:08:25.000 And I said, well, it's because my videos never trend.
03:08:28.000 They just don't trend.
03:08:29.000 But I think it's probably because of the language that's used.
03:08:33.000 I think that's part of the issue.
03:08:36.000 It's the subject matter and language.
03:08:37.000 I think they have...
03:08:39.000 They have a bias against swearing and extreme topics and subjects.
03:08:44.000 I don't think that's true because you've had late night TV hosts talk about really messed up things.
03:08:51.000 They don't swear, obviously.
03:08:51.000 Yeah, but they don't swear, though.
03:08:53.000 It's not a matter of...
03:08:54.000 And what they talk about, whether that's messed up in comparison to what we talk about, it's probably pretty different.
03:09:00.000 You know what, man?
03:09:01.000 I'm fairly resigned to this future happening, no matter what we do about it.
03:09:05.000 And so I bought a van, and I'm going to convert it to a...
03:09:08.000 Oh, Jesus Christ.
03:09:08.000 Well, I'm converting it to a workstation, right?
03:09:11.000 You're going to be a prepper, bro.
03:09:12.000 Oh, no...
03:09:14.000 First of all, I will say it's hilarious to me that people have Band-Aids
03:09:17.000 they never use, but they don't store at least one emergency food supply.
03:09:21.000 It's like, you never use Band-Aids.
03:09:23.000 Why do you have them?
03:09:24.000 But, no, I do.
03:09:26.000 I see this every day.
03:09:27.000 It was a couple years ago.
03:09:28.000 I said, wow, I see what's happening on social media.
03:09:30.000 We're going to see violence.
03:09:30.000 Boom, violence happened.
03:09:31.000 I said, oh, it's going to escalate.
03:09:32.000 Someone's going to get killed.
03:09:33.000 Boom, Charlottesville happened.
03:09:34.000 And it's like, I've...
03:09:36.000 There have been statements from foreign security advisors, international security experts saying we're facing down high probability of civil war.
03:09:42.000 And I know it sounds crazy.
03:09:44.000 It's not going to look like what you think it looks like.
03:09:46.000 It may not be as extreme as it was in the 1800s.
03:09:48.000 But I think it was in the Atlantic where they surveyed something like 10 different international security experts, and based on what the platforms are doing and how people are responding, one guy said it was like a 90% chance, but the average was really high.
03:10:02.000 Well, let's look outside of the idea of physical war and let's look at the war of information.
03:10:07.000 We're talking about what's happening with foreign entities invading social media platforms and trying to influence our elections and our democracy.
03:10:16.000 That is a war of information.
03:10:18.000 That war is already going on.
03:10:20.000 If you're looking at something like Data & Society, that's sort of an act of war in that regard.
03:10:25.000 It's an information war tactic.
03:10:27.000 An attempt to lie to people to strip out their ideological opponents.
03:10:31.000 And also, the woman who wrote that said it's been proven over and over again that deplatforming is an effective way to silence people.
03:10:38.000 And then called for us to be banned.
03:10:39.000 Yeah, that's kind of hilarious.
03:10:40.000 I don't think she was saying that we should be banned.
03:10:42.000 I don't think she said that I should be banned.
03:10:43.000 She said something to the effect of YouTube has to take action to prevent this from, you know...
03:10:47.000 Well, you know, when people see someone saying things that they don't agree with, it's very important for people to understand where silencing people leads to.
03:10:55.000 And I don't think they do.
03:10:57.000 I think people have these very simplistic ideologies and these very narrow-minded perceptions of what is good and what is wrong.
03:11:04.000 And I think, and I've been saying this over and over again, but I think it's one of the most important things to state.
03:11:10.000 People need to learn to be reasonable.
03:11:13.000 They need to learn to be reasonable and engage in civil discourse.
03:11:16.000 Civil discourse is extremely important.
03:11:18.000 And think over the long term.
03:11:19.000 Yes, think over the long term and understand it.
03:11:21.000 You're playing chess.
03:11:24.000 We did three hours and 30 minutes.
03:11:27.000 Nobody had to pee.
03:11:28.000 Amazing.
03:11:28.000 I'm proud of all of you.
03:11:29.000 I don't know if nobody had to pee.
03:11:31.000 We already held it in.
03:11:32.000 We did start a little late.
03:11:34.000 I think we're at like 3:15.
03:11:35.000 I mean, I guess the last thing I could say is I don't think...
03:11:39.000 I think we had a good conversation.
03:11:40.000 I think we did too.
03:11:41.000 Honestly, I don't think we've solved anything.
03:11:42.000 I don't think there's been any...
03:11:43.000 Do you think we could do this again in like six months and see where you guys are at? What I think is important is the road to redemption.
03:11:49.000 I think that would open up a lot of doors for a lot of people to appreciate you.
03:11:53.000 We're going to need more than six months.
03:11:54.000 Jesus Christ.
03:11:55.000 Why don't you let me do it?
03:11:56.000 But here's the scary thing.
03:11:58.000 The information travels faster than you can, right?
03:12:01.000 And that's the point I was making.
03:12:03.000 Our culture is going to evolve faster than you can catch up to that problem because there's a problem.
03:12:08.000 And I don't – technology took a big leap.
03:12:11.000 Twitter existed.
03:12:12.000 The internet existed.
03:12:12.000 Now we're all talking so quickly.
03:12:14.000 You can't actually solve the problem before the people get outraged by it.
03:12:17.000 No, I get it.
03:12:19.000 I mean, there was an early phrase on the internet, from some of the earliest internet engineers and designers, which is: code is law.
03:12:28.000 And a lot of what companies like ours, and startups, and individuals contributing to the internet build will change parts of society, some for the positive and some for the negative.
03:12:45.000 And the most...
03:12:46.000 I think the most important thing that we need to do is to, as we just said, shine a bunch of light on it, make sure that people know where we stand and where we're trying to go and what bridges we might need to build from our current state to the future state.
03:13:02.000 And be open about the fact that we're not going to, and this is to your other point, we're not going to get to a perfect answer here.
03:13:12.000 It's just going to be steps and steps and steps and steps.
03:13:16.000 What we need to build is agility.
03:13:18.000 What we need to build is an ability to experiment very, very quickly and take in all these feedback loops that we get, some feedback loops like this, some within the numbers itself, and integrate them much faster.
03:13:31.000 What's wrong with the jury system on Twitter?
03:13:33.000 Why wouldn't that work?
03:13:35.000 I don't know why it wouldn't work.
03:13:36.000 I'm not saying we wouldn't test that.
03:13:38.000 We're testing it in Periscope, and I don't have a compelling reason why we wouldn't do it within Twitter either.
03:13:44.000 I don't.
03:13:45.000 So we likely will.
03:13:46.000 But again, we're a company of finite resources, finite people, and we need to prioritize.
03:13:53.000 And we've decided, you may disagree with this decision, but we've decided that physical safety and the admission of off-platform ramifications...
03:14:03.000 is critical for us.
03:14:04.000 And we need to be able to be a lot more proactive in our enforcement, which will lead to stronger answers.
03:14:12.000 And we want to focus on the physical safety aspect.
03:14:15.000 And doxing is a perfect example that has patterns that are recognizable and that we can move on.
03:14:20.000 I hear it.
03:14:21.000 And I just feel like, you know, the conclusion I can come to in the conversation is you're going to do what you think needs to be done.
03:14:28.000 I think what you're doing is wrong.
03:14:29.000 And ultimately, nothing's going to change.
03:14:31.000 I get it.
03:14:31.000 You're going to try new technologies.
03:14:33.000 You're going to try and do new systems.
03:14:34.000 From where I see it, I think you have an ideology diametrically opposed to mine.
03:14:40.000 I mean, not to an extreme degree.
03:14:41.000 I think there are people who are more – I'm not conservative.
03:14:43.000 There are a lot of people who are, who probably think – I'll say this too.
03:14:47.000 You're a symbol for a lot of them, and so I can definitely respect you having the conversation.
03:14:51.000 There are so many different companies that do things that piss people off.
03:14:54.000 You sitting here right now, I'm sure there's a ton of conservatives who are pointing all of their anger at you because you are here.
03:15:00.000 But ultimately, I just feel like I don't think anything's going to change.
03:15:04.000 I think you're on your path.
03:15:05.000 You know what you need to do, and you're trying to justify it.
03:15:07.000 And I'm looking at what Twitter is doing as very wrong, and it's oppressive and ideologically driven.
03:15:15.000 And I'm trying to justify why you shouldn't do it, but nothing's going to change.
03:15:18.000 My intention is to build a platform that gives as many people as possible opportunity to freely express themselves.
03:15:25.000 And some people believe the United States has already done that.
03:15:27.000 And Twitter is now going against what the U.S. has developed over hundreds of years.
03:15:30.000 But this is global.
03:15:31.000 This is global.
03:15:33.000 I mean, the United States doesn't have a platform to do that.
03:15:36.000 When you're talking about the internet, the United States, if they want to come up with a United States Twitter, like a solution or an alternative that the government runs, and they use free speech to govern that, good luck.
03:15:49.000 Good luck with that.
03:15:50.000 Well, it's a huge challenge.
03:15:51.000 And also, I recognize...
03:15:52.000 Not just huge, almost insurmountable.
03:15:54.000 I mean, they have the dummies that are in charge of the United States government.
03:15:59.000 This is why I said regulation is scary.
03:16:01.000 Yeah, it is scary.
03:16:03.000 It's a terrible idea.
03:16:04.000 But I think it's important to point out, too, that a lot of people don't realize...
03:16:28.000 That's 100% the problem.
03:16:29.000 That's 100% the problem with most of these platforms, including YouTube.
03:16:32.000 Absolutely.
03:16:33.000 When the PewDiePie thing happened and all of these restrictions came down on advertising and content creators, that's where it comes from.
03:16:41.000 It all comes from money.
03:16:44.000 Just to be clear, those can be segmented as well.
03:16:48.000 Advertisers can choose where they want to be placed.
03:16:52.000 Certainly.
03:16:53.000 But the platform recognizes there's a huge blowback and they're losing money.
03:16:58.000 Look at the pedo scandal that just happened on YouTube.
03:17:01.000 It was people posting comments with timestamps.
03:17:03.000 They weren't even breaking the rules.
03:17:04.000 Advertisers pulled off the platform and YouTube didn't realize because they weren't breaking the rules.
03:17:08.000 They're just creepy dudes.
03:17:10.000 Creepy people.
03:17:11.000 Also, they were posting comments.
03:17:13.000 And so one of the most preposterous responses to that was that content creators are going to be responsible for their comments.
03:17:20.000 Well, they just turned them off.
03:17:21.000 Well, the problem with people like me is that I put out a lot of content, and there's millions of views, and it's impossible to moderate all the comments.
03:17:33.000 And we don't moderate them at all.
03:17:34.000 Right, but YouTube banned comments only on videos with minors.
03:17:37.000 So they deleted all comments.
03:17:39.000 On videos with minors.
03:17:40.000 Yeah, videos where they say youth might be.
03:17:42.000 But you know what I'm saying?
03:17:42.000 If you put a YouTube video on and you have a bunch of people that say a bunch of racist things in your YouTube comments, you could be held responsible and get fucked.
03:17:50.000 No, no, no, no.
03:17:50.000 YouTube clarified that.
03:17:51.000 They clarified that.
03:17:52.000 I think we're good to go.
03:18:10.000 Look, you know, I pointed out I think the Democrats are in a really dangerous position because outrage culture, although it exists in all factions, is predominantly on one faction.
03:18:21.000 And so when Trump comes out and says something really offensive, you know, grab him by the, you know what I'm talking about, the Trump supporters laugh.
03:18:27.000 They bought t-shirts that said it.
03:18:41.000 What does that have to do with Twitter, though?
03:18:43.000 It has to do with that.
03:19:07.000 And that means even though YouTube did nothing wrong with these comments, it was just a creepy group of people who didn't break the rules, who figured out how to manipulate the system, YouTube ate, like, YouTube had to take that one.
03:19:18.000 The advertisers pulled out, YouTube lost money.
03:19:20.000 So YouTube then panics, sledgehammers comments, just wipes them out.
03:19:24.000 That could happen to anybody, right?
03:19:26.000 We're in a really dangerous time with...
03:19:28.000 Well, also in their defense, though, they have to deal with that.
03:19:30.000 I mean, they have a bunch of pedophiles that are posting comments.
03:19:33.000 No, for sure.
03:19:33.000 I mean, what do you do about that?
03:19:35.000 What do you do?
03:19:36.000 Other than hire millions of people to moderate every single video that's put on YouTube, which is almost impossible.
03:19:42.000 The point I'm trying to bring up is that even if Twitter wanted to say, you know what, we're going to allow free speech, what happens?
03:19:49.000 Advertisers are like, later.
03:19:50.000 Even if you segment it, they're going to be threatened by it, and so the restrictions are going to come from whether or not you can make money doing it.
03:19:55.000 I don't know about that.
03:19:56.000 I think that that is changing, and I think that is changing primarily because of the internet.
03:20:00.000 If you look at what was acceptable to discuss in content that would get advertising, it was network television standards.
03:20:10.000 Now that's changing.
03:20:11.000 I mean, there's ads on a lot of the videos that I put out that have pretty extreme content.
03:20:16.000 It's because advertisers are changing their perspective.
03:20:19.000 I don't think so.
03:20:21.000 They're shifting.
03:20:22.000 They're 100% shifting.
03:20:23.000 That's why this podcast has ads.
03:20:25.000 Sure, sure.
03:20:26.000 I mean, I don't think it's to the point where everyone's lost all ads, but look, you think George Carlin would be allowed to do his bit today?
03:20:31.000 Yes.
03:20:31.000 No way.
03:20:32.000 No, come on, man.
03:20:33.000 You're not right.
03:20:34.000 He would be able to do it.
03:20:35.000 Listen, there's stuff like that on Netflix specials that are out right now.
03:20:39.000 Things are changing.
03:20:40.000 It's just in the process of this transformation where people are understanding that because of the internet, if you look at late night conversations, how about...
03:20:52.000 Colbert saying that President Trump has Putin's dick in his mouth.
03:20:56.000 How about him saying that on television?
03:20:57.000 Do you really think that would have been done 10 years ago?
03:21:00.000 It wouldn't have been.
03:21:01.000 Or 15 years ago.
03:21:02.000 Or 20 years ago.
03:21:02.000 Impossible.
03:21:03.000 Not possible.
03:21:04.000 Standards are changing because of the internet.
03:21:06.000 So things that were impossible to say on network television just 10 years ago, you can say now.
03:21:12.000 Kevin Hart lost his Oscar hosting gig because of jokes from 10 years ago.
03:21:17.000 Right.
03:21:17.000 But do you know why he lost it?
03:21:18.000 He lost it because people were complaining.
03:21:20.000 Because people who are activists were complaining that he had said some homophobic things that he had subsequently apologized for before they ever brought it up.
03:21:28.000 Count Dankula is a comedian.
03:21:30.000 Okay, look, you already discussed this.
03:21:31.000 I'm with you, and I understand what you're saying.
03:21:33.000 But I am a comedian.
03:21:35.000 But I am a comedian, and I understand where things are going.
03:21:39.000 The demise of free speech is greatly exaggerated.
03:21:42.000 That's what I'm saying.
03:21:43.000 I'm saying there's a lot of people out there that are complaining.
03:21:46.000 But the problem is not necessarily that there's so many people that are complaining.
03:21:49.000 The problem is that people are reacting to those complaints.
03:21:52.000 The vast majority of the population is recognizing that there is an evolution of free speech that's occurring in our culture and in all cultures around the world.
03:22:00.000 But this is a slow process when you're in the middle of it.
03:22:03.000 It's almost like evolution.
03:22:04.000 You're in the middle of it, you don't think anything is happening.
03:22:07.000 But it's fucking happening.
03:22:08.000 I agree with you.
03:22:09.000 I agree with you that the majority of people are like, that's funny, I don't care.
03:22:13.000 But the minority is kind of dictating things right now.
03:22:15.000 For now.
03:22:16.000 And not even dictating things.
03:22:17.000 They're just making a lot of noise and that noise is having an effect.
03:22:21.000 That's what Data & Society was an attempt at.
03:22:23.000 I don't think it was effective.
03:22:25.000 That's why we're still here talking right now.
03:22:27.000 It was one attack.
03:22:28.000 But there's many of them, man.
03:22:29.000 There's hundreds of articles that are written about all sorts of things that are inaccurate or misleading and biased.
03:22:34.000 And some people have been stripped of their bank accounts and some people have been kicked off.
03:22:36.000 Yes.
03:22:37.000 And this is why it's important to have this conversation and conversations like this.
03:22:41.000 So here's what I'll say.
03:22:42.000 I cross my fingers and I wait for when you implement blockchain technology.
03:22:45.000 Get that van ready, bro.
03:22:46.000 Well, the van is going to be a mobile production studio so I can travel around when things are getting crazy.
03:22:50.000 With a lot of water.
03:22:52.000 Dried food.
03:22:53.000 And more than just Band-Aids.
03:22:54.000 I'm putting a shower in it.
03:22:55.000 Okay.
03:22:56.000 It's going to be like I'm going to have a computer and monitors and I'm going to be able to do video.
03:22:59.000 Sounds cool.
03:22:59.000 So I can travel around when everything's happening.
03:23:02.000 Let's wrap this up.
03:23:03.000 I want to see the blockchain version of Twitter, where what's said exists.
03:23:08.000 That's what I want to see.
03:23:08.000 It's going to happen whether we like it or not.
03:23:10.000 Vijaya, any last thoughts?
03:23:12.000 No, I just want to thank you, Joe.
03:23:14.000 This has been great.
03:23:15.000 And Tim, thanks for your feedback.
03:23:16.000 We're always listening, and I've learned a lot today.
03:23:18.000 Thank you.
03:23:19.000 I really appreciate you guys.
03:23:20.000 Thank you, Jack.
03:23:21.000 Thank you.
03:23:21.000 Any last things?
03:23:22.000 No, I think we've said it all.
03:23:23.000 That's a wrap, folks!
03:23:25.000 No more ear beatings.
03:23:26.000 Good night, everybody.
03:23:29.000 That was awesome.
03:23:29.000 Thank you.
03:23:30.000 Thanks for talking.
03:23:32.000 I really do appreciate it.
03:23:33.000 That was great.
03:23:33.000 Hey, could you...
03:23:34.000 I just want to follow up on a couple things.
03:23:38.000 You mentioned an Antifa account that doxes policemen.
03:23:40.000 Can you please just send that over to me?
03:23:42.000 Bit.ly slash antifatweet.
03:23:44.000 Bit.ly slash antifatweet.
03:23:48.000 And then would you DM me?
03:23:50.000 I'll follow you.
03:23:51.000 Would you DM me the accounts that you said have threatened you?
03:23:54.000 No.
03:23:57.000 I believe in minimizing harm.
03:24:00.000 I won't take action on them, but I want to understand why we didn't take action on them, and I can't learn from that unless you...
03:24:10.000 So, when Lauren Southern got banned from Patreon, a lot of people were...
03:24:15.000 Stop!
03:24:17.000 Everybody out of the room.
03:24:18.000 This is streaming, and it's frozen.
03:24:23.000 Something's wrong.