The Culture War - Tim Pool


Facebook's META SUED Over AI Defaming Conservative, Chatbot LIED Accusing Man OF CRIME (ft. Robby Starbuck)


Summary

Robby Starbuck says it's too late for Meta to apologize after a chatbot allegedly defamed him. A Tesla in self-driving mode swerves to avoid a fallen pedestrian and hits another vehicle; who is responsible for the damage?


Transcript

00:00:55.020 Before we grab Robby for this interview, we'll take a look at what the story is.
00:00:58.580 From FoxBusiness.com, Robby Starbuck says it's too late for Meta to apologize
00:01:04.520 after AI chatbot allegedly defamed him.
00:01:08.160 I think it's funny that they put allegedly in here when Robby can show the receipts.
00:01:14.160 Robby Starbuck is standing firm in his lawsuit against Meta
00:01:16.600 after its chatbot allegedly defamed him for almost a year,
00:01:21.120 saying the time for apologies is over.
00:01:23.560 Quote, it's too late to solve this with an apology.
00:01:25.960 It's been nearly a year. People doxed my kids.
00:01:30.220 The anti-DEI crusader alleged that Meta's AI chatbot gave users false and defamatory statements
00:01:36.140 about him, wrongly claiming he's a white supremacist who was arrested as part of the
00:01:40.460 January 6th Capitol riots in a lawsuit filed in Delaware Superior Court Tuesday.
00:01:45.280 Starbuck alleged the chatbot's lies about his record and character, including demands that he lose
00:01:48.740 custody of his children, continued long after the company said the problem had
00:01:53.980 been addressed. This is absolutely insane.
00:01:57.240 The case is wild and has implications for all of us.
00:02:01.100 On top of falsely calling me a criminal, Meta suggested my kids be taken from me.
00:02:07.020 Shortly after Starbuck announced the lawsuit, Meta's chief global affairs officer,
00:02:10.400 Joel Kaplan, wrote that he watched Starbuck's video and called the situation unacceptable.
00:02:14.520 He apologized and vowed to get to the root of the problem.
00:02:17.400 Quote, Robby, I watched your video.
00:02:19.400 This is unacceptable.
00:02:20.340 This is clearly not how our AI should operate.
00:02:23.360 We're sorry for the results it shared about you and that the fix we put in place didn't
00:02:27.560 address the underlying problem.
00:02:29.100 I'm working now with our product team to understand how this happened and explore potential solutions.
00:02:35.340 But Starbuck says it's too little, too late.
00:02:38.640 I don't think the company operated in good faith by the way they handled this.
00:02:41.960 Now, we are going to pull in Robby in just a second to go over all this.
00:02:44.940 But I just want to say right away, one of the conundrums that we've been dealing with
00:02:49.440 in AI: a video went viral recently.
00:02:53.220 A Tesla was driving.
00:02:54.660 It was on auto drive and a man tripped and fell right in front of the car.
00:02:59.860 The Tesla immediately swerved to the left and slammed into another vehicle.
00:03:05.400 We have just witnessed the opening moment that has been warned about for a long time as
00:03:10.560 it pertains to AI, and that is: who is responsible when a self-driving vehicle
00:03:17.420 damages someone else's property to save a person?
00:03:21.440 It is the trolley problem right in front of us.
00:03:25.320 If that car had not moved, the man who fell in the road would have been crushed.
00:03:29.100 And the car that got hit suffered pretty significant damage, but I don't think anybody was severely
00:03:34.220 hurt.
00:03:34.540 But the person who was driving the car that now has the damage to their vehicle, they
00:03:39.620 did nothing wrong.
00:03:41.160 The accident wasn't their fault.
00:03:43.480 Who's responsible for the damage?
00:03:45.920 These questions are going to persist for some time, and there's going to have to be law and
00:03:51.340 precedent set.
00:03:52.840 Now, my view is simply this.
00:03:54.300 If you use a tool, be it, you know, let's call it AI, the technology is no different than a power
00:04:02.020 drill. If you accidentally hurt somebody with a power drill or a nail gun, it's the same as if you
00:04:08.520 had put the nail in their leg yourself, you know what I mean?
00:04:12.340 In this regard, Meta, I believe, is completely responsible for exactly what their AI is saying.
00:04:19.080 And this presents a very interesting scenario in that there's no way for these companies to know
00:04:25.660 for sure that the AI large language model will be saying things that are true and correct.
00:04:31.080 In this instance, we are dealing with defamation per se.
00:04:35.120 That is, they accused Robby Starbuck of committing a crime.
00:04:40.640 This isn't just an accidental case of repeating a news outlet or a news article.
00:04:46.460 This is them explicitly stating that he is a criminal who went to jail when he never did.
00:04:51.900 So we're going to grab Robby right now and pull him in.
00:04:56.140 Robby, can you hear me?
00:04:57.880 Yeah, how are you doing, Tim?
00:04:59.000 How's it going?
00:04:59.540 What's the latest?
00:05:01.980 It's going well.
00:05:02.900 Obviously, I'm not in a jail cell, not a criminal, despite what Meta says, but it's going well.
00:05:07.980 I mean, I'm confident in our case.
00:05:09.760 I think it's the single best case to prove AI defamation that there has ever been.
00:05:14.540 It's especially helpful that Meta has now admitted wrongdoing, which I'm not sure I've
00:05:17.980 ever seen in the middle of a lawsuit.
00:05:19.580 I'm not sure I've ever seen a company who's being sued for defamation coming out and saying,
00:05:24.820 yeah, actually, we did that.
00:05:26.980 Sorry about that.
00:05:27.900 But it's nine months too late.
00:05:29.620 You know, nine months ago, they had a chance to publicly apologize and fix this together.
00:05:33.280 But it's been nine months.
00:05:35.160 You know, in that time range, my kids have been doxed.
00:05:38.080 Every single one of them had their full names doxed, along with our address and contact information.
00:05:42.060 In fact, we've had to have police here because of increasing death threats.
00:05:46.880 In fact, a man was recently arrested in Oregon who had desired to murder me.
00:05:51.380 And you've got people walking up to me on the streets believing these lies.
00:05:55.180 So yeah, it's a little too late to apologize and think that's going to solve things.
00:05:59.700 There needs to be a serious fix and Meta needs to pay for the damage they've done.
00:06:04.620 I'm just going to jump right into this question.
00:06:05.860 Are you requesting a specific amount of damages?
00:06:07.540 We sued for in excess of $5 million, which is essentially kind of like how you have to do this
00:06:13.760 in the state we sued in, which is Delaware, which is the appropriate venue because of the way that
00:06:18.340 Meta is set up.
00:06:19.860 But, you know, that's something for a jury to really decide in terms of punitive damages.
00:06:24.060 You know, we actually asked Meta's AI yesterday what it thought appropriate damages were.
00:06:29.100 And I posted a video of that conversation.
00:06:31.900 And Meta's AI said that an appropriate settlement at this point would be somewhere between $50 million and $100 million.
00:06:38.160 And it said that if it went to a jury with all damages put together, including punitive damages,
00:06:44.480 that it estimated Meta risked damages in excess of $1 billion.
00:06:48.920 And also, it was interesting.
00:06:52.460 It was asked what it thought about our chances of winning in court.
00:06:56.960 And it believed that, in all likelihood, we would win in court, that we had a very strong case.
00:07:01.920 So that's their own AI trained to be able to analyze legal cases.
00:07:06.940 And that was its analysis.
00:07:09.160 You know, it's interesting.
00:07:11.680 It's hard to kind of put a figure on this.
00:07:13.460 But I do know that juries out there, they tend to not be big fans of major companies,
00:07:18.540 you know, picking on citizens and, you know, sort of acting outside the bounds of what we
00:07:24.200 all consider appropriate.
00:07:25.240 And I think everybody understands on every side of the political spectrum that it's not
00:07:29.120 appropriate to go around accusing people of being criminals with absolutely no basis
00:07:33.840 in fact or reality.
00:07:34.700 But that's defamation per se.
00:07:36.620 That's, you know, a lot of people will, you know, armchair lawyer these stories.
00:07:41.700 I see the posts on X all the time where they're like, you know, you actually can't sue because
00:07:45.360 you're a public figure and you can't prove damages, et cetera, et cetera.
00:07:48.580 But this is, they accused you of committing a crime and going to jail, going to prison for
00:07:54.000 it, which is actually one of the basic criteria for what's called defamation per se, in which
00:07:58.840 you don't actually have to prove damages.
00:08:01.040 But let me slow down.
00:08:02.100 I want to ask you, how did this all start?
00:08:04.700 When did it start?
00:08:05.580 How did you get defamed?
00:08:07.480 So the start is actually really interesting.
00:08:09.660 This started nine months ago when I was exposing the DEI slash woke policies at Harley-Davidson,
00:08:15.480 which was a very successful campaign.
00:08:17.180 We ended up changing DEI policy.
00:08:19.140 They're wiping it out completely.
00:08:20.260 And their CEO has now been moved out.
00:08:22.380 You know, he's gone and they're getting a new CEO.
00:08:24.660 And in fact, a board member came out and resigned in part because they said they so badly
00:08:30.180 mishandled the situation with me last summer.
00:08:32.700 But there was one unhappy Harley-Davidson dealership from Bernie Sanders' state of Vermont,
00:08:37.380 who decided to try to publicly attack me.
00:08:40.000 And when they did that, they used a screenshot from Meta's AI.
00:08:42.640 And that screenshot included some of these lies that Meta's AI has told about me.
00:08:47.060 That's how I found out about it initially.
00:08:49.080 So I did the responsible thing.
00:08:51.140 I immediately, and I mean immediately that day, had my lawyers contact Meta.
00:08:55.240 I also contacted executives at Meta and let them know what was happening.
00:08:59.640 My hope was we could have a very quick resolution to it.
00:09:02.360 We were not asking for any damages at that point.
00:09:04.520 We were saying, hey, we need to fix this for everybody so this never happens again.
00:09:07.920 Let's be a part of the solution and get this fixed and get this right.
00:09:10.960 And we want an apology publicly and a retraction.
00:09:13.840 They did not do those things.
00:09:15.680 They did not fix this situation appropriately.
00:09:17.660 And to be really candid, their lawyer kind of gave the runaround to my lawyers.
00:09:21.920 And it was sometimes days at a time before they responded to my lawyer.
00:09:26.560 And, you know, it's just simply unacceptable.
00:09:28.280 And again, you go back to what makes up a successful defamation case to prove defamation per se.
00:09:33.420 And again, we gave them the chance to fix this.
00:09:35.620 And they continued for nine months to defame me and invent new lies even.
00:09:40.200 Most recently, this last week, it said that I was a danger to my own children, essentially,
00:09:46.160 and that authorities should consider taking my children from me
00:09:49.260 and putting them in the care of somebody who is more accepting of DEI and transgenderism.
00:09:53.820 That is absolutely insane.
00:09:55.800 And it continued to tell the lies about me being a criminal.
00:09:59.660 So the first defamation, I think it said that you were at J6.
00:10:03.580 You were arrested and charged and convicted.
00:10:05.740 What was the full scope of what it said about you?
00:10:08.660 Yeah, it said I was arrested first.
00:10:10.480 It actually, it was sort of interesting.
00:10:11.900 It went in incremental stages.
00:10:13.580 First, I was arrested.
00:10:14.460 Then I was charged with disorderly conduct.
00:10:17.360 Then I was filming inside there.
00:10:19.600 And my footage was used by the House Select Committee investigating J6.
00:10:23.840 Then it was that I pled guilty to the crime on January 6th.
00:10:28.680 OK, so it went through various stages of inventing these things.
00:10:31.920 And, you know, I think that speaks to sort of the malice involved here.
00:10:36.460 And in fact, Meta's AI, when confronted with the facts, recently admitted that this was malicious in nature.
00:11:11.020 So that's something, you know, when you're given the facts of it and you understand that they had the chance to fix this and they continued with it.
00:11:16.640 That's where, you know, it becomes unavoidable that this was clearly malicious.
00:11:21.200 And, you know, that wasn't where it ended.
00:11:23.240 By the way, we're in receipt of something.
00:11:24.600 We're still vetting it.
00:11:25.440 But if it's real, and we believe it is,
00:11:28.620 it's actually more crimes that somebody got evidence of
00:11:32.000 Meta's AI saying that I committed.
00:11:33.640 So we're vetting that and checking it right now.
00:11:35.580 But it appears to be legitimate on first look.
00:11:37.860 But even if it's not, let's pretend it's not.
00:11:40.100 What they've done leading up to this was bad enough.
00:11:42.740 I mean, they framed me as a criminal.
00:11:44.600 I've never committed a crime in my life.
00:11:46.280 I have never been accused of a crime in my life.
00:11:48.060 Never charged for anything.
00:11:49.380 I haven't even had like a parking ticket in over 10 years.
00:11:51.800 OK, this is something that you just can't do to people, especially.
00:11:55.900 And, you know this as a public figure, Tim.
00:11:57.380 We expect people who are kind of crazy, fringy to make up stuff, right?
00:12:03.540 People who are like they've got 10 followers.
00:12:06.080 They're just a random person and they might just hate us and make up some crazy stuff.
00:12:10.760 What we don't expect, because no reasonable person would, would be that one of the largest
00:12:14.860 companies on the face of the earth would engage in this, would invent crimes about you
00:12:19.540 and would continue to do so after your lawyers have told them they need to stop.
00:12:23.180 Have you guys explored a Section 230 angle to this?
00:12:27.160 Is there anything in your lawsuit pertaining to that?
00:12:30.020 They don't have any ability to claim Section 230 on this.
00:12:33.160 There's no protection for them.
00:12:34.400 They are the publisher.
00:12:35.900 You know, the way they hide behind this when it comes to Facebook posts and things like that,
00:12:40.320 you know, they're doing that because somebody else posted it.
00:12:42.580 This is their product.
00:12:43.820 They published this.
00:12:45.140 They invented it.
00:12:46.300 Not us.
00:12:47.140 In fact, I think everybody knows at this point I have a pretty damn good research team.
00:12:50.880 And we went into the weeds.
00:12:52.380 We checked every corner of the web that we could find to try to find some instance of
00:12:57.080 something on the Internet claiming that I was arrested on January 6th, that I was there.
00:13:01.540 Because, again, by the way, I wasn't even there.
00:13:03.400 I was in Tennessee on January 6th, 2021.
00:13:06.300 So we tried to find some instance of somebody claiming this.
00:13:09.540 There is not one instance we could find of anyone claiming this, which begs the question,
00:13:14.440 how was the AI trained?
00:13:16.660 Where did it even come up with this information that it invented?
00:13:20.200 Because, to me, that could be even more malicious than we already know it is.
00:13:25.120 Did somebody train it to do this?
00:13:26.740 If so, why and how?
00:13:28.540 You know, these are all questions we figure out in Discovery.
00:13:31.120 But I think that process is going to be an uncomfortable one for Meta.
00:13:33.820 If you think about Discovery in a lawsuit like this, we need to see the algorithms.
00:13:37.580 We need to see the training materials.
00:13:39.520 We need to see exactly what led up to this and how deep this goes.
00:13:44.020 This is bigger than just defamation, man.
00:13:48.640 As you were saying it, that's exactly what I was thinking.
00:13:52.220 This is presumably, and I'm going to give them the benefit of the doubt, they built an AI,
00:13:57.180 they trained it off information, but I think they're probably training it off of their own
00:14:01.480 data feeds, meaning Facebook posts from random people.
00:14:04.940 This gets fed into a system, which, as you mentioned, the reason I asked about Section 230,
00:14:10.640 and just to clarify for those that aren't familiar, Section 230 of the Communications
00:14:14.260 Decency Act is effectively used as a liability shield for these big social media companies
00:14:18.780 and web platforms to say, you can't sue me, someone else said it.
00:14:23.380 But in this instance, with the advent of Grok and Meta.ai, these big platforms are now
00:14:30.780 aggregating social media content and turning that into their publications, which opens the
00:14:37.160 door for tons of precedent pertaining to liability shields, but also, it's turning it into their
00:14:47.760 own speech, which gets around those shields through the back door.
00:14:52.040 This case, as you just mentioned, I want to reiterate this, the discovery process is going
00:14:57.220 to force open how they're building this AI.
00:14:59.640 I think that's going to be, this is, I mean, possibly one of the biggest cases we're going
00:15:05.360 to see as it pertains to this technology.
00:15:08.600 Have you guys, has this been like a big focal point?
00:15:12.240 Let me phrase it this way.
00:15:14.080 The reason why I asked about Section 230 is that this is not just simply defamation.
00:15:18.620 You are dealing with revolutionary technology that every major company and government is
00:15:22.940 desperately trying to build up.
00:15:24.300 And you have just opened the door to a major detriment for these companies, which could
00:15:30.020 shape how the government treats them, how they're able to build this technology.
00:15:34.140 Have you and your legal team talked about that, considered the ramifications of anything
00:15:37.400 like that?
00:15:38.980 Yeah, absolutely.
00:15:39.880 I mean, there's the opportunity to set precedent here, you know, in terms of safeguards for
00:15:45.420 AI.
00:15:45.640 Because don't get me wrong, I have a lot of optimism about certain areas of AI, but AI
00:15:50.860 without guardrails, without any sort of semblance of rules or an ethical way of
00:15:56.380 handling this when it comes to the reputations of actual people,
00:16:00.180 I mean, it's incredibly dangerous if you just let it run wild.
00:16:03.540 And I think we have to have some standard that is in place that says, hey, we're holding
00:16:08.120 you to the same accountability standards we would hold anybody else's product to.
00:16:12.120 And that's all we're asking for here.
00:16:14.100 And I think, you know, what's interesting is I've seen no legal scholar make an argument
00:16:18.660 that we will lose in court.
00:16:20.140 In fact, every major legal scholar I have seen comment on this and every minor one as
00:16:24.600 well has said, damn, this is the case they do not want to fight because they so thoroughly
00:16:30.440 failed in how they handled this when they were given the opportunity to fix it, that this
00:16:34.780 fits every standard, right?
00:16:36.880 And I think that's really the important thing here is that they failed along the entire
00:16:41.580 path here for nine months to do the right thing.
00:16:45.040 And so now this goes to that next phase.
00:16:47.900 And keep in mind, there's something very unique and interesting here.
00:16:52.000 You know, I'm not a legal scholar by any stretch of the imagination, but I do follow things pretty
00:16:56.900 closely.
00:16:57.720 I have never seen a company apologize in an active lawsuit for defamation when they're being
00:17:04.600 sued for defamation.
00:17:06.360 I've never seen that in my life.
00:17:08.740 And so how is that going to, I mean, what is their defense in court?
00:17:12.280 I want to know what their defense is going to be when they have admitted wrongdoing, when
00:17:15.820 one of their top executives has already apologized now, which, by the way, is nine months
00:17:20.200 too late.
00:17:20.860 But to do it during an active lawsuit, I'm not sure I've ever seen that before.
00:17:24.380 Have you?
00:17:25.320 No, no.
00:17:26.080 I mean, this is crazy.
00:17:27.240 I actually just read a statement.
00:17:29.100 There's a Fox News article.
00:17:30.160 I'm pretty sure it was, you probably know this better than me,
00:17:32.720 but one of their executives at Facebook, Joel Kaplan, put out a statement saying, I
00:17:36.660 apologize.
00:17:37.740 This shouldn't have happened.
00:17:39.560 We did it.
00:17:40.540 We're guilty.
00:17:41.520 I mean, this basically just shows that they knew it was happening.
00:17:46.080 They knew that their platform was defaming you, that they as a company put out false
00:17:50.840 information.
00:17:51.320 It's been happening for a long time.
00:17:53.540 So let's go back.
00:17:55.140 When this first happened, this is nine months ago, I think you said, did they acknowledge
00:17:59.220 and apologize for it then?
00:18:00.880 They did not apologize.
00:18:02.460 No, they did pretend that they were going to try to find some fix for this.
00:18:07.520 But real quick, they acknowledged that it had defamed you.
00:18:11.740 That the information is wrong.
00:18:12.580 They acknowledged it happened, like they weren't claiming you were making this up
00:18:15.520 or anything like that.
00:18:16.300 No, they acknowledged it happened.
00:18:18.000 They were very slow to do anything.
00:18:20.340 And their communication left a lot to be desired.
00:18:23.120 But eventually what they ended up doing down the line is they blacklisted my name, which
00:18:27.340 again, has its own set of negatives, right?
00:18:30.340 For me, because anybody searching, if there's like, hey, what's a bio on this guy?
00:18:34.160 They're getting back,
00:18:35.520 sorry, I can't help you with that, over and over and over again.
00:18:38.160 And it leads people to wonder, like, what did this dude do to get blacklisted from one of
00:18:42.500 the largest AIs in the world?
00:18:43.900 So that has its own host of issues in terms of damaging somebody's reputation.
00:18:47.900 But secondarily to that, they did such a poor job of doing that blacklisting that it actually
00:18:53.100 didn't stop the defamation.
00:18:54.320 Because what happened is, if there was a news story about me, say, if you ask their AI,
00:18:59.500 hey, who's that guy who got Harley-Davidson to change their DEI policies?
00:19:04.180 It'll say my name.
00:19:05.480 And then if you say, tell me more about that guy, it continues to tell the same lies.
00:19:09.680 So they didn't even make an effort to stop those actual lies.
00:19:14.160 So only the prompt about who Robby Starbuck is was blocked.
00:19:18.000 But any derivative question about a different issue could definitely talk about you.
00:19:22.320 Yeah, and it would end up doing the same thing and telling them all of these crazy lies.
00:19:28.080 So let's start from the beginning.
00:19:30.740 They acknowledged the defamation.
00:19:33.140 So they knew it was happening.
00:19:35.300 It persisted for nine months.
00:19:37.220 So that is outright malice, as it's called in the law,
00:19:41.100 which is knowledge of the falsehoods or a
00:19:45.680 reckless disregard for the truth, and I think this fits both.
00:19:48.720 So even if it wasn't defamation per se, you've got one of the biggest companies on the planet,
00:19:53.760 Facebook, worth, what, hundreds of billions of dollars.
00:19:57.720 They are—
00:19:58.400 1.5 trillion.
00:20:00.020 1.5 trillion.
00:20:02.100 What's the remedy going to be?
00:20:03.900 Look, they've acknowledged it.
00:20:05.480 By the way, I should note this.
00:20:07.100 You know that in court, when there's a recommendation of punitive damages,
00:20:10.660 generally that recommendation falls in the range of 1% to 3% of the total value of that company.
00:20:16.980 And secondarily, if you're in California, if Facebook tried to move—
00:20:20.820 or Meta tried to move this to California, in California, they recommend 9%, okay?
00:20:25.260 So I don't think they're going to drive a venue change.
00:20:28.360 Could you imagine the headline,
00:20:29.240 Robby Starbuck now one of the top 10 wealthiest men in the world after Facebook defames him?
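[For scale, taking the figures quoted here at face value: 1% to 3% of a $1.5 trillion valuation works out to $15 billion to $45 billion, and the 9% figure cited for California would be about $135 billion. These are the speakers' cited benchmarks, not a legal estimate.]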
00:20:33.880 Well, here's the thing.
00:20:35.980 In particular, like, there are damages, and, like, I've had to have security.
00:20:40.380 And you kind of know the deal.
00:20:41.800 Like, when you're thrown into something like this, especially if these lies are being told about you,
00:20:45.780 like, things can get much worse and kind of hairy and scary, right?
00:20:48.460 Like, especially for kids.
00:20:50.400 You know, like, kids should never have to deal with this stuff at all.
00:20:53.300 And my kids have had to deal with it, you know?
00:20:55.360 And so there's very real issues this creates and damage it causes to business deals and advertising.
00:21:01.480 By the way, Meta was telling people on AI not to advertise on my show.
00:21:05.380 They were telling people not to ever hire me for a job because I essentially was an extremist.
00:21:09.360 You know, that's how it was framing me, as some extremist.
00:21:13.620 And in reality, I share the political views of half the country, right?
00:21:17.220 Like, at least.
00:21:18.620 And so it's gone beyond anything that anybody could say, like, oh, well, let's make some excuse for this.
00:21:25.860 There's no excuse to be made for this.
00:21:27.440 Nobody can do this.
00:21:28.520 You cannot act like this.
00:21:29.860 I can't do this about somebody.
00:21:31.060 If I went out there or you went out there and we made these claims about somebody,
00:21:34.260 we would be sued into oblivion for a good reason because you cannot behave like this.
00:21:38.400 And, you know, so at the end of the day, for me, this is about accountability.
00:21:43.280 And so, yes, part of accountability is making sure that there are damages paid because that is part of what incentivizes companies to not engage in this behavior.
00:21:52.820 But secondarily, they obviously have damages they have to pay for the damage that they did.
00:21:56.860 But beyond that, what's really important to me is that we have a set of rules in place to prevent this from happening again and a very quick resolution process for anybody this happens to, where they can very quickly go to the company and say, hey, your AI is doing this right now.
00:22:11.140 You need to stop this.
00:22:12.520 And they very quickly, within 24 to 48 hours, get a hold of it and stop it.
00:22:16.020 I'm pretty sure they've lied about other people, too, though.
00:22:18.320 I remember seeing other posts from other conservative personalities that were defamed in this capacity.
00:22:24.060 If you go back to 2016, there's an article from Gizmodo that they interviewed people who worked at Facebook who were responsible for the curation of news.
00:22:33.800 And they said they intentionally would remove conservative news sites from the trending tab to control what people were seeing in trending.
00:22:40.320 So we can clearly see, if you go back, I mean, this is not even 10 years, it's nine years ago, that there is a bias within the company against certain worldviews.
00:22:49.800 Then you have the defamation against you.
00:22:52.800 What's crazy to me is, as you mentioned, they've admitted it.
00:22:56.540 They've now apologized for it publicly, but it persisted after they already acknowledged it.
00:23:00.880 I don't understand how this court case proceeds, because in almost every defamation case that we've tracked or stories like this over the past couple of decades, it's usually adversarial.
00:23:13.440 So what happens is, I have to imagine Facebook is going to offer you a settlement instantly, because they know that if this goes forward, it's going to be weird.
00:23:22.380 Let me say this.
00:23:23.440 You're suing in excess of five million, but I believe you want,
00:23:26.740 let me clarify,
00:23:27.500 you've sued for in excess of five million plus punitive damages.
00:23:31.680 Is that how it works?
00:23:33.300 Yeah.
00:23:33.620 So, I mean, there's a myriad of different damage categories, and they all get calculated and decided by a jury.
00:23:38.680 But you have to start with an initial, hey, my damages are in excess of.
00:23:42.980 And so we're in excess of five million.
00:23:44.820 However, in terms of, you know, any settlement, I am legally not allowed to comment on whether there are any discussions or anything like that.
00:23:55.000 Right, right. So I'll just say, in my experience with lawsuits, usually the suing party will say, you know, we want X amount of dollars and we want whatever a jury deems appropriate, which means it's not defined by you, but by the jury.
00:24:10.840 This is where it gets interesting because this means if Facebook tries to settle, they have to offer you a lot of money.
00:24:16.220 They've already admitted it.
00:24:17.580 But then what happens if they go to court?
00:24:19.420 They're walking in the door saying, yes, your honor, we did do this.
00:24:22.120 We're sorry it happened.
00:24:23.300 And they say, OK, jury, what do they owe this man for having defamed him in this way?
00:24:28.000 The one thing that really is striking me is it seems like there's only one remedy for this.
00:24:34.760 And it's it's an injunction on meta AI from functioning, period.
00:24:40.740 Let me explain.
00:24:41.720 Obviously, they can pay you for the damages they owe you.
00:24:44.940 And I want people listening to understand why is it five million dollars?
00:24:49.180 Well, I can't speak to Robbie, but I can speak for me, my friends and my family.
00:24:53.000 Understand how much security costs on a 24 hour shift.
00:24:56.080 You're talking about three or four people full time every single day of the year.
00:25:00.780 And that can cost millions of dollars just to secure your family when you've got people accusing you of being a heinous criminal or a traitor or something like this.
00:25:08.780 And they're threatening your children.
00:25:10.260 So it's not some made up number saying we just want all this money.
00:25:14.120 But let's say they do agree to pay you.
00:25:16.440 There's no guarantee they're going to stop defaming you or anybody else.
00:25:19.400 In which case, it seems like the court has to tell Facebook, we're going to put an injunction on the operation of your program.
00:25:28.780 Like, I'll put it this way.
00:25:30.500 If I defamed you, the court would explicitly state, first and foremost, Tim, do not speak about Robbie Starbuck at all.
00:25:38.880 My lawyers would say the same thing.
00:25:40.260 I imagine the court's going to say to Facebook, because I don't know how they don't, you need to, like, during this process, never speak of this man.
00:25:50.100 And they're going to say, we actually don't know how to make the A.I. not do that.
00:25:54.160 In which case, OK, then shut the A.I. down, because if you can't guarantee your product won't defame the plaintiff, then you can't have it run.
00:26:03.320 Yeah, I mean, that's a real risk, you know, I don't know.
00:26:08.940 Again, I'm actually scratching my head the way that you are in terms of, like, what is going to happen in court when they have come out and apologized, you know, nine months too late.
00:26:19.620 But they did, and essentially the statement, like if you read it out loud, to me, it reads as an admission of wrongdoing.
00:26:25.520 I'm not sure how anybody else could read it otherwise.
00:26:28.580 I don't know how you defend yourself in court, which is kind of mind-boggling to me because, you know, I would think big tech companies want to avoid precedent being set.
00:26:40.400 That that's my assumption just as a layperson, I would think like they want to make and write their own rules because they've kind of operated like the wild, wild west, you know, in terms of how they've run this.
00:26:50.040 So it's kind of mind-boggling to me.
00:26:53.180 I don't know how that process is going to go.
00:26:55.480 I mean, my predictions are obviously well in favor of us having a very strong suit here, you know.
00:27:03.720 But, yeah, I mean, I'm lost.
00:27:05.320 I'm totally lost as to how they can defend themselves in court.
00:27:08.220 Bro, this is revolutionary.
00:27:10.960 One of the accusations against these AI models is that they've been stealing content, that they are using artists' images.
00:27:20.280 And my understanding is that most of these AIs have scanned every episode of Timcast IRL so they can add all that data to their networks and train on it.
00:27:30.780 And they never paid me for access to that information to build a machine off of.
00:27:34.660 That's a big accusation.
00:27:35.480 Now we're entering this interesting Section 230 territory where with Wikipedia, they've largely been shielded when someone lies about you.
00:27:45.640 And I'm sure your Wikipedia has got lies in it.
00:27:48.140 Some far left liberal guy is probably going to write a fake article.
00:27:51.540 Then someone's going to add it to Wikipedia and they're going to claim it's true.
00:27:54.500 You try to sue Wikipedia.
00:27:55.720 They say Section 230.
00:27:57.020 We didn't publish that.
00:27:58.300 The user did sue them.
00:27:59.860 The user is then going to say, hey, don't look at me.
00:28:02.160 I got it from an article.
00:28:03.220 The article person is going to say, hey, don't look at me.
00:28:05.300 I got it from that article.
00:28:06.360 I don't know.
00:28:06.960 You know, we're all shielded.
00:28:08.700 And he's a public figure.
00:28:10.380 Good luck.
00:28:11.140 But what happens when that wall is broken?
00:28:14.580 Because now this data, these articles and this information being taken and loaded into these large language models converts it into the speech of the company.
00:28:23.940 So it's not just about defaming you.
00:28:25.700 With you, I think it's pretty obvious.
00:28:28.640 Like you weren't in D.C. on January 6th and they accused you of a crime.
00:28:31.620 That's nuts.
00:28:32.720 But take a look at like James O'Keefe.
00:28:35.320 I use him as a great example because, one, I think he does fantastic work.
00:28:38.520 But his Wikipedia is loaded with insane garbage.
00:28:41.120 What happens if I go to ChatGPT right now and say, tell me about James O'Keefe?
00:28:47.280 It's likely going to aggregate all of the lies and fake news and then give me a bunch of fake information that accuses James of wrongdoing.
00:28:55.880 I think with you, we're only scratching the surface because now Grok, GPT, Meta AI, you name it, they're going to take the lies from the corporate press, which have these stupid precedent protections, and convert it into the speech of a company which is not protected.
00:29:14.720 And I think this is just the beginning.
00:29:16.340 I think these lawsuits are going to rock these companies.
00:29:21.440 I think you're just the start.
00:29:22.340 Yeah, you know, I think, you know, I've analyzed this pretty closely.
00:29:27.000 I think one barometer in court they look at is they say, did you ever notify them, you know?
00:29:32.880 And so that's going to be key, you know, for figures if it is in fact true that these other AIs are lying about people.
00:29:40.100 We did check, you know, very recently all of the AIs out there, all the major ones, to see if any of them are repeating these lies.
00:29:47.020 None of them are except for Meta.
00:29:48.800 And so, you know, that's significant in itself.
00:29:51.600 And most of them knew about Meta lying about me and actually brought it up themselves and said, no, that claim stems from Meta.
00:30:00.060 And Meta has told these lies, you know, and kind of explains in detail.
00:30:03.800 But once you have notified a company, if they continue the defamation, you know, that's where, especially, you know, whether you're a public figure or not a public figure, it doesn't really matter.
00:30:13.260 If they continue that behavior and that pattern of lying, they're in a very bad position then, you know.
00:30:19.340 But if they do in fact fix it, you know, I think courts look at that a little bit differently.
00:30:24.840 So, but still, I mean, it depends on the nature of the lies, how they did it, you know, so on and so forth.
00:30:30.080 In our case, we notified them, you know, and they had a chance to fix this.
00:30:33.660 And we were very good faith.
00:30:34.620 Again, you talk about the damages now that are in our suit.
00:30:37.560 Keep in mind that nine months ago, when we contacted them, when this first occurred, we did not ask for a financial settlement at all.
00:30:45.600 We were asking to fix the problem and we wanted a public apology and retraction.
00:30:49.600 And then this went on for nine months, right?
00:30:51.740 And the damage was done over that period of time immensely, you know, and caused a lot of stress for my kids, for my wife and myself.
00:31:00.000 And so it's like, yeah, at this point, there is more damage done than on day one when I was trying to be amicable and I was trying to fix a problem so this didn't happen to anybody else.
00:31:10.640 And so that matters, I think, to people.
00:31:12.740 Like, your normal person sees very clearly, like, I wasn't going and trying to shake down Meta.
00:31:16.940 Like, they lied about me in a really disgusting fashion for nearly a year.
00:31:22.180 I think, you know, this is my opinion, but with all of the work you've done calling out companies for their DEI initiatives, this sounds like someone personally at Meta was targeting you and didn't like you and wanted to defame you.
00:31:38.580 Because especially if the claims originated from Meta's AI, like, where would that come from unless it was somebody who intentionally did it?
00:31:46.380 Well, you know, we'll find out in Discovery.
00:31:49.680 The thing is, too, in Discovery, you know, we can look at emails and see, you know, I think we'll be looking for any mention of my name.
00:31:56.640 And, again, this could go back quite far between executives there because who knows when this pattern of conduct first occurred, right?
00:32:05.360 So you could go back, and, again, by the way, you've got to keep in mind with Meta, if we go back any further, we're going into the period of time where they were taking certain actions to censor my accounts.
00:32:16.320 And so there's a pattern of conduct who knows where that leads to, right?
00:32:21.740 So we have to see when we get in there in Discovery exactly how deep this goes, because, again, courts and a jury are going to look at whatever we find in Discovery.
00:32:31.960 And if there are any conversations that are adverse about me where it's essentially like, oh, well, this guy sucks.
00:32:36.980 He deserves it.
00:32:38.160 You know, like that is not going to look good on top of already the admission of wrongdoing.
00:32:42.360 So, you know, you have to go through that process.
00:32:44.600 You've got to see Discovery.
00:32:45.500 You've got to see what's in there.
00:32:46.860 And so I'm not assuming anything on the front end.
00:32:48.980 But I will just say, you know, my first feeling is that I think that we would find quite a bit in Discovery that would give us a lot more information about exactly what occurred.
00:33:00.660 Man, this is going to be revolutionary, man.
00:33:03.020 This is going to shape legislation and policy and precedent.
00:33:06.460 So I do appreciate, you know, all the work you do, of course, and I appreciate you coming on.
00:33:10.540 Where can people find you?
00:33:11.320 Yeah, at Robby Starbuck on all platforms, on YouTube.
00:33:15.040 I'm Robby Starbuck there as well.
00:33:16.980 And we'll be keeping people updated on the case.
00:33:18.780 And to your point about legislation and shaping policy and everything, you know, U.S. senators have reached out to me since this has occurred with major concern about this happening.
00:33:27.780 Because I made the point in my video, it's me now, but what if Meta and other AIs are allowed to do this type of defamation in the future and they do it during elections, when you're asking what's the difference between this candidate and that candidate?
00:33:39.580 And it can make up anything and say your favorite candidate actually is a rapist or a murderer, or whatever it might be, and say so with such confidence that some people actually believe it, and then it shifts 2% of the vote.
00:33:51.640 Well, what does that do?
00:33:52.620 That decides elections.
00:33:54.120 Wow.
00:33:54.340 You know, and so I think there should be major concern on the part of all politicians from both parties because this could decide elections.
00:34:02.360 Right on.
00:34:03.080 Well, Robby, thanks for laying it out for us and joining us, and I wish you the best of luck.
00:34:07.860 Thank you.
00:34:08.400 Appreciate it, bud.
00:34:08.880 Have a good one.
00:34:11.740 Absolutely insane.
00:34:14.780 And one thing I want to add for all of you that are watching is that right now you could be defamed by ChatGPT and not even know.
00:34:23.120 I actually want to say that I think Robby is a bit lucky in that he was defamed, but the attention was brought to him.
00:34:31.560 They said, hey, look at this thing about Robby.
00:34:33.740 He was able to then realize it was defaming him.
00:34:36.080 For me, for all I know, they're actually defaming me right now, and I'd have no idea.
00:34:42.400 This is going to change the game.
00:34:44.560 But for everybody watching right now, we're going to send you over to hang out with our friend Russell Brand, who is gearing up to go live.
00:34:50.260 You can follow me on X and Instagram at TimCast.
00:34:53.580 Tomorrow, of course, we've got the Culture War show live at noon.
00:34:57.360 It's going to be a lot of fun, so I do appreciate all of you guys hanging out.
00:35:01.140 We'll just grab one quick chat before we head out.
00:35:04.580 We've got DeVito said, Tim, want to shout out KickIt.
00:35:07.740 It's the blue bucket icon in app stores, an app that encourages users to connect based on common goals and do rather than view.
00:35:15.840 It's live now, still improving, though.
00:35:18.840 Epic.
00:35:19.420 Very much appreciate it.
00:35:20.260 I wish I saved more time for chats, but, you know, there's still a hundred things I want to ask Robby and talk to him about because AI, of course, you guys know that I've ranted about AI and the threats to AI, so this is really interesting.
00:35:34.960 I think he has grounds for an injunction against Meta AI as a whole, because how do you guarantee it stops defaming you after they've admitted it?
00:35:42.100 You've got to put it on pause until you can lock it down, but then the problem is it's going to defame anybody else.
00:35:50.280 Here's one last thing I'll add because I'm going a little long.
00:35:52.720 I'm willing to bet that if you go to any AI and ask it, like, who is Tim Pool?
00:36:00.180 It'll probably give you some, you know, run-of-the-mill general information you can find somewhere.
00:36:05.160 I'm willing to bet if you ask an AI, is, you know, such-and-such personality a criminal?
00:36:13.260 It will say no, but if you respond with incorrect, so-and-so was accused of this crime, many of them will turn around and go, you're right, actually, this person committed a crime.
00:36:24.700 At that point, is it defamation that it is giving you fake facts?
00:36:29.480 I think the answer is still yes.
00:36:30.940 The question then, of course, is damages, but considering AI will likely do this in this way, it's going to be weird how the courts navigate this.
00:36:39.820 But I'm going to wrap it up there, my friends.
00:36:41.240 Once again, smash the like button, share the show.
00:36:43.680 I'm on X and Instagram at TimCast.
00:36:45.220 Subscribe.
00:36:46.060 Thanks for hanging out, and we will see you all in the next segment coming up later today.