00:01:53.200and people who are extremely concerned, but not to the point that they would give up the entire
00:01:59.920industry. In the AI safety community, you have concerns such as bioweapons being developed by
00:02:07.120amateurs or even rogue states, concerns like AI going rogue. What happens if you create a super
00:02:13.920human artificial intelligence that cannot be controlled? One of the more sane arguments,
00:02:20.660I would say, is to simply pause this race. The race dynamic between American corporations, and
00:02:28.280between America and China, means that no one is incentivized to pause. And yet it is
00:02:38.260ultimately up to human beings to make these decisions. Here to talk about pausing AI is
00:02:45.320Holly Elmore, Executive Director of Pause AI. Holly, thank you very much for joining us here.
00:02:51.720That's Pause AI US, just to be clear. Pause AI US, as opposed to Pause AI Mars.
00:02:56.140That's as opposed to Pause AI, the whole movement, you know, it's a worldwide thing.
00:02:59.240Yes. So Holly, if you would, maybe just give us a sense of what Pause AI is as an organization,
00:03:07.340what your goal is, and what your tactics are to achieve that goal.
00:03:11.580So we're a grassroots organization. We're focused on using the democratic process to connect the roughly 70 percent of people, depending on the poll, sometimes the number is even higher, who want something like a pause.
00:03:22.860They want a slowdown. They want regulation on AI. That's already what the people want.
00:03:27.880So we're connecting them to their representatives to hear that message and then demonstrating in other ways to just let people know.
00:03:34.480There are a lot of people out there who wish they could pause. We hear very commonly like, oh, wouldn't that be nice? You know, kumbaya, we could pause.
00:03:39.740But we really can. We've done things like this before. The reason we're not dead from nuclear weapons is that there have been international treaties to control the proliferation and the use of nuclear weapons.
00:03:50.020When you say the democratic process, you've been here in D.C. for a while. You're talking to politicians.
00:03:55.500You actually had a protest out in front of the White House or a demonstration at the Capitol.
00:03:59.000Yes, the Capitol. Yeah. So tell me a little bit about your experience here in D.C. and what you hope to achieve here.
00:04:05.560Honestly, I hadn't done this before. What we did this week, on Monday and Tuesday, was meet with 75 congressional offices, including 25 Senate offices.
00:04:12.720So that's 25 percent of the Senate. And we just brought constituents who are concerned about this to talk to their representatives.
00:04:20.060And that went so much better than I even thought it was going to go.
00:04:24.500It was really impressive how much the staffers of these offices and sometimes the members themselves wanted to know more about what we were saying, about what was possible.
00:04:32.900They wanted to see these polls. You know, there were a lot of things. They're busy. They haven't heard of a lot of things. You really can make a difference by bringing it to their attention. And an in-person meeting makes a big point. And then the Capitol demonstration takes it even further. It's a big, conscientious effort. It's so hard to put on a lawful demonstration on Capitol Hill because the security is high.
00:04:52.100But it shows that, look, there are this many people, we had something like 80 people, who support pausing AI.
00:04:59.040They've got lobbyists in their ear all day from the AI industry who are telling them, oh, you can't, it's too hard, you know.
00:05:04.400And that this is so unpopular, that the people love AI.
00:05:09.600And so if you get that message to them, it makes a difference.
00:05:12.880So here in America, you've got four major frontier companies who are pushing this race forward.
00:05:18.520You've got Google, OpenAI, xAI, and Anthropic.
00:05:23.300I oftentimes joke that Meta is an upstart just jogging along behind them.
00:05:30.220I think it was our colleague Jeffrey Ladish who joked that Mark Zuckerberg should get the Nobel Peace Prize for slowing down the race to superintelligence.
00:05:41.720No, just because, if I take the joke correctly, they suck.
00:05:45.320But at any rate, this race is driven by the notion that if we, Google, or we, xAI, or we, Anthropic, don't create AGI first, don't arrive at the finish line first, then the bad guys will, people who are less responsible, less trustworthy.
00:06:03.200When you look at that, you look at that incentive structure, how do you see pausing AI being possible?
00:06:10.680How would it even really occur? Would it require some sort of governmental intervention or is it possible to let these guys basically determine our fates on their own?
00:06:21.280I think it has to require, you know, the people of the world cooperating. And the legal way we do that is through governments. We have infrastructure for governments to come together. Again, we have other treaties on other destructive technologies already run by the U.N., for example, by other intergovernmental organizations.
00:06:40.040So with pause, you know, it's a very broad ask.
00:06:45.440There's actually so many ways to recognize a pause.
00:06:48.440Also, a pause could happen without us doing it on purpose.
00:06:51.720If, for instance, there is some problem with the compute supply chain, that would put in
00:06:56.080place a de facto pause, because I think what a lot of people don't realize is that there's
00:06:59.620a lot of effort going into making each new model.
00:07:02.580You know, it seems like it's happening all the time, and it is.
00:07:04.400But every new model, like Mythos, you know, Claude Mythos that was just sort of released, takes exponentially more compute every time, more huge resources.
00:07:14.740That's why these big data centers are being built.
00:07:17.400And so if something got in the way of the compute supply chain, if something got in the way of the price of energy that this is going to take, there would be lots of ways that that project could be stopped in its tracks.
00:07:29.280It's, by many lights, the largest human project ever.
00:07:32.740And it's not just happening by itself. So there's lots of ways to stop this. What I would prefer is that our governments stop it on purpose, because it's dangerous. It's a national security issue. It's an international security issue. But there are lots of ways. What we want is a pause. And I'm fond of saying that a pause is the next right step. It's the next right step for any correct solution.
00:07:55.160For any solution, we need time where we're not making the problem worse.
00:07:59.040We're not building more and more, you know, dense neural networks that we don't understand and that we can't control.
00:08:04.900We need time to catch up on technical safety and research, but mainly on how we're going to govern it, how we're going to decide who gets to call the shots, whose values, all of these questions.
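As a rough aside for readers: the "exponentially more compute every time" point above can be made concrete with a toy calculation. This is a minimal sketch in Python, where the starting compute budget and the 4x-per-generation growth factor are illustrative assumptions, not figures cited in the conversation.

```python
# Toy model of exponential training-compute growth across frontier
# model generations. BASE_FLOP and GROWTH_PER_GEN are illustrative
# assumptions, not figures from the conversation.

BASE_FLOP = 1e23      # assumed training compute for generation 0
GROWTH_PER_GEN = 4.0  # assumed multiplier per new model generation

for gen in range(6):
    flop = BASE_FLOP * GROWTH_PER_GEN ** gen
    print(f"generation {gen}: ~{flop:.1e} FLOP of training compute")
```

Under these assumptions, five generations multiply the compute bill roughly a thousandfold, which is why a disruption to chips or energy would act as a de facto pause.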
00:08:14.920You know, for the most part, this argument has been about theoretical dangers, ideas like bioweapons or cyber attacks, things like this.
00:08:25.440But I think that the completion, at least, if not the release, of Claude Mythos from Anthropic shows that there are practical concerns that need to be homed in on.
00:08:35.840You've got, with Mythos, the capability to identify and exploit vulnerabilities in operating systems, browsers, security systems.
00:08:46.960And even if we don't have a really clear view of the details because they've kept it behind closed doors, we at least, I think, can trust that what they developed is actually dangerous.
00:09:01.140People called it a sales pitch, and I can see why you might think that, but I don't think you would have all these other corporations involved in it.
00:09:09.780I don't think they're all conspiring to boost Anthropic's stock value.
00:09:13.480So on the note of danger, what are some of the dangers that you see with artificial intelligence?
00:09:19.240I mean, we've talked a few times about this, that, you know, the more kind of mundane dangers of, you know, mass AI psychosis, but also the more extraordinary dangers of artificial general intelligence out of control or artificial superintelligence.
00:09:34.920So for you, Holly Elmore, what are the major dangers you're concerned about?
00:09:39.460I mean, for me, truly... when I set the priorities of Pause AI, I meant it.
00:09:44.300I think the entire spectrum of dangers caused by this out-of-control, unregulated development is important.
00:09:51.520And they're all connected. They're all externalities of this out-of-control development.
00:09:57.160But I mean, the biggest thing, you know, imaginable to me is that the human race goes extinct.
00:10:02.340I mean, something that serious: either we lose control, or it empowers a bad actor or a dictator to do something that wipes out our civilization.
00:10:12.720I really do think that's on the table. And that will sometimes sound very histrionic to people.
00:10:17.580But, you know, in my old life I used to be an evolutionary biologist, and 99 percent of all species that have ever lived are now extinct.
00:10:25.200That's the normal thing that happens. And a lot of times you can see in the
00:10:29.840fossil record what happens when one species gains something like eyes: they become
00:10:34.600better predators, and they just wipe out a lot of species. There's no natural law
00:10:38.780that says that we cannot go extinct. And when we destabilize our society, all these things add up
00:10:44.380too. So if we destabilize society because, with
00:10:49.720deepfakes, we can't trust what we see, we don't know what's going on, then we're not going to be as
00:10:53.180resilient to big threats like bioweapons, like possibly AI having its own ideas and usurping
00:11:00.360power. So I think the whole range of these threats is real. There's probably many we haven't thought
00:11:07.960of, and those are going to be the real sleepers. I mean, that's the nature of this danger: it's
00:11:11.500intelligence. It's the ability to figure out ways to get to a goal. And if it's smarter than us,
00:11:16.140it's going to find ways to do what it wants. We might not know why it wants what it wants,
00:11:21.420and it's going to be very hard for us to anticipate.
00:12:43.760But when it comes to the rules of that game, it exceeds all human capabilities, and it teaches itself.
00:12:49.400And I think when you extrapolate that out to things like drone piloting, things like target acquisition, any other system that might be able to recognize patterns at a superhuman level, you then run into the danger of it exceeding human capabilities.
00:13:07.220So it may not be at that large scale theoretical sort of place of, say, AGI yet, artificial general intelligence or artificial superintelligence.
00:13:17.660But just with what we have now, you have a lot of potential dangers.
00:13:22.000And so I guess if I could tease out some of your more theoretical ideas on this before we move back to Earth,
00:13:28.780how would you, if you could, give us your definition of artificial general
00:13:34.940intelligence and artificial superintelligence, and, if you have a definite timeline, what
00:18:28.260You have always advocated for nonviolent tactics, correct?
00:18:33.460I've always been extremely strict, you know, making people sign our volunteer agreement,
00:18:37.640The number one thing in the code of conduct is nonviolence. Yes, extremely, extremely strict,
00:18:42.600to the point that at first everybody thinks, like, oh, you know, come on. I don't let people even
00:18:47.220make jokes. When people make protest signs, I screen the signs, and if somebody
00:18:52.580has, like, blood dripping, I know they don't mean anything by it, but the person you're
00:18:57.120speaking to, and the people who are watching, don't necessarily know that. So we're really,
00:19:00.860really, really strict. This is about influencing people morally and through democratic
00:19:06.300means. And you have a real positive element to your message too, correct? I mean, maybe not you
00:19:13.900personally, but Pause AI US does. I think so. Give us a little light before we get on to the
00:19:19.860commercial break. Well, I think what we're about is: the world is really good and we want to protect it.
00:19:25.280And we think that there is a way to protect it. We're not talking about anything that hasn't
00:19:28.640happened before. We have nuclear nonproliferation treaties. We have New START treaties. We just
00:19:33.960want this for AI. And then we can enjoy whatever benefits are safe from the AI. That's really the
00:19:39.840best of all possible worlds. I also enjoy it, frankly; it's bracing to be part of the democratic
00:19:46.420process and discussion. And I find that very fun. Speaking of life, your study as an evolutionary
00:19:53.720biologist, you studied mushrooms, correct? That was one of your specialties? That was the species
00:19:57.880I worked on in grad school, yeah. The species, the phylum, the kingdom. Did you spend a lot of
00:20:04.260time out in nature doing this? I did. I did field work where I would collect... actually, I
00:20:08.440worked on the deadliest mushrooms. We'd collect those and be very careful. Morbid again, but
00:20:14.880let's hear a little bit about that though because I'd like to connect it to the way you see
00:20:19.760artificial intelligence to some extent. I think I do have... I used to say, well, you know, I
00:20:25.760started doing Pause AI because it was necessary morally, and that it wasn't connected to my old
00:20:29.940work. But more and more, I think it really is. I think the way that I see things, definitely
00:20:36.820machine learning is the same process as natural selection, gradient descent. I think I have an
00:20:42.120intuition for what works. And I wanted to spend my life just understanding life and doing cool
00:20:47.720stuff like that. And I thought, okay, I've been called to duty now and I just have to do this
00:20:53.020instead. But it's really an interesting challenge. I understand people who
00:21:00.280are really interested in AI, and they really like want to be close to it and study it, because it
00:21:03.660is fascinating, but it's also dangerous. And I kind of think I've got the right remove. And then
00:21:08.420it's also just a really cool challenge to like figure out a new social movement. And I'm now
00:21:12.560here I'm talking to you, like, I'm having a good time. Yeah, it's interesting that someone like
00:21:17.760you, I mean, I've never gotten a clear sense of your political leanings, but I
00:21:22.220would say quite a bit further left than mine. But who knows. We're a bipartisan group. Yeah, I think
00:21:28.380that this moment is really fascinating and heartening, because this issue is sort of
00:21:35.940like pollution, or unhealthy foods, or drugs in schools. It's not something that only one particular
00:21:42.940political persuasion is going to be concerned about. If the soil is poisoned and getting
00:21:48.560into the crops, it affects everyone. And that is a beautiful thing about working
00:21:54.140on Pause AI: we have people really coming together under the same banner, under this single issue.
00:21:58.700Really, it's everything but working for the AI companies that unites us. I mean, everything else,
00:22:05.080like being parents. At this Hill day we had, you know, parents, engineers, people who had been in
00:22:11.500the AI industry, teachers, just all kinds of people who can agree: this is dangerous. What
00:22:17.220are we doing? Why are we making this problem worse before we have any idea of what
00:22:20.900to do about the dangers? Yeah, I've met a number of people... the AI ecosystem has
00:22:27.040opened up to me in the last year in ways that were really unexpected. I mean, there are some people
00:22:32.420maybe that I didn't get along with totally, but for the most part, we're talking about people from
00:22:36.580very, very different walks of life, all of whom share the same concern. The Future of Life Institute,
00:22:42.160that's been a really, really important resource to not only connect to different people,
00:22:46.560but also just to learn more from people who are expert in this about what artificial intelligence systems are, what the real effects are on the human mind.
00:22:57.360Another example would be, say, Nate Soares and Eliezer Yudkowsky from the Machine Intelligence Research Institute, Jeffrey Ladish and his colleagues at Palisade Research.
00:23:09.080So it's been really amazing. How do you see that ecosystem functioning now?
00:23:15.440I mean, of those institutions I just mentioned or any others, what when you look out across the landscape, how do you see it fitting together?
00:23:25.400Where do you see the real strength in this movement?
00:23:28.220I think the strength... so, what we're aiming at is engaging the public.
00:23:32.960And I think that's going to be our real source of strength.
00:23:36.260And I think that's where the organizations you mentioned are getting their strength: they're working publicly, openly.
00:23:43.440The old AI safety ecosystem used to be very closed, and it was very in-groupy, and it got to the point where, unfortunately, people saw people working at AI companies as closer to them, as safety people, than the public and their interests.
00:24:00.580And I think, to our benefit, those groups are branching out into just being more open and being more involved in the democratic process.
00:24:08.540And that's going to be the way forward. It's already growing.
00:24:13.720But we have the public already. They just need to understand, and to be helped a little bit through this baffling moment.
00:24:19.600Everybody's baffled by how much is happening, and how fast, in the ecosystem.
00:24:24.180But I think this is where the power is: harnessing that, focusing that, shepherding that.
00:24:31.500And you come from San Francisco, right?
00:24:33.900Or you've been in San Francisco for many years, but you've been all over the country talking to different people.
00:24:40.280Do you see a certain sort of personality type or certain cultural type that's more open to the critiques of this technology?
00:33:32.940Yeah, basically, you one-upped me on both.
00:33:35.400You know, I went to the University of Tennessee, Knoxville. And then, of course, I was across the river at Boston University, staring out the window over at Harvard, wondering what it must taste like over there.
00:33:44.640Like, what's it smell like at Harvard? So, you know, the educational system.
00:33:51.840Do you have faith in academia as an institution? Was it a satisfying experience for you?
00:33:58.760You know, back then, before AI, I did. I've been pretty discouraged by what I've seen with
00:34:06.400AI in education today. So we have university-level organizers, and I was talking to them the other
00:34:11.300day, and I told them, you know, I was scrupulous, never cheated, never did. I would stay up
00:34:16.120till 5 or 6 a.m. to do essays all the time. But I never had... you can just go to one link and, with one
00:34:23.900click of a button practically, have the whole thing done. I mean, how can you stand up to
00:34:28.140that? And then one of my organizers told me it's actually even worse than that. Like, I have a
00:34:32.840school provided laptop that's a Lenovo with Microsoft Office. And when I write in Microsoft
00:34:37.140Office, it constantly prompts me to have Copilot rewrite it. You know, it's like the software,
00:34:43.480the computers, everybody's telling you to cheat. Like, how can you keep your integrity
00:34:48.920in that environment when everybody's telling you to do it? And then on
00:34:52.420top of that, you know, OpenAI especially is making all these deals with campuses. So my mom teaches
00:34:56.960at a small but reputable Christian college, and they have taken on... they're a ChatGPT college.
00:35:03.060And my mom, as a composition teacher, she's like, this is unacceptable, we can't be operating
00:35:08.700like this. And then, you know what they added? They added ChatGPT college campus. Yeah.
00:35:13.600But what's even worse, they have ChatGPT Shepherd, for clergy, to tell the clergy how to do their job.
00:35:21.080Like Christ GPT. It's called ChatGPT Shepherd. Yeah. Which I thought was bad enough. Yeah.
00:35:26.940So, like, that's just, it's like with everything with the AI industry.
00:35:39.200And I think that people have the right instincts. I think our hearts are in the right place and we would correct.
00:35:44.200And even our institutions like academia would want to correct.
00:35:46.880But are they going to be able to, with how fast they're being inundated with this, and by how quickly, you know...
00:35:52.940If a whole generation grows up... first they went through COVID and missed high school, and now they're missing college, basically, because they cheated through everything, we're looking at an uneducated generation.
00:36:05.800I think it's a pretty serious concern.
00:37:40.080you want to socialize all these things, and they're having screens shoved in their faces. It would be
00:37:45.540like if you could go down to the school nurse and get OxyContin on demand. And
00:37:51.520most kids, you would hope, wouldn't do it, but many would, and a growing number. It's basically, okay,
00:37:57.420yeah, the nurse will give it to you. And this is a ChatGPT campus, and, oh, you're supposed to use
00:38:02.640it to help your development; like, that's an honor system. It does feel like they're being
00:38:07.940told to use it and become dependent. Yeah. It's spotty. So there are a lot of professors
00:38:16.800who are pushing back on this, in their classrooms, to the administrations. There are a few schools,
00:38:22.200Brown University being one of them, but a few schools where, overall, the general
00:38:28.080attitude of the administration is against all of this. A lot of professors are going back to the old
00:38:32.940blue books with the pencils, which, you know, I'm old enough to remember when the blue books were
00:38:36.960a thing. I use a blue book, yeah. I'm old enough to remember when there were no laptops in
00:38:42.480classrooms. When I taught briefly, I refused to have any laptops in my classroom and the kids
00:38:48.160adapted just fine. There really wasn't a problem. But increasingly I hear from professors that
00:38:53.640due to COVID and the lockdowns and the lost education and also just the kind of general
00:39:00.420digital culture, the kids coming in aren't really prepared for college. I mean, some are,
00:39:05.180but most really aren't. And it is nightmarish, you know, the idea that human beings survive,
00:39:11.560artificial intelligence doesn't create radical abundance. And we're stuck with this global
00:39:16.160village of the damned in which all the children are, you know, digitized and have offloaded
00:39:21.660their cognition to the machine. And, in the case of ChatGPT Shepherd, offloaded their spirituality
00:39:27.080to the machine. It's terrifying, more terrifying to me than the idea of going extinct. Going
00:39:33.740extinct would be kind of a relief in comparison to that. I mean, I'd beg to differ, but I do
00:39:39.520think they're all very serious. And really, all these threats that are caused by these
00:39:44.040unchecked externalities of development are threats to our way of life, and then one is like a final
00:39:49.320threat to our way of life. Some are more survivable, some are not. But potentially...
00:39:55.220we also don't know. We shouldn't mess around with the fabric of our society, because we just
00:39:59.180don't know which is the important load-bearing part. Yeah. And I feel very demoralized,
00:40:05.920especially with... it goes beyond just getting the grade for kids in school. It's like
00:40:11.060they're becoming less confident in their ability to think for themselves. They don't like to
00:40:16.920just represent their own thoughts. This is one of the things... I always get complimented
00:40:21.340on this, I think, for being outspoken. But more and more, people are like, oh, I could never do that
00:40:26.180without having AI check it. Like, wow, you could just... you could never just, what,
00:40:30.880think your own thoughts, have your own ideas? Or, one thing that my university
00:40:34.560organizers were complaining about was that, even to answer trivial questions, or
00:40:39.800even questions about their own preferences, people would be like, ask chat, ask chat.
00:40:43.720It's like an addiction; they can't even stand the uncertainty of
00:40:47.800working it out themselves. You know, it's hard enough to stay physically fit in today's world.
00:40:51.820But imagine you have this with thoughts, like you encounter a little difficulty,
00:40:55.200And there's this answer that's very soothing and easy and quick and it feels valid and like from a neutral source right at your fingertips.
00:41:02.780Think of the potential for manipulation: if some person in control, some industry in control, wanted people to think a certain thing, they would be able to do it.
00:41:14.860Yeah. Increasingly, they are. And, you know, there was a problem from the television forward, you could say from the telegraph forward.
00:41:21.280But this is a whole other level. You know, when I was in grad school, my main area of study was
00:41:29.320evolutionary and cognitive science as applied to religion, but my real interest, and my master's
00:41:36.300thesis was based on this, was altruism. The question: if Darwinian evolution is so harsh, why
00:41:43.360would human beings be so kind? Why would the ants be so helpful to one another, the termites,
00:41:49.260the bees, all of this? And recently I've been accused of being an effective altruist. Now, I'm
00:41:57.540kind of an altruist. I'm mostly an ineffective altruist. I'm not usually nice to many people,
00:42:02.800and not for very long, but I'm definitely not an effective altruist. Now, you have a lot of
00:42:08.920experience in and around this group. Can you give the War Room Posse some idea of who the effective
00:42:15.860altruists are, what their goals and tactics are. So I, disclaimer, I used to be kind of a big
00:42:24.680personality in effective altruism. And I first got introduced to it when I started grad school
00:42:29.120at Harvard. They were kind of at a lot of elite schools. I ended up organizing Harvard EA for six
00:42:34.520years. Back then, the idea was mainly like, yes, so there's the possibility of helping others. And
00:42:41.400the big insight was, like, people who are, you know, wealthy in the West, like, they can do a
00:42:47.560lot more for people elsewhere. Or we can just even rank our causes in terms of, like, what's the
00:42:52.320actual impact instead of, like, which cause you like on vibes. And that could take the same
00:42:57.220amount of money, the same amount of, like, our personal power to do anything and, like, have a
00:43:00.540much bigger positive impact for people in the world. I still believe this is great. But always
00:43:06.000lurking in the back, there was also this AI safety cause. Which, I remember... seriously, I
00:43:11.500was already vegetarian, I was already into giving to the poor. I was
00:43:16.480really excited for a way for that to go further. And the only thing I didn't like about
00:43:21.000EA was AI safety, and I couldn't put my finger on why, because it's not that
00:43:25.400I thought the arguments were wrong. And that's the way a lot of people feel hearing any
00:43:28.540argument that a computer can become powerful and out of control and it could be a problem, right?
00:43:32.860But I realized over time that it was sort of the culture I didn't like, and that culture continues to be very strong.
00:43:41.920It is very – I know your listeners will be familiar with kind of transhumanist ideas.
00:43:53.760Well, so much of the reason comes from the core group that first got this to be a big idea –
00:43:59.040Not everybody who's into it today, of course, really knows why it became a topic.
00:44:03.120But the interest was to use AI to be immortal and to reach the singularity to become immortal.
00:44:12.280And then, of course, everything else would also be fixed.
00:44:15.580And so within the mindset of effective altruism, this is kind of like an argument for everything.
00:44:21.660Like, if the AI would do everything best, you have to try to get the AI and apply it to whatever you're trying to do.
00:44:29.520And the version of AI safety that they worked on is called alignment.
00:44:35.760It comes in various different flavors, but it was about finding the true values that the AI should have, and then letting it become more and more powerful, but guided by those values.
00:44:47.400So it'll just do the right thing by humanity and ideally provide like a paradise, you know, where people get to do whatever they want.
00:44:55.700A kinder, gentler digital god of sorts.
00:44:59.280Sort of make a digital god that would run a kind of nanny paradise.
00:45:02.880I never thought of this as a scientific idea.
00:45:10.640There was always something that kind of repulsed me about it.
00:45:14.660But as the capabilities of AI advanced, I thought, oh, these people are definitely on to something about the power of it, for sure.
00:45:24.400And until ChatGPT came out, I really didn't think about it any harder,
00:45:31.740because it seemed like it really could be hundreds of years off before we're dealing with artificial intelligence anything close to human level.
00:45:38.740And when I saw ChatGPT talk like a human... I knew computers could not do that before, based
00:45:46.040on my, you know, knowledge of linguistics and such. Most linguists argued we would never see
00:45:50.820it in our lifetime. Yes. And David Deutsch, the futurist David Deutsch, he argued this famously
00:45:56.860in The Beginning of Infinity, and most of that book is quite accurate, but not on the LLMs. It just...
00:46:02.900And it was... I mean, to get a little nerdy, the thing that this kind of AI is good at is
00:46:08.020what we thought of as human skills. So it's like associative, creative writing. We thought
00:46:14.080that artificial intelligence would be more like mathematical, like that would be its ability. But
00:46:18.520actually, as we were talking about, that's kind of where it makes mistakes. It's not
00:46:22.580precise. It's kind of like the creative parts of our brains. And the one thing that was
00:46:30.520even scarier about ChatGPT was that it was created just by a process of searching. You could
00:46:38.100describe this whole thing as a process where the more compute resources you have, the
00:46:43.280more combinations of parameters, they're called model weights, kind of similar to neurons
00:46:50.960and synapses, you can try. You're searching, like, design space for brains, and the
00:46:55.980more compute you have, the more you can search, and the better you can search it and find those
00:46:59.700really powerful options. And this process was done without learning
00:47:06.880anything new or special about how the brain works. We don't know how it's doing it.
00:47:11.740It's just a process for finding a way to do it that's described in these model weights, and we
00:47:16.940don't know what it's doing. And so once that happened, it seemed pretty obvious that if you put
00:47:23.460more compute on it, you would get an even bigger, more powerful model. And there was
00:47:30.160nothing standing in the way; the only thing standing in the way is acquiring these compute resources.
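To make the "searching design space for brains" description concrete for readers, here is a minimal gradient-descent sketch in Python. It is a generic toy example, fitting a line rather than training a language model, meant only to show how weights are found by following an error signal rather than by understanding the problem; it is not any lab's actual training code.

```python
import random

# Toy "search over model weights": learn y = 2x + 1 from examples.
# The loop only follows an error signal downhill; nothing in it
# encodes *why* the final weights work, which mirrors the point
# about large models being found rather than designed.

data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = random.random(), random.random()  # random starting weights
lr = 0.01                                # learning rate (step size)

for _ in range(5000):
    x, y = random.choice(data)
    err = (w * x + b) - y      # prediction error on one example
    w -= lr * err * x          # gradient step for the weight
    b -= lr * err              # gradient step for the bias

print(f"learned w={w:.2f}, b={b:.2f} (target: w=2, b=1)")
```

Scaling the same idea up, more compute lets the search visit vastly more weight combinations, which is the speaker's point about bigger models arriving without any new understanding of how they work.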
00:47:34.880Okay. So we're talking about effective altruism. We're talking about these
00:47:41.300massive projects, huge data centers full of GPUs screaming, developing these kinds of virtual
00:47:49.100brains in the mathematical space of possibilities. It all reminds me of Anthropic. Anthropic, by
00:47:57.780and large, as is my understanding, this is what the word on the street is, but Anthropic is largely
00:48:02.400staffed at the upper levels with people who are very friendly to effective altruism. Yes. Perhaps
00:48:07.640even effective altruists themselves. Definitely. And they are in that strange sort of mode
00:48:14.480that a lot of these tech companies and the CEOs are in,
00:48:17.680this technology, they say, could kill everyone,
00:48:38.920I'm just going to preface this by saying this is all my opinion.
00:48:42.580Don't want to hear from Anthropic's lawyers.
00:48:44.480But my experience being there as this company was formed is it was definitely founded by EAs with EA values.
00:48:53.160It was founded by Effective Altruists.
00:48:55.560It was a break off from OpenAI because of losing confidence in Sam Altman's leadership and commitment to those values,
00:49:04.120which is something Sam Altman did talk up early on because EAs were the people with the technical ability to do this.
00:49:09.580So it's always been that from the beginning, despite what they told The Atlantic,
00:49:15.160which was a lie, about EA involvement. So I'm just going to... my personal opinion: Anthropic's the
00:49:21.320one I hate the most. I think it's Anthropic, final boss. Anthropic, final boss, meaning?
00:49:28.420Meaning I think that they're the one that's going to be left. The others are going to do something... I mean,
00:49:31.780OpenAI has kind of shown its hand, especially with Sam Altman's duplicity. Anthropic is really
00:49:38.820successfully cultivating this group of loyalists and serving their interests. And I try to break
00:49:46.440their ranks; I call out Anthropic employees all the time for how they're betraying what I know
00:49:50.220were the values they went into it with, but they don't break ranks. And they're
00:49:57.440doing the same thing as all of the other AI companies. Now they're at the
00:50:01.460edge, and they went from saying they weren't going to push the frontier, they were just going
00:50:05.080to study this to help with safety. I remember. They broke that promise. Now they're at the
00:50:09.440frontier and they're talking about how can we release this model that knows all these zero
00:50:16.320day exploits for all our operating systems. The ultimate cyber weapon.
00:50:20.600But they're better at creating this beneficent image and kind of playing on letting people
00:50:25.220believe like, oh, you don't have to do anything. We'll handle it. The world's going to be great.
00:50:29.360It's no problem. But we do have to handle it. We can't be lulled into a false sense of security.
00:50:33.760We can't think, oh, well, Anthropic would basically do what I want.
00:50:36.940It has to be we, the people, make known what we want.
00:50:40.840And we have democratic control over what happens with this AI.
00:50:43.700Well, on that note, Holly, if you would, tell the audience where they can find resources on your mission, where they can find information about Pause AI, and give them some sense of where you're going from here.
00:50:57.880Okay, so you can go to PauseAIUS.org, and our website will branch out to everything else.
00:51:03.480You can find out how to join a local group. You can donate there.
00:51:07.400Where we're going: we're trying to really scale up on helping our constituents, the constituents who identify with the pause position, reach their representatives.
00:51:17.840And we really want to help people get through all of the confusion.
00:51:23.440You know, it feels like a 12-hour news cycle on AI. Help them focus, make their voices amplified, make their voices unified.