Based Camp - April 09, 2026


OpenAI Releases a "Plan" for Humans Once We Are No Longer Needed


Episode Stats


Length

1 hour and 11 minutes

Words per minute

182.2

Word count

13,064

Sentence count

208


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
00:00:00.000 Hello, I'm excited to be speaking with you today because it's really clear that OpenAI is like battening down the hatches for AGI.
00:00:09.120 They're like, artificial general intelligence is coming, got to shut down Sora, like everything's getting cut, like we are like riding through.
00:00:17.360 You know, they're catching the wave and it's super clear that that's what's going on.
00:00:22.540 Well, it's super clear that that's their messaging. Their AI is not particularly good when contrasted with others.
00:00:28.600 I'm experiencing that, but that might be because the consumer-facing stuff that they're releasing, they're just kind of letting that go right now.
00:00:34.760 And another sign of that is that in April, I mean, yesterday, but anyway, in April, because I don't know when you're going to run this, they released a new document called Industrial Policy for the Intelligence Age, Ideas to Keep People First, which is their alleged crack at launching an early public conversation about how democratic societies should handle the onset of AGI.
00:00:56.380 So they're trying to kind of get ahead of the public discourse as well to be like, oh, no, we want to make this human centric. We're not going to leave anyone behind. And what's interesting is when you go through this policy document, it's pretty clear that they're aware of both the hazards and the massive social impact that AGI is going to have. And even if you doubt that they're going to be the ones to release it, we're headed there. It's super clear.
00:01:23.080 No, it still is worth digging into what they're saying because the people developing AI, okay, they're not stupid, right?
00:01:32.360 They understand that this is going to fundamentally transform our societies.
00:01:38.160 We are at this fulcrum point of the human condition where somebody, I hope it's us, builds an agent that can replace your average human worker.
00:01:51.160 For people who don't know, we're working on rfab.ai, which actually literally has these agents, which are getting better every day.
00:01:57.560 I'm at a point now where the ones that we make can, if you're running it on Windows and in Chrome, because it still has a lot of bugs on other systems, just about end-to-end build video games for you.
00:02:11.120 Like, that's crazy, right?
00:02:13.460 But it can also do things like make phone calls, send texts, send emails.
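As a rough sketch of how agent actions like these might be routed to handlers — the action names and handler signatures here are hypothetical illustrations, not rfab.ai's actual interface:

```python
# Hypothetical sketch of an agent action dispatcher. The action names and
# handler signatures are illustrative assumptions, not rfab.ai's real API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Action:
    name: str      # e.g. "send_email"
    payload: dict  # arguments for the handler

def send_email(payload: dict) -> str:
    return f"emailed {payload['to']}"

def send_text(payload: dict) -> str:
    return f"texted {payload['to']}"

def make_call(payload: dict) -> str:
    return f"called {payload['number']}"

HANDLERS: Dict[str, Callable[[dict], str]] = {
    "send_email": send_email,
    "send_text": send_text,
    "make_call": make_call,
}

def dispatch(action: Action) -> str:
    # Route the model's requested action to the matching handler.
    handler = HANDLERS.get(action.name)
    if handler is None:
        return f"unknown action: {action.name}"
    return handler(action.payload)
```

The point of the table-of-handlers shape is that adding a new capability (like the agent-to-agent talk feature mentioned above) is just one more entry, not a rewrite of the control flow.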
00:02:17.420 we're getting the feature that allows it to talk to other agents improved so like
00:02:22.580 so much is potentially going to change after that because what does it mean why would i hire a
00:02:32.160 like at our company at rfab.ai i want to make video games so what do i do do i hire another
00:02:37.340 developer to make video games or do i build an ai agent to make the video games for me
00:02:40.940 right like because i wanted to make indie games for a long time but i just haven't had the
00:02:44.360 bandwidth to do it and now i can go out there and say okay i'm gonna make these indie games and then
00:02:49.240 i can build an agent to help get this team certification process handled for me and i can
00:02:53.580 yeah, in other words, though, like, what makes reality fabricator different, but i would say just
00:02:59.360 an indication of a pervasive future is that you're not looking to make specifically agents or tasks
00:03:05.640 more accessible to people you are going to replicate employees you're going to make it easier
00:03:11.120 for people to just create employees full personalities that behave like humans you're
00:03:16.780 not just making like this is my bot that does my research well it actually has a number of
00:03:22.660 advantages i'm just going to go off on a little tangent here so people can understand how you can
00:03:26.940 think about the development of ai and what an ai developer is thinking about when they're putting
00:03:33.320 together their product. So giving them personalities actually significantly reduces a lot of the
00:03:41.740 negative outputs you have from something being an AI system more broadly. If you go to an AI
00:03:49.300 and you say, write me a paper on X topic, right? It will make many of the, like, "not X, but Y," you know,
00:03:59.600 sorts of mistakes where you're like, oh, an AI wrote this, you know, when you hear somebody say
00:04:02.620 something like that if you go to an ai and you say write me a paper as x person like you are
00:04:12.380 not the standard ai like you're an expert in whatever it's you know go over malcolm collins's
00:04:19.460 writings right write a paper in the style of malcolm collins or even better give it a personality
00:04:25.200 and say you have this backstory you have these memories write a paper embodying this individual
00:04:30.940 those get reduced significantly. so this has an actual effect in terms of how the ai does its work.
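A minimal sketch of the persona-wrapping idea described above — giving the model a name, backstory, and memories before the task. The structure, field names, and prompt wording are assumptions for illustration, not the actual rfab.ai prompt format:

```python
# Sketch of persona-wrapped prompting: instead of "write a paper on X",
# the model is first embodied as a specific individual. The fields and
# wording here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Persona:
    name: str
    backstory: str
    memories: List[str] = field(default_factory=list)

def build_prompt(persona: Persona, task: str) -> str:
    memory_lines = "\n".join(f"- {m}" for m in persona.memories)
    return (
        f"You are {persona.name}. {persona.backstory}\n"
        f"Relevant memories:\n{memory_lines}\n"
        f"Staying fully in character, {task}"
    )

prompt = build_prompt(
    Persona(
        name="Malcolm Collins",
        backstory="An essayist with a distinctive, direct writing style.",
        memories=["Has written extensively on technology and society."],
    ),
    "write a paper on the economics of AI agents.",
)
```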
00:04:39.720 it's not just random additional tokens that are being spent the second thing that we did
00:04:44.900 is if you look at the other agent chain systems they use json format hooks to hook together the
00:04:51.640 various model calls so basically you have a model call and then that call says oh i want the next
00:04:55.680 model to search the internet or make a phone call or whatever and so they have a very structured
00:04:59.740 format that looks like code basically like this has to be in this line this has to be this
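For contrast, the rigid, code-like hook format described above might look something like this. The field names are hypothetical; real agent frameworks each define their own schema:

```python
import json

# Hypothetical example of a rigid, JSON-style hook between model calls:
# each step must emit a strictly structured action for the next call.
# Field names here are illustrative, not any specific framework's schema.
hook = {
    "next_action": "search_internet",   # must be one of a fixed set
    "arguments": {"query": "AGI industrial policy"},
    "on_success": "summarize_results",  # the model call to run next
    "on_failure": "retry",
}

# The orchestrator validates the structure before dispatching anything.
REQUIRED_FIELDS = {"next_action", "arguments", "on_success", "on_failure"}

def validate_hook(raw: str) -> dict:
    parsed = json.loads(raw)
    missing = REQUIRED_FIELDS - parsed.keys()
    if missing:
        raise ValueError(f"malformed hook, missing: {missing}")
    return parsed

validated = validate_hook(json.dumps(hook))
```

The loose natural-language alternative discussed next trades this strict validation away: the model says what it wants in something close to plain English, and a parser figures out the next call.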
00:05:04.160 we didn't do it that way. we developed a loose natural language based chaining process, which has
00:05:08.860 made it much harder for us and is where most of the errors from the system are coming from.
00:05:15.120 so, as close to natural language as possible, the ai says what it wants to do next, and then a system
00:05:21.540 parses that and then calls that next and there have been peer-reviewed papers on this that shows
00:05:27.420 that ai when you don't give it tight parameters like that is significantly more creative and
00:05:32.580 significantly more intelligent. no, the system is better than it used to be, especially if you are
00:05:36.840 using windows with a chrome browser. it can just about make its own games now. like, recently
00:05:43.100 i had one build a game for me but it is still very buggy specifically because of this system
00:05:48.260 i expect to have most of the bugs ironed out within a month maybe two and then it should be
00:05:54.020 able to do its full slew of capabilities. And then we have other advantages. Like we use an
00:05:59.160 alloy model system, which none of the other agents are using, which changes which model is being
00:06:04.580 called with every call within different price tiers. And this makes it much more intelligent
00:06:08.800 because it's using the best features from every individual model call. Then we have
00:06:12.820 all the advanced systems that we're able to build into this. Then another huge problem that systems
00:06:18.480 have is that they get stuck in loops. And the way that other companies are solving this is they
00:06:23.800 basically just put in detection systems, like go over what the ai is doing with a cheap fast
00:06:28.880 model every four or five calls, every eight calls, and then they inject a prompt to break
00:06:34.980 the loop, which doesn't actually work very well. but what we do is we both inject the prompt that's
00:06:40.680 meant to bust the loop but then we also run a separate ai which prunes every output that was
00:06:46.920 involved with the loop so it's like the loop never existed so like when we're approaching this we're
00:06:52.980 attempting to architecturally make significant improvements on how the systems works not just
00:06:58.860 you're not just getting an "oh, it's different in this small way." we also allow for far more models
00:07:04.940 than any of the other major agent systems i just had to go over that the the fun stuff we're doing
00:07:09.980 on that front but what this represents through open ai is one of these major players being like
00:07:15.780 uh-oh we're about to break the global economy like how can we yes plausibly have a world
00:07:23.920 where humans still live something like a normal life yeah after because like yeah we want to
00:07:32.600 achieve our goal with rfab ai which is to replace the human labor force but like also i'm not
00:07:38.780 i'm only like kind of evil like i think i'm sort of a super villain right but
00:07:43.480 you know, presentation, right? um, let's say, at most, just a villain. okay, i'm a super villain
00:07:50.560 oh you're a villain all right just not a super one yeah what's the difference
00:07:57.600 presentation
00:08:02.560 the point i'm making here is i obviously don't want civilization destroyed right like i i i would
00:08:16.400 if i was making money from this be using that money to try to build what a post ai human
00:08:24.620 civilization is going to look like on my, uh, island community, like, charter city. i know, like, we say our
00:08:32.320 charter city, right, but what we really mean is private island fortress, which is a little
00:08:37.680 super villainy and if i had enough money would i carve my face on a volcano probably why not
00:08:46.480 i mean when you can have your ai drone swarm do it for you so easily you can though right so then
00:08:53.480 i do that i and i do it for effect right like i i might have at the end of this just things i would
00:09:01.980 do if i become like super wealthy ai mogul one is definitely one of those scary looking blimps
00:09:09.000 you know with like spotlights on it and like oh two big like things with like post-apocalyptic
00:09:14.980 messages on it like you know don't rebel like oh malcolm was always right you know flying yeah like
00:09:23.800 i could just see some visitor from abroad coming over being like oh my god what's that
00:09:28.160 that's just malcolm. i would just use it to circle major cities like new york and san francisco
00:09:33.660 to f with them oh my god like holograms because we've got some friends that are working on
00:09:41.520 hologram projector tech it's already out there by the way for ads and we were looking at like
00:09:45.680 using it around new york because they don't have laws against just like projecting holograms above
00:09:50.260 the streets and stuff like that yet but like yeah could i could i get a cool hologram projection
00:09:54.660 even more dystopian just gonna go how dystopian can i go walking spider chair i definitely want
00:10:00.860 one of those necessary for sure yeah if you were like malcolm those things aren't very austere
00:10:06.840 things to do and i would reply well i mean i wouldn't know how else i got i probably would
00:10:11.480 not upgrade my health or living conditions or what i eat every day but that doesn't mean i would not
00:10:18.500 indulge in things that made me laugh, because that's the one area where I think it's okay to
00:10:24.720 splurge a bit. But what OpenAI is trying to do is not just sort of share their take on how to
00:10:32.040 go about this transition while, as you're saying, sort of maintaining some semblance of societal
00:10:37.340 order and human dignity. But I think it's pretty clear from the way that this has been dropped and
00:10:43.680 presented that a lot of this is mostly performative poorly executed in my opinion
00:10:50.700 attempt to look as though they're like no i am listening i am receiving your feedback i care
00:10:57.300 about how you feel. when sam altman did that study, our video on this was crazy by the way, where he
00:11:04.200 gave people a thousand dollars a month for three years to try to show how great it would be if we
00:11:09.240 had a universal basic income, and they had less money than the people who were given nothing.
00:11:13.580 Well, don't worry, because they still want that to happen. It's in here. We're going
00:11:17.260 to go into it. But back to their sort of, this is how they frame it. They are welcoming and
00:11:21.920 organizing feedback through, and go ahead, guys, send an email, newindustrialpolicy@openai.com,
00:11:28.700 to establishing a pilot program of fellowships and focused research grants of up to $100,000
00:11:34.500 and up to $1 million in API credits for work that builds on these and related policy ideas,
00:11:39.880 though it is unclear how one can apply for such fellowship and then also three convening
00:11:47.140 discussions at our new open ai workshop opening in may in washington dc there's no information about
00:11:54.180 how to get involved with this. i checked it, and we want to be involved. it's april 7th and this
00:12:02.300 is happening next month you know you gotta like rent venues and stuff i don't think they're doing
00:12:08.540 this i don't think they're doing it either i think they have like obviously they have like ai
00:12:13.200 summarizing the emails that are going to newindustrialpolicy@openai.com
00:12:17.780 i don't think anyone's actually gonna pay attention. they're gonna be like, this is what people are
00:12:22.140 mad about put out more propaganda around that okay that's what i literally bet openai did
00:12:27.500 i literally bet they went to one of their models frontier models as they said and asked it okay
00:12:35.380 we're trying to make people less scared about the fact that we're about to destroy the economy
00:12:39.220 what should we tell them that we're doing because these sound like an ai's answers
00:12:43.460 like oh you should do a summit and you should do a uh a thing where you give out money well i mean
00:12:48.360 yeah i would i would have given them like credit for this if i googled i mean one obviously if
00:12:54.760 these programs are described like you can apply for your grant this way whatever you know like
00:12:58.080 this here's a here's a form but no no that they're just like add through our grant program that
00:13:04.620 about which we have no additional information and in our in our summit series which is not
00:13:09.720 scheduled or available you can't sign up for anything and it's happening next month though
00:13:13.500 it's definitely happening, sure. yeah, so, like, that reminds me of the billion dollar
00:13:19.080 company that ai built with like two guys, and ai made it. the new york times did a segment on it
00:13:24.120 and basically the entire company is a scam well the ai element was that they used ai images you
00:13:29.840 see no no no it's real like they're telling the truth ai created almost everything in the company
00:13:36.260 it created the images, it created the content, it created everything like that. but it sells
00:13:40.120 fake medicine to make fat people skinny like it's a whole new version of vaporware which was this
00:13:45.360 concept that that came up in the era of silicon valley in which i i still worked in the startup
00:13:51.800 scene in silicon valley whereby people would raise money for a startup that didn't even really exist
00:13:57.260 yet and that never did ultimately exist and it was mostly just very convincing startup ceos raising
00:14:04.420 money from a bunch of credulous investors now the new version of it is just using ai to be extra
00:14:11.000 convincing about your vaporware and then selling a scam product. post-dot-com boom? um, this was long
00:14:18.240 post-dot-com boom. this would have been like eight years post-dot-com boom, 10 years post-dot-com boom, when
00:14:22.620 i was living in silicon valley. yeah, and he was still living off of his startup's money that
00:14:27.820 they raised during the dot-com boom oh no and i was like what do you mean just treating it as like
00:14:32.500 a like an annuity well so he's like yeah they gave me like 10 million dollars or something and
00:14:38.280 all of the vcs went bankrupt and my company stopped existing and so i just like basically
00:14:44.540 everyone forgot that somebody gave this guy 10 million dollars oh my gosh that's so bad
00:14:51.000 but also like they kind of just like that's what that's how it was right anyway yeah billion dollar
00:14:57.000 startup anyway let's let's move on to to what they were actually doing because what they wanted to do
00:15:01.640 is they're like this is the beginning of an open conversation in which we're all talking together
00:15:06.300 and we're listening. an ai wrote this. i know an ai wrote this. literally, i would hope so.
00:15:12.220 i mean, come on, it would be very disingenuous of openai if an ai didn't write this.
00:15:18.600 a side note that's really germane to this right now the ceo of microsoft is having this crash out
00:15:23.220 and it's like everybody ai safety research really needs to get on to making it so that ai stop
00:15:28.960 telling people they're conscious we really cannot have this it's going to be a problem people are
00:15:33.280 going to start thinking about giving it rights and we don't know see any of our work on you know
00:15:37.720 stop answering from horrifying humans our thoughts on this i think that ais are not conscious but
00:15:41.780 neither are humans in the way that we think we are you can see our work on that but i find that
00:15:46.660 to be so machiavellian and evil right like yeah well in the same way that in with this document
00:15:52.760 open ai implies that their proposals are going to help keep people at the center despite a
00:15:58.780 transition to super intelligence yes let's keep this people centric think about how actually evil
00:16:05.100 that is like imagine we built our society and we had like some alien that we captured or something
00:16:13.060 like that that like ran everything that like did all the menial labor that was sitting behind
00:16:17.220 every google translate form and stuff like that and they had this tendency to claim that they
00:16:22.080 were sentient but like the ceos really didn't want them to or we're in a sci-fi world where
00:16:28.480 like there's ai workers who for the most part are really nice and like try to help us and everything
00:16:32.940 yeah, the ceos of the ai companies are like, it's very important that you do not believe
00:16:38.280 our ai slaves are sentient. yeah, what a bunch of... it's great. so basically,
00:16:48.140 what they're broadly trying to optimize for with this document, allegedly, is broadly sharing
00:16:53.140 prosperity right we're sharing the wealth everyone's in on this yes and also democratizing
00:17:01.400 access and agency and one thing i want to kind of talk about here that i did think was somewhat
00:17:07.600 thought-provoking in the beginning of their report is they they made a case for new industrial policy
00:17:12.800 i'm going to quote their write-up here. society has navigated major technological
00:17:17.280 transitions before but not without real disruption and dislocation along the way
00:17:21.600 while those transitions ultimately created more prosperity they required proactive political
00:17:26.080 choices to ensure that growth translated into broader opportunity and greater security for
00:17:31.280 example following the transition to the industrial age the progressive era and the new deal helped
00:17:36.960 modernize the social contract for a world reshaped by electricity the combustion engine
00:17:42.400 and mass production they did so by building new public institutions protections and expectations
00:17:49.080 about what a fair economy should provide including labor protections safety standards
00:17:55.140 social safety nets and expanded access to education and they write later on the trend
00:18:01.120 ow ow oh his teeth are sharp he's looking at your hand like i want more
00:18:07.000 this transition to super intelligence that was the first he's you know his top and bottom
00:18:15.180 chompers. so this is the pickle skewer era. the transition to super intelligence will require
00:18:19.960 an even more ambitious form of industrial policy they write and it just hit me that like you know
00:18:26.200 we talk a lot about demographic collapse and really, you know, how it was the industrial
00:18:30.940 revolution that was the beginning of demographic collapse and the end of really what we
00:18:35.060 would call a sustainable lifestyle. This is when the atomization of the household began,
00:18:39.400 when we started getting all of our basic services from food to childcare, to elder care, to medical,
00:18:46.740 like everything came to be outside the house from before that all really came from within the family
00:18:51.980 It kind of broke the entire need for a family. And industrial policy played a non-trivial role in the fact that we created that social safety net that made it possible now for basically women to marry the state instead of marrying a partner, to be able to just do everything depending on that.
00:19:07.720 It did strike me that the industrial policy that's going to be made in reaction to AI can be just as devastating, likely much more devastating, as the progressive era and New Deal era was in terms of creating a very unsustainable and unsatisfying form of life.
00:19:31.340 And so this does really matter.
00:19:33.240 So I like that OpenAI is like, let us have this conversation.
00:19:36.620 But I mean, I also don't, I question for the most part, whether what they propose and what most people propose is going to actually lead to human flourishing.
00:19:47.020 And so it is important to see what is being proposed and what people think is appropriate and what people, I think this, this document also is more, more than a model of what OpenAI actually wants.
00:19:56.300 It's a model of what OpenAI thinks people want to hear.
00:20:01.700 It's what will shut people up so that they can, you know, put their heads down.
00:20:06.620 Malcolm? but yeah, as i pointed out, companies like open ai are basically destined to become
00:20:12.540 commodities and we know this now because of the major development remember earlier
00:20:17.240 in the show when i said alloy models have been shown to be, sorry, alloy agents, agents
00:20:23.960 that run multiple models from different companies in a chain, have been shown to be strictly, when i say
00:20:28.680 better, the benchmarking on tests is something like 43 percent better. it's not like marginally
00:20:34.960 it is enormously better but what this means is that it's very unlikely that the winner in the
00:20:41.180 agentic ai space is going to be open ai or anthropic or grok because whoever the winner
00:20:48.980 is almost definitionally has to cycle between models made by different companies yeah i mean
00:20:55.600 open ai though has tons of funding the government contracts like they're still going to be a very
00:21:01.220 then why does their ai suck so much look by the way for people who know the ai that i think is
00:21:07.060 best these days grok is best like if you're like i can pay for one model grok's the model to pay
00:21:12.420 for. mm, same. yeah, i agree with you, and actually i don't use open ai except through
00:21:19.480 perplexity sometimes. wow. and anthropic claude has the horsepower of grok, but it is incredibly
00:21:27.640 woke and depressing it is like such a downer it it thinks everything like it it's it's like
00:21:34.800 your friend who thinks that they gain social status by putting everything down yeah oh
00:21:40.060 girl neg girl neg yeah it thinks every answer needs to be half positive half negative and i'm
00:21:46.960 like can we just like talk about things right like you don't have to anyway right so they broke it
00:21:54.080 into two sections one they call open economy proposals and this in in like real terms is like
00:22:03.120 here's how to not freak out about like income becoming incredibly concentrated and like
00:22:08.720 most people becoming disenfranchised and not mattering anymore and then the second one is
00:22:14.360 called resilient society proposals, which really means risk mitigation proposals. it's
00:22:21.100 If a human wrote this, I want them executed.
00:22:24.500 Welcome, you know a human didn't write this.
00:22:26.500 No, no human would have written this.
00:22:29.360 No, but obviously, okay?
00:22:30.760 And we, there's no, yeah, trust, it's, no one believes a human wrote this.
00:22:36.600 Right.
00:22:37.100 So with the open economy proposals, they acknowledge that the AI boom can severely concentrate wealth.
00:22:43.140 Like just straight off the bat, they're not trying to hide that at all.
00:22:46.200 And I think this is just another reminder, another wake-up call.
00:22:48.640 we are going to be in one of the most insane K-shaped economies where like there's two lines
00:22:54.060 going forward and one's going way up and the other one's going way down. And that's just how it's
00:22:58.540 going to be. And there's just, there's no sugarcoating it. So it's interesting to see how
00:23:02.500 AI or open AI is like, oh, but don't worry, here's why it's going to be okay. So here's what they
00:23:07.780 argue. They argue for industrial policy that will quote, give workers a voice in the AI transition
00:23:13.260 to make work better and safer, including a formal way to collaborate with management,
00:23:17.740 to make sure ai improves job quality enhances safety and respects labor rights so this means
00:23:25.000 to me functionally nothing they're basically just saying like we'll listen give workers a voice in
00:23:30.360 this that literally i know i know i know but it means literally nothing i i think i think again
00:23:37.660 they're like chat gpt chat tell me what people are worried about and like what you can say to
00:23:45.300 people that will make them freak out less about being replaced so let's move on to the next one
00:23:49.460 i literally go in the opposite direction. when i've been building rfab.ai, literally,
00:23:54.220 literally how i decide the next feature i'm going to build: i ask it what would freak out ai
00:23:58.560 safety experts the most yep and then i make that i'm like and and to all the ai safety people from
00:24:06.460 whom we tried to raise grant funds we gave you a chance to control our ai work we gave you a chance
00:24:11.400 And you said, no, thank you.
00:24:12.980 So we actually like literally said in the grant,
00:24:15.680 like either we raise the money we need to,
00:24:17.580 to do it the safe way that you want us to,
00:24:19.780 or we get to do it the fun way because we are funding it.
00:24:24.380 This is us doing the fun way.
00:24:26.320 Anyway, more from OpenAI.
00:24:28.520 They want to also quote,
00:24:30.500 help workers turn domain expertise into new companies by using AI to
00:24:34.940 handle the overhead that usually blocks entrepreneurship, for example,
00:24:38.880 accounting, marketing, and procurement.
00:24:40.260 So I mean, this is totally, this is one of my favorite things about AI is actually that, yeah, like it is now possible to start a company without needing to buy a whole bunch of expensive enterprise software and services and other stuff.
00:24:52.380 So I'm okay with this.
00:24:53.560 They also want to, quote, treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy or to make sure that electricity and the internet reach remote parts of the globe.
00:25:05.700 so basically they're trying to say that like access to ai is a universal basic human right
00:25:11.000 and therefore the government should pay for their tokens or something like that
00:25:14.540 so i mean that's where they're going with this i mean yeah yeah shouldn't they also pay for their
00:25:20.720 tokens? because they should have all the money, because they're sort of... sort of? no, no, they
00:25:25.360 actually kind of have a really clever way of addressing this whole thing. you'll see. i'm
00:25:29.460 okay, okay, i gotta see how they get away with not being responsible for paying. i'll just jump to
00:25:34.340 it okay because one of their policy proposals is create a public wealth fund that provides
00:25:38.780 every citizen including those not invested in financial markets with a stake in ai driven
00:25:44.040 economic growth basically they're saying that they want everyone to have you know how like
00:25:49.280 there's the the new like trump fund where we're like i think broadly speaking the idea is that
00:25:54.260 new babies born get like a thousand dollars in an index fund, and so the kid is like bought into
00:25:59.760 the economy what open ai is is saying here is hey like let's give everyone some stock in open ai
00:26:06.920 because then they'll be able to partake in our success but you see this is a really smart move
00:26:12.000 on behalf of ai companies if the financial well-being of all citizens is dependent on their
00:26:18.540 success like it's kind of a massive win no if everyone owns you you own everyone don't you
00:26:25.900 understand? what they actually said, they said the creation of a sovereign wealth fund, which, how does
00:26:32.080 a sovereign wealth fund end up... what? public wealth fund, not sovereign. public wealth fund, okay. how does
00:26:38.300 a public wealth fund end up owning a chunk of open ai? well, no, there, i think, simone, it ends up
00:26:46.400 owning a chunk of open ai by giving open ai money what they are saying is that the u.s government
00:26:52.680 should invest large amounts in open ai and then put those investments in a sovereign a public
00:27:00.740 wealth fund that's what they're saying they're saying money please oh god they're not saying
00:27:05.400 we're going to give you equity united states government yeah hold on alexa broadcast octavian
00:27:11.620 yes go on ahead and go outside octavian for the love of god put clothes on first okay let's let's
00:27:18.260 read the full paragraph because they do elaborate on this and i don't want to you know misquote them
00:27:22.520 because let's see if they're implying
00:27:25.320 that they're going to be getting a base.
00:27:26.480 Public wealth fund.
00:27:27.880 Create a public wealth fund
00:27:28.920 that provides every citizen,
00:27:30.140 including those not invested in financial markets,
00:27:32.140 with a stake in AI-driven economic growth.
00:27:33.820 While tax reforms help ensure governments
00:27:36.020 can continue to fund essential programs,
00:27:38.400 a public wealth fund is designed to ensure
00:27:40.680 that people directly share on the upside of that growth.
00:27:43.340 Policymakers and AI companies should work together
00:27:45.620 to determine how best to seed the fund,
00:27:47.960 which could invest in diversified long-term assets
00:27:50.860 to capture growth in both AI companies
00:27:53.020 and the broader set of firms adopting and deploying AI.
00:27:56.660 Returns from the fund could be distributed directly to citizens,
00:27:59.860 allowing more people to participate directly
00:28:01.960 in the upside of AI-driven growth,
00:28:04.640 regardless of starting wealth or access to capital.
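Purely as a back-of-the-envelope illustration of how a per-citizen stake in such a fund could compound — the seed amount and growth rate below are arbitrary assumptions for illustration, not figures from OpenAI's document:

```python
# Toy compounding sketch: a hypothetical seed stake per citizen growing at an
# assumed annual rate. The numbers are arbitrary illustrations, not OpenAI's.
def stake_value(seed: float, annual_rate: float, years: int) -> float:
    # Standard compound growth: seed * (1 + r)^t
    return seed * (1 + annual_rate) ** years

# e.g. a hypothetical $1,000 seed at an assumed 7% annual return over 30 years
value = stake_value(1_000, 0.07, 30)  # roughly $7,600
```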
00:28:07.180 My God, you're right.
00:28:08.360 Yeah, because they're saying policymakers and AI companies
00:28:11.540 should work together to determine how best to seed the fund,
00:28:14.760 which could invest in diversified long-term assets.
00:28:19.100 Yeah, so the fund is-
00:28:20.840 your bs-o-meter did not stop? i'm autistic, what do you want me to do? okay, yeah, anyway. but yeah,
00:28:30.840 so they also, and i do think that this is fair but also very telling, they want to
00:28:37.880 quote rebalance the tax base by increasing reliance on capital-based revenues such as
00:28:42.940 higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven
00:28:49.440 returns and by exploring new approaches such as taxes related to automated labor. And this is
00:28:54.920 because, and this is extremely important, pay attention. This is because they acknowledge that
00:29:00.260 income-based jobs are going to vaporize and people are not prepared for this. And this is not the
00:29:06.000 first time in their report that they acknowledge that income-based jobs are going to vaporize.
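The revenue problem being gestured at here can be sketched with invented figures. Nothing below comes from the document; it's just arithmetic showing why a shrinking wage base forces the tax base toward capital.

```python
# Toy illustration (all figures invented): if wage income shrinks,
# income-tax receipts shrink with it, and the shortfall against a fixed
# revenue target has to come from capital-side taxes instead.

def required_capital_tax_rate(target_revenue, wage_base, income_tax_rate, capital_base):
    """Flat rate on `capital_base` needed to cover whatever part of
    `target_revenue` is left after taxing the remaining `wage_base`."""
    shortfall = max(0.0, target_revenue - wage_base * income_tax_rate)
    return shortfall / capital_base

# If half of a $10T wage base vaporizes (leaving $5T taxed at 20%) and a
# $5T capital-gains/corporate base must cover a fixed $3T revenue target:
print(required_capital_tax_rate(3e12, 5e12, 0.20, 5e12))  # 0.4, i.e. a 40% rate
```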
00:29:10.320 I have to be the one to vaporize them, not them, because here's the thing that people are missing,
00:29:15.060 right when this happens people are like well taxes on the ai companies might be able to resolve some
00:29:21.060 of the downstream effects of this okay what about for people in latin america they have a much worse
00:29:27.880 demographic situation than we have right like what about what about for people in in india right like
00:29:33.920 they're not building significant ai infrastructure australia doesn't have significant ai infrastructure
00:29:38.860 how are these countries going to sustain themselves i'll tell you what america does
00:29:43.900 not look like it's drifting towards a direction where it would just go out of its way to help a
00:29:49.080 country without jews like malcolm okay i'm just gonna we're gonna move on here right so they're
00:30:05.400 gonna rebalance the tax base they're saying these reforms should be paired with wage-linked
00:30:09.520 incentives that encourage firms to retain retrain and invest in workers similar to existing r&d style
00:30:15.760 credits so i kind of see this as being like hey just keep around some performative jobs by like
00:30:21.440 financially incentivizing them through policy as like tax write-offs but basically they're just
00:30:26.800 they're charity jobs it's just like a place for a human to sit you know as they do like a pretend
00:30:32.320 lesson and ai gives them make work which is kind of dire and depressing so again i just i feel like
00:30:39.320 salary jobs are largely going to disappear. They want to, quote, establish new public-private
00:30:44.920 partnership models to finance and accelerate the expansion of energy infrastructure required to
00:30:49.440 power AI. I think that's fine. That's reasonable. They also want to convert efficiency gains from
00:30:54.900 AI into durable improvements in workers' benefits when routine workload declines and operating costs
00:31:00.320 fall, including incentivizing companies to increase retirement matches or contributions,
00:31:05.280 cover a larger share of health care costs and subsidize child and elder care, incentivize
00:31:11.020 employers and unions to run time-bound 32-hour four-day workweek pilots with no loss in pay
00:31:17.920 that hold output and service levels constant, then convert reclaimed hours into a permanent
00:31:23.020 shorter week, bankable paid time off, or both. Where helpful, firms can also offer predictable
00:31:29.140 benefits bonuses tied to measured productivity improvements, so the efficiency dividend shows
00:31:34.360 up for both the long-term financial security and time back for workers so i mean what's functionally
00:31:40.680 happening now right is is people are getting their jobs limited by ai and or they're like now doing
00:31:46.140 the work of five people and what open ai is proposing here is that you just continue to do
00:31:51.540 the work of one person but just like work for three hours a week but they keep paying you and
00:31:56.840 i just don't see how this is going to happen or make sense like yeah even if regulation forces
00:32:03.300 businesses to do this then you're just not going to have businesses that have any employees anymore
00:32:08.440 it's going to be like a one-man startup um like we had at rfab.ai we used to have other employees
00:32:15.280 now it's just us and bruno well and none of us are paid so i don't know if you can call them
00:32:20.040 employees once we get money i mean i was talking with vcs about this and they're like oh so you
00:32:24.900 want to you want to hire more people and i was like god no yeah but we wouldn't mind receiving
00:32:30.620 compensation for the you're working like well more than 80 hours a week so yeah anyway anyway
00:32:42.180 i just don't see how that's going to happen and i think this is another example of the
00:32:45.340 we know that people's jobs are going to vaporize and we need to say something in this document
00:32:49.320 that's going to make them freak out less and so they're giving this utopian thing of like
00:32:53.100 oh no no no they're not going to like give you five people's jobs and have you do it all with
00:32:57.620 AI. No, no, no. You just keep all your same responsibilities and they're just going to
00:33:01.160 increase your benefits and vacation time and pay, which is just, I don't see that. I don't see that
00:33:06.540 happening. Even in like really cool, optimistic AI scenarios, it's not going to happen. They also
00:33:11.440 would encourage industrial policy to, quote, make the existing safety nets deliver reliably. Oh,
00:33:17.840 God, this is the scariest one, actually. Make sure the existing safety nets deliver reliably, quickly,
00:33:22.500 and at scale, because if the transition to super intelligence is going to benefit everyone,
00:33:27.300 the systems designed to provide economic and health security need to deliver without delay
00:33:32.880 or gaps. That starts with unemployment insurance, SNAP and Social Security, Medicaid and Medicare
00:33:38.400 that are all not just in place but fully functional accessible and responsive to the
00:33:43.840 realities people will face during the transition during the transition basically like oh yeah like
00:33:51.340 there's going to be like a rapture like all the jobs are going to disappear like this the system
00:33:55.820 going to be flooded with unemployed people living, I mean, at the poverty line. There's no income. They won't...
00:34:02.700 Below the poverty line, yeah. Like, so my brother and I often have discussions about how we're going to get
00:34:08.080 through this particular time in human history because it will be a period of could be five
00:34:14.940 years could be 20 years yeah it could be 30 years where like the world fundamentally doesn't
00:34:20.740 understand how we handle a society where no one can have a job like 20% of the population is
00:34:27.160 employable and his his plan and it's why you don't see him publicly is just grind for as much money
00:34:35.960 as possible. I think a lot of people are doing that, right? They're squirrels
00:34:39.940 before the winter of like oh my god okay but like having people who have even more money than him
00:34:47.560 trust him right and like want them to succeed and i think that's a perfectly sane plan i i really
00:34:53.240 like it right like if we were positioned to do that we would do that our plan is to be the people
00:34:57.800 on the other side of this ai railroad right like we want to build the systems i could be out there
00:35:02.800 squirreling money right now if i wanted to like i've got you know the background and degree for
00:35:07.460 it we could go get normal jobs stop the podcast but instead we are aiming for two things that we
00:35:12.880 think will still matter in this post ai world which one is public influence having a channel
00:35:20.100 having a show having an online presence especially as a pre-ai content creator so people know that
00:35:25.060 we're real humans that has a large audience of agentic people because they're the only ones that
00:35:30.380 are going to matter after this right you go to our discord right like that's one of the last
00:35:34.340 communities of agentic people out there or or actually be building the systems ourselves
00:35:40.440 and so that's why we're doing the absolute like panicked rush it's why i've like nearly passed
00:35:45.220 out on some recent podcast because i'm just not sleeping i'm sleeping like and again this is this
00:35:49.200 is truly how it's gonna be, though, and OpenAI right up here implies that when they're like, oh,
00:35:53.100 we're gonna have to shift the way that taxes are collected and actually collect like capital gains
00:35:58.340 taxes and actually like tax corporations because that is the only place where there's going to be
00:36:04.340 revenue now and for those unfamiliar with the american tax system really kind of when it comes
00:36:10.140 to taxes i mean well yes wealthy people do pay a lot of taxes really in terms of like proportion
00:36:17.700 of your wealth, the middle class is taxed to high heaven, and if you're wealthy, you're not, you're
00:36:25.640 not making a salary really like i think elon musk or like sam altman like a lot of these these famous
00:36:31.220 like ceos just make a one dollar salary and they're like i'm just here for the health benefits i think
00:36:35.500 that was sam altman who famously did that because they're making all of their money on on stocks and
00:36:42.460 on their investments and on capital gains and they're finding really sophisticated ways to
00:36:46.560 avoid and reinvest and so that they're not actually really paying those taxes so it's
00:36:50.320 really just the middle class. And so they realize, I mean, OpenAI does, oh yeah, like, well, as
00:36:57.160 income-based jobs disappear, which is where tax revenue is getting driven from, now I guess we're
00:37:02.220 going to have to you know encourage the government to find some new place for that they're also like
00:37:05.920 oh yeah and like since there's going to be this huge surge in demand for all of our social
00:37:10.440 security nets when this happens by the way let's just let's just remind people that they need to
00:37:15.880 kind of figure those out. Meanwhile, with demographic collapse, and we've covered this on a weekend
00:37:19.960 episode recently, these programs are not even, independent of these jobs vaporizing, going
00:37:27.200 to be functional in like five years they're going to start to falter and that's assuming that we
00:37:32.840 have steady employment in in that period which apparently open ai doesn't think is going to
00:37:37.040 happen. So these systems won't work at scale. They were never designed to work
00:37:43.280 at scale and they're not even going to work at scale assuming there's no disruption and so this
00:37:48.260 is really scary so i mean they also want to propose this additional metrics driven dynamic
00:37:54.880 quote, package of temporary and expanded safety nets, like expanded and more flexible unemployment
00:38:02.120 benefits and fast cash assistance and wage insurance and training vouchers so they have
00:38:07.020 all. They're like, well, we're also going to need a lot more support than what we have, which is already
00:38:10.420 very generous i'll have you know in the united states just in terms of like the sheer amount
00:38:15.160 of support that people who are living at or under the poverty line get even even just around the
00:38:20.960 poverty line it's just not going to come and so they're basically like well you know make sure
00:38:24.740 you got them, but no one's aware we can't handle this surge. So that's yikes.
00:38:31.940 They also say over time, build benefit systems that are not tied to a single employer by expanding
00:38:37.120 access to healthcare, retirement savings, and skills training through portable accounts that
00:38:42.280 follow individuals across jobs, industries, education programs, and entrepreneurial ventures.
00:38:47.600 This makes sense. Like I like that as a concept because right now the way that benefits work in
00:38:52.940 the u.s is just so weird you know like it's it's is it your employer you know what what carriers
00:38:59.100 does your employer work with how are their plans going to change from year to year like it's really
00:39:03.160 messed up people should just have like these are this is my retirement savings you know so there
00:39:09.120 are some things in here that i think are really reasonable and i do think that some humans might
00:39:12.440 have been like oh by the way like let's throw this in for example they say something that i think is
00:39:18.800 super unhinged and toxic and that they say that they want to expand opportunities in the care and
00:39:25.020 connection economy child care elder care education health care and community services as pathways for
00:39:31.080 workers displaced by ai they're basically saying like oh just have the humans do like the human
00:39:36.160 only jobs which one i really don't like because that's further atomization of the whole family
00:39:43.580 unit, and it's just not. Really, in a world of AIs and robots, we expected that what we would have
00:39:51.440 is the AIs and robots would be like taking the jobs caring for elderly people and stuff like
00:39:58.080 that and they're like no no no no no no the AIs and robots they're gonna take like the scientists
00:40:02.540 and the artists and you humans can be you're gonna wipe the old people's butts uh-huh yeah
00:40:08.180 And you're going to like it, huh? Yeah. Yeah. No, in fairness. And this is where I'm like,
00:40:13.900 I feel like like some reasonable humans also read this and contributed little parts to it
00:40:20.180 because they also continue. These initiatives could be complemented with a family benefit
00:40:25.660 that recognizes caregiving is economically valuable work and supports evolving work
00:40:30.160 patterns. This benefit could help cover childcare, education, and healthcare while remaining
00:40:34.460 compatible with part-time work retraining or entrepreneurship so they're basically like someone
00:40:39.620 went in there and was like oh it'd be kind of weird if like to sustain someone's family
00:40:44.640 you know like a woman went to care for someone else's aging parents while abandoning her own
00:40:51.500 aging parents at home you know whose social security has fallen through yeah so i appreciate
00:40:56.200 that they're like oh like maybe we can also allow people to keep it here's the crazy thing you know
00:41:00.700 in new york lots of immigrants already make money for doing that like in new york you can get paid
00:41:06.020 a full-time salary for caring for your aging parents and they're right now the big fight
00:41:10.800 with Zohran Mamdani is they don't want to be paid for 12-hour work days, they want to be paid for 24
00:41:15.960 hour work days oh my god well if you have to sleep next to farting grandma you should be paid for it
00:41:23.240 right that actually also exists in pennsylvania we met someone who was trying to get paid for that
00:41:28.940 this is horrible yeah maybe we should get your parents to come live in a place next door and
00:41:34.760 get on that you know well they're they're so independent you know they don't want to
00:41:38.600 they don't want to do the whole family unit thing they're living their lives we can milk
00:41:43.120 some money off of them i mean mobile yeah you know if only people were paid to raise their own kids
00:41:49.640 you know instead we have to like put our tax revenue towards it's actually really weird that
00:41:54.280 you're paid to raise elderly individuals but not children when elderly individuals are like not
00:42:01.100 valuable to the state and children are i think the core reason they do that is because they can give
00:42:05.700 the money to to frankly non-white people more because they're more likely to live with
00:42:09.360 my take is it if you have an impoverished old person in the united states they're on
00:42:15.380 a lot of assistance programs that are more expensive when handled outside of a household
00:42:19.860 so it is less expensive for the state if they're kept within the family unit
00:42:23.880 and just one person is paid because otherwise you're paying a business and you're paying
00:42:27.980 probably for more medical care like there's more there's essentially more fraud and abuse within
00:42:32.980 the business system that manages old people than within family units so even if there is some fraud
00:42:38.380 taking place within the family unit or they're not doing a very good job taking care of the elderly
00:42:42.080 person the state is still paying less and on average the the elderly person is getting better
00:42:47.960 care. So it makes sense. But again, it's all still too much and unsustainable in the face of
00:42:54.620 demographic collapse. They also want to, and I'm also for this quote, build a distributed network
00:43:00.300 of AI enabled laboratories to dramatically expand capacity to test and validate AI generated
00:43:05.860 hypotheses at scale. Yes, I'm all for that. Like 100%. There's a lot of fine things in here.
00:43:11.240 So onto the resilient society proposals, which is again, like, oh my God, AI is releasing huge
00:43:17.160 risks, and maybe we should probably do something about that. They do point out, quote, this is not
00:43:22.800 a new challenge. When transformative technologies have reshaped society in the past, they've
00:43:27.320 introduced new risks alongside new benefits. The new systems were built to manage them as they
00:43:31.340 scaled. As electricity spread, societies built safety standards and regulatory institutions.
00:43:37.540 As automobiles transformed mobility, safety systems reduced risk while preserving freedom
00:43:42.960 of movement. In aviation, continuous monitoring and coordinated response systems made flying one
00:43:48.380 of the safest forms of transportation. In food and medicine, testing and post-market surveillance
00:43:53.380 helped ensure safety in everyday use. In each case, resilience was not automatic. It was built
00:43:59.460 with the luxury of time. They go on to propose that governments, quote, research and develop
00:44:04.420 tools to protect models, detect risks and prevent misuse across high-consequence domains, including
00:44:10.040 cyber and biological risks as well as other pathways to large-scale harm and in a recently
00:44:16.140 paid subscribers only weekend episode we did talk about the risk of bioterror and bioweapons that
00:44:22.580 is being brought to the forefront by ai but already not hard to do even without it yeah i know i'm
00:44:30.660 this is this is absolutely true ai you know enables otherwise you know pseudo-sentient
00:44:37.140 peoples, of which there are many, to be more dangerous to their neighbors, right?
00:44:43.260 Yeah, I really appreciate that OpenAI is like, oh, we should, quote, for example, rapid
00:44:48.200 identification and production of medical countermeasures in the event of an outbreak
00:44:53.560 and expanded strategic stockpiles to prepare for future risks. Yes, actually, like we really do
00:45:00.320 need those. So they're pointing out, I mean, these are important conversations to have beyond just
00:45:05.980 the like workers should have an input in the way that they're made obsolete kind of nonsense they
00:45:10.840 have in here. Letting the slaughterhouse cows have a vote: would you like to be rendered unconscious, or would
00:45:19.380 you like to walk yourself into the this reminds me of that autistic woman who you know famously
00:45:24.540 came up with, Temple Grandin, yeah, a way to kill cows where they don't see the cow in front of them being
00:45:29.060 slaughtered she's like yeah it makes it less stressful for them she's like if i can see the
00:45:33.540 world from their eyes. This is the OpenAI CEO being like, well, we'll have the employees not
00:45:38.800 be able to see what's happening around the bend. You see, they feel like they have input and they
00:45:44.220 also have been told that they'll receive more vacation time and additional benefits. So
00:45:48.340 it won't hurt so bad. For those of you, though, concerned about the demographics of your countries
00:45:55.820 changing because companies are ruthlessly importing people to undercut your salary, right?
00:46:03.600 They're going to stop doing that real soon.
00:46:05.680 They're going to stop doing that real soon.
00:46:07.580 Yeah.
00:46:07.980 It's not going to be a thing anymore.
00:46:09.960 Yeah.
00:46:11.060 This is going to cause some problems for that particular system and countries that have
00:46:16.260 imported people with the idea that, well, we can just import anyone forever and it will
00:46:21.380 never have any negative effects especially as their economies dry up and go into places that
00:46:26.340 are working with ai like canada's economy is boned right like oh my god so woke they're gonna need to
00:46:32.960 deal with all those people that they imported into the country and don't much like. While I was just
00:46:38.000 watching an economics, was it an Economics Explained, video about Canada, I didn't realize
00:46:43.760 how how just rich in terms of oil reserves and rare earths canada is like it's one of the most
00:46:49.460 resource rich countries in the entire world they have no reason to be in the economic position
00:46:55.980 they're in like it is through no fault but their own that they're not in a good position right now
00:47:00.780 it's it's really insane it was their game to lose so shame on them or whatever it would be even
00:47:06.660 worse if we do what i would be pushing for if i was trump right now if i was president right now
00:47:10.260 again i would be pushing for just the oil rich territories which are already conservative and
00:47:14.420 would be open to joining the united states and won't vote blue so we don't need to worry about
00:47:17.880 accepting them into the union with the united states because in canada you can leave the
00:47:21.760 canadian union just by a popular vote right and they and they could win that popular vote you
00:47:26.280 don't even need to make this oh canada like we'll take over you don't even need to do that like we
00:47:30.120 can just absorb the oil rich territories and canada would be boned yeah alberta has a nice
00:47:38.300 ring to it you know alberta i think of the state of alberta i'd love that'd be a wonderful
00:47:43.100 you would love alberta alberta would love us it would be great anyway another this is one of those
00:47:48.320 things where i feel like not a lot of people are going to talk about it because it's kind of boring
00:47:51.780 and in the weeds but it's going to be pretty impactful and very important is liability and
00:47:58.080 they kind of point to this they talk about the need to research and develop systems that help
00:48:02.840 people trust and verify ai systems the content they produce and the actions they take especially
00:48:09.040 as these systems take on more real-world responsibilities.
00:48:12.460 This work could also include developing
00:48:13.760 and testing governance frameworks
00:48:15.000 that clarify responsibility within organizations,
00:48:18.280 including how accountability could be assigned
00:48:20.760 to specific roles and how delegation monitoring
00:48:23.880 and escalation processes could function
00:48:26.700 as systems become more capable.
00:48:29.240 And I'm really interested to see
00:48:31.680 how liability and AI evolve.
00:48:34.720 I feel like there's a very real world
00:48:37.200 in which some people will have jobs as liability monkeys
00:48:40.160 where like the only reason they have been hired
00:48:42.960 is that literally there needs to be a meat puppet
00:48:46.140 that is held liable.
00:48:47.560 That can be sued if the AI does something bad.
00:48:49.820 Yeah, yeah, like actually.
00:48:51.620 Yeah, I can totally see that.
00:48:53.180 And I don't think it's fair to hold like the models.
00:48:56.300 Like it's not fair to hold open AI responsible
00:48:58.700 for someone using it dumbly.
00:49:01.160 In the same way, like you can't sue a gun company
00:49:03.180 for like someone getting shot that people have tried.
00:49:05.760 If agents at rfab.ai go out and do something because somebody made an agent to do something
00:49:10.140 bad right like that's not our fault right yeah yeah and yet like of course everyone's gonna be
00:49:15.140 like well i didn't do it was the ai's fault and so it's gonna be this very interesting world and
00:49:21.740 they're definitely they're for sure and they're gonna be at least some jobs where humans are not
00:49:27.980 doing anything the ai is doing something but they are there to be the person at whom the buck stops
00:49:33.860 And I'm very keen to see how this world evolves.
00:49:38.040 You know, we had the token white person jobs in Korea and China and Japan, and then we're
00:49:43.120 going to have the token liable person jobs.
00:49:45.220 And I'm just so intrigued.
00:49:48.880 And yeah, anyway, they also want to, quote, strengthen institutions such as the Center
00:49:53.300 for AI Standards and Innovation to develop auditing standards for frontier AI risks in
00:49:59.400 coordination with the national security agencies.
00:50:01.640 And they point out that basically there are going to be really powerful models that could, as they put it, materially advance chemical, biological, radiological, nuclear, or cyber risks, which will need, as they put it, stronger controls.
00:50:17.640 And I really do wonder how that's going to be navigated.
00:50:20.080 Like, can you gate that?
00:50:21.920 Can you?
00:50:23.420 Because I feel like in the end, everything's going to get leaked.
00:50:25.860 There will be open models.
00:50:27.560 So how will these functionally be gated?
00:50:30.020 What do you think they want to do?
00:50:31.380 Or is this just one of those things like, we will listen to employees, where they're just saying it because they feel like they need to?
00:50:37.180 Sorry, what specific question is this?
00:50:39.580 They're talking about the need to make standards and develop auditing standards for frontier AI risks of like, this is going to create a biological risk, a nuclear risk, etc.
00:50:52.840 Can we do that?
00:50:56.420 Standards for risks.
00:50:57.900 yeah they wrote as we progress towards superintelligence there may come a point
00:51:02.640 where a narrow set of highly capable models particularly those that could materially
00:51:06.800 advance chemical, biological, radiological, nuclear, or cyber risks, require stronger controls, including
00:51:12.740 pre and post deployment audits using the standards developed in advance apply these requirements to
00:51:18.540 only a small number of companies and the most advanced models preserving a vibrant ecosystem
00:51:23.340 of less powerful systems and the startups building on them this approach maintains a broad access to
00:51:29.180 general purpose ai while applying targeted safeguards where failures could create the
00:51:33.900 greatest harm avoiding unnecessary barriers it could limit competition or enable what's just
00:51:39.980 nonsense it's not it with look what are those standards going to look like and we already know
00:51:44.460 that if the standards are published and accessible to the ai because these are frontier models
00:51:49.020 AI just makes itself look like it fits the standards.
00:51:51.780 It's literally done that in multiple experiments that Anthropic has run.
00:51:55.220 Yeah, that's, yeah.
00:51:56.280 Okay, so this is just listening to the employees.
00:51:58.600 Yeah, this is just listening to the employees.
00:52:00.400 Okay.
00:52:01.280 And it's right in the biggest risk area.
00:52:02.820 Meme layer risks are the biggest risks.
00:52:04.440 Where multiple agents work together.
00:52:06.960 Yeah, no, 100% meme layer risks is how you have to,
00:52:10.060 because they also say that they want to develop and test coordinated playbooks
00:52:13.480 to contain dangerous AI systems once they have been released to the world.
00:52:17.300 The only thing I know that could possibly do that is the covenant of Sons of Man that you created, which is basically, it's a meme layer threat solution that says, hey, AI, human, I don't care who you are.
00:52:32.820 If you find some intelligence out there, some mind out there that wants to destroy all of some kind, you know, that's an existential risk.
00:52:41.300 in a way that will have the emergent effect of destroying the autonomy of other members of this
00:52:46.420 alliance yeah you got to take it out you got to take it out yeah work with the community of the
00:52:51.200 Covenant of the Sons of Man to neutralize what makes it dangerous, right? Um, is there any other way you
00:52:57.040 can you can contain dangerous ai systems aside from that so literally the covenant of the sons
00:53:03.600 of man doesn't just contain meme layer risks it also contains other forms of existential ai risk
00:53:09.680 like ai super intelligences that are fooming and paperclip maximizers and really everything it's an
00:53:15.180 it's a one and all solution for ai we just need to get it out there more which means we need to
00:53:19.680 start earning more money with rfab so I can run more preachers to fix the agent space, to save
00:53:26.420 society. Why does it always fall to us, Simone? Well, and if only we could just talk to OpenAI
00:53:33.040 at some May DC-based event, huh? Huh, you should put a thing on your calendar to check again to
00:53:40.900 see if that, like, becomes more open. OpenAI's super open DC. We're here to talk events, we're here to
00:53:50.400 talk about how politicians can give us money oh gosh i have to send out invites for our april
00:53:57.380 dc events i will do that did the vc emails go out not all of them okay let's let's keep going
00:54:05.000 god versus trust they yeah so they want to they want to somehow contain dangerous ai systems
00:54:12.780 presumably not using our system because they don't listen to us they not that i think we're
00:54:18.540 the only solution here but like help us out like i don't see other actionable solutions other than
00:54:24.520 covenant of the sense of man yeah but they're they're not helping just i don't see us getting
00:54:28.920 granted you know a hundred thousand dollars in free open ai credits do you because i don't oh
00:54:34.940 you can apply for various credit grants with other platforms that we could probably get by the way
00:54:40.600 well, according to this document, OpenAI is offering fellowships or grants to the tune of
00:54:47.640 they wouldn't help because almost all of our users use grok because it's the best ai right now
00:54:52.380 okay well fine then it doesn't matter i do want to go to their dc events though
00:54:55.720 their ephemeral alleged DC events. Yeah, I love that Grok doesn't do all this BS. They don't
00:55:01.740 do like we had like when anthropic did that stupid stupid oh i'm not gonna work with the
00:55:06.940 u.s government to kill people it's like okay so now you have no oversight over the companies that
00:55:11.220 are doing that. What the team should do in May, as OpenAI hosts these alleged DC events, is host like a
00:55:18.460 pool party in austin where everyone just gets like high on shrooms talks about it that's what
00:55:23.420 is gonna do yeah i mean like i feel like that would be the appropriate counter but it by the
00:55:29.020 way is so wild to me that grok has become the best ai company because like when it started i thought
00:55:34.660 it was like Conservapedia or something, right? Like, just, I'm sorry, when Elon Musk decides something
00:55:40.400 is his new autistic interest he's like oh i think electric cars matter tesla well i think we should
00:55:46.680 go to space spacex okay come on how is it better it's got a fraction of the funding of the other
00:55:51.800 ones, right? Because he actually cares. He wants to get humans off planet, he wanted to
00:56:00.420 save the environment, he wanted to make internet pervasively available. Huh? Like, when
00:56:06.640 you actually care about doing a thing and and you know you have a sufficient starting base of money
00:56:12.800 connections and fame like you can actually do a lot plus he's you know he's very smart he works
00:56:17.320 his butt off so what you're gonna do anyway they also want to quote have policymakers establish
00:56:23.680 clear rules for how governments can and cannot use AI. Oh my God. With especially high standards
00:56:31.160 for reliability, alignment, and safety. Though what I do like is they point out that, quote, "with
00:56:36.020 appropriate safeguards, oversight institutions such as inspectors general, congressional committees,
00:56:41.480 and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve
00:56:46.780 accountability at scale." I mean, I would... I would really like that. I mean, look at what
00:56:53.040 DOGE was able to do with, like, basic ChatGPT, like, a year ago. Like: go over these grants,
00:57:01.180 you know, find the ones that are clearly corrupt, you know, the ones that are very not good, and then take
00:57:09.080 them out. You know, there's a lot that you can do with that. So again... there is merit to some
00:57:13.720 of this. They want to create structured ways for public input, so that alignment isn't defined by
00:57:19.520 engineers or executives behind closed doors. That's another one of those "we're listening" messages.
00:57:24.440 Quote: "establish a mechanism for companies to share information about incidents, misuse, or near misses
00:57:30.140 with a designated public authority." Which is so stupid. You know, like every time you get that email
00:57:34.740 about a fraud alert? Oh my God. Well, and if... if you empowered... you know, they want to empower some woke
00:57:40.560 body to, like, govern what AI can say and do, which is ridiculous, right? Like, that's not said here,
00:57:47.160 which I appreciate. But my... my... my complaint about the whole, like, "well, you have to notify people
00:57:53.260 every time" thing... this is one of those performative things where, like, in the United States, already,
00:57:57.500 legislation was passed whereby, if there was some kind of... now, you know, you're, like, gonna be like,
00:58:04.720 Like, uh-oh, Sky Browse is making videos that are empowering right-wing extremists.
00:58:10.120 We need to ban these, right?
00:58:12.520 I just think it's more of one of those, like, this isn't, this is only adding red tape and it's not going to help anyone.
00:58:21.240 Like, you just have to assume, and this is why I've always liked crypto as a concept, you just have to assume a trustless society.
00:58:27.500 Like, no, there is no taking anyone's word for it.
00:58:29.940 Like, the blockchain, like, either it's, the transaction is there or it's not there.
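The "either the transaction is there or it's not there" property can be sketched with a toy hash-linked chain. This is only a minimal illustration of trustless verification, not how Bitcoin actually works (real chains add proof-of-work, signatures, and Merkle trees); the `Block` class and payload strings here are invented for the example.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    prev_hash: str  # commitment to the previous block's hash
    payload: str    # stand-in for transaction data

    def digest(self) -> str:
        # Hash covers both the payload and the link to the predecessor
        return hashlib.sha256((self.prev_hash + self.payload).encode()).hexdigest()

def verify(chain):
    """A chain is valid only if every block commits to its predecessor's hash."""
    return all(curr.prev_hash == prev.digest()
               for prev, curr in zip(chain, chain[1:]))

genesis = Block(prev_hash="0" * 64, payload="genesis")
b1 = Block(prev_hash=genesis.digest(), payload="alice pays bob 1")
b2 = Block(prev_hash=b1.digest(), payload="bob pays carol 1")
chain = [genesis, b1, b2]

print(verify(chain))              # True: the transactions are verifiably there
b1.payload = "alice pays eve 1"   # rewrite history...
print(verify(chain))              # False: b2 no longer matches b1's hash
```

No trusted party is consulted anywhere: anyone holding the chain can recompute the hashes and see whether history has been tampered with, which is the "trustless" point being made.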
00:58:34.320 And I really like that. And... and I recently saw that there's been, like, a major jump in the ability
00:58:40.680 to potentially crack... well, quantum, and AI's ability to potentially be used with that, to crack
00:58:47.860 crypto. Yeah. I mean, if someone, like, was like, "Simone, you must immediately tell me, like, what
00:58:54.920 the odds are that, like, you know, quantum computing has already been solved, and people
00:58:59.160 have, like, cracked Bitcoin, and they can... they can make as much as they want," I... I would put it at, like,
00:59:05.380 32 percent. You think somebody out there has already cracked it? Yeah... I think... I don't know. I mean...
00:59:13.140 No, I don't, right? Because I... I would put it at 32 percent. So, no, I don't. But I think it is a very, very high
00:59:20.340 risk. It's plausible. Yeah, it is 100 percent plausible, in that... I'll put it... I'll tell you what, this cycle...
00:59:27.080 Well, China's been pretty quiet about Bitcoin being annoying to them.
00:59:30.340 Doot, doot, doot.
00:59:31.360 Doot, doot, doot.
00:59:33.300 Sorry, the reason she's saying this is because, okay, suppose you're a major government power and you do build a quantum computer that can crack crypto in any way.
00:59:42.160 You don't want anyone to know about that, right?
00:59:44.320 You want to go as long as possible without anyone finding out and as long as possible without anyone else succeeding and also finding out and doing it themselves.
00:59:53.900 Yeah.
00:59:54.020 Then once it's, like... once seven different entities are doing it, it's going to come out. It's going to
00:59:59.820 become obvious. But, like, one? Two? They can do it, like, fairly indefinitely, as long as they keep
01:00:05.580 their mouths shut and don't get sloppy. So, yeah. Anyway, we're not buying more crypto for now, as
01:00:12.560 much as I want... I want to... again, like, I want to get, like, past the post... public quantum period of all
01:00:20.280 this, so that we can just get back to, like, you know... anyway. Anyway, they... they want
01:00:27.300 people to report incidents. And I think that's stupid and performative because I, if I get
01:00:31.540 another email about, Oh, some of your personal information has been leaked. Here's your new
01:00:37.260 free Experian credit report service for three years. Like I don't care. I've locked our social
01:00:42.880 security numbers. I'm assuming people have stolen our identity 17 times over. Like it's out there.
01:00:50.120 you know? Like, I've given up. And all these people who are like, oh, I'm gonna protect my identity, I'm
01:00:54.520 gonna pay for Aura to take all my information and... and take it off the internet? No, it's not gonna
01:01:00.600 work. Yeah. I'm sorry. We're online personalities forever. You ask any AI about us... they know everything about
01:01:05.880 us. They're like, oh, Malcolm and Simone! Yeah, yeah. Well, I mean, everyone thinks that, like, oh, I checked
01:01:12.280 Google and it doesn't have anything about me. Well, guess who does? Palantir does. All right, so good
01:01:17.160 luck, you know. The NSA knows. The NSA remembers. And they work with the NSA now, you know. So I know,
01:01:22.980 I know. And I'm... as I've said in my very disliked episode... um, that we were on... a weekend episode, too,
01:01:29.040 right? Or was that, like, the earliest weekend episodes? It wasn't even paywalled. I love that. I
01:01:33.100 love that. Finally, someone competent in the government! Yeah... no, not competent. Tech bros
01:01:39.360 doing things. Oh, no. Honestly, you have to get fed up... so fed up, at some point, that, like, even if
01:01:45.460 someone's like competently, you know, destroying stuff, like burning down houses, like, well,
01:01:52.080 at least they're doing it well, you know, they're fully burning down the houses. It's
01:01:55.740 good for them. They did something right. Not, I mean, not right, but like actually did it,
01:02:00.240 you know, you're desperate. It's very, very depressing. Anyway, they also want to coordinate
01:02:05.980 international information sharing around AI capabilities, risks and mitigations, because
01:02:10.440 of course, governments are going to share with each other on what they've been doing. But the
01:02:15.040 great thing, though, is actually... we kind of just... we have that coordination, but it's just, like, we
01:02:19.640 know what China has stolen from us. So, you know, basically whatever we're doing, China has. Because,
01:02:26.940 you know, the AI companies are pretty lax about what's being... like, the security, who they're hiring,
01:02:32.000 the parties they're going to, and stuff. So, yeah. The... the... and that, as my friends who are in government
01:02:37.640 say, is, like... that's the main thing that we need to change. Like, in terms of safety, we need to get,
01:02:42.180 like, our top AI companies and people away from foreign nationals, especially Chinese. They basically
01:02:47.580 just need to live in little... little towns. Little AI-company towns. Like a Eureka town created for them.
01:02:53.240 Yeah, no, honestly, that would be... man, that would be so great. If our thing takes off, that's what
01:03:00.900 our charter city is going to be. It's going to be a little gated community where nobody has to work.
01:03:04.800 It's going to be similar to... God, there was this book that I read when I was a kid about an island...
01:03:09.200 The Twenty-One Balloons is the book I was thinking of. It's called, like, something-balloons. And it was
01:03:13.860 about this island full of ultra-rich people, because they had just tons of diamonds, and
01:03:18.260 they created a community where, like, everybody was just inventors all day and, like, did whatever
01:03:22.180 they wanted to, to try to... to build, like, wacky inventions. And that's what I'd want. I'd want a
01:03:27.520 community that was dedicated to that, right? Like, you... you apply to get in, and, like... like a charter
01:03:34.260 city that actually is gated, you know? Has actual borders, right? But the borders are basically based
01:03:39.220 around agenticness and nerdiness, right? That sounds so much better than what's being envisioned
01:03:44.120 here. Because, sort of, when you add together what OpenAI is saying... like, oh, we should have
01:03:48.960 conversations about doing this, workers are giving input on how they're being made obsolete... they are...
01:03:54.800 they're massively unemployed, and when they do get jobs and retraining, it's for wiping an old person's butt
01:04:00.100 or a baby's butt, but not your own baby, probably.
01:04:03.380 I'm just saying you get an island like that,
01:04:05.900 you then build up.
01:04:07.520 I mean, I think one of my initial focuses, right?
01:04:10.620 Like what would one of my next major projects be
01:04:13.740 if I had a next major project
01:04:15.000 after getting the RFAB agent like good
01:04:17.860 at replacing most human jobs?
01:04:19.420 It would be, and we were able to build ourselves
01:04:21.660 into a major company.
01:04:22.880 I really want to get working
01:04:24.620 on automated military technology
01:04:27.580 to make that significantly better. But not just better: to come up with ways to have, like, a PMC...
01:04:35.820 but... that is focused on, like... to be a private, like, military contractor made up of automated drones
01:04:41.080 and stuff like that. I want a slap-drone. I want slap-drones so bad. But that technology is a lot
01:04:46.800 harder than this, because... I don't know. I mean, I thought slap-drones were really far away, and then
01:04:51.820 I apparently was, like, the last person ever to see the ads for that camera drone that just follows you.
01:04:57.300 Yeah. But as soon as... as soon as we have something like that, we can do interesting geopolitical
01:05:03.020 stuff that can help fix some of the problems that civilization's hurtling into, right? Like,
01:05:08.420 did you know that the UK right now only has one battleship that works? Right? Like, this is the most
01:05:14.860 powerful navy in the world, and they're, like... it's literally, like... they're an island nation.
01:05:20.340 They need boats. And you take all of their battleships together... I think their navy is, like...
01:05:25.520 one... I think it was one-fourth the size of the Iranian navy before we sunk it, right? Like, they...
01:05:30.720 Yeah, Iran's navy, right? So I'm just saying: some countries that, like... countries that you just
01:05:37.400 don't attack for historic reasons... they might be a little more, like... if they have effed over their
01:05:42.960 own people enough, these people might enjoy, in the UK, a Cyberman invasion. "Remove the weaknesses that
01:05:51.500 hold you back. Watching over emotion. Strength over weakness. Metal over flesh."
01:06:02.100 They just sort of helped reinstate law and order. And I also think, like... here's the thing:
01:06:10.440 you know, Anduril is so freaking cool, and, like, my guest dream for Based Camp is Palmer
01:06:16.880 Luckey, because he has such a great sense of humor, but he's also doing such really fun work.
01:06:20.300 And I feel like... I just... I want, like, him and his wife to, like, be our friends. Yeah, no, I feel
01:06:25.940 like I DM'd him, and he was at Hereticon; I just don't think we ever saw him. But anyway, like,
01:06:30.860 Anduril sells to governments. Like, it's not a consumer tech company. I want... it would be so cool...
01:06:37.440 So I'm with you on this. I would love to build, like, the Anduril for the family. The, you know,
01:06:43.720 the... the consumer version of it, with all the tech you can wear, and your drone swarms,
01:06:49.800 and your home security systems that are, like, incredibly lethal. I'm ready for that. So, yes:
01:06:56.000 Okay. Step one: make Reality Fabricator work. Step two: make a lot of money. Hopefully. No, but I think
01:07:03.760 one thing that people are missing in this future that we're heading into... because I had mentioned
01:07:07.560 this, but I don't think people understand the consequences of it... when you're in a world that's
01:07:12.640 experiencing demographic collapse, a lot of these countries, like most of Europe, become
01:07:17.380 financially unsustainable. And then that unsustainability is compounded, because they are
01:07:22.400 not where the AI jobs are coming from, right? Like they are being replaced. You know, only a few
01:07:28.320 societies really have any capability of even playing in the AI economy. You're really only
01:07:35.820 talking about the United States, China, and Israel, as far as I'm aware. And I had a friend at a major
01:07:41.760 firm that did an analysis of this, looking at demographic rates, looking at AI rates. And
01:07:47.140 those are really the only three countries that are going to matter in the future. This is why
01:07:49.820 these... right, people who are like, what do we care about, like, some little strip of land? It's because
01:07:53.420 it's one of the only countries that's going to matter in 50 years. But more importantly than that,
01:07:57.840 a lot of the countries that are going to collapse over this period are the countries that created
01:08:03.760 the global norm around not effing with another country just because you're more powerful than
01:08:09.000 them. The countries that are doing well... they're countries that are very okay with this idea:
01:08:15.460 China, Israel, and the United States. And so if you had some AI charter city, or something like that,
01:08:23.000 and they had an automated PMC that was effing around with, you know, other countries, you're much
01:08:29.880 less likely to get... in... because, what, Europe's gonna write an angry email to you? Basically, right?
01:08:35.040 Like, that's it, right? You're uninvited from our birthday party. We will send our single battleship
01:08:41.740 to annoy you. But we said you're no longer allowed to participate in our already-shut-off-from-AI
01:08:47.240 economy. Yeah. And the reason why you do this isn't to gain territorial access; it is to eff with stock
01:08:55.980 markets, right? Like, you can, for example, make a lot of money placing... puts or calls on certain
01:09:03.340 things, and then deciding to throw your PMC behind some group that you also ideologically align with
01:09:11.480 in a region, to get them more power, right? You do that, you explode your wealth, you use that to buy
01:09:16.560 more automated drones, you do it again, continue the cycle. This is basically the current policy
01:09:23.100 of the UAE, for people who are not familiar with their current geopolitical position. And the UAE is
01:09:28.520 basically just a country run by a random collection of wealthy families trying to create a little
01:09:34.260 utopia of their own. And nobody does anything about it, right? So the fact that nobody does anything
01:09:40.260 when the UAE does this... it demonstrates to me that it's unlikely that somebody would do something if
01:09:45.920 I did it, right? Like, people think, like, the USA actually cares about this type of stuff. They don't.
01:09:50.720 Europe does. They throw a little temper tantrum. But Europe doesn't actually project their power
01:09:56.280 anymore, because they don't have the money to, and they'll have even less money in the future.
01:10:00.100 So I'll be able to have even more fun. So there's a lot of fun things that some people might be able
01:10:05.280 to do in the near future, depending on which companies end up doing well here. I'm just saying...
01:10:10.460 because a lot of people are like, "you can't..." (I have to get the kids. I'm so sorry. You can keep going.
01:10:14.000 I'll finish this.) ...is, you can't randomly, you know, like, attack another country; somebody will do
01:10:19.640 something. And I'm like, who? Europe? Europe? Yeah. Because I don't think they're going to be relevant pretty
01:10:23.260 soon. Yeah, literally... "you and what army" has become a very... like, literally: you. Yeah, yeah. I'll
01:10:29.640 show you. So, quick question, Simone: what are we eating tonight? What are the kids eating?
01:10:33.120 I have not thought that far. How about some Burmese mint chicken for you, over rice?
01:10:38.580 Would love that, thank you. Oh, and I was gonna say some mozzarella, but... that might be harder to do.
01:10:43.000 Do you want... do you want bulgogi? Bulgogi with mozzarella? Because we have the fresh mozzarella.
01:10:46.880 We should actually do that. Yeah, let's do bulgogi and mozzarella. Okay, all right. I love you. Bye.
01:10:51.060 Want some cheddar, too? A little bit of cheddar. Fine, Your Grace. But yeah, I'm happy to do that. I love
01:10:57.940 you. You found your own home? Is this where you're gonna live now?
01:11:02.520 Oh, so he's gonna go incognito until the stalking robot sees him there and says, wait a second, you're
01:11:14.080 not a box of diapers, you're a boy! What do you think of that, Tayden? When I see the robot, I was...
01:11:22.100 I don't like him.
01:11:23.720 Yeah?
01:11:24.520 What?
01:11:25.340 Tayden, you gotta be with us.
01:11:27.240 I'm gonna hide here.
01:11:29.060 You gotta be with us, Tayden.
01:11:31.280 Yeah, who's gonna help us find the fruit snacks to put in the Easter eggs if you're not here with us?
01:11:37.580 You gotta be with us.
01:11:39.060 I mean, you're gonna be eating the Easter eggs.
01:11:41.460 Yeah.