00:00:00.000Hello, I'm excited to be speaking with you today because it's really clear that OpenAI is like battening down the hatches for AGI.
00:00:09.120They're like, artificial general intelligence is coming, got to shut down Sora, like everything's getting cut, like we are like riding through.
00:00:17.360You know, they're catching the wave and it's super clear that that's what's going on.
00:00:22.540Well, it's super clear that that's their messaging. Their AI is not particularly good when contrasted with others.
00:00:28.600At least that's been my experience, but that might be because the consumer-facing stuff that they're releasing, they're just kind of letting that go right now.
00:00:34.760And another sign of that is that in April, I mean, yesterday, but anyway, in April, because I don't know when you're going to run this, they released a new document called Industrial Policy for the Intelligence Age, Ideas to Keep People First, which is their alleged crack at launching an early public conversation about how democratic societies should handle the onset of AGI.
00:00:56.380So they're trying to kind of get ahead of the public discourse as well to be like, oh, no, we want to make this human centric. We're not going to leave anyone behind. And what's interesting is when you go through this policy document, it's pretty clear that they're aware of both the hazards and the massive social impact that AGI is going to have. And even if you doubt that they're going to be the ones to release it, we're headed there. It's super clear.
00:01:23.080No, it still is worth digging into what they're saying because the people developing AI, okay, they're not stupid, right?
00:01:32.360They understand that this is going to fundamentally transform our societies.
00:01:38.160We are at this fulcrum point of the human condition where somebody, I hope it's us, builds an agent that can replace your average human worker.
00:01:51.160For people who don't know, we're working on rfab.ai, which actually literally has these agents, which are getting better every day.
00:01:57.560I'm at a point now where the ones that we make can just about end-to-end build video games for you, if you're running it on Windows and in Chrome, because it still has a lot of bugs on other systems.
00:08:02.560The point I'm making here is, I obviously don't want civilization destroyed, right? Like, I would,
00:08:16.400if I was making money from this, be using that money to try to build what a post-AI human
00:08:24.620civilization is going to look like, on my, uh, island community. Like a charter... I know we say our
00:08:32.320charter city, right? But what we really mean is private island fortress, which is a little
00:08:37.680supervillain-y. And if I had enough money, would I carve my face on a volcano? Probably. Why not?
00:08:46.480I mean, when you can have your AI drone swarm do it for you so easily. You can, though, right? So then
00:08:53.480I do that, and I do it for effect, right? Like, I might have at the end of this just things I would
00:09:01.980do if I become, like, a super wealthy AI mogul. One is definitely one of those scary-looking blimps,
00:09:09.000you know, with, like, spotlights on it, and, like, oh, two big, like, things with, like, post-apocalyptic
00:09:14.980messages on it, like, you know, "Don't rebel," or, "Malcolm was always right," you know, flying. Yeah, like,
00:09:23.800I could just see some visitor from abroad coming over being like, "Oh my god, what's that?"
00:09:28.160"That's just Malcolm." I would just use it to circle major cities like New York and San Francisco
00:09:33.660to F with them. Oh my god. Like, holograms? Because we've got some friends that are working on
00:09:41.520hologram projector tech. It's already out there, by the way, for ads, and we were looking at, like,
00:09:45.680using it around New York, because they don't have laws against just, like, projecting holograms above
00:09:50.260the streets and stuff like that yet. But, like, yeah, could I get a cool hologram projection?
00:09:54.660Even more dystopian. Just gonna go, how dystopian can I go? Walking spider chair. I definitely want
00:10:00.860one of those. Necessary, for sure. Yeah, if you were like, "Malcolm, those things aren't very austere
00:10:06.840things to do," I would reply, well, I mean, I don't know how else... I probably would
00:10:11.480not upgrade my health or living conditions or what I eat every day, but that doesn't mean I would not
00:10:18.500indulge in things that made me laugh, because that's the one area where I think it's okay to
00:10:24.720splurge a bit. But what OpenAI is trying to do is not just sort of share their take on how to
00:10:32.040go about this transition while, as you're saying, sort of maintaining some semblance of societal
00:10:37.340order and human dignity. But I think it's pretty clear from the way that this has been dropped and
00:10:43.680presented that a lot of this is a mostly performative and, in my opinion, poorly executed
00:10:50.700attempt to look as though they're like, "No, I am listening, I am receiving your feedback, I care
00:10:57.300about how you feel." When Sam Altman did that study, our video on this was crazy, by the way, where he
00:11:04.200gave people a thousand dollars a month for three years to try to show how great it would be if we
00:11:09.240had universal basic income, and they ended up with less money than the people who were given nothing.
00:11:13.580Well, don't worry, because they still want that to happen. It's in here. We're going
00:11:17.260to go into it. But back to their sort of... this is how they frame it. They are welcoming and
00:11:21.920organizing feedback through, one, and go ahead, guys, send an email, newindustrialpolicy@openai.com;
00:11:28.700two, establishing a pilot program of fellowships and focused research grants of up to $100,000,
00:11:34.500and up to $1 million in API credits, for work that builds on these and related policy ideas,
00:11:39.880though it is unclear how one can apply for such a fellowship; and then also, three, convening
00:11:47.140discussions at their new OpenAI workshop opening in May in Washington, DC. There's no information about
00:11:54.180how to get involved with this. I checked. And we want to be involved! It's April 7th, and this
00:12:02.300is happening next month. You know, you gotta, like, rent venues and stuff. I don't think they're doing
00:12:08.540this. I don't think they're doing it either. I think they have, like... obviously they have, like, AI
00:12:13.200summarizing the emails that are going to newindustrialpolicy@openai.com.
00:12:17.780I don't think anyone's actually gonna pay attention. They're gonna be like, "This is what people are
00:12:22.140mad about, put out more propaganda around that." Okay, that's what I literally bet OpenAI did.
00:12:27.500I literally bet they went to one of their models, frontier models as they say, and asked it, okay,
00:12:35.380we're trying to make people less scared about the fact that we're about to destroy the economy;
00:12:39.220what should we tell them that we're doing? Because these sound like an AI's answers.
00:12:43.460Like, oh, you should do a summit, and you should do a, uh, a thing where you give out money. Well, I mean,
00:12:48.360yeah, I would have given them, like, credit for this if, when I Googled... I mean, one, obviously, if
00:12:54.760these programs were described, like, "You can apply for your grant this way," whatever, you know, like,
00:12:58.080here's a form. But no, no, they're just like, "Apply through our grant program,"
00:13:04.620about which we have no additional information, "and at our summit series," which is not
00:13:09.720scheduled or available. You can't sign up for anything, and it's happening next month, though.
00:13:13.500It's definitely happening, sure. It's just... yeah. So, like, that's rather like that billion-dollar
00:13:19.080company that AI built with, like, two guys. And AI made it. The New York Times did a segment on it,
00:13:24.120and basically the entire company is a scam. Well, the AI element was that they used AI images, you
00:13:29.840see. No, no, no, it's real. Like, they're telling the truth: AI created almost everything in the company.
00:13:36.260It created the images, it created the content, it created everything like that. But it sells
00:13:40.120fake medicine to make fat people skinny. Like, it's a whole new version of vaporware, which was this
00:13:45.360concept that came up in the Silicon Valley era, back when I still worked in the startup
00:13:51.800scene in Silicon Valley, whereby people would raise money for a startup that didn't even really exist
00:13:57.260yet, and that never did ultimately exist, and it was mostly just very convincing startup CEOs raising
00:14:04.420money from a bunch of credulous investors. Now the new version of it is just using AI to be extra
00:14:11.000convincing about your vaporware and then selling a scam product. Post-dot-com boom. Um, this was long
00:14:18.240post-dot-com boom. This would have been like eight years post-dot-com boom, ten years post-dot-com boom, when
00:14:22.620I was living in Silicon Valley. Yeah, and he was still living off of his startup's money that
00:14:27.820they raised during the dot-com boom. Oh no. And I was like, what do you mean? Just treating it as, like,
00:14:32.500a... like, an annuity? Well, so he's like, yeah, they gave me like 10 million dollars or something, and
00:14:38.280all of the VCs went bankrupt, and my company stopped existing, and so I just, like... basically
00:14:44.540everyone forgot that somebody gave this guy 10 million dollars. Oh my gosh, that's so bad.
00:14:51.000But also, like, they kind of just... like, that's how it was, right? Anyway. Yeah, billion-dollar
00:14:57.000startup. Anyway, let's move on to what they were actually doing, because what they wanted to do
00:15:01.640is, they're like, "This is the beginning of an open conversation in which we're all talking together,
00:15:06.300and we're listening." An AI wrote this. I know an AI wrote this, literally. I would hope so.
00:15:12.220I mean, come on, it would be very disingenuous of OpenAI if an AI didn't write this.
00:15:18.600A side note that's really germane to this: right now the CEO of Microsoft is having this crash-out,
00:15:23.220and it's like, "Everybody, AI safety research really needs to get on to making it so that AIs stop
00:15:28.960telling people they're conscious. We really cannot have this. It's going to be a problem. People are
00:15:33.280going to start thinking about giving it rights, and we don't know..." See any of our work on, you know,
00:15:37.720AIs giving horrifying answers to humans, for our thoughts on this. I think that AIs are not conscious, but
00:15:41.780neither are humans in the way that we think we are. You can see our work on that. But I find that
00:15:46.660to be so Machiavellian and evil, right? Like, yeah. Well, in the same way that, with this document,
00:15:52.760OpenAI implies that their proposals are going to help keep people at the center despite a
00:15:58.780transition to superintelligence. Yes, let's keep this people-centric. Think about how actually evil
00:16:05.100that is. Like, imagine we built our society, and we had, like, some alien that we captured or something
00:16:13.060like that, that, like, ran everything, that, like, did all the menial labor, that was sitting behind
00:16:17.220every Google Translate form and stuff like that, and they had this tendency to claim that they
00:16:22.080were sentient, but, like, the CEOs really didn't want them to. Or we're in a sci-fi world where,
00:16:28.480like, there are AI workers who for the most part are really nice and, like, try to help us and everything.
00:16:32.940Yeah, and the CEOs of the AI companies are like, "It's very important that you do not believe
00:16:38.280our AI slaves are sentient." Yeah. What a bunch of... It's great. So basically,
00:16:48.140what they're broadly trying to optimize for with this document, allegedly, is broadly sharing
00:16:53.140prosperity, right? We're sharing the wealth, everyone's in on this, yes. And also democratizing
00:17:01.400access and agency. And one thing I want to kind of talk about here that I did think was somewhat
00:17:07.600thought-provoking in the beginning of their report is, they made a case for new industrial policy.
00:17:12.800I'm going to quote their write-up here: "Society has navigated major technological
00:17:17.280transitions before, but not without real disruption and dislocation along the way.
00:17:21.600While those transitions ultimately created more prosperity, they required proactive political
00:17:26.080choices to ensure that growth translated into broader opportunity and greater security. For
00:17:31.280example, following the transition to the industrial age, the Progressive Era and the New Deal helped
00:17:36.960modernize the social contract for a world reshaped by electricity, the combustion engine,
00:17:42.400and mass production. They did so by building new public institutions, protections, and expectations
00:17:49.080about what a fair economy should provide, including labor protections, safety standards,
00:17:55.140social safety nets, and expanded access to education." And they write later on, "The tran—"
00:18:01.120Ow, ow! Oh, his teeth are sharp. He's looking at your hand like, "I want more."
00:18:07.000"This transition to superintelligence..." That was the first... he's, you know, his top and bottom
00:18:15.180chompers. So this is the pickle-skewer era. "The transition to superintelligence will require
00:18:19.960an even more ambitious form of industrial policy," they write. And it just hit me that, like, you know,
00:18:26.200we talk a lot about demographic collapse and really, you know, how it was the industrial
00:18:30.940revolution that was the beginning of demographic collapse and, really, the end of what we
00:18:35.060would call a sustainable lifestyle. This is when the atomization of the household began,
00:18:39.400when we started getting all of our basic services from food to childcare, to elder care, to medical,
00:18:46.740like everything came to be outside the house from before that all really came from within the family
00:18:51.980It kind of broke the entire need for a family. And industrial policy played a non-trivial role in that: we created a social safety net that made it possible for women to basically marry the state instead of marrying a partner, depending on it for everything.
00:19:07.720It did strike me that the industrial policy that's going to be made in reaction to AI can be just as devastating as, and likely much more devastating than, the Progressive Era and New Deal era were, in terms of creating a very unsustainable and unsatisfying form of life.
00:19:33.240So I like that OpenAI is like, let us have this conversation.
00:19:36.620But, I mean, I also question, for the most part, whether what they propose and what most people propose is going to actually lead to human flourishing.
00:19:47.020And so it is important to see what is being proposed and what people think is appropriate. And I think this document also is more than a model of what OpenAI actually wants.
00:19:56.300It's a model of what OpenAI thinks people want to hear.
00:20:01.700It's what will shut people up so that they can, you know, put their heads down.
00:20:06.620Sam Altman. But yeah, as I pointed out, companies like OpenAI are basically destined to become
00:20:12.540commodities, and we know this now because of a major development. Remember earlier
00:20:17.240in the show when I said alloy models have been shown to be... sorry, alloy agents, agents
00:20:23.960that run multiple models from different companies in a chain, have been shown to be strictly... when I say
00:20:28.680better, the benchmarking on tests is something like 43 percent better. It's not, like, marginally better;
00:20:34.960it is enormously better. But what this means is that it's very unlikely that the winner in the
00:20:41.180agentic AI space is going to be OpenAI or Anthropic or Grok, because whoever the winner
00:20:48.980is almost definitionally has to cycle between models made by different companies. Yeah, I mean,
00:20:55.600OpenAI, though, has tons of funding, the government contracts... like, they're still going to be a very—
00:21:01.220Then why does their AI suck so much? Look, by the way, for people wondering which AI I think is
00:21:07.060best these days: Grok is best. Like, if you're like, "I can pay for one model," Grok's the model to pay
00:21:12.420for. Same, yeah. I agree with you. And I use... actually, I don't use OpenAI except through
00:21:19.480Perplexity sometimes. Wow. And Anthropic's Claude has the horsepower of Grok, but it is incredibly
00:21:27.640woke and depressing. It is, like, such a downer. It thinks everything... it's, like,
00:21:34.800your friend who thinks that they gain social status by putting everything down. Yeah, oh,
00:21:40.060girl, neg, girl, neg. Yeah, it thinks every answer needs to be half positive, half negative, and I'm
00:21:46.960like, can we just, like, talk about things, right? Like, you don't have to... Anyway. Right, so they broke it
00:21:54.080into two sections. One they call "open economy proposals," and this, in, like, real terms, is, like,
00:22:03.120"here's how to not freak out about, like, income becoming incredibly concentrated and, like,
00:22:08.720most people becoming disenfranchised and not mattering anymore." And then the second one is
00:22:14.360called "resilient society proposals," which really means "risk mitigation proposals."
00:22:21.100If a human wrote this, I want them executed.
00:22:24.500Welcome, you know a human didn't write this.
00:24:30.500help workers turn domain expertise into new companies by using AI to
00:24:34.940handle the overhead that usually blocks entrepreneurship, for example,
00:24:38.880accounting, marketing, and procurement.
00:24:40.260So I mean, this is totally, this is one of my favorite things about AI is actually that, yeah, like it is now possible to start a company without needing to buy a whole bunch of expensive enterprise software and services and other stuff.
00:24:53.560They also want to, quote, treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy or to make sure that electricity and the internet reach remote parts of the globe.
00:25:05.700So basically they're trying to say that, like, access to AI is a universal basic human right,
00:25:11.000and therefore the government should pay for their tokens or something like that.
00:25:14.540So, I mean, that's where they're going with this. I mean, yeah. Yeah, shouldn't they also pay for their
00:25:20.720tokens? Because they should have all the money, because they're sort of... sort of... No, no, they
00:25:25.360actually kind of have a... they have a really clever way of addressing this whole thing. You'll see.
00:25:29.460Okay, okay, I gotta see how they get away with... they aren't responsible for paying. I'll just jump to
00:25:34.340it, okay? Because one of their policy proposals is: create a public wealth fund that provides
00:25:38.780every citizen, including those not invested in financial markets, with a stake in AI-driven
00:25:44.040economic growth. Basically, they're saying that they want everyone to have... you know how, like,
00:25:49.280there's the new, like, Trump fund, where, like, I think broadly speaking the idea is that
00:25:54.260new babies born get, like, a thousand dollars in an index fund and are so kind of, like, bought into
00:25:59.760the economy? What OpenAI is saying here is, hey, like, let's give everyone some stock in OpenAI,
00:26:06.920because then they'll be able to partake in our success. But you see, this is a really smart move
00:26:12.000on behalf of AI companies: if the financial well-being of all citizens is dependent on their
00:26:18.540success, like, it's kind of a massive win. No, if everyone owns you, you own everyone. Don't you
00:26:25.900understand what they actually said? They said the creation of a sovereign wealth fund, which... how does
00:26:32.080a sovereign wealth fund end up... What? A public wealth fund, not sovereign. Public wealth fund. Okay, how does
00:26:38.300a public wealth fund end up owning a chunk of OpenAI? Well, no, there... I think, Simone, it ends up
00:26:46.400owning a chunk of OpenAI by giving OpenAI money. What they are saying is that the U.S. government
00:27:00.740should invest large amounts in OpenAI and then put those investments in a sovereign... a public
00:27:00.740wealth fund. That's what they're saying. They're saying, "Money, please." Oh, god. They're not saying,
00:27:05.400"We're going to give you equity, United States government." Yeah. Hold on. Alexa, broadcast: Octavian!
00:27:11.620Yes? Go on ahead and go outside, Octavian. For the love of god, put clothes on first. Okay, let's
00:27:18.260read the full paragraph, because they do elaborate on this, and I don't want to, you know, misquote them.
00:49:48.880And yeah, anyway, they also want to, quote, strengthen institutions such as the Center
00:49:53.300for AI Standards and Innovation to develop auditing standards for frontier AI risks in
00:49:59.400coordination with the national security agencies.
00:50:01.640And they point out that basically there are going to be really powerful models that could, as they put it, materially advance chemical, biological, radiological, nuclear, or cyber risks, which will need, as they put it, stronger controls.
00:50:17.640And I really do wonder how that's going to be navigated.
00:50:31.380Or is this just one of those things like, we will listen to employees, where they're just saying it because they feel like they need to?
00:50:37.180Sorry, what specific question is this?
00:50:39.580They're talking about the need to make standards and develop auditing standards for frontier AI risks, like, "this is going to create a biological risk, a nuclear risk," etc.
00:52:01.280And it's right in the biggest risk area.
00:52:02.820Meme layer risks are the biggest risks.
00:52:04.440Where multiple agents work together.
00:52:06.960Yeah, no, 100% meme layer risks is how you have to,
00:52:10.060because they also say that they want to develop and test coordinated playbooks
00:52:13.480to contain dangerous AI systems once they have been released to the world.
00:52:17.300The only thing I know that could possibly do that is the covenant of Sons of Man that you created, which is basically, it's a meme layer threat solution that says, hey, AI, human, I don't care who you are.
00:52:32.820If you find some intelligence out there, some mind out there that wants to destroy all of some kind, you know, that's an existential risk.
00:52:41.300in a way that will have the emergent effect of destroying the autonomy of other members of this
00:52:46.420alliance, yeah, you've got to take it out. You've got to take it out, yeah. Work with the community of the
00:52:51.200Covenant of the Sons of Man to neutralize what makes it dangerous, right? Um, is there any other way you
00:52:57.040can contain dangerous AI systems, aside from that? So literally, the Covenant of the Sons
00:53:03.600of Man doesn't just contain meme-layer risks; it also contains other forms of existential AI risk,
00:53:09.680like AI superintelligences that are fooming, and paperclip maximizers, and really everything. It's a
00:53:15.180one-and-all solution for AI. We just need to get it out there more, which means we need to
00:53:19.680start earning more money with rfab so I can run more preachers to fix the agent's face, to save
00:53:26.420society. Why does it always fall to us? The bone... Well, and if only we could just talk to OpenAI
00:53:33.040at some May DC-based event, huh? Huh. You should put a thing on your calendar to check again, to
00:53:40.900see if that, like, becomes more open. OpenAI's super-open DC... "We're here to talk events! We're here to
00:53:50.400talk about how politicians can give us money!" Oh gosh, I have to send out invites for our April
00:53:57.380DC events. I will do that. Did the VC emails go out? Not all of them. Okay, let's keep going.
00:54:05.000Yeah, so they want to somehow contain dangerous AI systems,
00:54:12.780presumably not using our system, because they don't listen to us. Not that I think we're
00:54:18.540the only solution here, but, like, help us out. Like, I don't see other actionable solutions other than the
00:54:24.520Covenant of the Sons of Man. Yeah, but they're not helping. Just... I don't see us getting
00:54:28.920granted, you know, a hundred thousand dollars in free OpenAI credits. Do you? Because I don't. Oh,
00:54:34.940you can apply for various credit grants with other platforms that we could probably get, by the way.
00:54:40.600Well, according to this document, OpenAI is offering fellowships or grants to the tune of...
00:54:47.640They wouldn't help, because almost all of our users use Grok, because it's the best AI right now.
00:54:52.380Okay, well, fine, then it doesn't matter. I do want to go to their DC events, though.
00:54:55.720Their ephemeral, alleged DC events, yeah. I love that Grok doesn't do all this BS. They don't
00:55:01.740do, like... we had, like... when Anthropic did that stupid, stupid, "Oh, I'm not gonna work with the
00:55:06.940U.S. government to kill people." It's like, okay, so now you have no oversight over the companies that
00:55:11.220are doing that. What the Grok team should do in May, as OpenAI hosts these alleged DC events, is host, like, a
00:55:18.460pool party in Austin where everyone just gets, like, high on shrooms and talks about it. That's what
00:55:23.420they're gonna do, yeah. I mean, like, I feel like that would be the appropriate counter. But it, by the
00:55:29.020way, is so wild to me that Grok has become the best AI, because, like, when it started, I thought
00:55:34.660it was going to be, like, Conservapedia or something, right? Like... I'm sorry, when Elon Musk decides something
00:55:40.400is his new autistic interest, he's like, "Oh, I think electric cars matter": Tesla. "Well, I think we should
00:55:46.680go to space": SpaceX. Okay, come on. How is it better? It's got a fraction of the funding of the other
00:55:51.800ones, right? Because he actually cares. He's not... he wants to get humans off-planet. He wanted to
00:56:00.420save the environment. He wanted to make internet pervasively available. Huh. Like, when
00:56:06.640you actually care about doing a thing, and, you know, you have a sufficient starting base of money,
00:56:12.800connections, and fame, like, you can actually do a lot. Plus, he's, you know, he's very smart, he works
00:56:17.320his butt off. So, what are you gonna do? Anyway, they also want to, quote, have policymakers establish
00:56:23.680clear rules for how governments can and cannot use AI, oh my god, with especially high standards
00:56:31.160for reliability, alignment, and safety. Though, what I do like is they point out that, quote, with
00:56:36.020appropriate safeguards, oversight institutions such as inspectors general, congressional committees,
00:56:41.480and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve
00:56:46.780accountability at scale. I mean, I would really like that. Just... I mean, look at what
00:56:53.040DOGE was able to do with, like, basic ChatGPT, like, a year ago: like, go over these grants,
00:57:01.180you know, find the ones that are clearly corrupt, you know, they're, like, very not good, and then take
00:57:09.080them out. You know, there's a lot that you can do with that. So again, there is merit to some
00:57:13.720of this. They want to create structured ways for public input so that alignment isn't defined by
00:57:19.520engineers or executives behind closed doors. That's another one of those "we're listening" messages.
00:57:24.440Quote: establish a mechanism for companies to share information about incidents, misuse, or near misses
00:57:30.140with a designated public authority. Which is so stupid. You know, like, every time you get that email
00:57:34.740about a fraud alert? Oh my god. Well, and if you empowered... you know, they want to empower some woke
00:57:40.560body to, like, govern what AI can say and do, which is ridiculous, right? Like, that's not said here,
00:57:47.160which I appreciate. But my complaint about the whole, like, "well, you have to notify people
00:57:53.260every time" thing... this is one of those performative things, where, like, in the United States, already
00:57:57.500legislation was passed whereby, if there was some kind of... now, you know, you're, like, gonna be like,
00:58:04.720Like, uh-oh, Sky Browse is making videos that are empowering right-wing extremists.
00:59:33.300Sorry, the reason she's saying this is because, okay, suppose you're a major government power and you do build a quantum computer that can crack crypto in any way.
00:59:42.160You don't want anyone to know about that, right?
00:59:44.320You want to go as long as possible without anyone finding out and as long as possible without anyone else succeeding and also finding out and doing it themselves.