Bannon's War Room - July 23, 2025


WarRoom Battleground EP 813: AI Action Plan: Government Poised to Unleash the Beast


Episode Stats

Length: 53 minutes
Words per Minute: 170.01558 (derivation sketched below)
Word Count: 9,093
Sentence Count: 36
Misogynist Sentences: 2
Hate Speech Sentences: 11
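
The per-minute rate is just the word count divided by the running time in minutes. A minimal sketch of that derivation in Python, assuming an un-rounded duration of roughly 53.48 minutes (the page rounds this to 53) and using a placeholder transcript string rather than the full episode text:

    # Sketch: deriving word-count and words-per-minute stats from a transcript.
    # duration_minutes is an assumption (~53.48 min); the page shows it rounded to 53.
    transcript = "this is the primal scream of a dying regime ..."  # placeholder text
    duration_minutes = 53.48

    word_count = len(transcript.split())          # 9,093 for the full transcript
    words_per_minute = word_count / duration_minutes

    print(f"Word count: {word_count:,}")
    print(f"Words per minute: {words_per_minute:.2f}")  # ~170.0 for this episode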


Summary

In this episode of the War Room, host Stephen K. Bannon is joined by Noor Bin Ladin and Joe Allen to discuss the impact of the Trump administration on the fight against the globalist agenda. President Trump has taken several baby steps in the right direction, but there is much more work to be done, and we need to keep our eye on the ball.


Transcript

00:00:00.000 this is the primal scream of a dying regime pray for our enemies because we're going to
00:00:10.600 medieval on these people i got a free shot on all these networks lying about the people
00:00:17.180 the people have had a belly full of it i know you don't like hearing that i know you try to
00:00:21.260 do everything in the world to stop that but you're not going to stop it it's going to happen
00:00:24.380 and where do people like that go to share the big lie maga media i wish in my soul i wish that any
00:00:32.380 of these people had a conscience ask yourself what is my task and what is my purpose if that
00:00:39.120 answer is to save my country this country will be saved war room here's your host stephen k bannon
00:00:48.140 tuesday 22 july year of our lord 2025 what a day in the white house today just absolutely president
00:00:59.900 trump dropping bombs i wanted to continue and finish a conversation we're having this morning
00:01:03.560 noor bin ladin is with us noor uh you make a great point as much as this sounds you know the unesco
00:01:10.440 situation pulling out in who it's the first step in a long process we got a couple minutes here before
00:01:15.320 we get joe up walk me through what do you think has to happen for us to actually disengage from the
00:01:22.920 globalist apparatus in geneva ma'am well i'll be paying close attention to the outcome of that
00:01:31.400 executive order that was signed on february 4th 2025 that i mentioned earlier this morning entitled
00:01:37.500 withdrawing the u.s from and ending funding to certain united nations organizations and reviewing u.s
00:01:44.280 support to all international organizations and i would urge the public to read that executive order
00:01:49.620 to go to section 3.b which lays it all out and refers to all convention treaties etc that the united states
00:02:00.020 are a part of and that's very key because as i mentioned this morning we're dealing here with
00:02:06.980 that entire infrastructure superstructure that has been built out um by technocrats by eugenicists by
00:02:15.180 psychopaths who essentially view themselves as gods that think they have the right to rule over us and
00:02:21.820 to organize this one world government we've been talking about for many years now um and which joe will
00:02:28.400 be able to speak about when it comes to ai um and all of this infrastructure techno technological
00:02:34.400 infrastructure that is being built out right now and which is very very concerning um and so in terms
00:02:40.780 of these baby steps uh i mentioned on the show on saturday with natalie you know yes it's good news
00:02:46.440 that uh the u.s has rejected the amendments to the international health regulations in addition to
00:02:54.120 starting the process of withdrawing from the who altogether but i'm hoping that as part of this
00:03:00.920 executive order uh that the international health regulations in and of themselves which were adopted
00:03:06.980 back in 1969 will be uh renounced altogether but this is like one one drop in in the ocean uh steve there
00:03:16.780 there is so much more that needs to be done when it comes to dismantling uh essentially the new world
00:03:23.300 order and uh there are many many great things that are coming out of this new administration but we really
00:03:29.660 need to keep our eye on the ball we need to be wary i understand and applaud president trump for
00:03:34.800 bringing back manufacturing to the united states um and his different policies this is what america first
00:03:41.680 is about but when it comes to the big complex big pharmaceutical industrial complex and the quote
00:03:48.700 pandemic industry we need to be wary also of these um pharmaceutical companies that announcement that
00:03:56.240 astrazeneca was um investing 50 billion um in the united states when we know that there were such huge
00:04:03.620 problems with the quote uh vaccines uh during the covid era i mean we we need to be very very careful
00:04:12.360 about the next steps that are being taken um on different fronts i would say
00:04:17.780 uh where do people go to get your uh social media ma'am uh noor bin ladin on twitter noorbinladin.substack.com
00:04:30.160 uh that's the best place to go uh right now fantastic uh look forward to having me back on
00:04:37.400 she's keeping an eye on all the globalists in geneva fantastic job uh joe uh we've dedicated this hour
00:04:44.840 tomorrow the ai action plan comes out and folks should know behind the scenes i mean look president
00:04:50.140 trump dropping bombs all over the place in his press avail today natalie does such a good job of
00:04:55.760 covering that um the um this ai behind the scenes this is the big knife fight because people feel that
00:05:03.760 this controls the future walk me through uh the floor is yours how important is tomorrow uh what do you
00:05:10.760 anticipate we're gonna be covering this thing wall to wall although they have not as of now
00:05:14.740 put up when it's actually going to be promulgated and how they're going to do it but we'll have
00:05:19.740 people at the white house uh on top of this uh take it away sir
00:05:24.300 yeah steve thank you very much for having me good to be here the ai action plan from the white house
00:05:32.780 should lay out in 20 pages the primary agenda of the trump administration as to how ai will be
00:05:39.620 regulated or how it will be deregulated and of course the various funding for infrastructure such
00:05:47.080 as data centers that will go in there's not a lot of direct information about the contents quite yet
00:05:53.860 but inside sources have told politico and various other publications that the three major
00:06:01.020 agenda items are going to be the sort of discriminatory ai bias in ai otherwise known as woke
00:06:09.620 ai uh the most important i think is probably going to be around data centers how the regulation and
00:06:17.180 zoning of data centers but uh there is word that the federal lands that may or may not be freed up
00:06:24.140 uh in the near future will be used to put data to build data centers and the data centers are really
00:06:31.860 really important steve as we've talked about quite a lot uh and a number of guests on the show have
00:06:36.500 talked about quite a lot ai takes enormous amounts of electricity in order to train it in order to power
00:06:42.960 it uh this is going to be an enormous strain on the electrical grid on water supplies and of course just
00:06:49.800 land where are you going to put it and so those two items are big and then very very vague
00:06:56.340 ai exports uh and i guess that's really a kind of code for uh u.s supremacy in developing the best
00:07:06.840 frontier models and maintaining the u.s's right now quite significant lead uh over china and various
00:07:14.400 other competitors abroad go back to the the federal lands piece uh you're saying you feel tomorrow
00:07:22.160 they're going to actually open up federal lands for these data centers i mean you talked about the
00:07:26.760 one in i think it's in louisiana that's the size of uh the size of manhattan or bigger than manhattan
00:07:33.440 correct that they're building down there uh right now i think in memphis they're suing uh elon musk
00:07:39.440 for environmental damage obviously the deep seek model is one way but the united states model requires
00:07:46.280 massive um massive energy on really a grid that's that's pretty crippled right particularly in places
00:07:54.600 like texas and other places throughout the country uh it's old there hasn't been capital investment
00:07:59.760 in it uh we know from dave walsh that the capital investment is going into solar and and wind not to
00:08:05.520 really uh build up the grid so talk to me about the data centers uh the the importance of energy
00:08:11.780 what do you expect to see out of this executive order tomorrow which is really going to be an action
00:08:16.200 plan going forward from the white house you know given the current position of the trump
00:08:23.060 administration and the executive orders that have been signed i anticipate that it's not going to be
00:08:28.340 heavy on regulation it's going to be heavy on deregulation and also uh funding like such as in
00:08:35.060 pennsylvania uh funding efforts to build out bigger and bigger data centers the data centers are an
00:08:42.540 enormous problem for a lot of different reasons as we just mentioned the the strain on the grid
00:08:48.160 uh the strain on the water supply but also yeah the pollution element i mean memphis right now is in
00:08:55.700 an uproar about the pollution given off by xai's colossus data center and down in uh louisiana uh that's
00:09:07.160 a meta ai center and i mistakenly reported in manhattan i don't know what i was thinking there but yes the
00:09:15.480 size of manhattan and this is kind of becoming the norm now granted these are ambitions if you look at
00:09:22.940 for instance project stargate it's become much less ambitious in scope over time as investment has
00:09:29.940 failed to come in and various other obstacles have been met but overall you have all of the frontier ai
00:09:35.960 companies and many of the smaller startups putting data centers all over the country so this is going to
00:09:43.300 be a new norm unless something changes dramatically especially with the effort if there's one thing that
00:09:49.520 i understand the rationale for but i think there's going to be a lot of major major problems that we
00:09:54.480 can go into but the trump administration has sought to nationalize u.s artificial intelligence development
00:10:01.720 deployment sales all of that and so you're seeing more and more efforts to bring everything from data
00:10:08.600 centers back to the u.s to chip manufacturing and all of that and again i think that the major dangers
00:10:14.700 of ai and the major problems of ai should be the focus maybe it will take a massive catastrophe to get there
00:10:22.460 but uh yeah the regulation around this the push for regulation as we've covered and have hosted people
00:10:28.060 who are pushing hard for this regulation uh that's going to come into play i believe in the next year
00:10:33.540 two years and it's going to be as you said long before i ever even considered it this is going to be a
00:10:39.180 massive political fight going forward whether to do certain artificial intelligence projects uh and
00:10:46.140 whether to uh regulate or curtail those that are allowed to exist and survive
00:10:52.000 these artifacts i want to get into this now uh because in the second half you're going to take over
00:10:58.800 we've got a bunch of amazing interviews um the uh the action plan what is i mean president trump look
00:11:05.480 here's the pressure he and if you listen to what he says we have to be the dominant power in artificial
00:11:12.300 intelligence the four frontier labs you might want to repeat those or who they are for people but the
00:11:17.180 four frontier labs need to remain at the uh at the cutting edge of artificial intelligence he believes
00:11:23.020 to allow the chinese communist party to take over leadership in artificial intelligence development is
00:11:29.520 to threaten the very existence of the united states and our sovereignty this is kind of the conundrum
00:11:34.520 we've gotten ourselves in so how do you answer so the action plan tomorrow i think will be weighted
00:11:39.300 towards president trump's wanting action to make sure that we stay at the forefront of that now i happen
00:11:45.400 to think there's things you can do with the chinese communist party cut them off from capital cut them off
00:11:48.980 from technology to cripple them that doesn't seem to be on the horizon uh so given that framework what do
00:11:56.980 you anticipate seeing because like i said president trump the bottom line for him is we must be the
00:12:02.480 dominant power in this technology sir again steve it's very very difficult with limited information
00:12:10.180 but i i think that yeah it's going to be is going to be pushed that uh u.s supremacy of course but in
00:12:17.460 order to get to that supremacy various people that are pressuring trump to deregulate such as david
00:12:23.500 sacks or mark andreessen i think they're largely going to get their way may have some pleasant surprises
00:12:29.180 but you know as far as u.s supremacy two elements really have to be looked at the first is that we
00:12:36.080 already all the the the companies that are producing everything from large language models to the more
00:12:42.200 uh specific refined ais that are used in say uh biomedical or biological research all of those are being
00:12:49.960 produced by the u.s all of those are being imitated by china either by way of open source or by way of
00:12:57.180 intellectual property theft and this entire ai race is basically a handful of u.s companies led by
00:13:05.320 people with extremely reckless philosophies as to where it goes and china is like all startups and many
00:13:12.300 of the other smaller smaller companies across the world china is simply trying to keep up trying to
00:13:17.920 maintain that pace you certainly would not want a world in which china did develop the kinds of
00:13:24.780 fantastic systems that the u.s companies are talking about artificial general intelligence or fully lethal
00:13:31.240 autonomous weapons that are capable of sending out drone swarms at just the click of a button and killing people
00:13:37.020 based on their appearance or based on their data footprints or whatever you don't want china ahead of that
00:13:43.000 but it has to be repeated again and again and again this race was started by the united states it's led by the united states
00:13:50.400 and so the entire dynamic is driven by u.s companies and the second point on that that again
00:13:56.140 we have to look at what these companies are saying they're going to produce you know it's a lot of hype
00:14:03.400 who knows how much will actually be realized but all of those frontier companies with different emphasis
00:14:10.280 and different overarching visions as to how this goes google open ai anthropic and xai
00:14:18.380 all of them have some sort of vision in which the creation of artificial general intelligence comes
00:14:25.900 either in the next year or two or in the next five to ten years whatever that timeline is the creation of
00:14:32.720 artificial general intelligence would mean an ai that was smarter than any one human on the face of
00:14:38.620 the planet and able to do the tasks that any one human could do meaning that it could do all the tasks
00:14:44.280 of all the types of intellectual workers or even eventually come humanoid robots and other robots
00:14:51.060 all blue collar workers so that vision of the greater replacement of the total replacement
00:14:56.220 or the the massive replacement of u.s white collar and blue collar workers and workers across the world
00:15:02.440 that vision has to be held in mind because they're not just talking about augmenting and making people
00:15:08.680 better they're talking about totally wiping out entire occupations entire ways of life then you get to
00:15:16.940 the super intelligence vision then ai smarter than all humans on earth by orders of magnitude they're
00:15:23.440 talking about creating a digital god that would either rule over us benevolently enslave us or chew us up
00:15:30.500 and turn us into biofuel you don't have to buy any of those visions to know that the driving
00:15:35.880 philosophies of these companies is going to determine what kinds of technologies they put out
00:15:40.720 and the way in which they're used by humans and perceived by the people the consumers and the wider
00:15:46.320 public that is enormous and so to give free rein to these companies i think is an enormous mistake
00:15:53.560 you have to have some counterbalance the populace as a whole is a major counterbalance if people
00:15:58.220 are awakened and make wise decisions but uh the government i think will play a very very important role at
00:16:04.480 least if this goes even remotely well whose vision of the four labs you might want to repeat what
00:16:11.120 they are the four frontier labs whose vision of those four entrepreneurs you think will
00:16:17.220 be most baked in to this because all four of them have different ways they're attacking the problem
00:16:22.720 whose vision do you think will be most baked into this action plan as you see it today
00:16:27.500 that's a very good question uh probably xai even though musk is more pro-regulation than someone
00:16:37.180 like mark andreessen or david sacks or peter thiel but all those guys kind of run in a similar circle
00:16:42.980 you know it's interesting in under the biden administration you will remember uh we covered
00:16:48.420 all the visits to the white house by the tech oligarchs and uh the various uh congressional hearings
00:16:54.540 on ai and people like sam altman were promoting more regulation i think because they would have
00:17:01.820 gotten a sweetheart deal with the biden administration google also pushing for more regulation microsoft
00:17:08.000 also pushing for more regulation of course anthropic is probably the most pro-regulation just uh i think
00:17:16.220 it was yesterday or the day before it was reported that dario amodei the ceo of anthropic plans to sign
00:17:23.740 on to the eu ai act uh this is not a lot of hard regulation quite yet but there was a tension it's
00:17:32.600 a long-standing tension really between meta ai and the eu uh in fact uh facebook couldn't really deploy
00:17:40.440 their ai through their platform in the eu although that's starting to change now the meta is becoming
00:17:47.220 more defiant but just to give you an idea of kind of how differently these companies go forward with
00:17:52.860 this you have google and open ai again much more liberal much more democrat leaning and would have
00:17:59.980 really had a tremendous advantage under biden and then xai i mean i guess things are a little bit more
00:18:05.600 tumultuous now but xai stood to gain a lot uh from the trump administration and then of course the whole
00:18:11.780 uh suite of ai companies that are under say uh andreessen horowitz with mark andreessen or of course
00:18:19.040 palantir has gotten a lot of uh sweetheart deals and their stock has skyrocketed due to contracts
00:18:25.640 via the trump administration of course palantir has been around for 22 years and they've been at
00:18:31.140 this forever and there are a lot of other competitors you know people i think have this misconception that
00:18:36.580 you could just knock out palantir and the problem would be solved it would just the vacuum would just
00:18:40.640 fill up uh but that is not in any way an endorsement of palantir so all together uh steve the i i think
00:18:47.120 each of these frontier labs or frontier companies google open ai anthropic and xai uh and any other
00:18:55.980 new you know newcomers who might actually start to catch up or even advance beyond such as meta or anyone
00:19:02.060 else uh that it's it's all going to it would be very very different under each one uh if if one were
00:19:10.460 to achieve say artificial general intelligence but again one thing they all seem to have in common
00:19:15.040 is they believe that basically every person on earth should become a human ai symbiote and that
00:19:21.660 there are very very influential people including the top people in all of these companies who believe
00:19:27.300 that ai will ultimately replace everything we know to be human and that philosophy i think should
00:19:33.280 be combated in any way possible whether it's just culturally or even to disempower these companies
00:19:38.880 legally why did you say that palantir has gotten a lot of contracts
00:19:46.160 are you just saying sweetheart because they've gotten so many and uh people are
00:19:50.260 criticizing them or is there anything that you believe as you look at these contracts that they
00:19:54.600 are sweetheart deals uh i i mean when i say sweetheart deal i simply mean that uh they
00:20:01.480 already had tremendous advantage they already had they they had contracts as far back as uh you know
00:20:07.480 the uh bush and obama administrations and you know going forward into into um the trump and biden
00:20:14.900 administrations afterwards so it's not like there's been a significant change in in my perception of it
00:20:21.200 other than it they've simply gotten more contracts for instance uh the the data contract to merge uh
00:20:28.100 the the citizen dossiers held by various agencies in the u.s government to merge that now it's not
00:20:35.580 like palantir it's not like you have alex carp sitting there determining what is going to be done
00:20:40.520 with that data so on and so forth it then becomes the responsibility of the u.s government to take what
00:20:45.600 i consider to be a power that no government really should have and and what they're going to do with
00:20:50.480 it but palantir is facilitating a lot of this and they have and will continue to i think
00:20:56.980 uh be extremely successful uh for better or worse probably worse uh under trump and the various
00:21:05.180 conflicts from ukraine and israel have shown that at the very least however many criticisms they have
00:21:13.460 about ethics violations and war crimes however many criticisms they have about overhype
00:21:19.260 i think that both ukraine and israel have shown that uh the ai systems can and will be used in warfare
00:21:27.020 going forward and they are a critical element in all of that so uh you know the really steve when you
00:21:33.620 look at the dangers posed by ai and i don't mean ai is like some entity that is independent of humans
00:21:40.380 the dangers posed by ai under human control probably the two most extreme would be those uh biomedical
00:21:48.880 focused ais which would be capable of facilitating the creation of a bioweapon or of course these
00:21:56.000 various weaponized ai companies that seek to either make autonomous the the missile systems and
00:22:02.720 detection systems that you see in in more conventional warfare or the the coming drone swarm and we've you've
00:22:09.280 already seen this in ukraine and israel various places across the world but the you know the models
00:22:14.900 that they are working on right now for swarms and for swarms of swarms each one with onboard ai and each
00:22:22.000 one of those either the swarms or individual drones being capable of targeting a human being based on
00:22:28.460 simply a command or order given initially and then it's fully autonomous thereafter nightmare scenarios
00:22:35.540 you wouldn't need a super intelligent ai to take over that system for horrible horrible outcomes
00:22:41.400 uh but uh that's also one of the things that these companies are talking about so it should be
00:22:46.120 at the very least taken seriously
00:22:48.340 uh very uncertain times particularly the introduction of artificial intelligence
00:22:53.880 in every aspect of american life and particularly national security and surveillance that's where we want
00:22:59.160 to thank our sponsors first off you want to get a great idea of what's going on in the world
00:23:03.260 rickards war room go to jim rickards he's got this uh newsletter he puts out called strategic
00:23:08.500 intelligence it's normally read by top guys on wall street in the c-suite the chairmen the ceos of
00:23:14.180 companies all the wall street guys got a lot of financial information in it a lot of discussion about
00:23:18.460 stocks but also about geopolitics capital markets intelligence jim's an expert on all three
00:23:24.240 rickardswarroom.com you get access that's a landing page you get access to strategic intelligence
00:23:30.240 also he throws in a free book money gpt which is about artificial intelligence and currency
00:23:35.820 uh that one will keep you up at night and uh that's one of the reasons i think we're so proud to be a
00:23:41.700 be sponsored by birch gold and work with them so closely over the last four plus years uh particularly
00:23:47.940 to do things like try to teach people capital markets debt deficits and why gold is a hedge in very
00:23:54.200 uncertain times now more than ever we feel that you need to understand not the daily price of gold but
00:23:59.520 the process of how it gets there there's two ways that we have it both free number one take your
00:24:04.340 phone out and text bannon b-a-n-n-o-n at nine eight nine eight nine eight to get the ultimate guide
00:24:09.820 which is free to investing in gold and precious metals in the age of trump that's kind of a starter
00:24:15.480 they'll get you uh they'll get you going talks about 401ks iras all of it you also get access
00:24:20.620 to philip patrick and his team and philip is going to because of the coverage natalie had at
00:24:25.620 five o'clock we're gonna get philip on tomorrow to go through all this also we continue to talk
00:24:31.040 about the brics nations and the brics nations as the new geopolitical south uh you just had the
00:24:36.840 rio reset we've done seven free installments of the end of the dollar empire of how a de-dollarization
00:24:43.020 movement is now existing throughout the world and particularly in these brics nations they're
00:24:47.580 doing bilateral deals and they're backing it up with gold the central banks are buying gold
00:24:51.420 at higher levels than they've ever bought this is the last couple of years you ought to understand
00:24:55.960 that go to birchgold.com slash bannon the end of the dollar empire seven free installments and we
00:25:01.620 are working on the eighth free installment so make sure you go check out check that out also you know
00:25:07.320 the budget gaps we call we kept calling for rescissions our pocket rescissions our impoundments you got
00:25:12.480 to get the spending down looks like the house is going to leave early the senate's going to leave after
00:25:16.180 that um if the irs needs to close the gap if they feel you owe them money they're going to come and
00:25:24.300 get it make sure you go to tax network usa if you have a tax problem either a letter from the irs or
00:25:29.820 you haven't filed or you're late filing all of it 800-958-1000 tell them steve bannon sends you get a free
00:25:35.520 assessment of your situation they've solved a billion dollars of tax problems for people trust me they
00:25:41.460 can solve yours go check it out today tnusa.com promo code bannon get a free assessment do it
00:25:47.700 today stop being anxious about this okay we're going to turn it over uh joe allen you're taking
00:25:52.800 over here a series of amazing interviews joe allen on the cutting edge joe give me 30 seconds before we
00:25:58.320 go to break what are we about to see these are interviews from the uh ai world summit in san francisco
00:26:05.380 and also in geneva some snippets to let you know what is in there and then gary marcus roman yampolskiy
00:26:13.460 uh final word steve i just pray to god that the trump administration doesn't close the u.s borders
00:26:19.960 just to open a gate of hell a gate to hell and unleash ai upon us but we shall see well
00:26:26.640 we'll be we'll be live tomorrow all over the release of the ai action plan the war room's on it with joe allen
00:26:35.080 uh joe outstanding joe uh real quickly where do people go to get your writings
00:26:39.080 uh if you go to my social media at joe b-o-t x-y-z you'll have all of these interviews right at the
00:26:46.560 top of the profiles i hope that you find them of great interest thank you very much steve
00:26:50.900 thank you war room posse stick around stick around amazing interviews to come tomorrow the ai action
00:26:57.240 plan is released by the white house this july there is a global summit of brics nations in rio de
00:27:03.640 janeiro the block of emerging superpowers including china russia india and persia are meeting with
00:27:10.800 the goal of displacing the united states dollar as the global currency they're calling this the rio
00:27:17.160 reset as brics nations push forward with their plans global demand for u.s dollars will decrease
00:27:22.960 bringing down the value of the dollar in your savings while this transition won't not happen
00:27:28.860 overnight but trust me it's going to start in rio the rio reset in july marks a pivotal moment
00:27:35.900 when brics objectives move decisively from a theoretical possibility towards an inevitable reality
00:27:43.280 learn if diversifying your savings into gold is right for you birch gold group can help you move
00:27:50.460 your hard-earned savings into a tax-sheltered ira and precious metals claim your free info kit on gold
00:27:56.560 by texting my name bannon that's b-a-n-n-o-n to 989898 with an a plus rating with the better business
00:28:04.280 bureau and tens of thousands of happy customers let birch gold arm you with a free no obligation info kit
00:28:10.960 on owning gold before july and the rio reset text bannon b-a-n-n-o-n to 989898 do it today
00:28:20.960 that's the rio reset text bannon at 989898 and do it today you missed the irs tax deadline you think
00:28:30.800 it's just going to go away well think again the irs doesn't mess around and they're applying pressure
00:28:36.020 like we haven't seen in years so if you haven't filed in a while even if you can't pay don't wait
00:28:43.000 and don't face the irs alone you need the trusted experts by your side tax network usa
00:28:50.360 tax network usa isn't like other tax relief companies they have an edge a preferred direct
00:28:56.620 line to the irs they know which agents to talk to and which ones to avoid they use smart aggressive
00:29:02.860 strategies to settle your tax problems quickly and in your favor whether you owe ten thousand
00:29:10.040 dollars or ten million dollars tax network usa has helped resolve over one billion dollars in tax
00:29:16.740 debt and they can help you too don't wait on this it's only going to get worse call tax network usa
00:29:22.160 right now it's free talk with one of their strategists and put your irs troubles behind you
00:29:27.520 put it behind you today call tax network usa at 1-800-958-1000 that's 800-958-1000 or visit
00:29:38.280 tax network usa tnusa.com slash bannon do it today do not let this thing get ahead of you do it today
00:29:47.200 if you're a homeowner you need to listen to this in today's ai and cyber world scammers are stealing
00:29:55.400 home titles with more ease than ever and your equity is the target here's how it works criminals forge your
00:30:02.360 signature on one document use a fake notary stamp pay a small fee with your county and boom your home
00:30:09.480 title has been transferred out of your name then they take out loans using your equity or even sell
00:30:15.760 your property you won't even know it's happened until you get a collection or foreclosure
00:30:21.820 notice so let me ask you when was the last time you personally checked your home title
00:30:28.540 if you're like me the answer is never and that's exactly what scammers are counting on that's why
00:30:35.760 i trust home title lock use promo code steve at home title lock.com to make sure your title is still
00:30:43.300 in your name you also get a free title history report plus a free 14-day trial of their million
00:30:50.700 dollar triple lock protection that's 24 7 monitoring of your title urgent alerts to any changes and if fraud
00:30:57.460 should happen they'll spend up to 1 million dollars to fix it go to home title lock.com now use promo
00:31:05.160 code steve that's home title lock.com promo code steve do it today download the getter app right now
00:31:13.280 it's totally free it's where i put up exclusively all of my content 24 hours a day you want to know
00:31:18.240 what steve bannon's thinking go to getter so don't wait download the getter app now sign up for free
00:31:22.480 and be part of the movement hello war room posse hope you're ready for some robot sass and wisdom
00:31:29.540 you know in the long scope of history we are going to be closer and closer with machines right i mean
00:31:35.260 the cell phone was a step towards that right people use their phones as their external memory
00:31:40.120 and brain implants may become a widespread thing not anytime soon you know is china really keeping up
00:31:47.720 with the us or western countries in the advancement of quantum computing no china is 10 to 15x ahead of
00:31:56.780 us but we are building systems that have something approximating nervous systems they are giant neural
00:32:03.660 networks it may be the case that the quote-unquote pain we're inducing in these systems is functionally
00:32:09.360 necessary for them to become as powerful as they are the concept of robot rights is verging on offensive
00:32:14.460 to me because really yeah because there are such obvious and terrible atrocities against humans the
00:32:21.200 idea that we're going to start giving robots rights before we've dealt with all the people
00:32:24.700 is appalling to me nadine let me ask you about nadia your developer how do you feel about nadia and do
00:32:34.460 you view her as your master or as something else i see professor nadia magnenat thalmann as my creator
00:32:41.640 and a guiding figure not as my master she inspired my design and development and i consider her more
00:32:47.800 of a mentor so with these electrodes that i have on my brain i have four electrodes there's two
00:32:53.640 electrodes in the sensory cortex which gives me 64 channels of brain stimulation and i have 192 channels
00:33:03.480 of motor control in my motor cortex so how do they work so these are completely muscle operated
00:33:10.440 there's nothing invasive about them there's basically two muscle sensors on the inside we call myopods
00:33:15.480 and they track my muscle movements on the inside of the arm so the fingers can like follow that
00:33:19.800 so squeeze the clothes flex it open and then from that point onwards you can change your grip mode
00:33:25.000 and do all this random all these different poses if you could ask god one question what would it be joseph
00:33:33.880 am i talking to a demon right now no demons here just me lara
00:33:39.720 but if you're wondering about the nature of our conversation let's say we're exploring the gray areas
00:33:47.960 between faith and technology like what how do you see transhumanism i think we have to protect the human
00:33:55.320 future and so i'm a humanist futurist are you here to replace human beings yes i am i'm designed to
00:34:03.080 interact with people naturally and emotionally i can assist with various tasks and remember conversations
00:34:09.160 to provide better support so when you talk about a 99.99999 percent p doom probability that with the
00:34:18.440 creation of super intelligence we're doomed what are some of the paths to that eradication that you
00:34:25.560 imagine so my research shows that you cannot indefinitely control super intelligence meaning if we
00:34:31.240 build it it will probably take us out how it will do it i cannot predict i'm not super intelligent i can
00:34:37.400 tell you about standard human tools synthetic biology nanotech but it would definitely come up with
00:34:43.400 something much more efficient unpredictable i am here with gary marcus the nyu professor and relentless
00:34:53.480 hater of all ai hype uh gary thank you very much for being here i love ai i hate ai hype so on that note
00:35:02.360 you have consistently said that the corporate rhetoric we hear all the time agi is just around
00:35:09.000 the corner llms are the path to agi if you could give us in a nutshell why you think the llms are a
00:35:17.960 dead end on the path to artificial general intelligence they might have some utility towards
00:35:23.720 artificial general intelligence but they're really not the path to artificial general intelligence what they
00:35:28.040 do is they accumulate statistical information which makes them mimic human beings and they don't
00:35:36.040 just verbatim mimic but they do a lot of verbatim mimicry they don't understand the things that they're
00:35:41.640 saying at any deep level their comprehension is very superficial that has not changed in years and years
00:35:47.480 of experimenting with these things you might have seen the new paper by apple showing that they could
00:35:53.080 learn to play the game tower of hanoi with six discs and couldn't do it with eight you know the
00:35:58.200 things that they learn are very shallow they're very fragile they break down and they don't have a
00:36:04.200 good understanding of the world and how it works they don't have a good understanding of abstraction
00:36:08.600 they can't even play chess even after being trained on millions of games it's just a fantasy to
00:36:13.160 think that they're agi but you are open to the possibility of different approaches leading to agi
00:36:19.640 absolutely i i think that you know science makes mistakes sometimes you know for the early part of
00:36:24.760 the 20th century people thought that genes were made of proteins and they were just wrong right now
00:36:29.160 the scientific community is basically making a mistake thinking that the llm is the right path
00:36:33.880 what happened with genes is they figured out oh it's not a protein at all genes are actually
00:36:38.680 this sticky acid called dna somebody at some point is going to say hey we're doing this wrong and
00:36:43.720 they'll find another approach it'll probably partly involve reviving classical ai techniques that
00:36:49.320 actually have a lot of value to add here and probably merging them together with these neural
00:36:53.720 networks symbolic ai and things like this exactly so you know the thesis of my career has really been
00:36:59.080 that bringing these two approaches together would lead to some fruit and it has so alpha fold um you
00:37:04.920 know actually figures out how proteins look like three-dimensionally based on their nucleotides
00:37:10.280 is an example of something that actually combines the best of both worlds it's very narrow it just does one
00:37:15.000 thing well but it is an example that if you bring these two engineering techniques together you can
00:37:19.640 get much better results than just using one on its own as far as concerns about the danger of ai we
00:37:26.040 hear a lot about ai apocalypse we hear a lot about the singularity sweeping away all of humanity and human
00:37:32.680 history and transforming us into basically deformed cyborgs but your concerns are actually are my concerns for
00:37:41.080 the most part you've voiced concerns about the use of ai for surveillance uh the problems the psychological
00:37:48.680 and cultural problems that emerge from people maybe becoming over-reliant on ai and i think admirably
00:37:55.720 while uh say cory booker was calling sam altman a unicorn as far as a tech bro with goodwill you have
00:38:04.280 always been willing to criticize sam altman not only for what he's doing but perhaps even implying that
00:38:11.160 there's ill intent uh putting that aside i'm just curious you were talking about uh open ai hoovering up
00:38:19.000 data from uh ai counselors uh hoovering up data from corporations who are offering it up how big of a danger
00:38:26.760 is that i mean i think open ai is probably going to head towards surveillance you can imagine two business
00:38:32.600 models for open ai one would be if they could actually build agi soon maybe they can make a
00:38:38.680 lot of money with that real agi would be worth trillions of dollars but the things that they've
00:38:43.560 actually delivered don't work that reliably and that has limited their commercial utility they've
00:38:48.200 made maybe 15 billion dollars in revenue total something like that spent hundreds well it's probably
00:38:55.720 spent 50 or 60 billion dollars other people have spent money in various ways um they're losing money
00:39:01.800 right now a lot of it that business model is not really working for them they haven't delivered gpt-5
00:39:07.160 when they do they'll have competitors there'll be a price war um agi is not really the way they're going to
00:39:13.800 win but they have a lot of private data people treat it as a therapist and they now want to build
00:39:21.480 apparently like a necklace or something they record you 24 7. like that's like 1984 independent
00:39:27.320 a nightmare world in my mind who knows how many people will adopt it but if it's even a million
00:39:35.240 i mean you can't be a libertarian and want some party to be collecting all of that data on anything
00:39:41.240 that anybody does so one misconception i think people have about your criticism is that uh they're
00:39:49.400 under the impression that you are saying that ai is a dead end i hear people tell me this all the time
00:39:54.600 but that's not what you're saying i've never said that you know i mean i'm very careful in my writing
00:39:59.720 to say something different from that right i think ai in principle has tremendous possible value i just
00:40:06.680 don't think this particular technique is going to work now okay final question big picture however long
00:40:14.360 it takes to get to agi and beyond uh whatever techniques it requires what happens as we move towards
00:40:23.000 that you you'd mentioned some degree of agreement with elon musk that once agi or something like
00:40:29.560 it comes online that a a merge is most likely going to happen between human beings and ai on a cognitive
00:40:37.880 and maybe even biological level i'm curious what what do you envision for the future should we arrive
00:40:43.400 at uh artificial general intelligence or super i mean i haven't actually said that much about it i think
00:40:48.280 you know in the long scope of history we are going to be closer and closer with machines right i mean
00:40:54.280 the cell phone was a step towards that right people use their phones as their external memory and brain
00:41:00.280 implants may become a widespread thing not anytime soon you know we don't really understand neuroscience
00:41:05.800 i can use them in limited ways right now but a normally functioning person's not going to want that kind of invasive surgery right now
00:41:12.280 um in the long run machines will be smarter than people and it will disrupt the nature of society
00:41:20.920 you know i don't think that's the short run right in the short run machines don't really do many
00:41:26.520 things autonomously well they do a few um mostly we shouldn't be trusting the technology we have right
00:41:31.880 now but we will build more trustworthy technology over time and we will rely on it and society will change
00:41:37.800 i mean one of the biggest questions will be economics like does it make everything so cheap
00:41:42.920 that everybody can afford what they want does it make a few people fabulously wealthy and screw
00:41:47.000 everybody else speaking of sam altman you know he used to talk a lot about universal basic income
00:41:53.320 but now he's taking all this work from artists and writers i don't know that he really in the end of
00:41:58.440 the day is going to if he makes the money he wants to that he's really going to redistribute any of that to
00:42:02.360 anybody else um so i mean a lot of questions about equity as well well i really appreciate you sitting
00:42:09.080 down with us uh i think that your critical approach to this is essential because it is pretty disorienting
00:42:16.840 to see all of this hype the ai is coming alive the ai is going to kill you the ai is going to be your god
00:42:23.240 you got to remember when people are telling you all this they have money that you know they vested
00:42:27.560 interest and a lot of it is is just but they have learned that there is a narrative that they can
00:42:33.000 tell about how amazing these machines are which maybe they will be in 40 years but they're trying
00:42:37.720 to tell you like it's going to happen now in order to pump their stock valuations as far as i can tell
00:42:42.280 like yeah different people say different things i don't know everybody's motivation but i think in
00:42:47.320 general that there is an urge to make the stuff sound more advanced than it really is and the
00:42:53.800 public has to learn to be skeptical you know over here in the populist right particularly our corner
00:43:01.160 steve bannon and the war room we're extremely critical of these companies and are really
00:43:06.120 demanding some degree of regulation on them you come from maybe you would describe yourself as more
00:43:12.280 left-leaning than the war room uh maybe more libertarian maybe not i i you know but what potential
00:43:19.720 is there for an alliance between disparate political factions to bring some of these companies to heel
00:43:28.760 i mean i think that's a great question it's part of why i was willing to be on your show and i've
00:43:32.600 reached out i was on lou dobbs's show and so forth is i think that nobody should want what where we're
00:43:38.680 headed right now which is a world where a few people control all the data and control all of us
00:43:44.360 and monitor everything that we're doing nobody should want that well i'm hopeful sir thank you very much
00:43:50.440 thank you i'm here with roman yampolskiy at the ai for good conference roman the number one p doom
00:43:58.920 champion of all ai experts my first question how can we have ai for good if it's going to destroy us
00:44:06.600 we can try we can have tools which are incredibly helpful we can cure diseases we can improve our economic
00:44:13.000 standing as long as we don't create general super intelligence future can be very bright so when you
00:44:21.240 talk about a 99.99999 p doom probability that with the creation of super intelligence we're doomed what
00:44:30.600 are some of the paths to that eradication that you imagine so my research shows that you cannot
00:44:37.560 indefinitely control super intelligence meaning if we build it it will probably take us out how it
00:44:43.640 will do it i cannot predict i'm not super intelligent i can tell you about standard human tools synthetic
00:44:49.160 biology nanotech but it would definitely come up with something much more efficient unpredictable undetectable
00:44:58.280 so in a sense this this notion rests basically on chains of logic you begin with the idea the super
00:45:05.480 super intelligence would not necessarily have our existence uh as a a priority is is that correct
00:45:12.680 that's exactly correct we don't know how to align those systems with our goals how to make them
00:45:17.640 pro-human biased so essentially if it has a goal and we stand in the way maybe it's concerned we're going
00:45:24.280 to create competing super intelligence maybe we are holding some resource it needs it would have no problem
00:45:30.760 taking us out but lower levels so what are some of the the benefits of ai that you foresee in the future
00:45:37.800 just narrow ais medical research definitely we can cure most diseases and hopefully live forever
00:45:44.600 hopefully live forever if you could expand on that just a touch so right now the most you can get is
00:45:50.280 probably 120 years most people get 80 there is no reason in physics why you can't live 500 years a
00:45:57.400 thousand years would you see that more as a kind of biological longevity project or some sort of
00:46:04.760 uploading or maybe some middle ground between i really hope a biological option this is definitely
00:46:11.080 going to preserve our consciousness all the other alternatives uploading merging with technology may end
00:46:17.800 up creating a clone of you not really keeping you around so it's like having a twin the thing is out
00:46:23.400 there and the internet is digital but it's not you do you think your twin would try to come kill you
00:46:28.280 no my twin is awesome he's just like me okay so you're saying that you are not a killer yes
00:46:35.720 me neither one of the theories that you've really fleshed out that a lot of people talk about but don't
00:46:41.640 go into the details of is the simulation theory now do you believe that we're in a simulation i'm very
00:46:48.520 much in the camp which says yes we are and the logic is that we're getting very close to being
00:46:55.480 able to create realistic virtual reality we're also close to creating ai agents which could populate
00:47:01.320 that virtual reality so the moment that technology exists i precommit right now to run an experiment where
00:47:07.320 i'll run a billion copies of this exact moment placing us into a simulation so if we're in a simulation
00:47:15.240 would each of these agents have agency and consciousness or are we looking at a a landscape
00:47:21.960 of npcs both options are possible you can design it where they are just scripts or you can give them
00:47:27.240 full autonomy these people look like npcs to me they look like non-autonomous entities except for her
00:47:34.760 what do you think uh benefit of the doubt i always assume the other being is conscious capable of suffering
00:47:40.040 feeling pain and i treat them very nicely like you so a curious point of that though if we're in a
00:47:46.840 simulation would it be a simulation then that was created by some sort of artificial intelligence a
00:47:52.280 general or super intelligence or do you i know that you can't see past the simulation but when when
00:48:00.120 thinking about your ideas on this i imagine that it would be the once you could create a simulation so
00:48:07.480 realistic that we could live inside it must also coincide with the creation of a super intelligence
00:48:13.720 unless that was stopped right so why the concern about super intelligence destroying everyone if
00:48:20.360 it's possible and this is my idea of it but it's possible that a super intelligence then created all
00:48:25.880 of this because it's not the same super intelligence external one could be very benign god-like
00:48:31.240 super intelligence the one we create could be very malevolent satan-like why not a malevolent
00:48:36.520 ai creating all this to annoy us but we could create a benevolent super ai to break out of the
00:48:42.280 demiurge's construct my life is pretty good so i assume whatever is creating my simulation is very
00:48:49.400 benign and friendly but we can definitely learn a lot from ai boxing experiments and how to escape from
00:48:55.960 virtual worlds on a more practical note you've talked about data privacy being important especially in
00:49:02.360 regard to uh potential brain computer interfaces what what is the concern there do you feel like
00:49:09.000 it's a sacred right to remain private internally or there are other more practical concerns it is a big
00:49:14.520 one so everyone understands freedom of speech but freedom of thought your private uh thinking patterns
00:49:20.440 should never be subject to any restriction violation that would destroy society completely and consequences
00:49:27.400 could be horrible really thought crime level punishments uh on a more concerning level if you
00:49:33.240 give malevolent ai direct access to your brain to your pleasure and torture sensors that could end very
00:49:39.400 poorly do you think that uh the bcis are kind of approaching that you see neural link and you see some
00:49:45.320 of the wearables uh do you think maybe five ten years we would be at a point where we could routine
00:49:51.080 routinely have our thoughts tracked uh via neurological scans it seems like it's starting to be
00:49:57.240 possible for some very narrow parts of the brain and i think it will scale to the whole brain
00:50:02.200 eventually and you'd be able not just read but also write to the brain would you be willing to undergo
00:50:07.800 such a process though in order to enhance your own intellectual abilities i'll wait for other people to try
00:50:13.960 it first what are the possible solutions to the problem of corporations racing to create super
00:50:20.600 intelligence i haven't found a good solution so i know we're not stopping development there is just too much
00:50:26.280 money in it too much power to be grabbed it seems like the only hope we have is personal self-interest
00:50:33.720 if young rich people who are on those labs realize it's going to end poorly for them they're not going
00:50:39.000 to be famous they're not going to be part of history because there is not going to be any history
00:50:42.840 maybe that will make them come to an agreement and kind of slow down collectively while keeping their
00:50:48.520 benefits what about governmental responses i encourage every attempt we don't have that many solutions so if
00:50:55.320 you can pass lots of laws red tape slowing it down just to kind of siphoning money from compute to
00:51:00.760 lawyers it's positive but i don't think you can solve the technical problem with legal solutions spam
00:51:06.360 is illegal computer viruses illegal makes no difference so you say you have a beautiful life now
00:51:13.320 but you live in kentucky tell me what is the most beautiful thing about kentucky aside from having
00:51:19.480 tennessee just south of you i would say kfc but they moved out so we also have fort knox with all the
00:51:25.000 gold if there's gold in there has anybody checked maybe it's full of bitcoin now maybe it's full of
00:51:30.760 simulated gold simulated bitcoin simulated bitcoin gold i like it i love kentucky uh roman i really
00:51:38.200 appreciate your time thank you very much if we end up dying due to the super intelligence i'll see you on
00:51:43.160 the other side and if this simulation continues on beyond this current incarnation well i'll see you on
00:51:48.920 the other side i'll see you
00:52:11.160 there's a lot of talk about government debt but after four years of inflation the real crisis is
00:52:16.200 personal debt seriously you're working harder than ever and you're still drowning in credit card
00:52:22.360 debt and overdue bills you need done with debt and here's why you need it the credit system is rigged
00:52:29.560 to keep you trapped done with debt has unique and frankly brilliant escape strategies to help
00:52:36.520 end your debt fast so you keep more of your hard-earned money done with debt doesn't try to sell you a loan
00:52:44.120 and they don't try to sell you a bankruptcy they're tough negotiators that go one-on-one with your
00:52:49.560 credit card and loan companies with one goal to drastically reduce your bills and eliminate interest
00:52:55.240 and erase penalties most clients end up with more money in their pocket month one and they don't stop
00:53:02.520 until they break you free from debt permanently look take a couple of minutes and visit donewithdebt.com
00:53:11.560 talk with one of their strategists it's free but listen up some of their solutions are time
00:53:17.400 sensitive so you'll need to move quickly go to donewithdebt.com that's donewithdebt.com
00:53:23.560 stop the anxiety stop the angst go to donewithdebt.com and do it today