Making Sense - Sam Harris - October 02, 2025


#435 — The Last Invention


Episode Stats

Length

37 minutes

Words per Minute

162.6

Word Count

6,058



Summary

In today's episode, we preview a new podcast series on artificial intelligence created by Andy Mills and Matt Boll. It's called "The Last Invention," and it's the first release from a new media company called Longview Investigations.


Transcript

00:00:00.000 welcome to the making sense podcast this is sam harris well in today's episode we are
00:00:27.820 previewing a new podcast series created by andy mills and matt boll on artificial intelligence
00:00:35.300 andy and matt have been making podcasts and reporting for at least 15 years or so and they've
00:00:44.560 won a bunch of awards the peabody a pulitzer they've worked at outlets like spotify and npr
00:00:52.640 and the new york times they also produced the witch trials of jk rowling which i discussed on a
00:00:58.680 previous episode of this podcast and they have a new media company called longview and this series
00:01:04.820 on ai which is titled the last invention is their first release and so we're previewing that here i
00:01:12.280 am interviewed in it but there are many other people in there which are well worth listening to
00:01:17.760 nick bostrom geoffrey hinton yoshua bengio reid hoffman max tegmark robin hanson jaron lanier
00:01:25.580 tim urban kevin roose and others and um this is really a great introduction to the topic i mean
00:01:34.620 this is the it's going to be a limited run series of eight episodes and um really more than anything
00:01:40.660 that i've done on this topic this is the series you can share with your friends and family to get
00:01:45.480 them to understand just what the fuss is about whether it's um the controversy on the side of
00:01:51.980 being worried about this issue or on the side of being merely worried that we're going to do something
00:01:59.360 regulatory or otherwise to slow our progress they represent both sides of this controversy uh in a way
00:02:07.680 that i really don't because i'm so in touch with one side of it and um i think uh the uh those who
00:02:16.660 aren't in touch with that side are fairly crazy and doctrinaire at this point so anyway it's a great
00:02:22.660 introduction to the topic as far as the series itself it's reported by andy mills and gregory warner
00:02:31.340 andy as i said worked for the new york times and he actually helped create their audio
00:02:37.460 department and their flagship podcast the daily as well as rabbit hole and gregory warner has been a
00:02:45.420 foreign correspondent in russia and afghanistan and east africa where he was the bureau chief for npr
00:02:52.080 and he created and hosted the podcast rough translation he also publishes stories on this
00:02:59.520 american life and in the new york times so anyway you're in great hands with this series i'm happy
00:03:06.200 to preview it i would love to give these guys a boost again their new media company is longview
00:03:12.860 you can find out more about them over at longviewinvestigations.com where you can also
00:03:19.480 subscribe and get their newsletter and also andy and matt wanted me to say that they are hiring
00:03:26.280 for this new company they're looking for reporters producers editors both for writing and for audio and
00:03:34.280 video jobs so if you have those skills don't be shy again it's longviewinvestigations.com anyway that's
00:03:42.200 it enjoy thanks for listening
00:03:49.480 this is the last invention i'm gregory warner and our story begins with a conspiracy theory
00:04:02.440 so greg last spring i got this tip via the encrypted messaging app signal this is reporter andy mills
00:04:12.200 from a former tech executive and he was making some pretty wild claims and i wanted to talk to him
00:04:19.620 on the phone but he thought his phone was being tapped but the next time i was out in california
00:04:24.460 i went to meet with him i'm really kind of contending with like who i am in this moment up until a few
00:04:31.240 months ago i was an executive in silicon valley and yet here i am sitting in a living room with you guys
00:04:37.800 talking about what i think is one of the most important things that needs to be discussed
00:04:43.920 in the whole world right which is the the nature in which power is decided in our society
00:04:51.660 and he told me the story that a faction of people within silicon valley had a plot to take over the
00:05:00.720 united states government and that the department of government efficiency doge under elon musk
00:05:06.840 was really phase one of this plan which was to fire human workers in the government and replace them
00:05:16.300 with artificial intelligence and that over time the plan was to replace all of the government
00:05:23.600 and have artificial intelligence make all the important decisions in america
00:05:29.400 i have seen both the nature of the threat from inside the belly of the beast as it were in
00:05:37.840 silicon valley and seen the nature of what's at stake
00:05:42.840 now this guy his name is mike brock and he had formerly been an executive in silicon valley he'd
00:05:50.620 worked alongside some big-name guys like jack dorsey but he'd recently started a substack and he told me
00:05:56.440 that after he published some of these accusations he had become convinced that people were after him
00:06:03.040 i have reason to believe that i've been followed by private investigators um for that and other reasons
00:06:08.760 i i traveled with private security when i went to dc and new york city last week he told me that he had
00:06:15.920 just come back from washington dc where he had met with a number of lawmakers including maxine waters and
00:06:24.360 debriefed them about this threat to american democracy we are in a democratic crisis this is a coup this is a slow
00:06:31.860 motion soft coup
00:06:33.100 and so this faction who is in this faction what is this like the masons or something or it's like a secret cult
00:06:42.160 well he named several names uh people who are recognizable figures in silicon valley
00:06:49.400 and he claimed that this quote-unquote conspiracy went all the way up to jd vance the vice president
00:06:55.040 and he called the people who were behind this coup the accelerationists the accelerationists
00:07:02.040 it was a wild story yeah but you know some conspiracies turn out to be true and it was also
00:07:13.600 an interesting story so i started making some phone calls i started looking into it and some of his
00:07:19.920 claims i could not confirm maxine waters for example did not respond to my request for an interview
00:07:25.340 other claims started to somewhat fall apart and of course eventually doge itself somewhat fell apart
00:07:32.920 elon musk ended up leaving the trump administration and for a while it felt like you know it was one of
00:07:38.720 those tips that just doesn't go anywhere but in the course of all these conversations i was having
00:07:43.920 with people close to artificial intelligence i realized that there was an aspect of his story
00:07:49.960 that wasn't just true but in some ways it didn't go quite far enough because there is indeed a faction
00:07:59.080 of people in silicon valley who don't just want to replace government bureaucrats but want to replace
00:08:06.140 pretty much everyone who has a job with artificial intelligence and they don't just think that the ai
00:08:13.520 that they're making is going to upend american democracy they think it is going to upend the entire
00:08:20.160 world order
00:08:20.820 the world as you know it is over it's not about to be over it's over i believe it's going to change
00:08:28.880 the world more than anything in the history of mankind more than electricity but here's the thing
00:08:34.340 they're not doing this in secret this group of people includes some of the biggest names in
00:08:40.260 technology you know bill gates sam altman mark zuckerberg most of the leaders in the field
00:08:45.920 of artificial intelligence ai is going to be better than almost all humans at almost all things
00:08:50.920 a kid born today will never be smarter than ai it's the first technology that has no limit
00:08:55.920 so wait so you get a tip about like a slow motion coup against the government and then you realize no no
00:09:04.000 this is not just about the government this is pretty much every human institution well yes and no
00:09:08.380 many of these accelerationists think that this ai that they're building is going to lead to the end
00:09:14.720 of what we have come to think of as jobs the end of what we have traditionally thought of as schools
00:09:20.680 some would even say this could usher in the end of the nation state but they do not see this as some
00:09:27.120 sort of shadowy conspiracy they think that this may end up literally being the best thing to ever
00:09:34.600 happen to humanity i've always believed that it's going to be the most important invention that
00:09:39.020 humanity will ever make imagine that everybody will now in the future have access to the very
00:09:45.960 best doctor in the world the very best educator the world will be richer and can work less and have more
00:09:55.120 this really will be a world of abundance they predict that their ai systems are going to be the thing
00:10:02.720 that helps us to solve the most pressing problems that humanity faces energy breakthroughs medical
00:10:08.340 breakthroughs maybe we can cure all disease with the help of ai they think it's going to be this
00:10:13.260 hinge moment in human history where soon we will be living to maybe be 200 years old or maybe we'll be
00:10:19.480 visiting other planets where we will look back in history and think oh my god how did people live
00:10:25.520 before this technology should be a kind of era of maximum human flourishing where we travel to the stars
00:10:31.220 and colonize the galaxy i think a world of abundance really is a reality i don't think it's utopian given
00:10:39.120 what i've seen that the technology is capable of so these are a lot of bold promises and they come
00:10:48.820 from the people who are selling this technology why do they think that the ai that they are building
00:10:54.660 is going to be so transformative well the reason that they're making such grandiose statements and
00:11:01.340 these bold predictions about you know the near future it comes down to what it is they think that
00:11:08.180 they're making when they say they're making ai okay this is something that i recently called up my old
00:11:13.400 colleague kevin roose to talk about kevin how is it that you describe what it is that the ai companies
00:11:20.720 are making am i right to say that they're essentially building like a super mind like a
00:11:26.700 digital super brain yes that is correct he's a very well-sourced tech reporter and a columnist at
00:11:32.580 the new york times also co-host of the podcast hard fork and he says that the first thing to know is that
00:11:37.400 this is far more of an ambitious project than just building something like chatbots essentially many of
00:11:44.780 these people believe that the human brain is just a kind of biological computer that there is nothing
00:11:50.500 you know special or supernatural about human intelligence that we are just a bunch of neurons
00:11:56.880 firing and learning patterns in the data that we encounter and that if you could just build a computer
00:12:02.900 that sort of simulated that you could essentially create a new kind of intelligent being right i've heard
00:12:11.040 some people say that we should think of it less like a piece of software or a piece of hardware
00:12:14.940 and more like a new intelligent species yes it wouldn't be a computer program exactly it wouldn't be a
00:12:23.680 human exactly it would be this sort of digital super mind that could do anything a human could and more
00:12:31.400 the goal the benchmark that the ai industry is working towards right now is something that they call
00:12:37.960 agi artificial general intelligence the general is the key part because a general intelligence isn't
00:12:46.020 just really good at one or two or 20 or 100 things but like a very smart person can learn new things
00:12:53.320 can be trained in how to do almost anything i guess this is where people get worried about jobs getting
00:12:59.060 replaced because suddenly you have a worker like a lawyer or a secretary and you can tell the ai to learn
00:13:05.740 everything about that job exactly i mean that is what they're making and that's why there's a lot
00:13:10.780 of concerns about what this could do to the economy i mean a true agi could learn how to do any human job
00:13:17.260 factory worker ceo doctor and as ambitious as that sounds it has been like the stated on paper goal
00:13:26.100 of the ai industry for a very long time but when i was talking to kevin roose he was saying that even just
00:13:31.020 a decade ago the idea that we would actually see it within our lifetimes that was something that
00:13:37.120 even in silicon valley was seen as like a pie in the sky dream people would get laughed at inside the
00:13:42.880 biggest technology companies for even talking about agi it seemed like trying to plan for uh you know
00:13:50.020 something building a hotel chain on mars or something it was like that far off in people's imagination
00:13:54.960 and now if you say you don't think agi is going to arrive until 2040 you are seen as like a hyper
00:14:02.120 conservative basically luddite in silicon valley well i know that you are regularly talking to people
00:14:09.300 at open ai and anthropic and deep mind and all these companies what is their timeline at this point
00:14:15.600 when do they think they might hit this benchmark of agi i think the overwhelming majority view among the
00:14:24.240 people who are closest to this technology both on the record and off the record is that it would
00:14:30.460 be surprising to them if it took more than about three years for ai systems to become better than
00:14:38.260 humans at at least almost all cognitive tasks some people say physical tasks robotics that's going to
00:14:45.540 take longer but the majority view of the people that i talk to is that something like agi will arrive
00:14:52.160 in the next two or three years or certainly within the next five i mean holy shit holy shit that is
00:15:00.780 really soon this is why there has been such insane amounts of money invested in artificial intelligence
00:15:08.640 in recent years this is why the ai race has been heating up right this is to accelerate the path to ai
00:15:15.320 but this has also really brought more attention to this other group of people in technology people who
00:15:23.280 i personally have been following for over a decade at this point who have dedicated themselves to try
00:15:29.840 everything they can to stop these accelerationists the basic description i would give to the current
00:15:35.340 scenario is if anyone builds it everyone dies many of these people like eliezer yudkowsky are former
00:15:42.700 accelerationists who used to be thrilled about the ai revolution and who for years now have been
00:15:48.660 trying to warn the world about what's coming i am worried about the ai that is smarter than us i'm
00:15:54.940 worried about the ai that builds the ai that is smarter than us and kills everyone there's also the
00:16:00.020 philosopher nick bostrom he published a book back in 2014 called superintelligence now a super
00:16:05.680 intelligence would be extremely powerful we would then have a future that would be shaped by the
00:16:10.320 preferences of this ai not long after elon musk started going around sounding this alarm i have
00:16:16.880 exposure to the most cutting edge ai and i think people should be really concerned about it he went
00:16:22.840 to mit i mean with artificial intelligence we are summoning the demon told them that creating an ai
00:16:28.700 would be summoning a demon ai is a fundamental risk to the existence of human civilization musk went as
00:16:35.080 far as to have a personal meeting with president barack obama trying to get him to regulate the ai
00:16:41.920 industry and take the existential risk of ai seriously but he like most of these guys at the time
00:16:49.120 they just didn't really get anywhere however in recent years that has started to change the man dubbed the
00:16:57.420 godfather of artificial intelligence has left his position at google and now he wants to warn the world
00:17:02.980 about the dangers of the very product that he was instrumental in creating over the past few years
00:17:08.620 there have been several high profile ai researchers in some cases very decorated ai researchers this
00:17:15.400 morning as companies race to integrate artificial intelligence into our everyday lives one man behind
00:17:21.120 that technology has resigned from google after more than a decade who have been quitting their high
00:17:26.660 paying jobs going out to the press and telling them that this thing that they helped to create
00:17:31.320 poses an existential risk to all of us it really is an existential threat some people say this is
00:17:37.640 just science fiction and until fairly recently i believed it was a long way off one of the biggest
00:17:42.440 voices out there doing this has been this guy geoffrey hinton he's like a really big deal in the industry and
00:17:48.160 it meant a lot for him to quit his job especially because he's a nobel prize winner for his work in ai
00:17:53.960 the risk i've been warning about the most because most people think it's just science fiction but i want to
00:18:00.500 explain to people it's not science fiction it's very real is the risk that we'll develop an ai that's
00:18:06.260 much smarter than us and it will just take over and it's interesting when he's talking to journalists
00:18:11.500 trying to sound this alarm they're often saying yes we know that ai poses a risk if it leads to fake
00:18:17.660 news or like what if someone like vladimir putin gets a hold of ai it's inevitably if it's out there
00:18:23.920 going to fall into the hands of people who maybe don't have the same values the same motivations
00:18:29.760 he's telling them no no no this isn't just about it falling into the wrong hands this is a threat
00:18:34.680 from the technology itself what i'm talking about is the existential threat of this kind of digital
00:18:42.300 intelligence taking over from biological intelligence and for that threat all of us are in the same boat
00:18:48.360 the chinese the americans the russians we're all in the same boat we do not want digital intelligence
00:18:53.980 to take over from biological intelligence okay so what exactly is he worried about when he says it's
00:19:00.300 an existential threat well the simplest way to understand it is that hinton and people like him
00:19:04.800 they think that one of the first jobs that's going to get taken after the industry hits their benchmark
00:19:12.560 of agi will be the job of ai researcher and then the agi will 24 7 be working on building another
00:19:24.440 ai that's even more intelligent and more powerful so you're saying ai would invent a better ai and then
00:19:31.840 that ai would invent an even better ai that is one way of saying it yes exactly the agi now becomes the
00:19:38.460 ai inventor and each ai is more intelligent than the ai before it all the way up until you get from
00:19:46.380 agi artificial general intelligence to asi artificial super intelligence the way i define it is this is a
00:19:54.760 system that is single-handedly more intelligent more competent at all tasks than all of humanity put
00:20:01.520 together i've now spoken to a number of different people who are trying to stop the ai industry from
00:20:08.280 taking this step people like connor leahy he's both an activist and a computer scientist so it can do
00:20:14.680 anything the entire humanity working together could do so for example you and me are generally
00:20:21.740 intelligent humans but we couldn't build semiconductors by ourselves but humanity put
00:20:26.640 together can build a whole semiconductor supply chain a super intelligence could do that by itself
00:20:31.940 so it's kind of like this if agi is as smart as einstein or way smarter than einstein i guess
00:20:38.400 an einstein that doesn't sleep that doesn't take bathroom breaks right and lives forever and has
00:20:43.220 memory for everything exactly asi that is smarter than a civilization a civilization of einsteins that's
00:20:51.140 how the theory goes right like you have the ability now to do in hours or minutes things that take
00:20:59.380 a whole country or maybe even the whole world a century to do and some people believe that if we
00:21:06.060 were to create and release a technology like that there'd be no coming back humans would no longer be
00:21:12.900 the most intelligent species on earth and we wouldn't be able to control this thing by default these
00:21:19.480 systems will be more powerful than us more capable of gaining resources power control etc and unless they have
00:21:26.540 a very good reason for keeping humans around i expect that by default they will simply not do
00:21:30.900 so and the future will belong to the machines not to us and they think that we have one shot
00:21:36.780 essentially one shot like one shot meaning we don't we can't update the app once we release it once
00:21:42.660 this cat is out of the bag once this genie is out of the bottle whatever once this program is out of
00:21:47.060 a lab as it were exactly unless it is 100 percent aligned with what humans value unless it is somehow placed
00:21:55.600 under our control they believe it will eventually lead to our demise i guess i'm scared to ask this
00:22:02.060 but like how would this look like a global disaster or are we talking about it getting control of crispr and
00:22:08.860 releasing a global pandemic yes there are those fears for sure i want to get more into all the
00:22:15.460 different scenarios that they foresee in a future episode but i think the simplest one to grasp is
00:22:20.440 just this idea that a superior intelligence is rarely if ever controlled by an inferior intelligence
00:22:27.980 and we don't need to imagine a future where these asi systems hate us or they like break bad or
00:22:34.860 something the way that they'll often describe it is that these asi systems as they get further and
00:22:41.540 further out from human level intelligence after they evolve beyond us that they might just not
00:22:48.260 think that we're very interesting i mean in some ways hatred would be flattering like if they saw us
00:22:54.660 as the enemy and we were in some battle between humanity and the ai which we've seen from so many
00:22:59.220 movies but what you're describing is just like indifference right i mean one of the ways that people
00:23:04.900 will describe it is that like if you're going to build a new house of all the concerns you might have
00:23:09.520 in the construction of that house you're not going to be concerned about the ants that live on that
00:23:16.560 land that you've purchased and they think that one day the asis may come to see us the way that we
00:23:23.440 currently see ants you know it's not like we hate ants some people really love ants but humanity as a
00:23:31.600 whole has interests and if ants get in the way of our interests then we'll fairly happily kind of destroy
00:23:37.940 them this is something i was talking to william macaskill about he is a philosopher and also the
00:23:42.520 co-founder of this movement called the effective altruists and the thought here is if you think of
00:23:48.440 ai as we're developing as like this new species that species as its capabilities keep increasing so the
00:23:55.680 argument goes we'll just be more competitive than the human species and so we should expect it to end up
00:24:02.720 with all the power that doesn't immediately lead to human extinction but at least it means that
00:24:09.440 our survival might be as contingent on the goodwill of those ais as the survival of ants are on the
00:24:16.740 goodwill of human beings if the future is closer than we think and if one day soon there is at least a
00:24:25.340 reasonable probability that super intelligent machines will treat us like we treat bugs then what do the
00:24:32.680 folks worried about this say that we should do well there's essentially two different approaches to
00:24:38.220 the perceived threat some people who are worried about this they simply say that we need to stop
00:24:44.980 the ai industry from going any further and we need to stop them right now we should not build asi
00:24:51.820 just don't do it we're not ready for it and it shouldn't be done further than that it's not just i am
00:24:57.140 not trying to convince people to not do it out of the goodness of their heart i think it should be
00:25:00.180 illegal it should be logically illegal for people and private corporations to attempt even to build
00:25:08.220 systems that could kill everybody what would that mean to make it illegal like how do you enforce that
00:25:12.560 yeah i mean some accelerationists joke like what are you going to outlaw algebra right you don't need
00:25:17.660 uranium in a secret uh center you can just build it with code right but you do need data centers and
00:25:25.040 you could you know put in laws and restrictions that stop these ai companies from building any more
00:25:30.900 data centers and a number of other laws there are some people though who go even further and say
00:25:35.680 that nuclear armed states like the u.s should be willing to threaten to attack these data centers
00:25:43.240 if these ai companies like open ai are on the verge of releasing an agi to the world wait so
00:25:51.500 even bombing data centers that are in virginia or in uh massachusetts i mean like they see it as that
00:25:58.520 great of a threat they believe that on the current path we're on there is only one outcome and that
00:26:06.300 outcome is the end of humanity if we build it then we die exactly and this is why many people have
00:26:13.200 come to call this faction the ai doomers the accelerationists like to call doomer that was a
00:26:19.440 kind of pejorative coined by them and very successfully i must say i disavow the doomer
00:26:24.240 label because i don't see myself that way some of them have embraced the name doomer others of them
00:26:28.900 dislike the name doomer they often will call themselves the realists but in my reporting
00:26:34.380 everyone calls themselves the realists so i didn't think that would work like i consider myself to be realistic
00:26:39.320 to be calibrated and one of the reasons that they balk at the name is that they feel like it makes
00:26:45.580 them come off as a bunch of anti-technology luddites when in fact many of them work in technology many of
00:26:50.660 them love technology people like connor leahy i mean they even like ai as it is right now i mean he uses
00:26:56.880 chat gpt he just tells me that from everything that he sees where it's headed where it's going
00:27:03.680 we have no choice but to stop them if it turns out tomorrow there's new evidence that actually all
00:27:10.240 these problems i'm worried about are less of a problem than i think they are i'd be the most happy
00:27:14.320 person in the world like this would be ideal all right so one approach is we stop ai in its tracks
00:27:20.980 it's illegal to proceed down this road we're on but that seems challenging to do given how much
00:27:28.480 is already invested in ai and frankly how much potential value there is in the progress of this
00:27:32.920 technology so what's the alternative well there's another group of people who are pretty much equally
00:27:39.560 worried about the potentially catastrophic effects of making an agi and it leading to an asi but they
00:27:46.400 agree with you that we probably can't stop it and some of them would go as far as to say we probably
00:27:51.680 shouldn't stop it because there really is a lot of potential benefits in agi so what they're advocating
00:27:58.980 for is that our entire society essentially our entire civilization needs to get together and try
00:28:07.280 in every way possible to get prepared for what's coming how do we find the win-win outcome here
00:28:15.260 one of the advocates for this approach that i talked to is liv boeree she is a professional
00:28:20.360 poker player and also a game theorist our job now right now whether you know you are someone building
00:28:27.140 it or someone who is observing people build it or just a person living on this planet because this
00:28:31.520 affects you too is to collectively figure out how we unlock this narrow path because it is a narrow
00:28:37.640 path we need to navigate we should be really focusing a lot right now on trying to understand
00:28:43.380 as concretely as possible what are all the obstacles we need to face along the way and what can we be
00:28:49.500 doing now to ensure that that transition goes well this faction which includes figures like william
00:28:55.080 macaskill what they want to see is the thinking institutions of the world you know the universities
00:29:01.540 research labs the media join together to try and solve all of the issues that we're going to face over
00:29:08.600 the next few years as agi approaches so you mean not just leave this up to the tech companies
00:29:14.620 exactly they want to see you know politicians brainstorming ways to help their constituents in the event
00:29:23.160 that the bottom falls out of the job market right right or prepare communities to have no jobs i guess
00:29:28.980 some of them go that far right like universal basic income and they also want to see governments around
00:29:34.700 the world especially in the u.s start to regulate this industry what are the concrete steps we could
00:29:40.800 take in the next year to get ready so we'd like regulations that say when a big company produces
00:29:46.860 a new very powerful thing they run tests on it and they tell us what the tests were
00:29:52.240 jeffrey hinton after he quit google he converted to this approach and he was talking to me about
00:29:58.520 the kinds of regulations that he wants to see and we'd like things like whistleblower protection
00:30:03.240 so if someone in one of these big companies discovers the company is about to release something
00:30:08.200 awful which hasn't been tested properly they get whistleblower protections those are to deal though
00:30:14.120 with more short-term threats okay but what about the long-term threats what about this idea that ai
00:30:20.320 poses this existential threat what is it that we could do to prevent that okay so i can tell you
00:30:26.460 what we should do about ai taking over there's one good piece of news about this which is that
00:30:32.880 no government wants that so governments will be able to collaborate on how to deal with that so you're
00:30:38.860 saying that china doesn't want ai to take over their power and authority the u.s doesn't want some
00:30:44.540 technology to take over their power and authority and so you see a world where the two of them can
00:30:49.620 work together to make sure that we keep it under control yes in fact china doesn't want an ai to take
00:30:57.240 over the u.s government because they know it will pretty soon spread to china so we could have a system
00:31:03.700 where there were research institutes in different countries that were focused on how are we going to
00:31:09.460 make it so that it doesn't want to take over from people it will be able to if it wants to so we
00:31:14.140 have to make it not want to and the techniques you need for making it not want to take over
00:31:19.820 are different from the techniques you need for making it more intelligent so even though the
00:31:24.180 countries won't share how to make it more intelligent they will want to share research on
00:31:28.840 how do you make it not want to take over and over time i've come to call the people who are a part
00:31:34.040 of this approach the scouts like the boy scouts be prepared like the boy scouts yes exactly and it
00:31:40.440 turned out that after i ran this name by william mccaskill so what if i called your camp the scouts
00:31:46.580 so a little fun fact about myself is i was a boy scout for 15 years he actually was a boy scout and so i
00:31:55.980 thought okay the scouts maybe that's why i've got this approach but the key thing about the scouts
00:32:01.380 approach if it's going to work is they believe that we cannot wait that we have to start getting
00:32:08.800 prepared and we have to start right now this is something i was talking about with sam harris
00:32:13.800 the reasons to be excited and to want to go go go are all too obvious except for the fact that
00:32:19.540 we're running all of these other risks and we haven't figured out how to mitigate them
00:32:23.700 sam is a philosopher he's an author he hosts the podcast making sense and he's probably
00:32:28.700 the most impassioned scout that i know personally there's every reason to think that we have
00:32:33.640 something like a tightrope walk to perform successfully now like in this generation right
00:32:40.820 not a hundred years from now and we're edging out onto the tightrope in a style of movement that is not
00:32:49.000 careful if you knew you had to walk a tightrope and you got one chance to do it and you've never done
00:32:56.300 this before like what is the attitude of that first step and that second step right we're like
00:33:03.720 racing out there in the most chaotic way you know yeah and just like we're off balance already we're
00:33:12.660 looking over our shoulder fighting with the last asshole we met online and we're leaping out there
00:33:19.060 right and you've been on this for a long time in 2016 i remember you did this big ted talk yeah i've
00:33:25.560 watched it at the time it had millions of views and you were essentially saying the same thing you
00:33:30.220 were trying to get people to realize that we have a tightrope to walk and we have to walk it right now
00:33:35.660 well i wanted to to help sound the alarm about the inevitability of this collision whatever the
00:33:44.560 time frame we know we're very bad predictors as to how quickly certain breakthroughs can happen
00:33:50.820 so stuart russell's point which i i also cite in that talk which i think is a quite brilliant
00:33:57.580 change in a frame he says okay let's just admit it is you know probably 50 years out right let's just
00:34:04.180 change the concepts here imagine we received a communication from elsewhere in the galaxy from
00:34:10.940 an alien civilization that was obviously much more advanced than we are because they're talking to us
00:34:16.640 now and the communication reads thus people of earth we will arrive on your lowly planet in 50 years
00:34:25.820 get ready just think of how galvanizing that moment would be that is what we're building
00:34:37.040 that collision and that new relationship
00:34:40.240 coming up on the last invention
00:35:05.360 why is all the worry about the technology going badly wrong and why are people not worried enough
00:35:18.900 about it not happening the accelerationists respond to these concerns existential risk for humanity is a
00:35:25.020 portfolio we have nuclear war we have pandemic we have asteroids we have climate change we have a whole
00:35:32.460 stack of things that could actually in fact have this existential risk so you're saying that it's going
00:35:36.560 to decrease our overall existential risk even as it itself may pose to some degree an existential risk
00:35:43.640 yes researchers tell us what they saw that changed their minds i was a person selling ai
00:35:52.060 as a great thing for decades i convinced my own government to invest hundreds of millions of dollars
00:35:58.460 in ai all my self-worth was on the plan that it would be positive for society and i was wrong
00:36:09.700 i was wrong and we go back to where the technology fueling this debate began
00:36:16.020 basically this is the holy grail of the last 75 years of computer science
00:36:22.000 it is the genesis the er like philosopher's stone of the field of computer science
00:36:28.520 the last invention is produced by longview to hear episode two right now search for the last
00:36:39.060 invention wherever you get your podcasts and subscribe to hear the rest of the series
00:36:43.820 thank you for listening and our thanks to sam we'll see you soon