Making Sense - Sam Harris - June 22, 2023


#323 — Science & Survival


Episode Stats

Length

46 minutes

Words per Minute

160.3

Word Count

7,410



Summary

In this episode of the Making Sense podcast, I speak with Martin Rees, the well-known astronomer, former president of the Royal Society, fellow and former Master of Trinity College, Cambridge, and member of the UK House of Lords, about his most recent book, If Science Is to Save Us, which is the principal topic of our conversation. We talk about the importance of science and scientific institutions, the paradoxical provisionality of science, and the strange relationship we have to scientific authority, as well as genius as a scientific and sociological phenomenon. We then turn to civilizational risk and the failure of our politics and institutions to properly grapple with it, covering pandemic preparedness, engineered pathogens, cyber attacks, artificial intelligence, and nuclear weapons. We also discuss the far future, the Fermi problem and the prospect of a great filter, the multiverse, string theory, exoplanets, and large telescopes, along with steps toward improving scientific institutions, wealth inequality, atheism, and the conflict between science and religion, which provokes a bit of a debate between us about rationality, ethics, and my version of moral realism. The Making Sense podcast is ad-free and made possible entirely by listener support; to access full episodes, subscribe at samharris.org.


Transcript

00:00:00.000 welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.520 this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.900 of this conversation in order to access full episodes of the making sense podcast you'll
need to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.020 podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.500 therefore it's made possible entirely through the support of our subscribers so if you enjoy
what we're doing here please consider becoming one today i'm speaking with martin rees martin is a
00:00:49.800 well-known astronomer and the former president of the royal society a fellow and former master of
00:00:56.460 trinity college cambridge an emeritus professor of cosmology and astrophysics at cambridge he's
00:01:02.640 also a member of the uk house of lords and he's the author of several books most recently if science
00:01:09.320 is to save us which is the principal topic of today's conversation we talk about the importance
00:01:15.680 of science and scientific institutions the paradoxical provisionality of science and the strange relationship
00:01:23.920 we have to scientific authority we talk about genius as a scientific and sociological phenomenon
00:01:31.100 civilizational risk pandemic preparedness artificial intelligence nuclear weapons the far future
00:01:39.920 the fermi problem where is everybody out there in the cosmos the prospect of a great filter explaining
00:01:48.160 the apparent absence of everybody the multiverse string theory exoplanets large telescopes steps toward improving
00:01:58.100 scientific institutions wealth inequality atheism the conflict between science and religion and this provokes a bit
00:02:06.680 of a debate between us martin was not a fan of what the new atheists were up to nor is he a fan of my version of
00:02:13.980 moral realism so we talk about rationality and ethics unfortunately we had a few technical difficulties
00:02:19.500 and ran out of studio time so the debate didn't go on for as long as it might have but um we got about 30 minutes
there where we disagreed about religion and ethics a good bit and i enjoyed it and now i bring you martin rees
i am here with martin rees martin thanks for joining me thank you for having me so you have a new book
00:02:49.340 if science is to save us which brings together many of your concerns about existential risk and
00:02:57.020 the importance of science you know the promise of it along with our failures to fully actualize that
00:03:03.420 promise and so i want to talk about this i want to talk about existential risk which you've written
00:03:08.560 about before and also just the inability of our politics and our institutions to properly grapple
00:03:15.320 with it but before we jump into those topics but perhaps you can summarize your your intellectual
00:03:21.540 background and your life in science how would you summarize the kinds of topics you've focused on
00:03:27.840 yes well i've been very lucky in that i've worked most of my career in astrophysics and uh i'm lucky in that
00:03:35.580 when i started it was an exciting time when we had the first evidence for the big bang uh the first evidence
00:03:42.640 for black holes etc and uh i was lucky to be able to write some of the first papers on those topics and uh i always
00:03:50.660 advise students starting now to pick a subject uh where new things are happening so that you could be the first
00:03:56.760 person to do new things rather than just uh filling in the gaps that the old guys left and so i was
lucky there and i've been even more lucky in that the subject has remained fruitful and so i would
00:04:09.080 describe my work as being phenomenology mainly trying to make sense of all the phenomena discovered through
00:04:16.320 observations on the ground and in space so that's been my main work but when i got to the age of 60
00:04:22.100 i felt i ought to diversify a bit because in my subject in particular it was taken over rather by
computational modeling and i knew i would never be adept at doing that so i felt i ought to do
00:04:37.100 something else and i therefore took on some other duties outside my academic field more in politics i
00:04:45.800 became head of the biggest college in cambridge i became president of the royal society which is our
00:04:51.100 national academy of sciences and i even became a member of the house of lords so i had a wide
00:04:55.960 experience in my 60s of doing this sort of thing and that really uh is uh the background to why i wrote
00:05:03.340 a book which has this rather broad coverage nice nice well um it's a wonderful career and and it's
00:05:10.900 fantastic to have someone who has seen so much scientific progress as well as its um failure you know both
00:05:19.740 the successes and failures of it to permeate the culture and affect policy it's just it's great that
00:05:25.720 you are um where you are and and spending as much time as you are currently in furthering the public
00:05:32.500 understanding of science because your most recent books have definitely done that before we jump into
00:05:38.180 questions of existential risk and the other topics i outlined i have a first question and concern
00:05:45.980 that's more foundational with respect to how we do science how we how we understand its progress how
00:05:53.980 we communicate that progress to non-scientists and it it's around this the issue of the provisionality
00:06:02.000 of science and the really the perpetual provisionality of it there are no final final answers really
and this goes to you know the philosophy of science and you know the popperian observation that we we
00:06:17.620 never really finally prove something true we we simply prove false theories false and we just hew to the
00:06:25.840 best current explanation but there's this does throw up a kind of paradox because what we have in
00:06:32.780 science in the culture of science and in just the epistemology of it is a is a fundamental distrust of
00:06:39.300 authority right we don't we don't slavishly respect authority in science and yet the reality is that you
00:06:46.180 know to a first approximation scientific authority matters you know when no one has time to run all of
00:06:52.780 the experiments going back to the you know the origins of of any specific science themselves we're
00:06:58.340 constantly relying on colleagues to have done things correctly to not be perpetuating fraud to not be
00:07:05.640 lying to us and yet the truth is you know even a nobel laureate is only as good as his last sentence if his if his last
00:07:14.840 sentence didn't make any sense well then a graduate student or anyone else can say that doesn't make any sense and
00:07:19.900 and everyone's on the on the same ground epistemologically speaking so how do you how do you think about how we
treat authority and the provisionality of science both in science and and in the communication of it well you're quite right
00:07:35.640 of course that science is a progressive enterprise it's a social and collective enterprise and we
00:07:41.880 can never be sure we've got the final truth but i think we've got to not be too skeptical we've got to
00:07:48.020 accept that some things are almost incontestable like newton's laws of motion for instance and also that in many
00:07:57.400 areas of uh importance socially it is prudent to listen to the experts rather than to random person
00:08:03.900 even though the experts are fallible and i think people talk about the idea of revolutions overthrowing
things thomas kuhn is the famous philosopher who did this and i think um the one or two revolutions
00:08:18.160 quantum theory was one but for instance it's not true in any sense that einstein overthrew newton
00:08:25.720 newton is still fine it's good enough to program all um spacecraft going in our solar system but
00:08:34.380 what einstein did was got a theory that gave a deeper understanding and had a wider applicability
00:08:42.420 but newton's laws within a certain range are still okay so one can't say that newton was falsified
00:08:48.520 we can say that it was a step forward and if you think of physics again then our hope would be that
00:08:54.880 there may be some theory unifying all the laws of nature the four basic forces and that will
00:09:02.220 incorporate einstein's theory as a special case so it's really a progressive incorporation and
00:09:09.120 broadening of our understanding how do you think about the quasi myth of the lone genius in science and
00:09:17.240 what that has done to the perception of science i say quasi myth because it it's not truly a myth i mean
00:09:22.680 you just mentioned newton and and when you think about the progress he made in about 18 months locked
00:09:28.180 in a garret you know avoiding the plague he seemed to have done a you know about a century at least of
00:09:33.940 of normal scientific work but how do you think about genius and the the idea that we should be
00:09:43.120 shining the light of admiration on specific scientists for their breakthroughs and ignore the
00:09:49.960 and very often ignoring the fact that someone else would have made that breakthrough about 15 minutes
00:09:55.100 later if the first person hadn't yes well of course that is true and the difference between science
00:10:02.040 and the arts is that if you're an artist then anything you create is distinctive it's your work
00:10:09.200 it may not last whereas in the case of science if you make a contribution then it will last probably
00:10:17.580 if you're lucky but it'll be just one brick in the edifice so it'll lose its individuality in that in
00:10:25.440 almost all cases it would have been done by someone else if you hadn't done it so that's why there is
00:10:31.020 this difference and it's also why science is a social activity and why those who cut themselves off
00:10:38.140 may be able to do some kind of work in say pure mathematics by themselves but science involves
00:10:46.260 following developments across a fairly broad field and in fact in my book i discuss this contrast in
00:10:54.300 telling us why in the case of many artists and composers their last works are thought their greatest
00:11:02.040 and that's because once they were influenced when young by whatever the tastes were then
00:11:07.140 it's just internal development they don't need to absorb anything else whereas no scientist could go
00:11:12.600 on for 40 years just thinking by themselves without having to absorb new techniques all the time and
00:11:20.540 it's because scientists get and everyone gets less good at absorbing new ideas as they get older that
00:11:27.860 there are very few scientists of whom we would say that their last works are their greatest
00:11:31.720 interesting and that's why i decided to do something else when i was 60 that's why you you looked in the
00:11:38.180 mirror at 60 and and realized you were not going to start programming that's um you've met a lot of
00:11:44.600 great scientists over the course of yes many decades have you have you ever met someone who you would
00:11:51.540 unhesitatingly call a genius i mean someone who's just seemed in their scientific abilities or their
00:11:58.200 intellectual abilities generally just to be a you know a standard deviation beyond all the other smart
00:12:03.800 people you've had the pleasure of of knowing yes i think i've met some but of course i have a chapter
00:12:10.740 in my book uh saying that nobel prizes may do more harm than good and that's because the people who make
the great discoveries are not necessarily the same people as those who have the deepest
00:12:22.860 intellects right many of the great discoveries are made serendipitously i think in the case of
00:12:29.520 astronomy discovery of neutron stars and of the radiation from the big bang those were both
discovered by accident by people who are not of any special intellectual eminence but
00:12:42.160 nonetheless i think we would accept that there are some people who do have special intellectual
00:12:48.220 qualities of the people who i've known in my field i would put steven weinberg in that class
00:12:53.200 as someone who obviously had a very broad intellectual interests and the ability to uh to do a great deal of
00:13:01.700 work at greater variety and a greater speed than most other people so there are people clearly in every
00:13:07.860 field who have special talents but they are not necessarily the people who make the great discoveries
00:13:15.500 which may be partly accidental or opportunistic and also of course they're not always the people
00:13:21.640 who you want to listen to in a general context and that's why it's a mistake if um nobel prize
00:13:28.980 winners are asked to pontificate on any subject because they may not be an expert on
00:13:33.220 yeah yeah yeah weinberg was wonderful uh he died a few years ago but he was um right really an
impressive person and a beautiful writer too yes did you know uh feynman or any of these other uh
bright lights of physics well i knew feynman slightly but uh i knew some of these other people who
00:13:54.100 were exceptional in their abilities and of course did keep going and didn't do just one thing because
00:14:01.160 and i also uh knew francis crick for instance who clearly was a rather special intellect and uh
mathematicians like uh andrew wiles who incidentally did shut himself away for seven years to do his
00:14:14.020 work but that was exceptional yeah talk about a solitary effort that was incredible okay well let's talk
00:14:20.500 about the fate of our species which i think relies less on the lone genius and much more on our failure
00:14:27.300 or hopefully success in solving a variety of coordination problems and getting our priorities
00:14:35.460 straight uh and actually using what we know in a way that is cooperative and global we we face many
00:14:43.860 problems that are global in character and seem to cry out for global solutions and yet we have a
00:14:50.980 global politics we even have a domestic politics in every country that is tied to you know short-term
00:14:59.700 considerations of a sort that that really even if the the existential concerns are perfectly clear
00:15:06.640 we seem unable to take them seriously because it just there is no political incentive to do that
00:15:14.540 what what what are your if you were going to list your concerns that go by the name of of existential
00:15:21.440 risk you know maybe we should be a little more capacious than existential i mean they're you know
00:15:28.040 just enormous risk i mean that you know extreme risks yes yes yeah there can still be a few of us
00:15:33.500 left standing to suffer the the consequences of our stupidity what what are you worried about well i think
00:15:38.860 i do worry about global setbacks and the way i like to put it is in a cosmic context the earth's been
00:15:44.680 around for 45 million centuries but this century is the first when one species namely our species can
00:15:53.040 destroy its future or set back its future in a serious way because we are empowered by technology
00:16:00.360 and we are having a heavier footprint collectively on the world than was ever the case before and i think
00:16:07.520 there are two kinds of things we worry about one kind is the consequences of our heavier impacts on
00:16:14.680 nature and this is um climate change loss of biodiversity and issues like that which are long-term
00:16:20.940 concerns and the other is the uh fact that our technology is capable of destroying a large fraction of
00:16:29.800 humanity well that's been true ever since the invention of the h-bomb about 70 years ago but what
00:16:37.000 worries me even more is that new technologies bio and cyber etc can have a similar effect we know
00:16:46.060 that a pandemic like covid19 can spread globally because we are interconnected in a way we weren't in
00:16:52.920 in the past but what is even more scary is that it's possible now to engineer viruses which would be even
00:17:01.060 more virulent or more transmissible than the natural ones and this is my number one nightmare actually
00:17:08.440 that this may happen and it's my number one nightmare because it's very hard to ensure how we can actually
00:17:15.600 rule out this possibility in the case of nuclear weapons we know it needs large special purpose
00:17:21.220 facilities to build them and so the kind of monitoring and inspection which we have from the
international atomic energy agency can be fairly effective but even if we try hard to regulate
00:17:33.760 what's done in uh biological laboratories even the stage four ones which are supposed to be
00:17:40.220 the most secure ones enforcing those regulations globally is almost as hopeless as enforcing the drug laws
00:17:48.080 globally or the tax laws globally because the delinquents can be just a few individuals or a small
00:17:54.680 company and this is a big worry i have which is that i think if we want to make the world safe
00:18:00.840 against that sort of concern we've got to uh be aware of a growing tension between three things we'd
00:18:06.540 like to preserve namely privacy security and freedom and i think that privacy is going to have to go
if we want to ensure that someone is not clandestinely plotting something that could kill us all
00:18:20.520 so there's one one class of threats yeah can you say well yeah i want to talk about uh others but
00:18:26.940 can you say more on how you imagine the the infringement of privacy being implemented here
00:18:34.800 what what do you what would actually help mitigate this risk well obviously uh we've given up a lot of our
00:18:41.620 privacy with uh cctv cameras and all that sort of thing and uh lots of what we have on internet is
00:18:48.740 probably accessible for surveillance groups and i think we probably have to accept something like
that to a greater extent than uh certainly in the u.s would be acceptable now but i think
00:19:00.180 we've got to accept that these risks are very very high and we may have to modify our behavior in that
00:19:05.800 way yeah well i think there's one infringement of privacy that i don't think anyone would care about
00:19:11.380 which is for us to be monitoring the spread of pathogens increasingly closely right actually just
00:19:18.780 sampling the air and water and waste and and sampling everything we can get our our hands on
00:19:24.380 so as to detect something novel and dangerous as early as possible given that our ability to vaccinate
against pathogens seems to have gotten you know much faster if not uniformly better
00:19:40.980 yes well of course the the hope is that the uh technology of uh vaccine development will
00:19:47.300 accelerate and that will uh counteract some of these concerns but i do think that we are going
00:19:52.500 to have to worry very much about the uh the spread of uh not just natural pandemics that might have a
much higher fatality rate than covid-19 but also these uh engineered pandemics which could be even worse
00:20:06.420 and i think we've got to have some sort of surveillance in order to minimize that and of course
00:20:12.660 the other way in which small groups are empowered is through um cyber attacks in fact i quote in in my
00:20:20.980 book from um a u.s defense department document from 2012 where they point out that a state level cyber
00:20:30.020 attack could knock out the electricity grid on the eastern coast of the united states and if that
00:20:38.180 happened they say i quote it would merit a nuclear response it would be catastrophic obviously if the
00:20:45.700 electricity grid shut down even for a few days and um what worries me now is that it may not need a
00:20:51.860 state level actor to do that sort of thing uh because there's an arms race as it were between
the empowerment of the cyber attackers and the empowerment of the cyber security people and one
00:21:02.660 doesn't know which side is going to gain yeah we can add ai to this this picture which um that's
right which i i know you've been concerned about i think the group you helped found the
00:21:15.300 center for existential risk was one of the the sponsors of the that initial conference in puerto
00:21:20.820 rico in 2015 that i was happy to to go to that first brought everyone into the same room to talk
00:21:28.340 about the the threat or or lack thereof of general ai agi yeah and you know we've obviously seen a ton
00:21:36.820 of progress in recent months on narrow ai of the sort that could be presumably useful to anyone who wanted
00:21:42.580 to make a mess with you know cyber attacks indeed yes yeah i mean so it's there is an asymmetry here
00:21:50.740 which is intuitive i don't know if it holds across all classes of risk but it's easy to assume and i it
00:21:58.900 seems like it must generally always be accurate to assume that it's easier to break things than to fix
00:22:06.500 them or easier to make a mess than it is to clean it up i mean it's probably something related relating
00:22:12.100 to entropy here that we could generalize yes how do you view these asymmetric risks because as you point
00:22:18.900 out nuclear risk that the one fortuitous thing about the technology required to make you know big bombs is
00:22:26.980 that there are certain steps in the process that are hard for a single person or even a small number of
people to accomplish on their own i mean there are just rare materials you know they're
00:22:39.620 hard to acquire etc and it's more of an engineering challenge than one person can reliably take on but not
00:22:46.820 so with you know dna synthesis you if we fully democratize all those tools and you can just you know
00:22:53.460 order nucleotides in the mail and uh not so at all with cyber and and now ai which is a bit of a
00:23:03.620 surprise i mean most of us who are worried about the development of truly powerful ai were assuming
00:23:11.460 that the most powerful versions of it would be inaccessible to almost everyone for the longest
00:23:19.940 time and you'd have you know you'd have a bunch of researchers making the decision to you know as to
00:23:25.540 whether or not a system was safe but now it's seeming that our most powerful ai is being developed
00:23:30.260 already in the wild with everyone you know literally millions and millions of people given access to
00:23:36.420 it on a moment-by-moment basis yes that's right that that is scary and i think we do need to have
00:23:43.300 some sort of regulation rather like in the case of drugs we encourage the r and d but intensive testing
is expected before something is released on the market and we haven't had that in the case of
chatgpt and things of that kind and i think there needs to be some discussion some international agreement
00:24:07.300 about how one does somehow regulate these things so that the worst bugs can be erased before they are
00:24:15.540 released to a large public this is of course especially difficult in the case of ai because the field is
00:24:24.980 dominated to a large extent by a few multinational conglomerates and of course they can as we know
00:24:34.500 evade paying proper taxation and they can evade regulations by moving their their country of residence
00:24:43.300 and all that and for that reason it's going to be very hard to enforce uh regulations globally
00:24:50.500 on those companies but we've got to try and indeed in the last few months there have been
00:24:56.020 discussions about how this can be done it's not just uh academies but uh uh but bodies like the um
00:25:03.380 the g20 and the un and other bodies must try to think of some way in which we can regulate these but of
00:25:10.180 course we can't really regulate them completely because 100 million people have used this uh
00:25:17.620 software within a month so it's going to spread very very widely and uh i think the only the only point
00:25:23.860 i would make to perhaps uh be an antidote to the most scary stuff i think the idea of um of a machine
00:25:33.220 taking over general super intelligence is still far in the future i mean i'm with um those people
00:25:40.260 who think that for a long time we've got to worry far more about human stupidity than artificial
00:25:46.100 intelligence and i think that's the case but on the other hand we do have to worry about bugs and
00:25:53.780 breakdowns in these programs and that's a problem if you become too dependent on them if we become
00:26:01.460 dependent globally on something which uh runs um gps or the internet or the electricity grid network
00:26:11.140 over those areas then i worry more about the vulnerability if something breaks down and
00:26:17.060 it's hard to repair than i do about uh an intentional attack yeah i mean the scary thing is it's easy to
00:26:24.420 think about the harm that bad actors with various technologies can commit but it's so much of our
00:26:32.340 risk is the result of what can happen by accident or just inadvertently just based on human stupidity
00:26:39.060 or just the failure of antiquated systems to function properly i mean when you think about the risk of
00:26:46.100 nuclear war yes it's scary that there are people like you know vladimir putin of whom we can
00:26:51.700 reasonably worry you know whether he may use nuclear weapons to prosecute his own you know
00:26:58.900 very narrow aims but the bigger risk at least in in my view is that we we have a a system with truly
00:27:07.220 antiquated technology and it's just easy to see how we could stumble into a a full-scale nuclear war
00:27:14.500 with russia by accident by just misinformation no indeed the addition of ai to this picture is
00:27:21.380 terrifying yes i think it's very scary indeed and uh i i think at least this uh hype in the last few
00:27:28.980 months has raised these issues on the agenda and that's a very good thing because one point about
getting political action or getting these things up the political agenda is that politicians have to
realize that the public care and everyone now is scared about these threats and so it will at least
motivate the public enough for politicians to do what they can to achieve some sort of regulation or
ensure the greater safety of these complex systems and uh this is i think something which uh the
00:28:05.700 public doesn't recognize really that um politicians they have scientific advisors but those advisors
00:28:12.660 have rather little traction except when there's an emergency you know after kobe 19 they did but
00:28:19.700 otherwise they don't and incidentally to slightly shift gears that's one of the problems getting serious
00:28:26.180 action to uh deal with the climate change and similar environmental catastrophes because they're slow to
00:28:33.780 develop and long range and therefore politicians don't have the incentive to deal with them urgently
00:28:40.980 because they will happen on the timescale longer than the electoral cycle in some cases longer than
00:28:46.820 the normal cycle of business investment but nonetheless if you want to ensure that we don't get some
00:28:52.660 catastrophic changes in the second half of the century they do have to be prioritized and if that's to happen
00:28:58.740 then the public has to be aware because the politicians if voters care will take action and that's why in my
00:29:09.380 book i point out that we scientists are on the whole not very charismatic or influential in general
00:29:17.140 so we depend very much on individuals who do have a much larger following and in my book i
quote four people a disparate quartet who have
00:29:28.740 had this effect in the climate context the first is pope francis whose encyclical in 2015
got him a standing ovation at the un energized his billion followers and made it easy to get the
00:29:41.940 consensus at the paris climate conference in 2015 so he's number one number two is our secular pope
00:29:49.380 david attenborough who uh certainly in many parts of the world has made people aware of uh environmental
00:29:56.660 damage ocean pollution and climate change the third i would put is uh bill gates uh who has um
00:30:04.660 a large following and talks a great deal of sense about technological opportunities and what's realistic
and what isn't so i think he's a positive influence and fourth we should think of greta thunberg who has
00:30:17.220 energized the younger generation and i think those four between them have in the last five years
00:30:22.580 raised these issues on the agenda so that uh governments are starting to act about how to cut carbon
emissions and uh even business is changing its rhetoric even if not changing its actions very much
00:30:36.180 well it's it is a difficult tangle to resolve with this challenge of public messaging and
00:30:46.340 leveraging the attention of the wider world against the short-term incentives that everyone feels
00:30:56.180 very directly i mean the thing that is going to move someone through their day well you know from the
00:31:00.660 the moment they get out of bed in the morning tends to be what they're what they're you know truly
00:31:05.620 incentivized to do in the near term and even if you were going to live by the light of the most rank
00:31:13.380 selfishness everyone seems to hyperbolically discount their own interests over the course of time so that
00:31:21.780 which is to say it's it's even hard to care about one's own far future or even or the future of of one's
00:31:28.420 children to say nothing of the you know the abstract future of humanity and you know the long-term
00:31:35.060 prospects of the species yes so it's just it's amazing to me i mean even as someone who deals with
00:31:40.900 these issues and you know fancies himself you know a clear-eyed ethical voice on many of these topics
00:31:47.780 i'm amazed at how little time i spend really thinking about the world that my children will inhabit
00:31:57.380 when they're my age and trying to prioritize you know my resources so as to ensure that that is
00:32:04.340 the best possible world it can be i mean you know so much of what i'm doing is you know loosely coupled
00:32:10.660 to that outcome but it's not felt as a moral imperative in the way that responding to near-term
00:32:17.700 challenges is no so i i mean maybe maybe you can say something about the ethical importance of the
00:32:22.980 future and yes and how we should respond to these kinds of you know long tail risks that in any
given month in any given year are not you know it's hard to argue that they're priorities because
00:32:35.540 each month tends to look like the last and yet we know that if we can just influence the trajectory of
00:32:43.620 our progress by one percent a year you know 50 years from now will be totally different than if we
00:32:49.460 degrade it by one percent a year yes that's right that is the problem and of course most people do
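Sam's "one percent a year" framing is a quantitative claim, so a minimal back-of-the-envelope sketch may help make the compounding explicit. This is purely illustrative and not from the episode: the 1% annual rate is the figure used in the conversation, and treating the trajectory as simple compound growth or decay is an assumption of this sketch.

```python
# Illustrative sketch only (not from the episode): treat the "trajectory of
# our progress" as simple compound growth or decay at a hypothetical 1% per
# year, and compare the two outcomes after 50 years.
years = 50
rate = 0.01

improve = (1 + rate) ** years  # roughly 1.64x the starting point
degrade = (1 - rate) ** years  # roughly 0.6x the starting point

print(f"+1% per year for {years} years: {improve:.2f}x")
print(f"-1% per year for {years} years: {degrade:.2f}x")
print(f"gap between the two futures: {improve / degrade:.1f}x")  # about 2.7x
```

Under that assumption, the two futures differ by roughly a factor of 2.7 after 50 years, which is the sense in which a small annual difference compounds into a "totally different" world.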
really care about the life chances of their children or grandchildren who may be alive at the end of the
00:33:01.700 century and i think most of us would agree despite all the uncertainties in climate modeling that there is
a serious risk of a catastrophic change by the end of the century if not by 2050 and this is something which
00:33:18.020 we need to try and plan to avoid now uh and it is a big ask of course and that's why i think um
00:33:26.340 that you've got to appeal to people's concern about future generations but of course if we ask about
how far ahead one should look how much so-called long-termism one should go for one then has the
00:33:38.740 legitimate concern that if you don't know what the future is like and don't know what the preferences and
00:33:44.100 tastes are going to be of people 50 years from now then of course we can't expect to make sacrifices
00:33:50.180 because they may be uh inappropriate for what actually turns out so i think in the case of
00:33:55.860 climate we can fairly well predict what will happen if we go on as we are now but in other contexts things
00:34:02.180 are changing so fast that we can't make these predictions and uh and so the idea that we should uh
00:34:08.340 make sacrifice for people a thousand years from now doesn't make much sense and in fact in my book i
00:34:13.620 present an interesting paradox i think about those who built cathedrals in the 12th century amazing
00:34:21.620 artifacts uh that were built over a century and uh people invested in them and knew they would be
00:34:28.180 finished in their lifetime and they planned ahead even though they thought the world would end in a
00:34:33.780 thousand years and their spatial horizons were limited to europe on the other hand today when our
00:34:41.380 time horizons are billions of years and our spatial horizons vast too we don't plan ahead 50 years from
00:34:49.380 now that may seem a paradox but there is a reason for it the reason is that back in the middle ages
00:34:55.060 although the overall horizon was constricted they didn't think things would change very much
00:35:00.180 yeah they thought the life chances of their children and grandchildren would be the same
so they were confident that their grandchildren would appreciate the finished cathedral whereas i
00:35:10.020 think apart from i would guess on climate change perhaps and uh biodiversity where we don't want to leave a
00:35:17.620 depleted world for our descendants we can't really predict what people's preferences would be
what will be the key technologies and therefore it's perhaps inappropriate to uh plan in too much detail
00:35:30.500 for them so when things are changing unpredictably then of course you have a good reason for discounting
the future but we mustn't discount it too much especially in cases when we can be fairly confident
00:35:42.020 of the risks of the status quo yeah well i would add some of the risks we've we've already mentioned
00:35:48.180 to me that we know that living year after year with these invisible dice rolling with respect to the
00:35:55.700 threat of you know accidental nuclear war that's just a game we shouldn't be playing right so if we can
00:36:00.660 absolutely dial back that risk in any given year that would be a very good thing and so it is with
00:36:05.620 the spread of uh pandemics you know engineered or or natural yes let's talk about the future a little
00:36:12.340 bit more because i know you have thought about the transhuman possibilities or really inevitabilities
00:36:20.180 of the future that you're saying i think someplace that if you go out far enough our descendants will
00:36:26.500 not only not be recognizably human but they will just be be unimaginably different from from
00:36:31.780 what we are now how do you what do you actually expect and and what and what sort of time horizon
00:36:37.540 would you give that i mean if i draw if i could drop you back on earth 10 000 years from now
00:36:42.980 what would you what would you expect with respect to our descendants provided obviously that we don't
00:36:48.500 destroy the possibility of survival in this century well i'd expect significant differences but uh let me
00:36:55.460 put this in the cosmic context we know it's taken uh four billion years or so for the biosphere of which
00:37:02.580 we are apart today to evolve from the simple beginnings in the primordial slime and the young
00:37:08.020 earth and some people tend to feel that we are the culmination of evolution the top of the tree but no
00:37:14.980 astronomer can believe that because we know that the sun is less than halfway through its life
00:37:20.740 it's been shining for four and a half billion years but it's got five or six more before it flares up and
00:37:26.900 engulfs the inner planets and of course the universe has far longer still maybe going on forever and
00:37:34.660 i like to quote woody allen eternity is very long especially towards the end we are maybe not even
00:37:40.660 the halfway stage in the emergence of progressively greater complexity and i think this century is going
00:37:49.460 to be crucial in that context too because it may be the stage when indeed um genetic modification can
00:37:58.660 redesign humans and maybe cyborgs who are partly electronic will develop and uh that future evolution
00:38:07.940 uh will be much faster than darwinian natural selection it'll be what i like to call secular
intelligent design it'll be us or the machines aiding us designing a better next generation so the
future changes in intelligence are going to be faster than the slow darwinian ones which have led to the
00:38:30.500 emergence of humans over a few hundred thousand years so it'd be much faster and so it's completely
00:38:36.500 unimaginable what there will be in billions of years because there can be rapid changes on this timescale
which is fast compared to the darwinian timescale if i could be slightly more specific about my scenario
and discuss a recent article i wrote with mario livio and some other things i've written i think that
00:38:55.460 the first developments of post humans may happen on mars and let me explain this i wrote another book
00:39:04.740 last year with don goldsmith called the end of astronauts and we made the point that um as robots get
00:39:12.420 better the need for sending humans into space is getting weaker all the time and so i think
00:39:19.380 many of us feel that uh nasa or other public agencies shouldn't spend taxpayers money on uh human space
00:39:27.460 flight especially something as expensive as trying to send people to mars which is hugely expensive if
00:39:33.220 you want to make it almost risk-free fuel people and feed them for six months on the journey and give them
00:39:40.180 stuff for the return journey etc that's very very dangerous and the public probably won't uh accept
00:39:47.540 the cost or the risk so my story is that um we should uh leave human space flight to adventurers
00:39:56.740 prepared to accept high risks funded by the billionaires musk and bezos people like that because
00:40:03.380 there are people who would be prepared to go to mars on the one-way trip in fact musk himself has said
that uh he'd like to die on mars but not on impact and he's now i think 51 or 52 years old so 40 years
00:40:16.580 from now good luck to him and there are other people like that who will go and uh they will go on a
00:40:22.020 mission which is very risky and therefore far cheaper than anything that nasa would do right because
00:40:29.780 that's just risk averse and it's not our taxpayers money anyway so my scenario is that there may well
be a small colony of people living on mars by the end of the century probably adventurers rather like
captain scott and amundsen and people like that um and they'll be trying to live in this very hostile
00:40:50.260 environment and i think this will happen but incidentally i don't agree with musk that uh that'll be
00:40:56.420 followed by mass emigration of humans because living on mars is much worse than living at the
00:41:02.980 bottom of the ocean at the south pole and uh dealing with climate change on earth is a doddle compared
to terraforming mars so there's no planet b for ordinary risk-averse people but the reason i digressed
00:41:13.540 into this topic is that if you think of these uh crazy pioneers on mars they'd be ill adapted but they'd be
00:41:20.980 away from the regulators and so they will use all the techniques of cyborg and genetic modification
00:41:29.460 to design their progeny to be better suited to that environment and they will become a different
00:41:36.900 species within a few hundred years and the key question then is will they still be flesh and blood
00:41:43.220 or could it be that uh the human brain is about the limit of what could be done by flesh and blood and
00:41:49.140 therefore they will become electronic and if they become electronic then of course they won't need
00:41:53.940 an atmosphere they may prefer zero g and they'll be near immortal so then they will go off interstellar
00:42:00.100 space and so the far future would be one in which our descendants our remote descendants mediated by these
00:42:08.740 crazy adventures on mars will start spreading through the milky way and that raises the other question are we
00:42:17.700 the first or are there are there some others and of course this leads to seti and all that and the
00:42:23.620 relevance to seti is that if we ask what will be the evidence for anything intelligent it will be in my
00:42:31.140 opinion far more likely to be some electronic artifacts than a flesh and blood civilization like ours because
00:42:40.100 if you think of the track that our civilization has taken it's lasting a few thousand years at most
00:42:45.620 then these electronic progeny will last for billions of years and so if we had another planet it's unlikely
00:42:52.580 to be synchronized within a few thousand years in its evolution with ours so if it's got a head start
then it'll have gone past the flesh and blood civilization stage and uh will have left electronic progeny so
00:43:06.660 the most likely evidence we would find of intelligence would be electronic entities produced by some
00:43:14.340 civilization which had evolved rather like i think may happen here on our solar system but with a
00:43:20.660 head start that's a long answer to say that that's a future evolution yeah what do you make of the fact
00:43:27.140 and this is your this is the fermi problem question what do you make of the fact that we don't see evidence of
00:43:34.260 any of that technology out there when we look up in all the in all our ways of looking up
00:43:39.300 i'm glad you asked that because i think this also uh eases that problem too because uh darwinian
00:43:45.620 evolution favors intelligence maybe but also aggression but these electronic entities may evolve
00:43:53.620 to greater intelligence deeper and deeper thoughts but there's no reason why they should be aggressive
00:43:58.820 so they could be out there just thinking deep thoughts the idea that they'd all be expansionist
00:44:03.380 and come to eat us as it were doesn't really make sense so i think they could be out there and and
00:44:09.380 not as conspicuous as a flesh and blood civilization but they could still be out there but given the
00:44:16.980 the mismatch in timing of the birth of of intelligence and technology on any planet that you just
00:44:25.940 referenced i mean the fact that you know in our case you know all of the gains we've made that could
00:44:30.580 possibly show up and announce our presence to the rest of the cosmos have been made in a couple of
00:44:37.300 hundred years uh and we're we're now envisioning a situation where if life is common if intelligent life is
00:44:42.900 common in the galaxy uh you know there there are planets that could be you know 20 million years ahead
00:44:49.380 of us or more yeah so if you shift if you acknowledge the the likely shifts in time in that way
00:44:55.540 wouldn't you expect to see and and leaving antagonism aside just the curiosity to explore
00:45:04.740 wouldn't you expect to see the galaxy teeming with some signs of technological life elsewhere if in fact
00:45:12.740 it exists well we don't know what their motives would be and we've no idea what their technology
would be it'd be so different that we might not even recognize it but the point i would make is
00:45:25.860 that um even if life is already common in our galaxy or had origination in many places then in the
drake equation if you'd like to continue listening to this conversation you'll need to subscribe at
samharris.org once you do you'll get access to all full-length episodes of the making sense podcast
00:45:47.620 along with other subscriber only content including bonus episodes and amas and the conversations
00:45:53.540 i've been having on the waking up app the making sense podcast is ad free and relies entirely on
listener support and you can subscribe now at samharris.org