This episode is the result of a series of interviews I conducted with three people who have been on the podcast before. In it, I also talk about the first live event I'm hosting with the Tibetan lama Mingyur Rinpoche, happening at The Wiltern in Los Angeles on July 11th; tickets are selling fast. We'll be discussing his new book, "In Love with the World," which is out now, along with meditation practice, and taking questions from the audience. This podcast doesn't run ads, and it's therefore made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming a subscriber: you'll get access to full-length episodes of the Making Sense podcast, along with courses, books, and meditations on the Waking Up app, including a web-based version of the Making Sense course, which will be launching soon. You'll find more information on all of that at wakingup.org/makingsense. This conversation was occasioned by the work of John Brockman, a good friend of mine, and the excellent book I mention in this episode, "Possible Minds: Twenty-Five Ways of Looking at AI," which collects essays from authors including Daniel Dennett and Steven Pinker. —Sam Harris
00:13:02.660They knew, you know, they both knew Johnny von Neumann quite well because he was sort of in circulation.
00:13:09.880My father had met Norbert Wiener but never worked with him, didn't really know him.
00:13:15.460And neither of them actually met Alan Turing.
00:13:19.020But of course, my father came from Cambridge where Turing had been sort of a fixture.
00:13:23.300What my father said was that, you know, he read Turing's paper when it came out and, like many people, he thought this was interesting logic, but it would have no great effect on the real world.
00:13:38.300I think my mother was probably a little more prescient that, you know, logic really would change the world.
00:13:45.680Von Neumann is perhaps the most colorful character here.
00:13:49.460I mean, there seems to be an absolute convergence of opinion that regardless of the fact that he may not have made the greatest contributions in the history of science,
00:14:04.480he seemed to have just bowled everyone over and given a lasting impression that he was the smartest person they had ever met.
00:14:12.160Does that ring true in the family as well, or have estimations of von Neumann's intelligence been exaggerated?
00:14:20.600No, I don't think that's exaggerated at all.
00:14:22.700I mean, he was impressively sharp and smart, extremely good memory, you know, phenomenal calculation skills, sort of everything.
00:14:31.340Plus he had this, you know, his real genius was not entrepreneurship, but just being able to put everything together.
00:14:40.020His father was an investment banker, so he had no shyness about just asking for money.
00:14:47.020I mean, in some ways almost his most important contribution was that he was the guy who could get the money to do these things that other people simply dreamed of.
00:14:56.340But he got them done, and he hired the right people.
00:14:59.620He's sort of like the orchestra conductor who'd get the best violin players and put them all together.
00:15:05.920Yeah, and these stories are, I think I've referenced them occasionally on the podcast,
00:15:11.960but it's astounding to just read this record, because you have really the greatest physicists and mathematicians of the time,
00:15:23.140all gossiping, essentially, about this one figure who, certainly Edward Teller was of this opinion,
00:15:31.100and I think there's a quote from him somewhere, which says that, you know,
00:15:36.260if we ever evolve into a master race of super intelligent humans, you will recognize that von Neumann was the prefiguring example.
00:15:47.460Like, this is, this is how we will appear when we are fundamentally different from what we are now.
00:15:52.000Yeah, and in other ways it's a great tragedy, because he was doing really good work in,
00:16:22.000you know, pure mathematics and logic and game theory, quantum mechanics, and those kinds of things,
00:16:29.060and then got completely distracted by the weapons and the computers.
00:16:34.060Never, never really got back to any real science, and then died young, like Alan Turing, the very same thing.
00:16:40.160So we sort of lost these two brilliant minds who not only died young, but sort of professionally died very early,
00:16:48.000because they got sucked into the war, never came back.
00:16:50.400Yeah, there was an ethical split there, because Norbert Wiener, who was, again, part of this conversation fairly early,
00:16:59.860I think it was '47, published a piece in The Atlantic, more or less vowing never to let his intellectual property
00:17:08.560have any point of contact with military efforts.
00:17:12.200And so at the time, it was all very fraught, seeing that physics and mathematics was the engine of destruction, however ethically purposed.
00:17:23.220You know, obviously, there's a place to stand where the Manhattan Project looks like a very good thing,
00:17:28.620you know, that we won the race to fission before the Nazis could get there.
00:17:33.560But it's an ethically complicated time, certainly.
00:17:38.480Yes, and that's where, you know, Norbert Wiener worked very intensively and effectively for the military in both World War I and World War II.
00:17:46.420He was at the proving ground in World War I, and in World War II he worked on anti-aircraft defense.
00:17:54.380And what people forget was that it was pretty far along at Los Alamos when we knew, when we learned that the Germans were not actually building nuclear weapons.
00:18:04.920And at that point, people like Norbert Wiener wanted nothing more to do with it.
00:18:09.520And particularly, Norbert Wiener wanted nothing to do with the hydrogen bomb.
00:18:13.020There was no military justification for a hydrogen bomb.
00:18:17.400The only use of those weapons, still today, is, you know, genocide against civilians.
00:24:09.120So there are really two very different kinds of computers.
00:24:14.380There's—it sort of goes, again, back to Turing in sort of a mathematical sense.
00:24:18.620There are continuous functions that vary continuously, which is sort of how we perceive time or the frequency of sound or those sorts of things.
00:24:28.300And then there are discrete functions, the sort of ones and zeros and bits that took over the world.
00:24:34.160And Alan Turing gave this very brilliant proof of what you could do with a purely digital machine.
00:24:40.400But both Alan Turing and von Neumann were almost, you know, sort of at the end of their lives, obsessed with the fact that nature doesn't do this.
00:24:51.180Well, nature does do this in our genetic systems.
00:24:54.280We use digital coding there because digital coding, as Shannon showed us, is so good at error correction.
00:25:01.640But, you know, continuous functions in analog computing are better for control.
00:25:07.700All control systems in nature, all nervous systems, the human brain, the brain of a fruit fly, the brain of a mouse, those are all analog computers, not digital.
00:25:21.120And von Neumann, you know, wrote a whole book about that that people have misunderstood.
00:25:25.060I guess you could say that whether or not a neuron fires is a digital signal, but then the analog component is downstream of that, just the different synaptic weights and receptors.
00:25:50.300You can take apart a brain, you don't find any sort of digital code.
00:25:53.820There's no—I mean, now we're sort of obsessed with this idea of algorithms, which is what Alan Turing gave us.
00:25:59.140But there are no algorithms in a nervous system or a brain.
00:26:04.720That's a much, much, much sort of higher-level function that comes later.
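A minimal sketch may make this concrete. The leaky integrate-and-fire model below is a standard textbook idealization, offered as an illustration rather than anything from the conversation: the membrane voltage is a continuous, analog quantity, the spike is all-or-nothing, and the usable signal ends up in the spike rate, which is continuous again.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: analog membrane dynamics in,
    all-or-nothing spikes out."""
    v = 0.0
    spike_times = []
    for i, current in enumerate(input_current):
        # Analog part: the voltage integrates its input and leaks continuously.
        v += dt * (-v / tau + current)
        # Digital-looking part: an all-or-nothing spike at threshold.
        if v >= v_thresh:
            spike_times.append(i * dt)
            v = v_reset
    return spike_times

# A stronger (analog) input yields a higher spike *rate*:
# the information rides on frequency, not on any bit pattern.
for strength in (60.0, 120.0, 240.0):
    drive = np.full(1000, strength)  # one second of constant input
    rate = len(lif_neuron(drive))    # spikes per second
    print(f"input {strength:>5.0f} -> {rate} spikes/sec")
```

The spike itself looks digital, which is the point of the caveat above: the computation, the integration and the weighting, happens in continuous quantities, and the rate code that comes out is continuous too.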
00:26:10.620Well, so you introduced another personality here and a concept.
00:26:14.360So let's just do a potted bio on Claude Shannon and this notion that digitizing information was somehow of value with respect to error correction.
00:26:27.860Yes, I mean, Claude Shannon's great contribution was sort of modern information theory, and you can make a very good case
00:26:35.640that he actually took those ideas from Norbert Wiener, who was explaining them to him during the war.
00:26:40.600But it was Shannon who published the great manifesto on that, proving that you can sort of communicate with reliable accuracy given any arbitrary amount of noise by using digital coding.
00:26:56.460And none of our computers would work without that; basically your computer is a communication device that has to communicate these hugely complicated states from one fraction of a microsecond to the next, billions of times a second.
00:27:10.980And the fact that we do that perfectly is due to Shannon's theory, his model of how you can do that in an accurate way.
00:27:18.080Is there a way to make that intuitively understandable why that would be so?
00:27:22.720I mean, what I picture is like cogs in a gear where it's like you're either all in one slot or you're all out of it.
00:27:30.820And so any looseness of fit keeps reverting back: you fall back into the well of the gear or you slip out of it.
00:27:39.360Whereas something that's truly continuous, that is to say analog, admits of errors that are undetectable because you're just, you're kind of sliding off a more continuous, smoother surface.
00:27:54.440Yeah, that's a good, that's a very good way to explain it.
00:27:57.060Now it has this fatal flaw, because there's always a price for everything.
00:28:02.220And so you can get this perfect digital accuracy, where you can make sure that every one of billions of bits is in the right place and your software will work.
00:28:15.840But the fatal flaw is that if for some reason a bit isn't in the right place, then the whole machine grinds to a halt.
00:28:22.740Whereas the analog machine will keep going; it's much, much more robust against failure.
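To make the gear-tooth picture concrete, here is a minimal sketch of the simplest error-correcting scheme, a repetition code with majority voting; this is a toy illustration, not anything from Shannon's paper or the conversation. An unprotected bit that flips is simply wrong, while a redundantly coded bit gets snapped back into its slot.

```python
import random

def noisy_channel(bits, flip_prob=0.1):
    """Flip each bit with some probability: Shannon's noise in miniature."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def encode_repetition(bits, n=5):
    """Send each bit n times; redundancy is the price of reliability."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(bits, n=5):
    """Majority vote pulls each bit back into its slot, like the gear tooth."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

random.seed(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]
raw = noisy_channel(message)  # unprotected: every flip is an undetected error
decoded = decode_repetition(noisy_channel(encode_repetition(message)))
print("sent   :", message)
print("raw    :", raw, "| errors:", sum(a != b for a, b in zip(message, raw)))
print("decoded:", decoded, "| errors:", sum(a != b for a, b in zip(message, decoded)))
```

Real systems use far more efficient codes than repetition, but the principle is the same: discreteness plus redundancy lets every bit land back in its slot, at a cost in bandwidth.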
00:28:27.960So are you in touch with people who are pursuing this other line of building intelligent machines now?
00:28:36.420I mean, what does analog computation look like circa 2019?
00:28:41.160Well, it's coming at us in two directions.
00:28:44.840There's bottom up and there's sort of top down.
00:28:47.740And the bottom up is actually extremely interesting.
00:28:50.940And I'm, you know, I'm professionally not a computer scientist.
00:29:08.960And there, this was an entire meeting of people working on building analog chips from the bottom up.
00:29:14.640Using the same technology we use to build digital computers, but to build completely different kinds of chips that actually do analog processing on them.
00:30:01.960And then from the top down is a whole other thing.
00:30:04.960That's the part where I think we're sort of missing something.
00:30:07.480That if you look at the sort of internet as a whole, or the whole computational ecosystem, particularly on the commercial side,
00:30:15.560an enormous amount of the interesting computing we're doing now is back to analog computing,
00:30:19.720where we're computing with continuous functions; it's pulse-frequency coded. Something like, you know, Facebook or YouTube doesn't care about the file that somebody clicks on;
00:30:32.520they don't care what the code is. The meaning is in the frequency that it's connected to,
00:30:37.840very much the same way a brain or a nervous system works.
00:30:40.560So if you look at these large companies, Facebook or Google or something, actually, you know, they're large analog computers.
00:30:48.360The digital is not replaced, but another layer is growing on top of it.
00:30:53.720The same way that after World War II, we had all these analog vacuum tubes and the oddballs like Alan Turing and von Neumann and even Norbert Wiener figured out how to use the analog components to build digital computers.
00:31:08.580But now we're sort of right in the midst of another revolution where we are taking all this digital hardware and using it to build analog systems.
00:31:18.260But somehow people don't want to talk about that; analog is still sort of seen as this archaic thing. I believe differently.
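A toy sketch of what "the meaning is in the frequency" might look like; this is a hypothetical illustration, not how Facebook or YouTube actually implement anything. The platform never decodes the file a user clicks on; it just measures click rates, and the ranking is computed from those continuous frequencies.

```python
from collections import Counter

# Hypothetical click stream; the platform never inspects the files themselves.
click_stream = ["cat_video", "lecture", "cat_video", "news",
                "cat_video", "lecture", "cat_video"]

# Pulse-frequency measurement: the signal is how often a link fires
# per unit time, not any bit pattern inside it.
clicks_per_window = Counter(click_stream)
ranking = clicks_per_window.most_common()
print(ranking)  # [('cat_video', 4), ('lecture', 2), ('news', 1)]
```

In that sense the digital substrate, the files and the packets, is still there, but the computation that matters is running on continuous quantities layered on top of it.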
00:31:26.300In what sense is an analog system supervening on the digital infrastructure?
00:31:33.260Are there other examples that can make it more vivid for people?
00:31:39.460Like, nature uses analog for control systems.
00:31:42.360So you take an example like, you know, an obvious one would be Google Maps with live traffic.
00:31:50.780So you have all these cars driving around and people have their digital cell phone in the car.
00:31:58.280And you sort of have this deal with Google where Google will tell you what the traffic is doing and the optimum path.
00:32:05.980If you tell Google where you are and how fast you're moving.
00:32:11.200And that becomes an analog computer, sort of an analog system where there is no digital model of, you know, all the traffic in San Francisco.
00:32:22.860The actual system is its own model.
00:32:28.580And that's sort of von Neumann's definition of an organism or a complex system: that it constitutes its own simplest behavioral description.
00:32:37.980Trying to formally describe what's going on makes it more complicated, not less.
00:32:43.620There's no way to simplify that whole system except the system itself.
00:32:49.400And, you know, Facebook works very much the same way.
00:34:55.420I mean, obviously the machines are clearly taking over.
00:34:57.780There's no question: if you look at just the span of my life, from when von Neumann built that one computer to where we are now, it's, you know, almost biological growth of this technology.
00:35:10.780So, you know, sort of as a member of living things, it's something to be concerned about.
00:35:16.780Do you know David Krakauer from the Santa Fe Institute?
00:35:21.060Yes, I don't know him well, but, you know, I've met him and talked to him.
00:35:24.000Yeah, because he has a rap on this very point, where he distinguishes between, I think his phrasing is, cognitively competitive and cognitively cooperative technology.
00:35:35.860So there are forms of technology that compete with our intelligence on some level, and insofar as we outsource our cognition to them, we get less and less competent.
00:35:48.300And then there are other forms of technology where we actually become better even in the absence of the technology.
00:35:55.000And, unfortunately, the only example of the latter that I can remember, the one he used on the podcast, was the abacus,
00:36:02.420which apparently, if you learn how to use an abacus well, you internalize it and you can do calculations you couldn't otherwise do in your head, even in the absence of the physical abacus.
00:36:13.880Whereas if you're relying on a pocket calculator or your phone for arithmetic, or you're relying on GPS, you're eroding whatever ability you had in those areas.
00:36:24.880So if we get our act together and all of this begins to move in a better direction or something like an optimal direction, what does that look like to you?
00:36:35.580If I told you 50 years from now we arrived at something just far better than any of us were expecting with respect to this marriage of increasingly powerful technology with some regime that conserves our deepest values, how do you imagine that looking?
00:36:57.200Well, yeah, it's certainly possible, and I guess that's where I would be slightly optimistic, in that my knowledge of human culture goes way back, and we grew up, you know, as a species, and I'm speaking of all humanity.
00:37:14.700Actually, most of our history was spent, you know, among animals who were bigger and more powerful than we were, and things that we completely didn't understand, and we sort of made up our, not religions, but just views of a world in which we couldn't control everything.
00:37:35.180We had to live with it, and I think in a strange way we're kind of returning to that childhood of the species, in that we're building these systems that we no longer have any control over and in fact no longer even have any real understanding of.
00:37:53.820So we're sort of in some ways back to that world that we were, you know, originally quite comfortable with, where we're in the power of things that we don't understand.
00:38:02.920Sort of megafauna, and I think that could be a good thing, it could be a bad thing, I don't know, but it doesn't surprise me.
00:38:12.520And personally, I'm interested, to get back to why we're here, which is John's book: almost everyone in that book is talking about domesticated artificial intelligence.
00:38:26.980I mean, they're talking about sort of commercial systems, products that you can buy, things like that.
00:38:32.500I mean, personally, you know, I'm sort of a naturalist, and I'm interested in wild AI, what evolves completely in the wild, out of human control.
00:38:43.040And that's a very interesting part of the whole sphere that, you know, doesn't get looked at that much.
00:38:49.160The focus now is so much on, you know, marketable captive AI, self-driving cars, things like that, but it's the wild stuff that interests me.
00:39:02.500Like, I'm not, I'm not afraid of bad AI, but I'm afraid, I'm very afraid of good AI, the kind of AI where some ethics board decides what's good and what's bad.
00:39:12.280I don't think that's what's going to be really important.
00:39:14.800But don't you see the possibility that, so what we're talking about here is powerful, increasingly powerful AI, so increasingly competent AI.
00:39:23.380But those of us who are worried about the prospect of building what's now called AGI, artificial general intelligence, that proves bad, are just basing that worry on the assumption that there are many more ways to build AGI that is not ultimately aligned with our interests than there are ways to build it perfectly aligned with our interests.
00:39:47.920Which is to say, we could build the, the megafauna that tramples us perhaps more easily than we could build the megafauna that lives side by side with us in a durably benign way.
00:40:32.180So this view is that, well, the programmers are in control; but if you have non-algorithmic computing, there is no program, and, by definition, you don't control it.
00:40:47.460And to expect control is absolutely foolish.
00:40:50.900But I think it's much better to be realistic and assume that you won't have control.
00:40:55.560Well, so then why isn't your bias here one of the true counsel of fear, which says we shouldn't be building machines more powerful than we are?
00:41:06.360Well, we probably shouldn't, but we are.
00:41:09.940I mean, the reality, the fact is, we've done it.
00:41:12.500I mean, it's not something that we're thinking about.
00:41:14.920It's something we've been doing for, for a long time and it's probably not going to stop.
00:41:19.880And then the point is to be realistic about it, and maybe optimistic that, you know, humans have not been the best at controlling the world.
00:41:28.480And something else could well be better, but this illusion that we are going to program artificial intelligence is, I think, provably wrong.
00:41:38.320I mean, Alan Turing would have proved that wrong.
00:41:40.820You know, that was how he got into the whole thing at the beginning: settling this question called the Entscheidungsproblem, whether there's any systematic way to look at a string of code and predict what it's going to do.
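The result being gestured at here is Turing's 1936 diagonal argument, usually taught today as the halting problem. Here is a sketch in Python-flavored pseudocode; the function halts is hypothetical by construction, and showing that it cannot exist is the whole point.

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing's argument shows no such general procedure can exist."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # this program run on its own source.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    else:
        return        # oracle said "loops forever", so halt immediately

# Ask: does paradox(paradox) halt?
# If halts says yes, paradox loops forever; if it says no, paradox halts.
# Either answer contradicts the oracle, so there is no systematic way to
# look at a string of code and predict what it is going to do.
```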
00:41:54.480And it baffles me that people don't see this; somehow we've been so brainwashed, because the digital revolution was so successful.
00:42:04.660It's amazing how it has sort of clouded everyone's thinking.
00:42:08.980If you talk to biologists, of course, they know that very well.
00:42:14.340I mean, people who actually work with brains of frogs or mice, you know, they know it's not digital.
00:42:19.820Why people think more intelligent things would be digital is just, again, sort of baffling.
00:42:27.740How did that thought sort of take over the world?
00:42:31.120Yeah, so it does seem, though, that if you think the development of truly intelligent machines
00:42:40.740is synonymous with machines that not only can we not control, but we, on some level, can't form a reasonable expectation of what they will be inclined to do.
00:42:53.660There's the assumption that there's some way to launch this process that is either provably benign in advance, or, so I'm looking at the book now,
00:43:07.480and, you know, the person there who I think has thought the most about this is Stuart Russell.
00:43:12.480And, you know, he's, he's just trying to think of a way in which AI can be developed where its master value is to continually understand in a deeper and more accurate way what we want, right?
00:43:30.400So, and what we want can obviously change, and it can change in dialogue with this now super intelligent machine, but its value system is in some way durably anchored to our own,
00:43:42.220because its concern is to get our situation the way we want it.
00:43:47.920Right, but all the most terrible things that have ever happened in the world happened because somebody wanted them.
00:43:51.920I mean, there's no safety in that.
00:43:55.800I admire Stuart Russell, but we disagree on this sort of provably good AI.
00:44:00.580Yeah, but I guess at least what you're doing there is collapsing it down to one fear rather than the other.
00:44:09.740I mean, the fear that provably benign AI or provably obedient AI could be used by bad people toward bad ends,
00:44:18.780that's obviously a fear, but the greater fear that many of us worry about is that developing AGI in the first place can't be provably benign,
00:44:27.560and we will find ourselves in relationship to something far more powerful than ourselves that doesn't really care about our well-being in the end.
00:44:36.620Right, and that's, again, sort of the world we used to live in, and I think we can make ourselves reasonably comfortable there,
00:44:42.300but we're no longer just below the top; you know, the classic religious view was that there are humans, and there's God,
00:44:50.500and nothing but angels in between.
00:44:58.220So, you know, the last thing Norbert Wiener published, well, it was actually published after he died,
00:45:04.800but, I mean, there's a line in there, which I think just gets it right, that the world of the future will be an ever more demanding struggle
00:45:11.980against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
00:45:21.200Those are the two sort of paths, and the hammock is the one so many people want.
00:45:26.500So, oh, the cars are going to drive us around and be our slaves.
00:45:29.960It's probably not going to happen that way.
00:45:34.900I mean, it could be, it could be a good thing.
00:45:36.500We've been the sort of chief species for a long time, and it could be time for something else,
00:45:43.100but at least be realistic about it.
00:45:45.720Don't have this sort of childish view that everything's going to be obedient to us.
00:45:51.560That hasn't worked, and I think, you know, it did a lot of harm to the world that we had that view.
00:45:58.300And, again, one of the signs of any real artificial intelligence is that it would immediately be intelligent enough not to reveal its existence to us.
00:46:07.160I mean, that would be the first smart thing it would do would be not, not reveal itself.
00:46:11.820So the fact that AI has not revealed itself is, to me, zero evidence that it doesn't exist.
00:46:20.660I mean, I would take it the other way: if it existed, I would expect it not to reveal itself.
00:46:27.860Unless it's so much more powerful than we are that it perceives no cost, and it reveals itself by merely steamrolling over us.
00:46:56.460If you had one piece of advice for someone who wants to succeed in your field, and you can describe that field however you like, what would it be?
00:47:04.500Okay, well, a historian is what I became, and a boat builder, and so the advice in all those fields is just: specialize.
00:47:14.760I mean, find something and become obsessed with it.
00:47:17.420I became obsessed with the kayaks that the Russians adopted when they came to Alaska, and then I became obsessed with how computing really happened.
00:47:25.880And if you are obsessed with one little thing like that, you can very quickly know more than anybody else.
00:47:32.400And that helps to be successful.
00:47:36.560What, if anything, do you wish you'd done differently in your 20s, 30s, or 40s?
00:47:42.080Oh, I mean, you can't replay that tape.
00:47:46.160I wish, well, I can be very clear about that.
00:47:47.680I wish in my 20s I had gone to the Aleutian Islands earlier, while more of the old-time kayak builders were still alive, and kind of interviewed and learned from them.
00:48:01.640And then very much the same in my 30s with all these projects; I mean, I did go find the surviving Project Orion people, the technicians and physicists, and interviewed them.
00:48:29.28010 years from now, what do you think you'll regret doing too much of or too little of at this point in your life?
00:48:36.220Probably regret, you know, not getting out more up the coast again, which is what I'm trying to do.
00:48:42.420That's what I'm working very diligently at, but, but I keep getting distracted.
00:48:46.600Yeah, you got to get off the podcast and get into the kayak.
00:48:51.060Yeah, well, podcasts, you know, we could be doing this from Orca Lab; they have a good internet connection.
00:48:55.720I mean, that's the beautiful thing is that you can do this.
00:48:58.000And the other thing I would say, this is an aside, but I grew up, from my early teens, in Canada, where the country was united by radio.
00:49:08.400I mean, in Canada, people didn't get newspapers, but everybody listened to one radio channel.
00:49:12.520And so in a way, podcasts are, again, back to that past where we're all listening to the radio again.
00:49:31.120I mean, literally, it's the only time I've had a true near-death experience seeing the tunnel of light and reliving my whole life and not only thinking about my daughter and other profound things, but thinking how stupid this was.
00:49:44.260You know, this guy who'd, like, kayaked to Alaska six times with no life jacket dies in a restaurant on Columbus Avenue in New York.
00:50:12.800We may have touched on this in a way, but maybe there's another side to this.
00:50:18.540What most worries you about our collective future?
00:50:20.900Yeah, kind of what I said: that we lose all these skills and intelligences that we've built up over such a long period of time.
00:50:31.480The ability to, you know, survive in the wilderness and understand animals and respect them.
00:50:39.320I think it's a very sad thing that we're losing that, and, of course, losing the wildlife itself.
00:50:45.700If you could solve just one mystery, as a scientist or historian or journalist, however you want to come at it, what would it be?
00:50:57.040Well, one of them would be the one we just talked about.
00:50:59.880You know, cetacean communication, what's really going on with these whales communicating in the ocean.
00:51:05.640That's something I think we could solve, but we're not looking at it in the right way.
00:51:10.020If you could resurrect just one person from history and put them in our world today and give them the benefit of a modern education, who would you bring back?
00:51:19.680I guess the problem is that most people I'm interested in from history sort of had extremely good educations.
00:51:25.240You're talking about John von Neumann and Alan Turing, yeah, you're right.
00:51:28.760Yeah, and Leibniz, I mean, he was very well educated, yeah.
00:51:31.020But the character in the project I've been working on lately was kind of awful, but fascinating.
00:51:39.660He was so obsessed with science and things like that.
00:51:43.720So I think, you know, if he could come back, it might be a very dangerous thing.
00:51:48.220But he sort of wanted to learn so much and was, again, preoccupied by all these terrible things and disasters that were going on at the time.
00:51:57.600What are you doing on Peter the Great?
00:51:59.240I've been writing this very strange book where it kind of starts with him and Leibniz.
00:52:05.760They go to the hot springs together and they basically stop drinking alcohol for a week.
00:52:11.400And Leibniz convinces him, he wants him to support building digital computers, but he's not interested.
00:52:19.380So the computer thing failed, but what Leibniz did convince him was to launch a voyage to America.
00:52:25.640So that's how the Russians came to Alaska.
00:52:44.680So I wouldn't know which one to recommend, but, you know, again, that's why he's Peter the Great; he's been well-studied.
00:52:52.580His relationship with Leibniz fascinates me in that, you know, there's just a lot there we don't know.
00:52:59.100But it's kind of amazing how this sort of obscure mathematician becomes very close to this great, you know, leader of a huge part of the world.
00:53:10.660Okay, last question, the Jurassic Park question.
00:53:14.400If we are ever in a position to recreate the T-Rex, should we do it?
00:53:18.640I would say yes, but this, you know, this comes up as a much more real question with the woolly mammoth and these other animals.
00:53:26.760The Steller's sea cow, there's another one we could maybe resurrect.
00:53:30.060So, yeah, I've had these arguments with, you know, Stewart Brand and George Church, who are realistic about whether we could do it.
00:53:37.380So I would say yes, don't expect it to work, but certainly worth trying.
00:53:44.120What are their biases? Do Stewart and George say we should or shouldn't do this?
00:53:51.100Well, yeah, if you haven't talked to them, you definitely should; it would be a great program to go to that debate.
00:53:56.800I mean, the question more is, if you can recreate the animal, does that recreate the species?
00:54:03.740One of the things they're working on, I think, is trying to build a park in Kamchatka or somewhere over there in Siberia,
00:54:10.240so that if you did recreate the woolly mammoth, they would have an environment to go live in.