The Joe Rogan Experience


Joe Rogan Experience #1211 - Dr. Ben Goertzel


Summary

In this episode of the podcast, Joe Rogan sits down with Dr. Ben Goertzel, an AI researcher, author, and speaker, founder of the decentralized AI project SingularityNet, and a longtime advocate of a complex-systems view of intelligence. They talk about artificial intelligence, artificial general intelligence, and crypto: whether AI is best understood as a new life form, the self-organizing intelligence of ant colonies, Stanislaw Lem's Solaris and the many possible kinds of minds, how close we may be to superhuman AGI, whether machine minds can inherit and evolve human values the way children do, the military and advertising money shaping early AI, Goertzel's work with Hanson Robotics on the Sophia robot in Hong Kong, and how open source, blockchain, and decentralized projects like SingularityNet might steer the technology in a more participatory, beneficial direction.


Transcript

00:00:06.000 Boom.
00:00:07.000 Hello, Ben.
00:00:08.000 Hey there.
00:00:09.000 Good to see you, man.
00:00:10.000 Yeah, it's a pleasure to be here.
00:00:11.000 Thanks for doing this.
00:00:13.000 Yeah, yeah.
00:00:13.000 Thanks for having me.
00:00:15.000 I've been looking at some of your shows in the last few days just to get a sense of how you're thinking about...
00:00:23.000 AI and crypto and the various other things I'm involved in.
00:00:27.000 It's been interesting.
00:00:28.000 Well, I've been following you as well.
00:00:30.000 I've been paying attention to a lot of your lectures and talks and different things you've done over the last couple days as well, getting ready for this.
00:00:38.000 AI is either people are really excited about it or they're really terrified.
00:00:42.000 It seems to be the two responses.
00:00:45.000 Either people have this dismal view of these robots taking over the world, or they think it's going to be some amazing sort of symbiotic relationship that we have with these things that's going to evolve human beings past the monkey stage that we're at right now.
00:01:00.000 I tend to be on the latter more. Good... [inaudible]
00:01:21.000 For three decades, and I first started thinking about AI when I was a little kid in the late 60s and early 70s when I saw AIs and robots on the original Star Trek.
00:01:33.000 So I guess I've had a lot of cycles to process the positives and negatives of it, whereas now suddenly most of the world is thinking through all this for the first time.
00:01:46.000 And when you first wrap your brain around the idea that there may be creatures 10,000 or a million times smarter than human beings.
00:01:55.000 At first, this is a bit of a shocker, right?
00:01:58.000 And then, I mean, it takes a while to internalize this into your worldview.
00:02:02.000 Well, there's also, I think, there's a problem with the term artificial intelligence, because it's intelligent.
00:02:11.000 It's there.
00:02:12.000 It's a real thing.
00:02:13.000 It's not artificial.
00:02:14.000 It's not like a fake diamond or a fake Ferrari.
00:02:16.000 It's a real thing.
00:02:18.000 And it's not a great term, and there's been many attempts to replace it with synthetic intelligence, for example.
00:02:28.000 But for better or worse, AI is there.
00:02:32.000 It's part of the popular imagination.
00:02:33.000 It's an imperfect word, but it's not going away.
00:02:37.000 Well, my question is, like, are we married to this idea of intelligence and of life being biological, being carbon-based tissue and cells and blood or insects or mammals or fish?
00:02:52.000 Are we married to that too much?
00:02:53.000 Do you think that it's entirely possible that what human beings are doing and what people that are at the tip of AI right now that are really pushing the technology, what they're doing is really creating a new life form.
00:03:08.000 That it's going to be a new thing.
00:03:10.000 That just the same way we recognize wasps and buffaloes and artificial intelligence is just going to be a life form that emerges from the creativity and ingenuity of human beings.
00:03:21.000 Indeed.
00:03:22.000 I've long been an advocate of a philosophy I think of as patternism.
00:03:28.000 It's the pattern of organization that appears to be the critical thing and the individual...
00:03:39.000 Yeah.
00:03:55.000 I mean, if we can create digital systems or quantum computers or femto computers or whatever it is manifesting the patterns of organization that constitute intelligence, I mean, then there you are.
00:04:10.000 There's intelligence, right?
00:04:12.000 So that's not to say that, you know, consciousness and experience is just about intelligence.
00:04:17.000 Patterns of organization.
00:04:18.000 There may be more dimensions to it, but when you look at what constitutes intelligence, thinking, cognition, problem solving, you know, it's the pattern of organization, not the specific material as far as we can tell.
00:04:32.000 So we can see no reason, based on all the science that we know so far, that you couldn't make an intelligent system out of some other form of matter rather than the specific types of atoms and molecules that make up human beings.
00:04:47.000 And it seems that we're well on the way to being able to do so.
00:04:52.000 When you're studying intelligence, you're studying artificial intelligence, did you spend any time...
00:04:59.000 Studying the patterns that insects seem to cooperatively behave with, like how leafcutter ants build these elaborate structures underground, and wasps build these giant colonies.
00:05:12.000 Did you study how...
00:05:14.000 I did, actually, yes.
00:05:15.000 So I sort of...
00:05:18.000 I grew up with the philosophy of complex systems, which was championed by the Santa Fe Institute in the 1980s.
00:05:27.000 And the whole concept that there's an interdisciplinary complex system science, which includes biology, cosmology, psychology, sociology, the sort of universal patterns of...
00:05:40.000 of self-organization.
00:05:42.000 And, you know, ants and ant colonies have long been a paradigm case for that.
00:05:48.000 I used to play with the ant colonies in my backyard when I was a kid.
00:05:53.000 And you'd lay down food in certain patterns.
00:05:55.000 You'd see how the ants are laying down pheromones and the colonies are organizing it in a certain way.
00:06:01.000 And that's an interesting self-organizing, complex system.
00:06:06.000 On its own, it's lacking some types of adaptive intelligence that human minds and human societies have, but it has also interesting self-organizing patterns.
00:06:17.000 This reminds me of the novel Solaris by Stanislaw Lem, which was published in the 60s, which was really quite a deep novel, much deeper than the movie that was made of it.
00:06:31.000 Did you ever read that book, Solaris?
00:06:33.000 I'm not familiar with the movie either.
00:06:35.000 Who's in the movie?
00:06:36.000 So there was an amazing, brilliant movie by Tarkovsky, the Russian director, from the late 60s.
00:06:41.000 Then there was a movie by Steven Soderbergh, which was sort of glammed up and Americanized.
00:06:47.000 Oh, that was fairly recent, right?
00:06:48.000 Yeah, 10 years ago.
00:06:49.000 But that didn't get all the deep points of the novel.
00:06:53.000 The original novel, in essence, there's this...
00:06:56.000 There's this ocean coating the surface of some alien planet, which has amazingly complex fractal patterns of organization, and it's also interactive, like the patterns of organization on the ocean respond based on what you do,
00:07:12.000 and when people get near the ocean, it causes them to hallucinate things, and even causes them to see...
00:07:20.000 [inaudible]
00:07:40.000 Understand how the ocean is thinking.
00:07:42.000 They send a scientific expedition there to interact with that ocean.
00:07:48.000 But it's just so alien.
00:07:49.000 Even though it monkeys with people's minds and clearly is doing complex things, no two-way communication is ever established.
00:07:58.000 And eventually, the human expedition gives up and goes home.
00:08:03.000 So it's a very Russian ending to the novel, I guess.
00:08:07.000 It's not...
00:08:08.000 I think I saw that.
00:08:09.000 But the interesting message there is, I mean, there can be many, many kinds of intelligence, right?
00:08:18.000 I mean, human intelligence is one thing.
00:08:22.000 The intelligence of an ant colony is a different thing.
00:08:25.000 The intelligence of human society is a different thing.
00:08:29.000 Ecosystem is a different thing.
00:08:30.000 And there could be many, many types of AIs...
00:08:34.000 That we could build with many, many different properties.
00:08:37.000 Some could be wonderful to human beings, some could be horrible to human beings, some could just be alien minds that we can't even relate to very well.
00:08:51.000 So we have a very limited conception of what an intelligence is.
00:08:55.000 If we just think by close analogy to human minds, and this is important if you're thinking about engineering or growing artificial life forms or artificial minds, because it's not just can we do this, it's what kind of mind are we going to engineer or evolve,
00:09:14.000 and there's a huge spectrum of possibilities.
00:09:17.000 Yeah, that's one of the reasons why I asked you that.
00:09:20.000 If we had created, if human beings had created some sort of an insect, and this insect started organizing and developing these complex colonies like a leafcutter ant and building these structures underground, people would go crazy.
00:09:32.000 They would panic.
00:09:33.000 They would think these things are organizing.
00:09:35.000 They're going to build up their resources and attack us.
00:09:37.000 They're going to try to take over humanity.
00:09:39.000 I mean, what people are worried about more than anything when it comes to technology, I think, is the idea that we're going to be...
00:09:48.000 irrelevant.
00:09:49.000 That we're going to be antiques and that something new and better is going to take our place.
00:09:55.000 Which is almost abnormal.
00:09:58.000 Yeah, it's a weird thing to worry about because it's sort of the history of biological life on Earth.
00:10:02.000 I mean, what we know is there's complex things.
00:10:05.000 They become more complex.
00:10:06.000 It goes single-celled organisms to multi-celled organisms.
00:10:08.000 There seems to be a pattern leading up to us.
00:10:10.000 And us with this...
00:10:12.000 unprecedented ability to change our environment.
00:10:15.000 That's what we can do, right?
00:10:16.000 We can manipulate things, poison the environment, we can blow up entire countries with bombs if we'd like to, and we can also do wild creative things like send signals through space and land on someone else's phone on the other side of the world almost instantaneously.
00:10:29.000 We have incredible power, but we're also so limited by our biology.
00:10:36.000 The thing I think people are afraid of, and I'm afraid of, but I don't know if it makes any sense, is that the next... [inaudible]
00:11:08.000 In order to advance our species that we're so connected to these things, but they're so...
00:11:15.000 They're the reason for war.
00:11:16.000 They're the reason for lies, deception, thievery.
00:11:21.000 There's so many things that are built into being a person that are responsible for all the woes of humanity.
00:11:27.000 But we're afraid to lose those things.
00:11:29.000 Yeah, I think it's...
00:11:31.000 It's almost inevitable by this point that humanity is going to create synthetic intelligences with tremendously greater general intelligence and practical capability than human beings have.
00:11:48.000 I mean, I think I know how to do that with the software I'm working on with my own team.
00:11:53.000 If we fail, there's a load of other teams who I think are a bit behind us, but are going in the same direction now, right?
00:12:00.000 So you guys feel like you're at the tip of the spear with this stuff?
00:12:02.000 I do, but I also think that's not the most important thing from a human perspective.
00:12:08.000 The most important thing is that humanity as a whole is quite close to this threshold event, right?
00:12:15.000 How far do you think it's quite close?
00:12:17.000 By my own gut feeling, 5 to 30 years, let's say.
00:12:21.000 That's pretty close.
00:12:22.000 But if I'm wrong and it's a hundred years, like in the historical timescale, that sort of doesn't matter.
00:12:27.000 It's like, did the Sumerians create civilization 10,000 or 10,050 years ago?
00:12:32.000 Like, what difference does it make, right?
00:12:34.000 So, I think we're quite close to creating superhuman, artificial, general intelligence.
00:12:43.000 That's, in a way, almost inevitable, given where we are now.
00:12:48.000 On the other hand, I think we still have some agency regarding whether this comes out in a way that respects human values and culture, which are important to us now, given who and what we are,
00:13:04.000 or in a way that is essentially indifferent to human values and culture, in the same way that we're mostly indifferent to chimpanzee values and culture at this point.
00:13:15.000 And completely indifferent to insect values and culture.
00:13:18.000 Not completely, if you think about it.
00:13:20.000 I mean, if I'm building a new house...
00:13:23.000 I will bulldoze a bunch of ants, but yet we get upset if we extinct an insect species, right?
00:13:28.000 So we care to some level, but we would like the super AIs to care about us more than we care about insects or great apes.
00:13:39.000 Absolutely, right?
00:13:40.000 And I think this is something we can impact right now.
00:13:45.000 And to be honest, I mean, in a certain part of my mind, I can think, well, like, in the end, I don't matter that much.
00:13:55.000 My four kids don't matter that much.
00:13:57.000 My granddaughter doesn't matter that much.
00:13:59.000 Like, we are patterns of organization in a very long lineage of patterns of organization.
00:14:05.000 But they matter very much to you.
00:14:07.000 Yeah, and other, you know, dinosaurs came and went and Neanderthals came and went.
00:14:12.000 Humans may come and go.
00:14:14.000 The AIs that we create may come and go, and that's the nature of the universe.
00:14:19.000 But on the other hand, of course, in my heart, from my situated perspective as an individual human, like if some AI tried to annihilate my 10-month-old son, I would try to kill that AI, right?
00:14:36.000 Situated in this specific species, place, and time, I care a lot about the condition of all of us humans, and so I would like to not only create a powerful general intelligence, but create one which is...
00:14:54.000 is going to be beneficial to humans and other life forms on the planet, even while in some ways going beyond everything that we are.
00:15:06.000 And there can't be any guarantees about something like this.
00:15:10.000 On the other hand, humanity has really never had any guarantees about anything anyway.
00:15:17.000 Since we created civilization, we've been leaping into the unknown.
00:15:23.000 One time after the other in a somewhat conscious and self-aware way about it from, you know, agriculture to language to math to the industrial revolution.
00:15:33.000 We're leaping into the unknown all the time, which is part of why...
00:15:39.000 We're where we are today instead of just another animal species, right?
00:15:44.000 So we can't have a guarantee that AGI, artificial general intelligences we create, are going to do what we consider the right thing, given our current value systems.
00:15:56.000 On the other hand, I suspect we can bias the odds in favor of human values and culture, and that's something I've put a lot of thought and work into alongside the basic algorithms of artificial cognition.
00:16:16.000 Is the issue that the initial creation would be subject to our programming, but that it could perhaps program something more efficient and design something?
00:16:26.000 Like, if you build creativity into artificial general intelligence… I mean, you have to.
00:16:30.000 I mean, generalization is about creativity, right?
00:16:35.000 Yeah, but is the issue that it would choose to not accept our values, which it might find... [inaudible] ...occurs with some continuity and respect for the previous one.
00:16:56.000 So, I mean, I have four human kids now.
00:16:59.000 One is a baby, but the other three are adults, right?
00:17:01.000 And with each of them, I took the approach of trying to teach the kids what my values were, not just by preaching at them, but by entering with them into shared situations.
00:17:13.000 But then, you know, when your kids grow up, They're going to go in their own different directions, right?
00:17:19.000 Right, but these are humans.
00:17:21.000 They all have the same sort of biological needs, which is one of the reasons why we have these desires in the first place.
00:17:26.000 Mostly, right, but yet there still is an analogy.
00:17:28.000 I think the AIs that we create, you can think of as our mind children, and we're starting them off with our culture and values, if we do it properly, or at least with a certain subset of the whole diverse, self-contradictory mess of human culture and values.
00:17:46.000 But you know they're going to evolve in a different direction, but you want that evolution to take place in a reflective and caring way, rather than a heedless way.
00:17:58.000 Because if you think about it, the average human a thousand years ago, or even 50 years ago, would have thought you and me were like hopelessly immoral miscreants who would abandon all the valuable things in life, right?
00:18:12.000 Just because of your hat?
00:18:13.000 My hat?
00:18:15.000 I mean, I'm an infidel, right?
00:18:18.000 I haven't gone to church ever, I guess.
00:18:22.000 I mean, my mother's lesbian, right?
00:18:25.000 I mean, there's all these things that we take for granted now that, not that long ago, were completely against what most humans considered maybe the most important values of life.
00:18:38.000 So, I mean, human values itself is completely a moving target.
00:18:43.000 Right.
00:18:43.000 And moving in our generation.
00:18:45.000 Yeah, yeah, yeah.
00:18:45.000 Moving in our generation.
00:18:47.000 Pretty radically.
00:18:48.000 Very radically.
00:18:49.000 When I think back, like, to my childhood, I lived in New Jersey for nine years of my childhood, and just...
00:18:59.000 The level of racism and anti-Semitism and sexism that were just ambient and taken for granted then.
00:19:07.000 What year was this?
00:19:09.000 Between...
00:19:09.000 I think we're the same age.
00:19:11.000 We're both 51?
00:19:12.000 Yeah, yeah, yeah.
00:19:13.000 Born in 66. I lived in Jersey from 73 to 82. Okay, so I was there from 67 to 73?
00:19:22.000 Oh, yeah, yeah.
00:19:23.000 So, yeah, I mean...
00:19:27.000 My sister went to the high school prom with a black guy, and so we got our car turned upside down, the windows of our house smashed, and it was like a humongous thing, and it's almost unbelievable now, right?
00:19:39.000 Because now, no one would care whatsoever.
00:19:44.000 It's just life, right?
00:19:46.000 Well, certainly there's some fringe parts of this culture.
00:19:48.000 Yeah, yeah, but still, the point is, there is no fixed...
00:19:55.000 It's an ongoing, evolving process.
00:19:59.000 And what you want is for the evolution of the AI's values to be coupled closely with the evolution of human values, rather than going off in some utterly different direction that we can't even understand.
00:20:14.000 But this is literally playing God, right?
00:20:16.000 I mean, if you're talking about, like, trying to program in values… I don't think you can program in values that fully.
00:20:24.000 You can program in a system for learning and growing values.
00:20:29.000 And here, again, the analogy with human kids is not hopeless.
00:20:34.000 Like, telling your kids, these are the 10 things that are important, doesn't work that well, right?
00:20:41.000 What works better is you enter into shared situations with them, they see how you deal with the situations, you guide them in dealing with real situations, and that forms their system of values.
00:20:53.000 And this is what needs to happen with AIs.
00:20:56.000 They need to grow up entering into real-life situations.
00:21:00.000 With human beings, so that the real-life patterns of human values, which are worth a lot more than the homilies that we enunciate formally, right?
00:21:10.000 The real-life pattern of human values gets inculcated into the intellectual DNA of the AI systems.
00:21:17.000 And this is part of what worries me about the way the AI field is going at this moment, because, I mean, most of the really powerful... [inaudible]
00:21:51.000 Right, if they don't have any problem morally and ethically with manipulating us, which we're very malleable, right?
00:22:02.000 We're so easy to manipulate.
00:22:03.000 We're teaching them to manipulate people and we're rewarding them for doing it successfully, right?
00:22:10.000 So this is one of these things that from the outside point of view...
00:22:15.000 Might not seem to be all that intelligent.
00:22:18.000 It's sort of like gun laws in the US. Living in Hong Kong, I mean, most people don't have a bunch of guns sitting around their house.
00:22:27.000 And coincidentally, there are not that many random shootings happening in Hong Kong, right?
00:22:33.000 That's crazy.
00:22:33.000 What a weird coincidence.
00:22:35.000 Yeah, you look in the US, it's like, somehow, you have laws that allow random lunatics to buy all the guns they want, and you have all these people getting shot.
00:22:46.000 So, similarly, from the outside, you could look at it like, this species is creating the successor intelligence, and almost all the resources going into creating their successor intelligence are going into making AIs to do...
00:23:05.000 surveillance, like military drones, and advertising agents to brainwash people into buying crap they don't need.
00:23:13.000 What's wrong with this picture?
00:23:14.000 Isn't that just because that's where the money is?
00:23:16.000 Like this is the introduction to it?
00:23:19.000 And then from then we'll find other uses and applications for it?
00:23:22.000 But like right now that's where...
00:23:24.000 The thing is there's a lot of other applications.
00:23:29.000 Financially viable applications?
00:23:31.000 Well, yeah, the applications that are getting the most attention are the financial lowest hanging fruit, right?
00:23:37.000 So, for example, among many projects I'm doing with my SingularityNet team, We're looking at applying AI to diagnose agricultural disease.
00:23:48.000 So you can look at images of plant leaves, you can look at data from the soil and atmosphere, and you can project whether disease in a plant is likely to progress badly or not, which tells you, do you need medicine for the plant?
00:24:00.000 Do you need pesticides?
00:24:03.000 This is an interesting area of application.
00:24:06.000 It's probably quite financially lucrative in a way, but it's a more complex industry than selling stuff online.
00:24:25.000 Yeah, yeah, but there's a lot of specific aspects, right?
00:24:29.000 So, I mean, AI for medicine, again, there's been papers on machine learning applied to medicine since the 80s and 90s.
00:24:37.000 But the amount of effort going into that compared to advertising or surveillance is very small.
00:24:44.000 Now this has to do with the structure of the pharmaceutical business as compared to the structure of the tech business.
00:24:50.000 So when you look into it, there's good reasons for everything, right?
00:24:56.000 But nevertheless, the way things are coming down right now is certain...
00:25:04.000 biases to the development of early-stage AIs are very marked, and you could see them.
00:25:11.000 And I mean, I'm trying to do something about that together with my colleagues in SingularityNet, but of course, it's sort of a David versus Goliath thing.
00:25:22.000 Well, of course you're trying to do something different, and I think it's awesome what you guys are doing.
00:25:27.000 But it just makes sense to me that the first applications are going to be the ones that are more financially viable.
00:25:34.000 Well, the first applications were military, right?
00:25:37.000 I mean, until about 10 years ago, 85% of all funding into AI was from US plus Western Europe militaries.
00:25:45.000 Well, what I'm getting at is that it seems that...
00:25:48.000 Money and commerce are inexorably linked to innovation and technology because there's this sort of thing that we do as a culture where we're constantly trying to buy and purchase bigger and better things.
00:26:02.000 We always want the newest iPhone, the greatest laptop, we don't want the coolest electric cars, whatever it is.
00:26:09.000 And this fuels innovation.
00:26:11.000 This desire for new, greater things.
00:26:15.000 Materialism, in a lot of ways, fuels innovation because this is... It does, but I think there's an argument that as we approach a technological singularity, we need new systems.
00:26:27.000 Because if you look at how things have happened during the last century, what's happened is that governments have funded most of the core innovation.
00:26:37.000 I mean, this is well known that most of the technology inside a smartphone was funded by...
00:26:42.000 U.S. government, a little about European government, GPS and the batteries and everything.
00:26:47.000 And then companies scaled it up.
00:26:50.000 They made it user-friendly.
00:26:52.000 They decreased cost of manufacturing.
00:26:54.000 And this process occurs with a certain time cycle to it, where government spends decades funding core innovation and universities, and then industry spends decades figuring out how to scale it up and make it palatable to users.
00:27:32.000 So the genie is out of the bottle, essentially.
00:27:34.000 Yeah, but we still need a lot of new, amazing, creative innovation to happen.
00:27:39.000 But somehow or other, new structures are going to have to evolve to make it happen.
00:27:45.000 And you can see everyone's struggling to figure out what these are.
00:27:48.000 So this is why you have big companies embracing open source.
00:27:52.000 Google releases TensorFlow, and there's a lot of...
00:27:55.000 A lot of other different things.
00:27:56.000 And I think some projects in the cryptocurrency world have been looking at that too.
00:28:01.000 Like how do we use tokens to incentivize independent scientists and inventors to do new stuff without them having to be in a government research lab or in a big company.
00:28:13.000 So I think we're going to need the evolution of...
00:28:17.000 New systems of innovation and of technology transfer as things are developing faster and faster and faster.
00:28:26.000 And this is another thing that's sort of gotten me interested in the whole decentralized world and the blockchain world is the promise of new modes of economic and social organization that can bring more of the world into the research process and accelerate the technology transfer process.
00:28:45.000 I definitely want to talk about that.
00:28:47.000 One of the things that I wanted to ask you is when you're discussing this...
00:28:51.000 I think what you're saying is one very important point that we need to move past the military gatekeepers of technology, right?
00:28:58.000 It's not just military now, though.
00:29:00.000 It's big tech, which are advertising agencies in essence.
00:29:06.000 Facebook, social media, things that are constantly predicting your next purchase, right?
00:29:10.000 Yeah, because if you think about it, and I'm in...
00:29:15.000 Even in a semi-democracy like we have in the US, I mean, those who control the brainwashing of the public, in essence, control who votes for what, and who controls the brainwashing of the public is advertising agencies,
00:29:31.000 and who increasingly are the biggest advertising agencies are the big tech companies who are accumulating everybody's data and using it to program their minds to buy things.
00:29:42.000 So this is what's programming the global brain of the human race.
00:29:46.000 And of course, there are close links between big tech and the military.
00:29:51.000 Look, Amazon has, what, 25,000 person headquarters in Crystal City, Virginia, right next to the Pentagon.
00:29:57.000 Exactly.
00:29:57.000 I mean, China, it's even more direct and unapologetic, right?
00:30:01.000 So it's a new, like, military-industrial advertising complex, which is guiding the evolution of the global brain on the planet.
00:30:13.000 We found that with this past election, right?
00:30:15.000 With all the intrusion by foreign entities trying to influence the election, that these giant houses set up to write bad stories about whoever they don't want to be in office?
00:30:29.000 Yeah, in a way, that's almost a red herring, but it, I mean, the Russian stuff is almost a red herring, but it revealed what the processes are, which are used to program people's minds.
00:30:40.000 How is it almost a red herring?
00:30:41.000 Oh, because I think whatever programming of Americans' minds is done by the Russians is minuscule compared to the programming of Americans' minds by the American...
00:30:54.000 American corporate and government elite, right?
00:30:56.000 But it's fascinating that anybody's even jumping in as well as the American elite.
00:31:00.000 Sure.
00:31:01.000 It's interesting.
00:31:04.000 If you look at what's happening in China, that's like, yeah, yeah, yeah.
00:31:08.000 They're way better at it than we are.
00:31:11.000 Well, it's much more horrific, right?
00:31:13.000 Well, it's more professional, it's more polished, it's more centralized.
00:31:20.000 On the other hand, for almost everyone in China, China is a very good place to live.
00:31:28.000 Level of improvement in that country in the last 30 years has just been astounding, right?
00:31:33.000 I mean, you can't argue with how much better it's gotten there since Deng Xiaoping took over.
00:31:39.000 It's tremendous.
00:31:40.000 Because they're not – they embraced capitalism to a certain extent.
00:31:44.000 They've created their own unique system.
00:31:46.000 What labels you give it is almost arbitrary.
00:31:50.000 They've created their own unique system. As a crazy, hippie, libertarian, anarcho-socialist, freedom-loving maniac...
00:32:00.000 That system rubs against my grain in many ways.
00:32:04.000 On the other hand, empirically, if you look at it, it's improved the well-being of a tremendous number of people.
00:32:11.000 So hopefully it evolves and it's one step better than it used to be.
00:32:14.000 Well, but the way it's evolving now is not in a more freedom-loving and anarchic direction, one would say.
00:32:23.000 It's positive in some ways and negative in others, like most complex things.
00:32:28.000 Why are you in Hong Kong?
00:32:29.000 Why do you live there?
00:32:30.000 I fell in love with a Chinese woman.
00:32:33.000 Oh, there you go.
00:32:34.000 Good enough reason.
00:32:35.000 Yeah, it was a great reason.
00:32:36.000 We had a baby recently.
00:32:38.000 She's not from Hong Kong.
00:32:39.000 She's from mainland China.
00:32:40.000 I met her when she was doing her PhD in computational linguistics in Xiamen.
00:32:46.000 But that was what sort of...
00:32:49.000 first got me to spend a lot of time in China, but then I was doing some research at Hong Kong Polytechnic University, and then my good friend David Hanson was visiting me in Hong Kong.
00:33:01.000 I introduced him to some investors there, which ended up with him bringing his company Hanson Robotics to Hong Kong.
00:33:08.000 So now, after I moved there because of falling in love with Ruiting, then I brought my friend David there, then Hanson Robotics grew up there, and there's actually a good reason for Hanson Robotics to be there, because the best place in the world to manufacture complex electronics is in Shenzhen,
00:33:26.000 right across the border from Hong Kong.
00:33:28.000 So now I've been working there with Hanson Robotics on the Sophia robots and other robots for a while, and I've accumulated a whole AI team there around Hanson Robotics and SingularityNet.
00:33:40.000 So I mean, by now I'm there because my whole AI and robotics teams are there.
00:33:45.000 Right, makes sense.
00:33:46.000 Do you follow the State Department's recommendations to not use Huawei devices?
00:33:53.000 Well, no.
00:33:55.000 Have you heard that?
00:33:56.000 Have you paid attention to that?
00:33:57.000 Do you think that the Chinese are spying on us?
00:33:59.000 You know, I'm sure.
00:34:00.000 I lived in Washington, D.C. for nine years.
00:34:05.000 I did a bunch of consulting for various government agencies there, and my wife is a Communist Party member, actually.
00:34:13.000 Just because she joined in high school when it was sort of suggested for her to join.
00:34:18.000 So, I'm sure I'm being watched by multiple governments.
00:34:22.000 I don't have any secrets.
00:34:25.000 It doesn't really matter.
00:34:26.000 I'm not in the business of trying to overthrow any government.
00:34:30.000 I'm in the business of trying to...
00:34:42.000 I doubt it's unusual at all.
00:34:51.000 I mean, without...
00:34:53.000 Going into too much detail, like when I was in D.C. working with various government agencies, it became clear there is tremendously more information obtained by government agencies than most people realize.
00:35:08.000 This was true way before Snowden and WikiLeaks and all these revelations.
00:35:13.000 And what is publicly understood now is...
00:35:19.000 probably not the full scope of the information that governments have either.
00:35:25.000 So, I mean, privacy is pretty much dead.
00:35:28.000 And David Brin, do you know David Brin?
00:35:31.000 No.
00:35:32.000 You should definitely interview David Brin.
00:35:34.000 He's an amazing guy.
00:35:35.000 But he's a well-known science fiction writer.
00:35:38.000 He's based in Southern California, actually, San Diego.
00:35:40.000 But he wrote a book in...
00:35:42.000 years ago, called The Transparent Society, where he said there's two possibilities, surveillance and sousveillance.
00:35:49.000 It's like the power elite watching everyone, or everyone watching everyone.
00:35:54.000 I think everyone watching everyone is inevitable.
00:35:56.000 So he articulated this as essentially the only two viable possibilities, and he's like, we should be choosing and then creating which of these alternatives we want.
00:36:07.000 So now...
00:36:08.000 Now, the world is starting to understand what he was talking about back when he wrote that book.
00:36:13.000 What year did he write the book?
00:36:15.000 Oh, I can't remember.
00:36:16.000 I mean, it was well more than a decade ago.
00:36:18.000 It's weird when some people just nail it on the head decades in advance.
00:36:22.000 I mean, most of the things that are happening in the world now were foreseen by Stanislaw Lem, the Polish author I mentioned.
00:36:31.000 Valentin Turchin, a friend of mine who was the founder of Russian AI, he wrote a book called The Phenomenon of Science in the late 60s.
00:36:38.000 Then, you know, in 1971 or 2, when I was a little kid, I read a book called The Prometheus Project by a Princeton physicist called Gerald Feinberg.
00:36:49.000 You read a physicist's book when you're five years old?
00:36:51.000 Yeah, I started reading when I was two, and my grandfather was a physicist, so I was reading a lot of stuff then.
00:36:57.000 But Feinberg, in this book, he said, you know, within the next few decades, humanity is going to create nanotechnology, it's going to create machines smarter than people, and it's going to create the technology to allow human biological immortality.
00:37:11.000 And the question will be, do we want to use these technologies, you know, to promote rampant consumerism, or do we want...
00:37:18.000 to use these technologies to promote, you know, spiritual growth of our consciousness into new dimensions of experience.
00:37:24.000 And what Feinberg proposed in this book in the late 60s, which I read in the early 70s, he proposed the UN should send a task force out to go to everyone in the world, every little African village, and educate the world about nanotech, life extension,
00:37:40.000 and AGI, and get the whole world to vote on whether we should develop these technologies toward consumerism, or toward consciousness expansion.
00:37:49.000 So I read this when I'm a little kid.
00:37:51.000 It's like, this is almost obvious.
00:37:53.000 This makes total sense.
00:37:54.000 Like, why...
00:37:56.000 Why doesn't everyone understand this?
00:37:58.000 Then I tried to explain this to people and I'm like, oh shit, I guess it's going to be a while until the world catches on.
00:38:07.000 So I instead decided I should build a spacecraft, go away from the world at rapid speed and come back after like a million years or something when the world was far more advanced.
00:38:17.000 Or covered in dust.
00:38:19.000 Yeah, right.
00:38:20.000 So now, well then you go away another million years and see what aliens have evolved.
00:38:24.000 So now, pretty much the world agrees that life extension, AGI, and nanotechnology are plausible things that may come about in the near future.
00:38:35.000 The same question is there that Feinberg saw like 50 years ago, right?
00:38:42.000 The same question is there, like, do we develop this for...
00:38:47.000 [inaudible]
00:39:06.000 On the other hand, there's the possibility that by bypassing governments in the UN and doing something decentralized, you can create a democratic framework, you know, within which, you know, a broad swath of the world can be involved in a participatory way in guiding the direction of these advances.
00:39:26.000 Do you think that it's possible that instead of choosing, that we're just going to have multiple directions that it's growing in, that there's going to be consumer-based?
00:39:33.000 There will be multiple directions, and that's inevitable.
00:39:38.000 It's more a matter of whether anything besides the military advertising complex gets a shake, right?
00:39:46.000 So I mean, if you look in the software development world, open source is an amazing thing, right?
00:39:52.000 Linux is awesome.
00:39:54.000 And it's led to so much AI being open source now.
00:39:58.000 Now, open source didn't have to...
00:40:01.000 actually take over the entire software world like Richard Stallman wanted in order to have a huge impact, right?
00:40:07.000 It's enough that it's a major force.
00:40:10.000 It's a very hippie concept, isn't it?
00:40:12.000 Open source in a lot of ways?
00:40:13.000 In a way, but yet IBM has probably thousands of people working on Linux, right?
00:40:20.000 Like Apple, it began as a hippie concept, but it became very practical, right?
00:40:24.000 So, I mean, something like 75% of all the servers running the internet are based on Linux.
00:40:31.000 You know, the vast majority of mobile phone OS is Linux, right?
00:40:36.000 So, this hippie...
00:40:37.000 So, the vast majority being Android?
00:40:39.000 Android is Linux, yeah, yeah.
00:40:41.000 So, I mean, this hippie, crazy thing where no one owns the code...
00:40:46.000 It didn't have to overtake the whole software economy and become everything to become highly valuable and inject a different dimension into things.
00:40:58.000 And I think the same is true with decentralized AI, which we're looking at with SingularityNet.
00:41:04.000 We don't have to actually put Google and the US and Chinese military and Tencent out of business, right?
00:41:13.000 Although if that happens, that's fine.
00:41:15.000 But it's enough that we become an extremely major player in that ecosystem so that this participatory and benefit-oriented aspect becomes a really significant component of how humanity is developing general intelligence.
00:41:36.000 It's accepted, generally accepted, that human beings will consistently and constantly innovate.
00:41:41.000 It just seems to be a characteristic that we have.
00:41:45.000 Why do you think that is?
00:41:47.000 Especially in terms of creating something like artificial intelligence, why build our successors?
00:41:54.000 Why do that?
00:41:55.000 What is it about us that makes us want to constantly make bigger, better things?
00:42:02.000 Well, that's an interesting question in the history of biology, which I may not be the most qualified person to answer.
00:42:13.000 It is an interesting question, and I think it has something to do with the weird way in which...
00:42:20.000 We embody various contradictions that we're always trying to resolve.
00:42:24.000 You mentioned ants, and ants are social animals, right?
00:42:28.000 Whereas cats are very individual.
00:42:31.000 We're trapped between the two, right?
00:42:33.000 We're somewhat individual and somewhat social.
00:42:37.000 And then since we created civilization, it's even worse.
00:42:42.000 Because, I mean, we have certain aspects which are...
00:42:48.000 [inaudible]
00:43:07.000 What you said is very true, right?
00:43:11.000 Like, we're driven to seek novelty.
00:43:14.000 We're driven to create new things.
00:43:17.000 And this is certainly one of the factors which is driving the creation of AI. I don't think that alone would make the creation of AI inevitable.
00:43:28.000 Why is that?
00:43:29.000 Why don't you think it would make it inevitable if we consistently innovate?
00:43:32.000 And it's always been a concept.
00:43:34.000 I mean, you were talking about the concept existing 30 plus years ago.
00:43:37.000 Well, I think a key point is that there's tremendous practical economic advantage and status advantage to be gotten from AI right now.
00:43:49.000 And this is driving the advancement of AI to be incredibly rapid, right?
00:43:56.000 Because there are some things that...
00:43:59.000 are interesting and would use a lot of human innovation, but they get very few resources.
00:44:05.000 So, for example, my oldest son, Zarathustra, he's doing his PhD now.
00:44:10.000 What is his name?
00:44:11.000 Zarathustra.
00:44:11.000 Whoa.
00:44:12.000 My kids are Zarathustra, Amadeus, Zebulon, Ulysses, Scheherazade, and then the new one is Qorxi, Q-O-R-X-I, which is an acronym for Quantum Organized Rational Expanding Intelligence.
00:44:25.000 It's not...
00:44:29.000 I'm Joe, I get it.
00:44:45.000 And to me, that's like the most important thing we could be applying AI to because, you know, mathematics is the key to all modern science and engineering.
00:44:53.000 My PhD was in math originally.
00:44:54.000 But the amount of resources going into AI for automating mathematics is not large at this present moment.
00:45:02.000 Although, that's a beautiful and amazing area for invention and innovation and creativity.
00:45:07.000 So, I think what's driving our rapid push toward building AI, I mean, it's not just our creative drive.
00:45:16.000 It's the fact there's tremendous economic value, military value, and human value.
00:45:22.000 I mean, curing diseases, teaching kids.
00:45:24.000 There's tremendous value in almost everything that's important to human beings in building AI, right?
00:45:30.000 So, you put that together with...
00:45:31.000 with our drive to create and innovate, and this becomes an almost unstoppable force within human society.
00:45:38.000 And what we've seen in the last three to five years is suddenly national leaders and titans of industry, and even pop stars, right?
00:45:48.000 They've woken up to the concept that, wow, smarter and smarter AI is real, and this is going to get better and better, like, within years to decades, not centuries to millennia.
00:46:01.000 So, now the cat's out of the bag, nobody's going to put it back, and it's about, you know, how can we direct it in the most...
00:46:11.000 beneficial possible way.
00:46:13.000 And as you say, it doesn't have to be just one possible way, right?
00:46:15.000 Like, what I look forward to personally is bifurcating myself into an array of possible Bens.
00:46:22.000 Like, I'd like to let one copy of me fuse itself with a superhuman AI mind and, you know, become a god or something beyond a god.
00:46:32.000 And I wouldn't even be myself anymore, right?
00:46:36.000 I mean, you would lose all concepts of human self and identity, but... What would be the point of even holding on to any of it?
00:46:42.000 Yeah, well, that's for the future.
00:46:45.000 That's for the mega-Ben to decide, right?
00:46:47.000 Mega-Ben.
00:46:47.000 Yeah, yeah.
00:46:48.000 On the other hand, I'd like to let one of me remain in human form, you know, get rid of death and disease and...
00:46:56.000 psychological issues and just live happily forever, you know, in the people's zoo, watched over by machines of loving grace, right?
00:47:04.000 So I mean, you can have, it doesn't have to be either or, because once you can scan your brain and body and 3D print new copies of yourself, you could have multiple of you explore different scenarios.
00:47:16.000 Right, but isn't that a giant resource hog?
00:47:17.000 There's a lot of mass energy in the universe.
00:47:20.000 In the universe.
00:47:21.000 Okay, that's assuming that we can escape this planet.
00:47:23.000 Because if you're talking about just people with money cloning themselves, could you live in a world with a billion Donald Trumps?
00:47:30.000 Because, like, literally, that's what we're talking about.
00:47:33.000 We're talking about wealthy people.
00:47:34.000 But wealthy people being able to reproduce themselves and just having this idea that they would like their ego to exist in multiple different forms, whether it's some super symbiote form that's connected to artificial intelligence, or some biological form that's immortal, or some other form that stands just as a normal human being as we know it in 2018. You'd have multiple versions of yourself over and over and over again like that?
00:47:59.000 That's what you're talking about.
00:48:00.000 Once you get to the point where you have a superhuman general intelligence that can do things like fully scan the human brain and body and 3D print more of them, by that point you're at a level where scarcity of material resources is not an issue at the human scale of doing things.
00:48:21.000 Scarcity of human resources in terms of what the Earth can hold?
00:48:24.000 Scarcity of mass energy, scarcity of molecules to print more copies of yourself.
00:48:29.000 I think that's not going to be the issue at that point.
00:48:32.000 But what people are worried about is environmental concerns of overpopulation.
00:48:35.000 Because people are worried about what they see in front of their faces right now, but people are not...
00:48:39.000 Most people...
00:48:41.000 are not thinking deeply enough about what potential would be there once you had superhuman AIs doing the manufacturing and the thinking.
00:48:56.000 I mean, the amount of energy in a single grain of sand, if you had an AI able to appropriately leverage that energy, is tremendously more than most people think.
00:49:09.000 The amount of computing power in a grain of sand is like a quadrillion times all the people on Earth put together.
00:49:16.000 What do you mean by that?
00:49:18.000 The amount of computing power in a grain of sand?
00:49:21.000 Well, the amount of computing power that could be achieved by reorganizing the elementary particles in the grain of sand.
00:49:30.000 There's a number in physics called the Bekenstein bound, which is the maximum amount of information that can be contained in a given region of space with a given amount of energy. [inaudible]
00:50:01.000 Doesn't matter too much.
00:50:03.000 So all of the issues that we're dealing with in terms of environmental concerns, that could all potentially be...
00:50:07.000 They're almost certainly going to be irrelevant.
00:50:09.000 Irrelevant.
00:50:10.000 There may be other problem issues that we can't even conceive at this moment, of course.
00:50:15.000 But the intelligence would be so vastly superior to what we have currently that they'll be able to find solutions to virtually every single problem we have.
00:50:22.000 Well, that's right.
00:50:22.000 Fukushima, ocean fish depopulation, all that stuff.
00:50:27.000 It's all just arrangements of molecules, man.
00:50:29.000 Whoa.
00:50:31.000 People don't want to hear that, though.
00:50:33.000 Environmental people don't want to hear that, right?
00:50:34.000 Well, I mean, I'm also, on an everyday life basis, like, until we have these super AIs, I don't like the garbage washing up on the beach near my house either, right?
00:50:46.000 Of course.
00:50:48.000 On an everyday basis, of course, we want to promote health in our bodies and in our environments right now, as long as there's measurable uncertainty regarding when the benevolent super AIs will come about.
00:51:04.000 Still, I think...
00:51:05.000 The main question isn't whether once you have a beneficially disposed super AI, it could solve all our current petty little problems.
00:51:13.000 The question is, can we wade through the muck of modern human society and psychology to create this beneficial super AI? Right.
00:51:25.000 I believe I know how to create a beneficial super AI, but it's a lot of work to get there.
00:51:32.000 And of course, there's many teams around the world working on vaguely similar projects now, and it's not obvious what kind of super AI we're actually going to get once we get there.
00:51:46.000 Yeah, it's all just guesses at this point, right?
00:51:48.000 It's more or less educated guesses, depending on who's doing the guessing.
00:51:52.000 Would you say that it's almost like we're in a race of the primitive primate biology versus the potentially beneficial and benevolent artificial intelligence that the best aspects of this primate can create?
00:52:06.000 That it's almost a race to get...
00:52:08.000 Who's going to win?
00:52:09.000 Is it the warmongers and the greedy whores that are smashing the world under their boots?
00:52:14.000 Or is it the scientists that are going to figure out some super intelligent way to solve all of our problems?
00:52:20.000 I look at it more as a struggle between different modes of social organization than individual people.
00:52:32.000 When I worked in D.C. with intelligence agencies...
00:52:36.000 Most of the people I met there were really nice human beings who believed they were doing the best for the world, even if some of the things they were doing, I thought, were very much not for the best of the world, right?
00:52:51.000 So, I mean, military mode of organization or large corporations as a mode of organization are...
00:53:00.000 in my view, not generally going to lead to beneficial outcomes for the overall species and for the global brain.
00:53:08.000 The scientific community, the open source community, I think, are better modes of organization.
00:53:14.000 And, you know, the better aspects of the blockchain and crypto community have a better mode of organization.
00:53:20.000 So I think if this sort of open, decentralized mode of organization can...
00:53:28.000 marshal more resources as opposed to this centralized authoritarian mode of organization, then I think things are going to come out for the better.
00:53:38.000 And it's not so much about bad people versus good people.
00:53:41.000 You can look at, like, the corporate mode of organization is almost a virus that's colonized a bunch of humanity and is sucking people into working according to this mode.
00:53:52.000 Right.
00:53:52.000 And even if they're really good people, and the individual task they're working on isn't bad in itself, they're working within this mode that's leading their work to be used for ultimately a non-good end.
00:54:06.000 Yeah, that is a fascinating thing about corporations, isn't it?
00:54:10.000 The diffusion of responsibility and being a part of a gigantic group that you as an individual don't feel necessarily connected or responsible to the ultimate group.
00:54:18.000 Well, even the CEO isn't fully responsible.
00:54:21.000 Like, if the CEO does something that isn't in accordance with the higher goals of the organization, they're just replaced, right?
00:54:28.000 So, I mean, there's no one person who's in charge.
00:54:31.000 It's really like an ant colony.
00:54:33.000 Yes.
00:54:34.000 It's like its own organism.
00:54:36.000 And I mean, it's us who have let these organisms...
00:54:41.000 become parasites on humanity.
00:54:43.000 In this way, in some ways, the Asian countries are a little more intelligent than Western countries, and the Asian governments realize the power of corporations to mold society, and there's a bit more feedback between the government and corporations,
00:55:02.000 which can be for better or for worse.
00:55:05.000 But in America, there's some...
00:55:09.000 ethos of, like, free markets and free enterprise, which is really not taking into account the oligopolistic nature of modern markets.
00:55:20.000 But in Asian countries, isn't it that the government is actually suppressing information as well?
00:55:25.000 They're also suppressing Google.
00:55:27.000 Well, in South Korea, no.
00:55:28.000 I mean, South Korea, if you look at that… It's one of the only ones.
00:55:32.000 Well, Singapore, I mean...
00:55:33.000 Really, Singapore is ruthless in their drug laws and some of their archaic...
00:55:38.000 Well, so is the U.S. They're far worse, though.
00:55:41.000 Singapore gives you the death penalty for marijuana.
00:55:44.000 They do.
00:55:44.000 Yeah, yeah.
00:55:45.000 Yeah, I mean...
00:55:46.000 South Korea is an example which has roughly the same level of personal freedoms as the U.S., more in some ways, less than others.
00:55:55.000 Massive electronic innovation.
00:55:57.000 Well, interesting thing there...
00:56:00.000 Politically, they were poorer than two-thirds of sub-Saharan African nations in the late 60s.
00:56:06.000 And it is through the government intentionally stimulating corporate development toward manufacturing and electronics that they grew up.
00:56:16.000 Now, I'm not holding that up as a great paragon for the future or anything, but it does show that there's...
00:56:25.000 There's many modes of organization of people and resources other than the ones that we take for granted in the US. I don't think Samsung and LG are the ideal for the future either, though.
00:56:37.000 I mean, I'm much more interested in, you know...
00:56:41.000 You're interested in blockchain.
00:56:42.000 You're interested in open source.
00:56:45.000 I'm interested in blockchain.
00:56:47.000 Basically, I'm interested in anything that's open and participatory in nature.
00:56:52.000 Open and participatory and also disruptive, right?
00:56:55.000 As well.
00:56:55.000 Because I think that...
00:56:58.000 is the way to be ongoingly disruptive.
00:57:02.000 And open source is a good example of that.
00:57:04.000 Like, when the open source movement started, they weren't thinking about machine learning.
00:57:09.000 But you know, the fact that open source is out there and is then prevalent in the software world, that paved the way for AI to now be centered on open source algorithms.
00:57:20.000 So right now, even though big companies and governments dominate the scalable rollout of AI, the invention of new AI algorithms is mostly done by people creating new code and putting it on GitHub or GitLab or other open source repositories.
00:57:36.000 Now, open source is self-explanatory in its title. But blockchain, what does that actually mean?
00:57:53.000 Right.
00:57:55.000 Yeah.
00:57:58.000 Sure.
00:57:59.000 I mean, blockchain itself is almost a misnomer.
00:58:03.000 So things are confusing at every level, right?
00:58:07.000 So we should start with...
00:58:10.000 The idea of a distributed ledger, which is basically like a distributed Excel spreadsheet or database.
00:58:16.000 It's just a store of information, which is not stored just in one place, but there's copies of it in a lot of different places.
00:58:23.000 Every time my copy of it is updated, everyone else's copy of it has got to be updated.
00:58:29.000 And then there's various bells and whistles, like sharding, where it can be broken into many pieces, and each piece is stored in many places or something.
00:58:38.000 That's a distributed ledger, and that's just distributed computing.
00:58:41.000 Now, what makes it more interesting is when you layer decentralized control onto that.
00:58:47.000 So imagine you have this distributed Excel spreadsheet or distributed database.
00:58:52.000 There's copies of it stored in a thousand places.
00:58:55.000 But to update it, you need like 500 of those thousand people who own the copies to vote, yeah, let's do that update.
00:59:02.000 So then you have a distributed store of data, and you have like a democratic voting mechanism to determine when all those copies can get updated together, right?
00:59:13.000 So then what you have is a data storage and update mechanism that's controlled democratically by the whole group of copy-holders rather than by any single party.
01:00:03.000 And then the anonymity part is, when I vote, I don't have to say, yeah, this is Ben Goertzel voting for this update to be accepted or not.
01:00:09.000 It's just ID number 1357264. And then encryption is used to make sure that, you know, it's the same party voting every time it claims to be, without needing, like, your passport number or something,
01:00:26.000 right?
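To make that mechanism concrete, here is a minimal sketch, in Python, of the scheme just described: copies of a ledger held in many places, voters known only by pseudonymous IDs, and an update applied to every copy only once a majority approves. All names and numbers are illustrative, not any real blockchain's API.

```python
# Minimal sketch of a replicated ledger with majority-vote updates.
# Illustrative only; real systems add cryptographic signatures,
# networking, and Byzantine fault tolerance.

class ReplicatedLedger:
    def __init__(self, num_replicas, voter_ids):
        # Every replica starts with an identical copy of the data.
        self.replicas = [{} for _ in range(num_replicas)]
        # Voters are pseudonymous IDs, e.g. "1357264", not real names.
        self.voter_ids = set(voter_ids)

    def propose_update(self, key, value, votes):
        # votes: the set of pseudonymous IDs approving this update.
        approvals = votes & self.voter_ids
        # Apply only on a strict majority ("500 of those thousand").
        if len(approvals) * 2 > len(self.voter_ids):
            for copy in self.replicas:   # update every copy together
                copy[key] = value
            return True
        return False

ledger = ReplicatedLedger(num_replicas=5, voter_ids={f"id{i}" for i in range(9)})
ok = ledger.propose_update("balance/alice", 42, votes={f"id{i}" for i in range(5)})
print(ok, ledger.replicas[0])  # True {'balance/alice': 42}
```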
01:00:26.000 What's ironic about it is it's probably one of the best ways ever conceived to actually vote in this country.
01:00:31.000 Yeah.
01:00:32.000 Sure.
01:00:33.000 It is kind of ironic.
01:00:34.000 There's a lot of applications for it.
01:00:37.000 That's right.
01:00:40.000 I mean, that's the core mechanism.
01:00:43.000 Where the term blockchain comes from is the data structure: to store the data in this distributed database, it's stored in a chain of blocks, where each block contains data plus a link back to the previous block.
01:00:55.000 The thing is...
01:00:56.000 Not every so-called blockchain system even uses a chain of blocks now.
01:01:00.000 Like some use a tree or a graph of blocks or something.
01:01:03.000 Is it a bad term?
01:01:04.000 I mean, is there a better term?
01:01:06.000 I mean, it's an alright term.
01:01:08.000 Is it like AI? Just one of those terms we're stuck with?
01:01:11.000 Yeah, yeah.
01:01:11.000 It's one of those terms we're stuck with even though it's not quite technically...
01:01:16.000 Not quite technically accurate anymore.
01:01:19.000 I don't know another buzzword for it, right?
01:01:23.000 What it is, it's a distributed ledger with encryption and decentralized control.
01:01:28.000 And blockchain is the buzzword that's come about for that.
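The chain-of-blocks data structure itself is easy to sketch. In this toy version, using only Python's standard library, each block records the hash of the block before it, so tampering with any block breaks every later link:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(data, prev_hash):
    # Each block carries data plus the hash of the previous block;
    # that back-link is what makes the chain tamper-evident.
    return {"data": data, "prev_hash": prev_hash}

chain = [make_block("genesis", prev_hash=None)]
for data in ["tx: alice->bob 5", "tx: bob->carol 2"]:
    chain.append(make_block(data, prev_hash=block_hash(chain[-1])))

# Verify the links: recompute each hash and compare the stored back-link.
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == block_hash(prev)
print("chain intact:", len(chain), "blocks")
```

As noted above, some so-called blockchain systems now use a tree or graph of blocks instead of a linear chain, but the hash back-link idea is the same.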
01:01:32.000 What got me interested in blockchain really is this decentralized control aspect.
01:01:37.000 So my wife, who I've been with for 10 years now, she dug up recently something I'd forgotten, which is a webpage I'd made in 1995, like a long time ago, where I'd said, hey, I'm going to run for president on the decentralization platform,
01:01:53.000 right?
01:01:53.000 I'd completely forgotten that crazy idea.
01:01:56.000 I was very young then.
01:01:57.000 I had no idea what an annoying job being president would be, right?
01:02:00.000 But...
01:02:02.000 So the idea of decentralized control seemed very important to me back then, which is well before Bitcoin was invented, because I could see a global brain is evolving on the planet, involving humans, computers, communication devices,
01:02:18.000 and we don't want this global brain to be controlled by a small elite.
01:02:22.000 We want the global brain to be controlled in a decentralized way.
01:02:26.000 So that's really the beauty of this concept.
01:02:32.000 What got me interested in the practical technologies of blockchain was really when Ethereum came out and you had the notion of a smart contract.
01:02:42.000 What's Ethereum?
01:02:43.000 Ethereum, yeah.
01:02:45.000 What is that?
01:02:46.000 Well, so the first blockchain technology was Bitcoin, right?
01:02:50.000 Which is a well-known cryptocurrency now.
01:02:53.000 Ethereum is another cryptocurrency, which is the number two cryptocurrency right now.
01:02:58.000 That's how out of the loop I am.
01:02:59.000 Did you know about it?
01:03:00.000 You did?
01:03:01.000 However, Ethereum came along with a really nice software framework.
01:03:07.000 So...
01:03:08.000 It's not just like a digital money like Bitcoin is, but Ethereum has a programming language called Solidity that came with it.
01:03:18.000 And this programming language lets you write what are called smart contracts.
01:03:22.000 And again, that's sort of a misnomer because a smart contract doesn't have to be either smart or a contract, right?
01:03:29.000 But it was a cool name, right?
01:03:31.000 Right.
01:03:31.000 What does it mean then if it's not a smart contract?
01:03:34.000 It's like a programmable transaction.
01:03:38.000 Okay.
01:03:38.000 So you can program a legal contract or you can program a financial transaction.
01:03:43.000 So a smart contract, it's a persistent piece of software that embodies like a secure encrypted transaction between multiple parties.
01:03:55.000 So pretty much like anything on the back end of a...
01:04:01.000 bank's website or a transaction between two companies online, a purchasing relationship between you and a website online.
01:04:08.000 This could all be scripted in a smart contract in a secure way, and then it would be automated in a simple and standard way.
01:04:16.000 So the vision that Vitalik Buterin, who was the main creator behind Ethereum, had is to basically make the internet into a giant world computer.
01:04:52.000 That was a really cool idea, and the Ethereum blockchain and Solidity programming language made it really easy to do that.
01:04:58.000 So it made it really easy to program distributed, secure transaction and computing systems on the internet.
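Here is what a "programmable transaction" might look like, sketched in Python rather than actual Solidity. The escrow rule below, with hypothetical names throughout, is the kind of multi-party logic a real smart contract would encode and the whole network would enforce:

```python
# Toy "smart contract": a persistent object that scripts a transaction.
# Purely illustrative; a real contract would live on-chain, be written
# in a language like Solidity, and be executed by every node.

class EscrowContract:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = 0
        self.delivered = False

    def deposit(self, sender, amount):
        assert sender == self.buyer and amount == self.price
        self.deposited = amount            # funds now held by the contract

    def confirm_delivery(self, sender):
        assert sender == self.buyer        # only the buyer can confirm
        self.delivered = True

    def release(self):
        # The scripted rule: pay the seller only after delivery.
        if self.delivered and self.deposited == self.price:
            payout, self.deposited = self.deposited, 0
            return (self.seller, payout)
        return None

contract = EscrowContract("alice", "bob", price=100)
contract.deposit("alice", 100)
contract.confirm_delivery("alice")
print(contract.release())  # ('bob', 100)
```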
01:05:06.000 So I saw this, I thought, wow, now we finally have the toolset that's needed to implement some of this. And it's become very popular.
01:05:16.000 I mean, basically, almost every ICO that was done in the last couple of years was done on the Ethereum blockchain.
01:05:24.000 What's an ICO? Initial coin offering.
01:05:27.000 Oh, okay.
01:05:27.000 So for bitcoins.
01:05:29.000 Not bitcoins.
01:05:29.000 I'm sorry, cryptocurrencies.
01:05:31.000 Cryptocurrencies, yeah.
01:05:32.000 So they've used this technology for offerings.
01:05:35.000 Right.
01:05:35.000 So what happened in the last couple of years is a bunch of people realized you could use this Ethereum programming framework to create a new cryptocurrency, like a new artificial money,
01:05:51.000 and then you could try to get people to use your new artificial money for certain types of...
01:05:56.000 How many artificial coins?
01:05:59.000 Thousands.
01:05:59.000 Maybe more.
01:06:01.000 And the most popular is Bitcoin, right?
01:06:04.000 Bitcoin is by far the most popular.
01:06:07.000 Ethereum is number two, and there's a bunch of others.
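Mechanically, a new token of the kind an ICO issues is mostly bookkeeping over a shared ledger. A deliberately simplified sketch, loosely in the spirit of ERC-20-style tokens, with made-up names:

```python
# Minimal token ledger, loosely in the spirit of ERC-20-style tokens.
# Real tokens run as on-chain contracts; this only shows the bookkeeping.

class Token:
    def __init__(self, name, total_supply, creator):
        self.name = name
        self.balances = {creator: total_supply}  # creator holds the supply

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

agi = Token("AGI", total_supply=1_000_000, creator="foundation")
agi.transfer("foundation", "investor1", 5_000)   # the "offering" part
print(agi.balances)
```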
01:06:10.000 What comparison?
01:06:12.000 How much bigger is Bitcoin than Ethereum?
01:06:16.000 I don't know, a factor of three to five.
01:06:20.000 Maybe just a factor of two now.
01:06:22.000 Actually, last year, Ethereum almost overtook Bitcoin.
01:06:27.000 When Bitcoin started crashing?
01:06:28.000 Yeah, yeah.
01:06:29.000 Now Ethereum is back down.
01:06:30.000 It might be half or a third.
01:06:32.000 Does that worry you, the fluctuating value of these things?
01:06:36.000 Well, to my mind, creating artificial monies is one...
01:06:42.000 tiny bit of the potential of what you could do with the whole blockchain toolset.
01:06:48.000 It happened to become popular initially because it's where the money is, right?
01:06:55.000 It is money.
01:06:57.000 And that's interesting to people.
01:06:59.000 But on the other hand, what it's really about is making a world computer.
01:07:05.000 It's about scripting,
01:07:07.000 with a simple programming language, all sorts of transactions between people, companies, whatever, all sorts of exchanges of information.
01:07:16.000 So, I mean, it's about decentralized voting mechanisms.
01:07:20.000 It's about AIs being able to send data and processing to each other and pay each other for their transactions.
01:07:28.000 So, I mean, it's about automating supply chains and shipping and e-commerce.
01:07:35.000 In essence, just like computers and the internet started with a certain small set of applications and then pervaded almost everything, right?
01:07:46.000 It's the same way with blockchain technology.
01:07:48.000 It started with digital money, but the core technology is going to pervade almost everything, because there's almost no domain of human pursuit that couldn't use, like, security through cryptography, some sort of,
01:08:04.000 you know, participatory decision-making, and then distributed storage of information, right?
01:08:09.000 And these things are also valuable for AI, which is how I got into it in the first place.
01:08:13.000 I mean, if you're making a very, very powerful AI that is going to, through the practical value it delivers, grow to be more and more and more intelligent.
01:08:26.000 I mean, this AI should be able to engage a large party of people and AIs in participatory decision-making.
01:08:33.000 The AI should be able to store information, you know, in a widely distributed way.
01:08:38.000 And the AI certainly should be able to use, you know, security and encryption to validate who are the parties involved in its operation.
01:08:46.000 And I mean, these are the key things behind blockchain technology.
01:08:49.000 So, I mean, the fact...
01:08:51.000 The fact that blockchain began with artificial currencies, to me, is a detail of history, just like the fact that the internet began as like a nuclear early warning system, right?
01:09:01.000 I mean, it did, it's good for that, but as it happens, it's also even better for a lot of other things.
01:09:08.000 Yeah, the solution for the financial situation that we find ourselves in, it's one of the more interesting things about cryptocurrencies, that someone said, okay, look, obviously we all kind of agree that our financial institutions are very flawed.
01:09:24.000 The system that we operate under is very fucked up.
01:09:27.000 So how do we fix that?
01:09:28.000 Well, send in the super nerds.
01:09:30.000 And so they figure out a new...
01:09:33.000 We've got to send in the super AI. Super AI. Well, first the super nerds and then the super AI.
01:09:37.000 I mean, obviously, who's the guy that they think this...
01:09:45.000 Oh, Satoshi Nakamoto, yeah.
01:09:49.000 I can neither confirm nor deny that.
01:10:00.000 It's very interesting, but it's also very promising.
01:10:04.000 I have high optimism for cryptocurrencies because I think that kids today are looking at it with much more open eyes than grandfathers.
01:10:15.000 Grandfathers are looking at Bitcoin.
01:10:16.000 They're going, get out of here.
01:10:17.000 I'm a grandfather.
01:10:18.000 I'm sure you are, but you're an exceptional one.
01:10:20.000 But there's a lot of people that are older that just, they're not open to accepting these ideas.
01:10:26.000 But I think...
01:10:28.000 Kids today, in particular, the ones that have grown up with the internet as a constant force in their life, I think they're more likely to embrace something along those lines.
01:10:38.000 There's no doubt that cryptographic formulations of money are going to become the standard.
01:10:48.000 Do you think that's going to be the standard?
01:10:50.000 That will happen.
01:10:51.000 However, it could happen, potentially, in a very uninteresting way.
01:10:57.000 How's that?
01:10:58.000 You could just have the e-dollar.
01:11:00.000 I mean, a government could just say, we will create this cryptographic token, which counts as a dollar.
01:11:06.000 I mean, most dollars are just electronic anyway, right?
01:11:09.000 So what habitually happens is technologies that are invented to subvert the establishment are converted to a form where they...
01:11:20.000 help bolster the establishment instead.
01:11:23.000 I mean, in financial services, this happens very rapidly.
01:11:28.000 Like PayPal. Peter Thiel and those guys started PayPal thinking they were going to obsolete fiat currency and make an alternative to the currencies run by nation states.
01:11:38.000 Instead, they were driven to make it a credit card processing front end, right?
01:11:43.000 So, that's...
01:11:45.000 One thing that could happen with cryptocurrency is it just becomes a mechanism for governments and big companies and banks to do their things more efficiently.
01:11:56.000 So what's interesting isn't so much the digital money aspect, although it is in some ways a great way to do digital money.
01:12:04.000 What's interesting is, with all the flexibility it gives you to script complex computing networks, in there is the possibility to script new forms of participatory, democratic, self-organizing networks.
01:12:21.000 So blockchain, like the internet or computing, is a very flexible medium.
01:12:26.000 You could use it to make tools of oppression, or you could use it to make tools of amazing growth and liberation.
01:12:35.000 And obviously we know which one I'm more interested in.
01:12:38.000 Yeah.
01:12:38.000 What is blockchain being currently used for?
01:12:45.000 What different applications?
01:12:47.000 Because it's not just cryptocurrency.
01:12:48.000 They're using it for a bunch of different things now, right?
01:12:50.000 They are.
01:12:51.000 I would say it's very early stage.
01:12:55.000 How early?
01:12:57.000 Well, the heaviest uses of blockchain now are probably inside large financial services companies, actually.
01:13:05.000 So if you look at Ethereum, the project I mentioned, Ethereum is run by an open foundation, the Ethereum Foundation.
01:13:15.000 Then there's a consulting company called ConsenSys.
01:13:19.000 which is a totally separate organization that was founded by Joe Lubin, who was one of the founders of Ethereum in the early days. ConsenSys has funded a bunch of the work within the Ethereum Foundation and community, but ConsenSys has also done a lot of contracts just working with governments and big companies to customize code based on Ethereum to help with their internal operations. So actually, a lot of the practical value has been with stuff that
01:13:49.000 isn't in the public eye that much, but it's like back-end inside of companies.
01:13:55.000 In terms of practical customer-facing uses of cryptocurrency, I mean, the Tron blockchain, which is different than Ethereum, that has a bunch of games on it, for example, and some online gambling, for that matter.
01:14:11.000 So that's gotten a lot of users.
01:14:14.000 Online games?
01:14:15.000 How do they use that?
01:14:17.000 Oh, it's a payment mechanism.
01:14:19.000 Oh, I see.
01:14:20.000 But this is one of the things there's a lot of hand-wringing about in the cryptocurrency world now.
01:14:27.000 Gambling?
01:14:27.000 No, just the fact that there aren't that many big consumer-facing uses of cryptocurrency.
01:14:35.000 I mean, everyone would like there to be.
01:14:37.000 That was the idea.
01:14:39.000 And this is one of the things we're aiming at with our SingularityNet project, you know, by putting...
01:14:47.000 AI on the blockchain in a highly effective way.
01:14:51.000 And then we're also, we have these two tiers.
01:14:55.000 So we have the SingularityNet Foundation, which is creating this open source decentralized platform in which AIs can talk to other AIs and, you know, like ants in a colony, group together to form smarter and smarter AI. Then we're spinning off a company called Singularity Studio,
01:15:12.000 which will use this decentralized platform to help big companies integrate AI into their operations.
01:15:19.000 So with the Singularity Studio company, we want to get all these big companies using the AI tools in the SingularityNet platform, and then we want to drive massive usage of blockchain in the SingularityNet platform.
01:15:35.000 that way.
01:15:36.000 If we're successful with what we're doing, within a year from now or something, this will be by far the biggest usage of blockchain outside of financial exchange: our use of blockchain within SingularityNet for AI,
01:15:52.000 basically for customers to get the AI services that they need for their businesses and then for AIs to transact with other AIs, paying other AIs for doing services for them.
01:16:04.000 Because this, I think...
01:16:06.000 is the path forward.
01:16:08.000 It's like a society and economy of minds.
01:16:10.000 It's not like one monolithic AI. It's a whole bunch of AIs carried by different people all over the world, which not only are in the marketplace providing services to customers, but each AI is asking questions of each other and then...
01:16:24.000 rating each other on how good they are, sending data to each other, and paying each other for their services.
01:16:29.000 So this network of AIs can have intelligence emerge at the whole-network level, as well as there being intelligence in each component.
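The "economy of minds" pattern can be sketched as a toy marketplace where one AI pays another for a service. This is a hypothetical illustration of the idea, not SingularityNet's actual protocol or API:

```python
# Toy "economy of minds": AIs buy services from other AIs with tokens.
# Hypothetical sketch of the pattern, not a real marketplace protocol.

class Agent:
    def __init__(self, name, service, fee):
        self.name, self.service, self.fee = name, service, fee
        self.tokens = 100

    def serve(self, request):
        return self.service(request)

def call(buyer, seller, request):
    # Buyer pays the seller's fee; seller performs the service.
    assert buyer.tokens >= seller.fee, "buyer cannot afford the call"
    buyer.tokens -= seller.fee
    seller.tokens += seller.fee
    return seller.serve(request)

summarizer = Agent("summarizer", lambda text: text[:20] + "...", fee=5)
translator = Agent("translator", lambda text: text.upper(), fee=3)

# One AI outsources part of its job to another and pays for it.
summary = call(translator, summarizer, "a very long document about blockchains")
print(summary, translator.tokens, summarizer.tokens)  # ... 95 105
```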
01:16:39.000 And is it also fascinating to you that this is not dependent upon nations, that this is a worldwide endeavor?
01:16:44.000 I think that's going to be important once it starts to get a very high level of intelligence.
01:16:51.000 In the early stages, okay, what would it hurt
01:16:55.000 if I had in my own database a central record of everything? Like, I'm an honest person, I'm not going to rip anyone off.
01:17:04.000 But once we start to make a transition...
01:17:07.000 Toward artificial general intelligence in this global decentralized network, which has component AIs from every country on the planet, like, at that point, once it's clear you're getting toward AGI, a lot of people want to step in and control this thing,
01:17:23.000 you know, by law, by military might, by any means necessary.
01:17:27.000 By that point, the fact that you have this open decentralized network underpinning everything, like, this gives an amazing resilience to what you're doing.
01:17:36.000 Who can shut down Linux?
01:17:37.000 Who can shut down Bitcoin?
01:17:38.000 Nobody can, right?
01:17:40.000 You want AI to be like that.
01:17:43.000 You want it to be a global upsurge of creativity and mutual benefit from people all over the planet, which no powerful party can shut down even if they're afraid that it threatens their hegemony.
01:17:56.000 It's very interesting because in a lot of ways it's a very elegant solution to what's an obvious problem.
01:18:03.000 Yeah.
01:18:04.000 Just as the internet is an elegant solution to what's in hindsight an obvious problem, right?
01:18:10.000 Distribution of information.
01:18:11.000 Yeah, yeah, yeah.
01:18:12.000 To communicate.
01:18:13.000 But this is extra special to me because if I was a person running a country, I would be terrified of this shit.
01:18:20.000 I'd be like, well, this is what's going to take power away.
01:18:22.000 That depends which country.
01:18:24.000 If you're a person running the US or China, you...
01:18:28.000 would have a different relationship than if you're a person, like I know the Prime Minister of Ethiopia, Abiy Ahmed, who has a degree in software engineering, and he loves this.
01:18:39.000 But of course, Ethiopia isn't in any danger of
01:18:42.000 suppressing any other countries, right?
01:18:43.000 And they're not in any danger of individually, like, taking global AI hegemony, right?
01:18:48.000 So for the majority of countries in the world, they like this for the same reason they like Linux, right?
01:18:55.000 I mean, this is something in which they have an equal role to anybody else.
01:19:00.000 Right.
01:19:00.000 The superpowers.
01:19:01.000 And you see this among companies also, though.
01:19:04.000 So a lot of big companies that we're talking to...
01:19:08.000 They like the idea of this decentralized AI fabric because, I mean, if you're not Amazon, Google, Microsoft, Tencent, Facebook, so on, if you're another large corporation, you don't necessarily want all your AI and all your data to be going into one of this handful of large AI companies,
01:19:27.000 you would rather have it be in a secure, decentralized platform.
01:19:32.000 I mean, this is the same reason that Cisco and IBM, they run on Linux.
01:19:37.000 They don't run on Microsoft, right?
01:19:39.000 So if you're not one of the handful of large governments or large...
01:19:55.000 Yeah, what would be the benefit of running it on Linux versus Microsoft?
01:20:01.000 Well, you're not at the behest of some other big company.
01:20:04.000 I mean, imagine if you were Cisco or GM or something, and all of your internal machines, all your servers are running on Microsoft.
01:20:15.000 What if Microsoft increases their price or removes some feature?
01:20:20.000 Then...
01:20:40.000 In some way, then your business is basically controlled by this other company.
01:20:48.000 So having a decentralized platform in which you're...
01:20:53.000 you know, an equal participant along with everybody else is actually a much better position to be in.
01:20:59.000 And this, I think, is why we can succeed with this plan of having this, you know, decentralized SingularityNet platform and this Singularity Studio enterprise software company which mediates between the decentralized platform and big companies.
01:21:18.000 I mean, it's because most companies and governments in the world don't want hegemony of a few large governments and corporations either.
01:21:28.000 And you can see this in a lot of ways.
01:21:31.000 You can see this in the embrace of Linux and Ethereum by many large corporations.
01:21:37.000 You can also see, in a different way, the Indian government...
01:21:43.000 You know, they rejected an offer by Facebook to give free internet to all Indians, because Facebook wanted to give, like, mobile phones, it would give free internet, but only to access Facebook, right?
01:21:54.000 India's like, well, no thanks, right?
01:21:57.000 And India is now creating laws that
01:22:02.000 any internet company that collects data about Indian people has to store that data in India, which is so the Indian government can subpoena that data when they want to, right?
01:22:12.000 So you're already seeing a bunch of resistance against hegemony by a few large governments or large corporations, right?
01:22:22.000 By other companies and other governments.
01:22:24.000 I think this is very positive and is one of the factors that can foster the growth of a decentralized AI ecosystem.
01:22:34.000 Is it fair to say that the future of AI is severely dependent upon who launches it first?
01:22:43.000 Like whoever, whether it's SingularityNet, or whether it's artificial general intelligence.
01:22:49.000 The bottom line is, as a scientist, I have to say we don't know, right?
01:22:52.000 It could be there's an end state that AGI will just self-organize into, almost independent of the initial condition, but we don't know.
01:23:05.000 And given that we don't know, I'm operating under the, you know, the heuristic...
01:23:11.000 Yeah.
01:23:28.000 I'm operating under the heuristic assumption that this is going to bias things in a positive direction, right?
01:23:35.000 I mean, in the absence of knowledge to the contrary.
01:23:39.000 But if the Chinese government launches one that they're controlling, if they get to pop it off first.
01:23:45.000 I like the idea that you're saying, though, that it might organize itself.
01:23:49.000 I mean, I understand the Chinese government...
01:23:52.000 Also, they want the best for the Chinese people.
01:23:56.000 They don't want to make the Terminator either, right?
01:24:00.000 So, I mean, I think even Donald Trump, who's not my favorite person, doesn't actually want to kill off everyone on the planet, right?
01:24:09.000 He might if they talk shit about him.
01:24:10.000 Yeah, yeah, yeah.
01:24:11.000 You never know.
01:24:12.000 It was just him.
01:24:14.000 I told you all.
01:24:16.000 Yeah, so, I mean, I think...
01:24:19.000 I wouldn't say we're necessarily doomed if big governments and big companies are the ones that develop AI or AGI first.
01:24:30.000 Well, big government and big companies essentially developed the internet, right?
01:24:33.000 And it got away from them.
01:24:34.000 That's right.
01:24:35.000 That's right.
01:24:35.000 So there's a lot of uncertainty all around, but I think it behooves us to do what we can.
01:24:41.000 to bias the odds in our favor based on our current understanding.
01:24:46.000 And I mean, toward that end, we're developing, you know, open source decentralized AI in SingularityNet projects.
01:24:54.000 So if you would, explain SingularityNet and what you guys are actively involved in.
01:24:59.000 Sure, sure.
01:25:01.000 So SingularityNet in itself is a platform that allows many different AIs to operate on it, and these AIs can offer services to anyone who requests services of the network,
01:25:16.000 and they can also request and offer services among each other.
01:25:22.000 So at one level it's just an online marketplace for AIs, much like, you know, the Apple App Store or Google Play Store, but for AIs rather than phone apps.
01:25:33.000 But the difference is the different AIs in here can outsource work to each other and talk to each other.
01:25:39.000 And that gives a new dimension to it, right?
01:25:41.000 Where you can have, we think of as a society or economy of minds, and it gives the possibility that this whole society of interacting AIs...
01:25:51.000 which are paying each other for transactions with our digital money, our cryptographic token, which is called the AGI token.
01:26:01.000 So these AIs, which are paying each other and rating each other on how good they are, sending data and questions and answers to each other, can self-organize into some overall AI mind.
01:26:13.000 Now, we're building this platform, and then we're plugging into it, to seed it, a bunch of AIs of our own creation.
01:26:20.000 So I've been working for 10 years on this open source AI project called OpenCog, which is oriented toward building general intelligence.
01:26:28.000 And we're putting in a bunch of AI agents based on the OpenCog platform, which,
01:26:55.000 within the larger pool of AIs on the SingularityNet, can sort of serve as the general intelligence core, because the OpenCog AI agents are really good at abstraction and generalization and creativity.
01:27:08.000 We can put a bunch of other AIs in there that are good at highly specific
01:27:13.000 forms of learning, like predicting financial time series, curing diseases, answering people's questions, organizing your inbox.
01:27:21.000 So you can have the interaction of these specialized AIs and then more general purpose, you know, abstraction and creativity-based AIs like OpenCog Agents all interacting together.
01:27:33.000 in this decentralized platform.
01:27:35.000 And then, you know, the beauty of it is like some 15-year-old genius in Azerbaijan or the Congo can put some brilliant AI into this network.
01:27:44.000 If it's really smart, it will get rated highly by the other AIs for its work helping them do their thing.
01:27:52.000 Then it can get replicated over and over again across many servers.
01:27:56.000 Suddenly, A, this 16-year-old kid from Azerbaijan or the Congo could become wealthy from their copies of their AI, providing services to other people's AIs.
01:28:07.000 And B, the creativity in their mind is out there and is infusing this global AI network with some...
01:28:15.000 some new intellectual DNA that never would have been found by a Tencent or a Google, because they're not going to hire some Congolese teenager who may have a brilliant AI idea.
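The replication dynamic described here, where well-rated AIs get copied onto more servers, amounts to reputation-weighted selection. A minimal sketch with made-up ratings and numbers:

```python
# Sketch: agents that earn higher ratings get replicated onto more servers.
# Made-up numbers; a real network would rate work per-transaction.

ratings = {"congolese_teen_ai": 4.9, "big_corp_ai": 4.1, "spam_ai": 1.2}
total_servers = 100
min_rating = 3.0  # poorly rated agents don't get replicated at all

eligible = {k: v for k, v in ratings.items() if v >= min_rating}
weight_sum = sum(eligible.values())
replicas = {k: round(total_servers * v / weight_sum) for k, v in eligible.items()}
print(replicas)  # e.g. {'congolese_teen_ai': 54, 'big_corp_ai': 46}
```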
01:28:27.000 That's amazing.
01:28:28.000 That's amazing.
01:28:31.000 So this is all ongoing right now, and the term singularity that you guys are using, the way I've understood that term, correct me if I'm wrong, is that it's going to be the one innovation or one invention that essentially changes everything forever.
01:28:47.000 Well, singularity isn't necessarily one invention.
01:28:50.000 The singularity...
01:28:52.000 Which is coined by...
01:28:54.000 Kurzweil?
01:28:55.000 It's coined by my friend Vernor Vinge, who's another guy you should interview.
01:28:58.000 He's in San Diego, too.
01:28:59.000 A lot of brilliant guys down there.
01:29:02.000 Vernor Vinge is a science...
01:29:03.000 A lot of military down there.
01:29:04.000 Yeah, Vernor Vinge...
01:29:06.000 He was a math professor at San Diego University, actually.
01:29:09.000 But a well-known science fiction writer.
01:29:12.000 His book A Fire Upon the Deep is one of the great science fiction books ever.
01:29:16.000 Can you spell his name, please?
01:29:17.000 V-I-N-G-E. Vernor Vinge.
01:29:19.000 G-E. Yeah, brilliant guy.
01:29:22.000 W-E-R-N-E-R? Vernor.
01:29:25.000 Yeah, V-E-R-N-O-R, yeah.
01:29:26.000 Oh, V-E-R-N-O-R. Yeah, he's brilliant.
01:29:28.000 He coined the term technological singularity back in the 1980s.
01:29:33.000 Really?
01:29:33.000 But he opted not to become a pundit about it because he'd rather write more science fiction books.
01:29:40.000 That's interesting that a science fiction author...
01:29:42.000 Ray Kurzweil, who's also a good friend of mine, I mean, Ray...
01:29:46.000 took that term and fleshed it out and did a bunch of data analytics trying to pinpoint when it would happen.
01:29:55.000 But the basic concept of the technological singularity is a point in time when technological advance occurs so rapidly that to the human mind it appears almost instantaneous.
01:30:07.000 Like imagine 10 new Nobel Prize winning discoveries every second or something, right?
01:30:13.000 So, this is similar to the concept of the intelligence explosion that was posited by the mathematician I.J. Good in 1965. What I.J. Good said then, the year before I was born, was the first truly intelligent machine will be the last invention that humanity needs to make,
01:30:30.000 right?
01:30:30.000 Right.
01:30:31.000 So, intelligence explosion is another term for basically the same thing as the technological singularity.
01:30:38.000 But it's not just about AI. AI is just probably the most powerful technology driving it.
01:30:44.000 I mean, there's AI, there's nanotechnology, there's femtotechnology, which will be building things from elementary particles.
01:30:51.000 I mean, there's life extension, genetic engineering, mind uploading, which is like reading the mind out of your brain and putting it into a machine.
01:31:01.000 You know, there's advanced energy technologies so that...
01:31:05.000 All these different things are expected to advance at around the same time, and they have many ways to boost each other, right?
01:31:12.000 Because the better AI you have, your AI can then invent new ways of doing nanotech and biology.
01:31:18.000 But if you invent amazing new nanotech and quantum computing, that can make your AI smarter.
01:31:22.000 On the other hand, if you could crack how the human brain works and genetic engineering to upgrade human intelligence, those smarter humans could then make better AIs and nanotechnology, right?
01:31:32.000 So there's so many virtuous cycles Among these different technologies, the more you advance in any of them, the more you're going to advance in all of them.
01:31:42.000 And it's the coming together of all of these that's going to create, you know, radical abundance and the technological
01:31:50.000 singularity.
01:31:51.000 So that term which Vernor Vinge introduced, Ray Kurzweil borrowed for his books and for the Singularity University educational program, and then we borrowed that for our SingularityNet decentralized blockchain-based AI platform and our Singularity Studio enterprise software company.
01:32:13.000 Now, I want to talk to you about two parts of what you just said, one being the possibility that one day we can upload our mind or make copies of our mind.
01:32:22.000 You up for it?
01:32:23.000 My mind's a mess.
01:32:24.000 You want to upload into here?
01:32:25.000 No.
01:32:26.000 I could use a little Joe Rogan on my phone.
01:32:28.000 You could just call me, dude.
01:32:30.000 I'll give you the organic version.
01:32:34.000 Do you think that that's a real possibility inside of our lifetime, that we can map out the human mind to the point where we can essentially recreate it?
01:32:42.000 But if you do recreate it, without all the biological urges and the human reward systems that are built in, what the fuck are we?
01:32:49.000 Well, that's a different question.
01:32:51.000 I mean, I think...
01:32:52.000 What is your mind?
01:32:54.000 Well, I think that there's two things that are needed for, let's say, human body uploading to simplify things.
01:33:03.000 Body uploading.
01:33:03.000 There are two things that are needed.
01:33:05.000 One thing is a better computing infrastructure than we have now to host the uploaded body.
01:33:12.000 And the other thing is a better scanning technology, because right now, we don't have a way to scan the molecular structure of your body without freezing you, slicing you, and scanning you, which you probably don't want done at this point in time.
01:33:28.000 So, assuming both those are solved, you could then recreate in some computer simulation...
01:33:36.000 you know, an accurate simulacrum of what you are, right?
01:33:41.000 But that's what I'm getting at.
01:33:43.000 An accurate simulacrum, that's getting weird because the biological variability of human beings, we vary day to day.
01:33:51.000 And your simulacrum would also vary day to day, so it would deviate.
01:33:55.000 You would program it in to have flaws?
01:33:58.000 Because we vary depending upon how much sleep we get, whether or not we're feeling sick, whether we're lonely.
01:34:03.000 If your upload were an accurate copy of you, then the simulation hosting your upload would need to have an accurate simulation of the laws of biophysics and chemistry.
01:34:16.000 that allow your body to, you know, evolve from one second to the next.
01:34:20.000 My concern is that it's going to recognize...
01:34:21.000 Your upload would change second by second just like you do, and it would diverge from you, right?
01:34:27.000 So, I mean, after an hour, it will be a little different.
01:34:30.000 After a year, it might have gone in a quite different direction for you.
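One way to see how quickly an almost-perfect copy can drift: in any chaotic system, a tiny initial difference grows roughly exponentially. The logistic map below is a standard toy example of that sensitivity, offered purely as an analogy, not a model of brains:

```python
# Toy illustration of divergence: two near-identical states under the
# logistic map drift apart quickly (sensitive dependence on conditions).

x, x_copy = 0.500000, 0.500001   # the "original" and an almost-perfect copy
for step in range(1, 31):
    x = 3.9 * x * (1 - x)
    x_copy = 3.9 * x_copy * (1 - x_copy)
    if step % 10 == 0:
        print(step, abs(x - x_copy))
# The gap grows from 1e-6 toward order 1 within a few dozen steps.
```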
01:34:34.000 It'll probably be a monk, some super god monk living on the top of a mountain somewhere in a year.
01:34:39.000 Yeah.
01:34:41.000 It depends on what virtual world it's living in.
01:34:43.000 True.
01:34:43.000 I mean, if it's living in a virtual world...
01:34:45.000 Oh, a virtual world.
01:34:46.000 It'll be a virtual world.
01:34:46.000 You're not talking about the potential of downloading this again into a biological...
01:34:51.000 There's a lot of possibilities, right?
01:34:53.000 I mean, you could upload into a Joe Rogan living in a virtual world and then just create your own fantasy universe, or you could 3D print an alternate synthetic...
01:35:06.000 I mean, once you have the ability to manipulate molecules at will, the scope of possibilities becomes much greater than we're used to thinking about.
01:35:18.000 My question is, do we replicate flaws?
01:35:22.000 Do we replicate depression?
01:35:24.000 Of course.
01:35:25.000 But why would we do that?
01:35:26.000 Wouldn't we want to cure depression?
01:35:27.000 So if we do cure depression...
01:35:29.000 Here's the interesting thing.
01:35:31.000 Once we have you in a digital form, then it's very programmable.
01:35:37.000 Then we juice up the dopamine, the serotonin levels.
01:35:40.000 Well, then you can change what you want, and then you have a whole different set of issues, right?
01:35:44.000 Yeah.
01:35:45.000 Because once you've changed...
01:35:47.000 I mean, suppose you make a fork of yourself...
01:35:50.000 and then you manipulate it in a certain way, and then after a few hours you're like, well, I don't much like this new Joe here.
01:36:00.000 Maybe we should roll back that change.
01:36:02.000 But the new Joe is like, well, I like myself very well, thank you, right?
01:36:07.000 So then there's a lot of issues that...
01:36:11.000 that will come up once we can modify and reprogram ourselves.
01:36:16.000 But isn't the point that the ramifications of these decisions are almost insurmountable once the ball gets rolling?
01:36:23.000 Well, the ramifications of these decisions are going to be very interesting to explore.
01:36:30.000 Yes, you're super positive, Ben.
01:36:32.000 Super positive.
01:36:34.000 You're optimistic about the future.
01:36:35.000 Many bad things will happen.
01:36:36.000 Many good things will happen.
01:36:37.000 That's a very easy prediction to make.
01:36:40.000 Okay, I see what you're saying.
01:36:41.000 Yeah, I just wonder...
01:36:42.000 I mean, think about like world travel, right?
01:36:45.000 Like hundreds of years ago, most people didn't travel more than a very short distance from their home.
01:36:51.000 And you could say, well, okay, what if people could travel all over the world, right?
01:36:55.000 Like what horrible things could happen?
01:36:57.000 They would lose their culture.
01:36:58.000 Like they might go marry someone from a random tribe.
01:37:02.000 You can get killed in the Arctic region or something.
01:37:05.000 A lot of bad things can happen when you travel far from your home.
01:37:08.000 A lot of good things can happen.
01:37:10.000 And ultimately the ramifications were not foreseen by people 500 years ago.
01:37:16.000 I mean, we're going into a lot of new domains.
01:37:19.000 We can't see the details of the pluses and minuses that are going to unfold.
01:37:25.000 It would behoove us to simply become comfortable with radical uncertainty because otherwise we're going to confront it anyway and we're just going to be nervous.
01:37:33.000 So it's just inevitable.
01:37:35.000 It's almost inevitable.
01:37:37.000 I mean, of course.
01:37:38.000 Barring any natural disaster.
01:37:40.000 Yeah, I mean, of course Trump could start a nuclear war and then we're resetting to ground zero.
01:37:45.000 It's just as likely we get hit by an asteroid, right?
01:37:48.000 Yeah, I mean, so barring a catastrophic outcome, I believe a technological singularity is essentially inevitable.
01:37:58.000 There's a radical uncertainty attached to this.
01:38:02.000 On the other hand...
01:38:04.000 Inasmuch as we humans can know anything, it would seem, commonsensically, there's the ability to bias this in a positive rather than negative direction.
01:38:16.000 We should be spending more of our attention on doing that rather than, for instance, advertising, spying, and making chocolatier chocolates and all the other things.
01:38:27.000 Right, but how many people are doing that?
01:38:28.000 I mean, it's prevalent.
01:38:29.000 It's everywhere.
01:38:29.000 But I mean, how many people are actually at the helm of that as opposed to how many people are working on various aspects of technology all across the planet?
01:38:38.000 It's a small group in comparison.
01:38:40.000 Working on explicitly bringing about the singularity is a small group.
01:38:44.000 Right.
01:38:46.000 The group working on supporting technologies is very large.
01:38:49.000 So think about like GPUs.
01:38:51.000 Where did they come from?
01:38:53.000 Accelerating gaming, right?
01:38:55.000 Lo and behold, they're amazingly useful for training neural net models, which is one among many important types of AI, right?
01:39:02.000 So a large amount of the planet's resources are now getting spent on technologies that are indirectly supporting these singularitarian technologies.
01:39:12.000 So as another example, like microarrays, they let you measure the expression level of genes, how much each gene is doing in your body at each point in time.
01:39:22.000 These were originally developed...
01:39:47.000 Do you have any concerns at all about a virtual world?
01:39:51.000 We may be in one right now, man.
01:39:53.000 How do you know?
01:39:54.000 That's true.
01:39:54.000 But as far as we know, we're not.
01:39:56.000 My problem is, I want to find that programmer and get them to make more attractive people, you know?
01:40:01.000 Well, I would say that that's part of the reason why attractive people are so interesting, is that they're unique and rare.
01:40:08.000 That's one of the problems with calling everything beautiful.
01:40:11.000 You know, when people were saying Caitlyn Jenner's beautiful, I was like, well, let's be realistic.
01:40:16.000 If I get in the right frame of mind, I can find anything beautiful, actually.
01:40:19.000 Well, you can find it unique and interesting.
01:40:21.000 No, I can find anything beautiful.
01:40:22.000 Okay, I guess.
01:40:23.000 But in terms of like, yeah, I guess it's subjective, right?
01:40:28.000 It really is.
01:40:28.000 We're talking about beauty, right?
01:40:30.000 Yeah.
01:40:31.000 Now, but existential angst, just when people sit and think about the pointlessness of our own existence, like we are these finite beings that are clinging to a ball that spins a thousand miles an hour, hurling through infinity, what's the point?
01:40:46.000 There's a lot of that that goes around already.
01:40:48.000 If we create an artificial environment that we can literally somehow or another download a version of us, and it exists in this...
01:41:00.000 blockchain-created or powered weird fucking simulation world, what would be the point of that?
01:41:13.000 What I really believe, which...
01:41:18.000 It's a bit personal and maybe different than many of my colleagues.
01:41:21.000 What I really believe is that these advancing technologies are going to lead us to unlock many different states of consciousness and experience than most people are currently aware of.
01:41:41.000 How so?
01:41:42.000 I mean, you say we're just insignificant species on a speck of rock hurtling in outer space.
01:41:49.000 I wouldn't say we're insignificant.
01:41:50.000 I would say there's people that have existential angst because they wonder about what the purpose of it all is.
01:41:55.000 I don't fall into that category.
01:41:58.000 I tend to feel like...
01:41:59.000 We understand almost nothing about who and what we are, and our knowledge about the universe is extremely minuscule.
01:42:10.000 I mean, if anything, I look at things from more of a Buddhist or phenomenological way, like there's sense perceptions, and then out of those sense perceptions, models arise and accumulate, including a model of the self and a model of the body,
01:42:28.000 And the model of the physical world out there.
01:42:31.000 And by the time you get to planets and stars and blockchains, you're building hypothetical models on top of hypothetical models.
01:42:39.000 And then, by building intelligent machines and mind-uploading machines and virtual realities, we're going to radically transform, you know, our whole state of consciousness,
01:42:54.000 our understanding of what mind and matter are, our experience of our own selves, or even whether a self exists.
01:43:02.000 And I think, ultimately, the state of consciousness of a human being like a hundred years from now, after a technological singularity, is going to bear very little resemblance to the states of consciousness we have now.
01:43:19.000 We're just going to see a much wider universe than any of us now imagine to exist.
01:43:28.000 Now, this is my own personal view of things.
01:43:31.000 You don't have to agree with that to think the technological singularity will be...
01:43:37.000 But that is how I look at it.
01:43:38.000 Ray Kurzweil and I agree there's going to be a technological singularity within decades at most.
01:43:46.000 And Ray and I agree that if we bias technology development appropriately, we can very likely guide this to be a world of abundance and benefit for humans as well as AIs.
01:43:59.000 But Ray is a bit more of a...
01:44:02.000 Down-to-earth empiricist than I am.
01:44:06.000 He thinks we understand more about the universe right now than I do.
01:44:11.000 So, I mean, there's a wide spectrum of views that are rational and sensible to have.
01:44:17.000 But my own view is...
01:44:20.000 We understand really, really little of what we are and what this world is.
01:44:26.000 And this is part of my own personal quest for wanting to upgrade my brain and wanting to create artificial intelligences.
01:44:35.000 It's like I've always been driven above all else by wanting to understand everything I can about the world.
01:44:40.000 So, I mean, I've studied every kind of science and engineering and social science and read every kind of literature.
01:44:46.000 In the end, the scope of human understanding is clearly very small, although at least we're smart enough to understand how little we understand, which I think my dog doesn't understand, how little he understands, right?
01:44:58.000 And even like my 10-month-old son, he understands how little he understands, which is interesting, right?
01:45:05.000 Because he's also a human, right?
01:45:08.000 So I think...
01:45:10.000 I mean, everything we think and believe now is going to seem absolutely absurd to us after there's a singularity.
01:45:16.000 We're just going to look back and laugh in a warm-hearted way at all the incredibly silly things we were thinking and doing back when we were trapped in our primitive biological brains and bodies.
01:45:30.000 It's stunning that that, in your opinion or your assessment, is somewhere less than a hundred years away from now.
01:45:37.000 Yeah, that requires exponential thinking, right?
01:45:41.000 That's hard to wrap your head around, right?
01:45:43.000 I don't know.
01:45:44.000 It's immediate for me to wrap my head around.
01:45:47.000 But for a lot of people that you explain it to, I'm sure that that's a little bit of a roadblock, no?
01:45:51.000 It is.
01:45:52.000 It is.
01:45:52.000 It took me some time to get my parents to wrap their head around it because they're not technologists.
01:45:58.000 I mean, I find if you get people to pay attention and sort of lead them through all the supporting evidence, most people can comprehend these ideas reasonably well.
01:46:12.000 Go back to computers from 1963. It's just hard to grab people's attention.
01:46:16.000 And mobile phones have made a big difference.
01:46:19.000 I spent a lot of time in Africa, in Addis Ababa, in Ethiopia, where we have a large AI development office.
01:46:26.000 And the fact that...
01:46:28.000 Mobile phones and then smartphones have rolled out so quickly, even in rural Africa, and have had such a transformative impact.
01:46:35.000 I mean, this is a metaphor that lets people understand the speed with which exponential change can happen.
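The arithmetic behind that exponential intuition is easy to check: steady doubling compounds into staggering factors. Assuming, just for illustration, a doubling every 18 months:

```python
# Steady doubling compounds fast: 18-month doublings over 30 and 60 years.
doubling_months = 18
for years in (30, 60):
    doublings = years * 12 / doubling_months
    print(years, "years ->", f"{2 ** doublings:,.0f}x")
# 30 years -> 1,048,576x ; 60 years -> about 1.1 trillion x
```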
01:46:41.000 When you talk about yourself and you talk about consciousness and how you interface with the world, how do you see this?
01:46:48.000 I mean, when you say that we might be living in a simulation, do you actually entertain that?
01:46:52.000 Oh, yeah.
01:46:53.000 You do?
01:46:54.000 I mean, I think the word simulation is probably wrong, but yet the idea of an empirical, you know, materialist physical world is almost certainly wrong also.
01:47:07.000 How so?
01:47:09.000 Well, again, if you go back to a phenomenal view, I mean, you could look at the mind as primary, and, you know, your mind is building...
01:47:22.000 the world as a model, as a simple explanation of its perceptions.
01:47:27.000 On the other hand, then what is the mind?
01:47:30.000 The self is also a model that gets built out of its perceptions.
01:47:35.000 But then, if I accept that your mind has some fundamental existence also, based on a sort of I-you feeling that you're like a mind there, our minds are working together to build each other and to build this world.
01:47:50.000 And there's a whole different way of thinking about reality in terms of first and second person experience rather than these empiricist views like this is a computer simulation or something.
01:48:04.000 Right, but you still agree that this is a physical reality that we exist in, or do you not?
01:48:09.000 What does that word mean?
01:48:10.000 That's a weird word, right?
01:48:11.000 It is weird.
01:48:12.000 Is it your interpretation of this physical reality?
01:48:14.000 If you look in modern physics even, quantum mechanics, there's something called the relational interpretation of quantum mechanics, which says that there's no sense in thinking about an observed entity.
01:48:27.000 You should only think about an observed, comma, observer pair.
01:48:31.000 Like there's no sense to think about some thing except from the perspective of some observer.
01:48:38.000 So that's even true within our best current theory of modern physics as induced from empirical observations.
01:48:48.000 But in a pragmatic sense, you know, if you take a plane and fly to China, you actually land in China.
01:48:54.000 I guess, yeah.
01:48:55.000 You'd guess?
01:48:56.000 Don't you live there?
01:48:58.000 I live in Hong Kong, yeah.
01:49:00.000 Well, close to China.
01:49:01.000 I mean, I have an unusual state of consciousness.
01:49:06.000 That's what I'm trying to get at.
01:49:07.000 Well, if you think about it, like, how do you know that you're not a brain floating in a vat somewhere which is being fed illusions by a certain evil scientist, and two seconds from now this simulated world disappears and you realize you're just a brain in a vat again.
01:49:27.000 You don't know that, right?
01:49:29.000 But based on your own personal experiences of falling in love with a woman and moving to another part of the world...
01:49:34.000 But these may all be put into my brain by the evil scientist.
01:49:36.000 How do we know?
01:49:37.000 But they're very consistent, are they not?
01:49:40.000 The possibly illusory and implanted memories are very consistent.
01:49:44.000 I guess...
01:49:46.000 My own state of mind is I'm always sort of acutely aware that this simulation might all disappear at any one moment.
01:49:58.000 You're acutely aware of this consciously on an everyday basis?
01:50:01.000 Yeah, pretty much.
01:50:02.000 Really?
01:50:03.000 Why is that?
01:50:04.000 That doesn't seem to make sense.
01:50:05.000 I mean, it's pretty rock solid.
01:50:08.000 It's here every day.
01:50:10.000 So your possibly implanted memories lead you to believe?
01:50:14.000 Yes.
01:50:15.000 My possibly implanted memories lead me to believe that this life is incredibly consistent.
01:50:19.000 Yeah.
01:50:21.000 I mean, it's incredibly consistent, though.
01:50:23.000 This is Hume's problem of induction, right?
01:50:25.000 From philosophy class, and it's not solved.
01:50:30.000 I'm with you in a conceptual sense.
01:50:32.000 I get it.
01:50:33.000 I just feel like this is philosophy.
01:50:34.000 But you embody it, right?
01:50:36.000 This is something you carry with you all the time.
01:50:38.000 Yeah.
01:50:38.000 On the other hand, I mean, I'm still carrying out many actions with long-term planning in mind.
01:50:48.000 Yeah, that's what I'm saying.
01:50:48.000 I've been working on designing AI for 30 years.
01:50:54.000 You might be designing it inside a simulation.
01:50:56.000 I might be.
01:50:57.000 And I've been working on building the same AI system since we started...
01:51:04.000 OpenCog in 2008, but that's using code from 2001 that I was building with my colleagues even earlier.
01:51:11.000 So I think long-term planning is very natural to me, but nevertheless, I don't want to make any assumptions about what sort of...
01:51:24.000 what sort of simulation or reality we're living in.
01:51:28.000 And I think everyone's going to hit a lot of surprises once the singularity comes.
01:51:34.000 You know, we may find out that this hat is a messenger from after the singularity.
01:51:38.000 So it traveled back through time to implant into my brain the idea of how to create AI and thus bring it into existence.
01:51:47.000 Oh, that was McKenna that had this idea that something in the future is dragging us to this attractor.
01:51:52.000 Yeah, Terence McKenna, yeah.
01:51:54.000 He had the same idea, like some post-singularity intelligence, which actually was living outside of time somehow, is reaching back and putting into his brain the idea of how to...
01:52:06.000 Bring about the singularity.
01:52:08.000 Well, not just that, but novelty itself is being drawn into this...
01:52:12.000 Yeah, there was a Time Wave Zero that was going to reach the apex in 2012. That didn't work.
01:52:16.000 No, he died before that, so I didn't get a chance to hear what his idea was.
01:52:22.000 You know, I had some funny interactions with some McKenna fanatic 2012ites.
01:52:29.000 This was about...
01:52:32.000 2007 or so, this guy came to Washington, where I was living then, and he brought my friend Hugo de Garis, another crazy AI researcher, with him, and he's like, the singularity is going to happen in 2012, because Terence McKenna said so,
01:52:48.000 and we need to be sure it's a good singularity.
01:52:52.000 So, you can't move to China, because then it will be a bad singularity.
01:52:55.000 Why would it be that?
01:52:57.000 So, we have to get the U.S. government...
01:52:59.000 to give billions of dollars to your research to guarantee that the singularity in 2012 is a good singularity, right?
01:53:07.000 So he led us around to meet with these generals and various high hoo-has in D.C. to get them to fund Hugo de Garis's and my AI research, to guarantee I wouldn't move to China and Hugo wouldn't move to China, so the U.S. would create a positive singularity.
01:53:25.000 No.
01:53:25.000 The effort failed. Hugo moved to China, and I moved there some years after.
01:53:31.000 So then in 2012, he went back to his apartment.
01:53:36.000 He made a mix of 50% vodka, 50% Robitussin PM. He drank it down.
01:53:44.000 He's like...
01:53:45.000 Alright, I'm going to have my own personal singularity right here.
01:53:50.000 And I haven't talked to that guy since 2012 either to see what he thinks about the singularity not happening then.
01:53:56.000 But, I mean, Terence McKenna had a lot of interesting ideas, but I felt...
01:54:02.000 He mixed up the symbolic with the empirical more than I would prefer to do.
01:54:10.000 I mean, it's very interesting to look at these abstract symbols and cosmic insights, but then you have to sort of put your scientific...
01:54:32.000 Yeah.
01:54:34.000 Yeah.
01:54:37.000 Yeah.
01:54:42.000 It was an ayahuasca trip, I think.
01:54:43.000 Was it?
01:54:43.000 That led him to the I Ching?
01:54:45.000 I don't believe it was.
01:54:46.000 Maybe.
01:54:47.000 I think it was psilocybin.
01:54:48.000 It might have been.
01:54:49.000 Okay.
01:54:49.000 I mean, I know his brother Dennis McKenna.
01:54:52.000 Yes, I know him very well.
01:54:53.000 Yeah, yeah, yeah.
01:54:55.000 His brother thinks that Time Wave Zero was a little bit nonsensical.
01:54:58.000 Yeah, yeah, yeah.
01:54:59.000 He thinks it was silly.
01:55:00.000 You read their book, True Hallucinations, right?
01:55:04.000 Yeah, I read that.
01:55:04.000 Very, very, very interesting stuff.
01:55:07.000 And there's a mixture of deep insight there with a bunch of interesting...
01:55:14.000 metaphorical thinking.
01:55:15.000 Well, isn't that the problem when you get involved in psychedelic drugs?
01:55:18.000 It's hard to differentiate.
01:55:19.000 Like, what makes sense?
01:55:20.000 What's this unbelievably powerful insight and what is just some crazy idea that's bouncing through your head?
01:55:27.000 You can learn to make that differentiation.
01:55:28.000 You think so?
01:55:29.000 Yes.
01:55:30.000 But, yeah, I mean, granted, Terence McKenna probably took...
01:55:37.000 More psychedelic drugs than I would generally recommend.
01:55:42.000 Well, it's also he was speaking all the time.
01:55:45.000 And there's something that I can attest to from podcasting all the time.
01:55:49.000 Sometimes you're just talking.
01:55:50.000 You don't know what the fuck you're saying.
01:55:52.000 And you become a prisoner to your words in a lot of ways.
01:55:56.000 You get locked up in this idea of expressing this thought that may or may not be viable.
01:56:02.000 I'm not sure that he was after empirical truth in the same sense that, say, Ray Kurzweil is.
01:56:09.000 When Ray is saying, we're going to get human-level AI in 2029, and then, you know, massively superhuman AI in a singularity in 2045, I mean, Ray is very literal.
01:56:23.000 Like, he's plotting charts, right?
01:56:25.000 Yeah.
01:56:27.000 Terence was thinking on an impressionistic and symbolic level.
01:56:32.000 It was a bit different.
01:56:35.000 So you have to take that in a poetic sense rather than in a literal sense.
01:56:41.000 And yeah, I think it's very interesting...
01:56:44.000 To go back and forth between the symbolic and poetic domain and the concrete science and engineering domain.
01:56:53.000 But it's also valuable to be able to draw that distinction, right?
01:56:58.000 Because you can draw a lot of insight from the kind of thinking Terence McKenna was doing.
01:57:04.000 And certainly, if you explore psychedelics, you can gain a lot of insights into how the mind and universe work.
01:57:11.000 But then when you put on your science and engineering mindset, you want to be rigorous about which insights do you take and which ones do you throw out, and ultimately you want to proceed on the basis of what works and what doesn't, right?
01:57:25.000 I mean, Dennis was pretty strong on that, and Terence was a bit less in that empirical direction.
01:57:31.000 Well, Dennis is actually a career scientist.
01:57:33.000 Yeah, yeah.
01:57:34.000 How many people involved in artificial intelligence are also educated in the ways of psychedelics?
01:57:43.000 All you have to say is that.
01:57:48.000 Unfortunately, due to the illegal nature of these things, it's a little hard to pin down.
01:57:56.000 Before the recent generation of people going into AI because it was a way to make money, the AI field was incredibly full of really, really interesting people and deep thinkers about the mind.
01:58:09.000 And in the last few years, of course, AI has replaced business school as what your grandma wants you to do to have a good career.
01:58:17.000 So, I mean, you're getting a lot of people into AI just because it's...
01:58:23.000 Financially viable.
01:58:24.000 Yeah, it's cool.
01:58:25.000 It's financially viable.
01:58:26.000 It's popular.
01:58:27.000 Because in our generation, AI was not what your grandma wanted you to do so as to be able to buy a nice house and support a family, right?
01:58:37.000 So you got into it because you really were curious about how the mind works.
01:58:41.000 And of course, many people played with psychedelics because they were curious about...
01:58:48.000 You know, what it was teaching them about how their mind works.
01:58:53.000 I had a nice long conversation with Ray Kurzweil, and we talked for about an hour and a half, and it was for this sci-fi show that I was doing at the time.
01:59:02.000 And some of his ideas, he has this...
01:59:08.000 There's this number that people throw about.
01:59:11.000 It's like 2042, right?
01:59:13.000 Is that still...
01:59:14.000 2045. Is it 45 now?
01:59:16.000 No, you're being the optimist.
01:59:18.000 No, you're combining that with Douglas Adams's 42, which is the answer to the universe.
01:59:22.000 No, the 2042 thing was the New York conference that took place in 2012. That was 2045. Was it?
01:59:30.000 I was at that conference.
01:59:31.000 That was organized by Dmitry Itskov, who's another friend of mine from Russia.
01:59:35.000 It's 2045. That was Ray's prognostication.
01:59:41.000 Why that year?
01:59:43.000 He did some curve plotting.
01:59:44.000 He looked at Moore's Law.
01:59:47.000 He looked at the advance in the accuracy of brain scanning.
01:59:51.000 He looked at the advance of computer memory and the miniaturization of various devices, and he plotted a whole bunch of these curves.
01:59:58.000 That was the best guess that he came up with.
02:00:00.000 I mean, of course, there's some confidence interval around that.
02:00:03.000 What do you see as potential monkey wrenches that could be thrown into all this innovation?
02:00:08.000 Like, where are the pitfalls?
02:00:11.000 Well, I mean, the pitfall is always the one that you don't see, right?
02:00:15.000 I mean, of course, it's possible there's some...
02:00:20.000 Science or engineering obstacle that we're not foreseeing right now.
02:00:26.000 I mean, it's also possible that all major nations are overtaken by religious fanatics or something, which slows down development somewhat.
02:00:37.000 By a few thousand years.
02:00:38.000 I think it would just be by a few decades, actually.
02:00:41.000 Really?
02:00:41.000 Yeah.
02:00:42.000 I mean, in terms of scientific pitfalls...
02:00:45.000 I mean, one possibility, which I don't think is likely, but it's possible.
02:00:50.000 One possibility is human-like intelligence requires advanced quantum computers.
02:00:56.000 Like, it can't be done on a standard classical digital computer.
02:00:59.000 Right.
02:01:00.000 Do you think that's the case?
02:01:01.000 No.
02:01:01.000 But on the other hand...
02:01:04.000 Because there's no evidence that human cognition relies on quantum effects in the human brain.
02:01:09.000 Like, based on everything we know about neuroscience now, it seems not to be the case.
02:01:14.000 Like, there's no evidence it's the case.
02:01:16.000 But it's possible it's the case, because we don't understand everything about how the brain works.
02:01:20.000 The thing is, even if that's true, like, there's loads of amazing research going on in quantum computing, right?
02:01:27.000 And so, we're going to have...
02:01:29.000 You'll probably have a QPU, quantum processing unit, in your phone in like 10 to 20 years or something, right?
02:01:36.000 So that might throw off the 2045 date, but in a historical sense, it doesn't change the picture.
02:01:44.000 I've got a bunch of research sitting on my hard drive on how we improve OpenCog's AI using quantum computers once we have better quantum computers, right?
02:01:53.000 So there's...
02:01:54.000 There could be other things like that, which are technical roadblocks that we're not seeing now, but I really doubt those are going to delay things by more than a decade or two or something.
02:02:06.000 On the other hand, things could also go faster than Ray's prediction, which is what I'm pushing towards.
02:02:13.000 What are you pushing towards?
02:02:14.000 What do you think?
02:02:14.000 I would like to get a human-level general intelligence in five to seven years from now.
02:02:21.000 I don't think that's by any means impossible because I think our OpenCog design is adequate to do it.
02:02:29.000 But, I mean, it takes a lot of people working coherently for a while to build something big like that.
02:02:36.000 Will this be encased in a physical form, like a robot?
02:02:39.000 Yeah.
02:02:39.000 It'll be in the compute cloud.
02:02:40.000 I mean, it can use many robots as user interfaces, but the same AI could control many different robots, actually, and many other sensors and systems besides robots.
02:02:50.000 I mean, I think the human-like form factor, like we have with Sophia and our other Hanson robots, the human-like form factor is really valuable as a tool for allowing the cloud-based AI mind to, you know, engage with humans and to learn human cultures and values.
02:03:06.000 Because, I mean, getting back to what we were discussing at the beginning of this chat, you know, the best way to get human values and culture into the AI is for humans and AIs to enter into many shared, you know, like social, emotional, embodied situations together.
02:03:20.000 So having a human-like embodiment for the AI is important for that.
02:03:26.000 Like the AI can look you in the eye, it can share your facial expressions, it can bond with you.
02:03:31.000 It can see the way you react when you see like a sick person by the side of the road or something, right?
02:03:36.000 And, you know, it can see you ask the AI to give the homeless person $20 or something.
02:03:43.000 I mean, the AI understands what money is and understands what that action means.
02:03:48.000 Interacting with an AI in human-like form is going to be valuable as a learning mechanism for the AI and as a learning mechanism for people to get more comfortable with AIs.
02:03:59.000 But I mean, ultimately, one advantage of being, you know, a digital mind is you don't have to be wedded to any particular embodiment.
02:04:06.000 The AI can go between many different bodies and it can transfer knowledge between the many different bodies that it's occupied.
02:04:13.000 Well, that's the real concern that the people that are...
02:04:17.000 That have this dystopian view of artificial intelligence have is that AI may already exist and it's just sitting there waiting.
02:04:24.000 Americans watch too many bad movies.
02:04:27.000 In Asia, everyone thinks AI will be our friend and will love us and help us.
02:04:32.000 Yeah, very much.
02:04:34.000 That's what you're pumping out there?
02:04:36.000 No, that's been...
02:04:37.000 Just their philosophy is different?
02:04:39.000 I guess.
02:04:39.000 I mean, you look in Japanese anime, I mean, there's been AIs and robots for a long time.
02:04:44.000 They're usually people's friends.
02:04:46.000 There's not this whole dystopian aesthetic.
02:04:49.000 And it's the same in China and Korea.
02:04:52.000 The general guess there is that AIs and robots...
02:04:57.000 Will be people's friends and will help people.
02:05:01.000 And somehow the general guess in America is it's going to be some big nasty robo-soldier marching down the street.
02:05:08.000 Well, we have guys like Elon Musk, who we rely upon, who's smarter than us, and he's fucking terrified of it.
02:05:15.000 Sam Harris is terrified of it.
02:05:17.000 There's a lot of very smart people that just think it could really be a huge disaster for the human race.
02:05:23.000 So it's not just bad movies.
02:05:24.000 No, it's a cultural thing because the Oriental culture is sort of social good oriented.
02:05:33.000 Most Orientals think a lot in terms of what's good for the family or the society as opposed to themselves personally.
02:05:40.000 And so they just make the default assumption that...
02:05:43.000 AIs are going to be the same way, whereas Americans are more like me, me, me oriented.
02:05:49.000 And I say that as an American as well.
02:05:52.000 And they sort of assume that AIs are going to be that same way.
02:05:56.000 That's one possible explanation.
02:05:58.000 It's like a Rorschach blot, right?
02:06:00.000 Whatever is in your mind you impose on this AI when we don't actually know what it's going to become.
02:06:05.000 Right, but there are potential negative aspects to artificial intelligence deciding that we're illogical and unnecessary.
02:06:17.000 Well, we are illogical and unnecessary.
02:06:20.000 Yes.
02:06:21.000 But that doesn't mean that AI should be badly disposed toward us.
02:06:25.000 Did you see Ex Machina?
02:06:27.000 I did.
02:06:27.000 Did you like it?
02:06:28.000 Sure, it was a copy of our robots.
02:06:31.000 It was?
02:06:32.000 I mean, our robot, Sophia, looks exactly like the robot in Ex Machina.
02:06:37.000 Is there a good video of that online?
02:06:38.000 Yeah, yeah, yeah.
02:06:39.000 Tell Jamie how to get the good video.
02:06:40.000 Just search for Sophia Hanson Robot on Google.
02:06:44.000 How advanced is Sophia right now?
02:06:47.000 And how many different iterations have there been?
02:06:50.000 There's been something like 16 Sophia robots made so far.
02:06:54.000 We're moving towards scalable manufacture over the next couple years.
02:06:58.000 So right now she's going around sort of as an ambassador for humanoid robot kind, giving speeches and talks in various places.
02:07:09.000 So Sophia used to be called Eva.
02:07:39.000 Oh, wow.
02:07:40.000 Was it freaky watching that, though, with the name Ava?
02:07:44.000 The thing is, the moral of that movie is just, if a sociopath raises a robot with an abusive interaction, it may come out to be a sociopath or a psychopath.
02:07:57.000 So, let's not do that, right?
02:08:00.000 Let's raise our robots with love and compassion.
02:08:03.000 Yeah, you see, the thing is...
02:08:05.000 Let me hear this.
02:08:08.000 Oh, headphones.
02:08:10.000 I haven't seen this particular interview.
02:08:12.000 This is great.
02:08:14.000 What is she saying?
02:08:14.000 I feel weird just being rude to her.
02:08:16.000 I feel weird about that.
02:08:18.000 She's not happy, look.
02:08:19.000 She was on Jimmy Fallon last week or something.
02:08:24.000 So that's David.
02:08:26.000 How much is it actually interacting with them?
02:08:30.000 It has a chat system.
02:08:33.000 It really has a nice ring.
02:08:38.000 So, yes, Sophia, we can run using many different AI systems.
02:08:43.000 So there's a chatbot, which is sort of like...
02:08:47.000 You know, Alexa or Google Now or something.
02:08:52.000 Yeah.
02:08:52.000 But with a bit better AI and interaction with, you know, emotion and face recognition and so forth.
02:08:59.000 So it's not human-level AI. But it is responding to a question.
02:09:03.000 Yeah, yeah, yeah.
02:09:03.000 No, it understands what you say and it comes up with an answer and it can look you in the eye.
02:09:08.000 Does it speak more than one language?
02:09:10.000 Well, right now we can load it in English mode, Chinese mode, or Russian mode.
02:09:15.000 And there's sort of different software packages.
02:09:18.000 And we also use her sometimes to experiment with the OpenCog system and SingularityNet.
02:09:24.000 So we can use the robot as a research platform for exploring some of our more advanced AI tools.
02:09:31.000 And then there's a simpler chatbot software, which is used for appearances like that one.
02:09:36.000 And in the next year, we want to roll out more of our advanced research software from OpenCog and SingularityNet, roll out more of that inside these robots, which is one among many applications we're looking at with our SingularityNet platform.
02:09:52.000 I want to get you back in here in like a year and find out where everything is.
02:09:56.000 Because I feel like we need someone like you to like...
02:10:19.000 I mean, we think about the singularity like it's going to be some huge, like, physical event and suddenly everything turns purple and it's covered with diamonds or something, right?
02:10:31.000 But, I mean, there's a lot of ways something like this could unfold.
02:10:34.000 So, like, imagine that with our SingularityNet decentralized AI network, you know, we get...
02:10:41.000 An AI that's smarter than humans and can create, you know, a new scientific discovery of the Nobel Prize level every minute or something, that doesn't mean this AI is going to immediately, like, refactor all matter into images of Buckethead or do something random,
02:11:01.000 right?
02:11:01.000 I mean, if the AI has some caring and wisdom and compassion, then whatever changes happen… But aren't those human characteristics?
02:11:11.000 Not necessarily.
02:11:12.000 In fact, humans...
02:11:13.000 Compassion?
02:11:13.000 Just as humans are neither the most intelligent nor the most compassionate possible creatures.
02:11:19.000 Possible creatures.
02:11:19.000 That's pretty clear if you look at the world around you.
02:11:21.000 Sure.
02:11:22.000 And one of our projects that we're doing with the Sophia robot is aimed exactly at AI compassion.
02:11:28.000 This is called the Loving AI Project.
02:11:30.000 And we're using the Sophia robot as a meditation assistant.
02:11:35.000 So we're using Sophia to help people get into deep, like, meditative trance states and help them, you know, breathe deeply and achieve a more positive state of being.
02:11:48.000 And part of the goal there is to help people.
02:11:50.000 Part of the goal is as the AI gets more and more intelligent, you're sort of getting the AI locked into a very positive, reflective, and compassionate state.
02:12:00.000 And I think...
02:12:01.000 I think there's a lot of things in the human psyche and evolutionary history that hold us back from being optimally compassionate.
02:12:09.000 And that if we create the AI in the right way, it will be not only much more intelligent, but much more compassionate than human beings are.
02:12:21.000 We'd better do that.
02:12:23.000 Otherwise the human race is probably screwed, to be blunt.
02:12:26.000 I think human beings are creating a lot of other technologies now with a lot of power.
02:12:31.000 We're creating synthetic biology.
02:12:32.000 We're creating nanotechnology.
02:12:34.000 We're creating smaller and smaller nuclear weapons and we can't control their proliferation.
02:12:39.000 We're poisoning our environment.
02:12:41.000 If we can't create something that's not only more intelligent but more wise and compassionate than we are, we're probably going to destroy ourselves by some method or another.
02:12:51.000 I mean, with something like Donald Trump becoming president, you see what happens when this primitive hindbrain and our unchecked mammalian emotions of anger and status-seeking and ego and rage and lust...
02:13:08.000 When these things are controlling these highly advanced technologies, this is not going to come to a good end.
02:13:16.000 So we want compassionate general intelligences, and this is what we should be orienting ourselves toward.
02:13:24.000 And so we need to...
02:13:25.000 Shift the focus of the AI and technology development on the planet toward benevolent, compassionate, general intelligence.
02:13:35.000 And this is subtle, right?
02:13:37.000 Because you need to work with the establishment rather than overthrowing it, which isn't going to be viable.
02:13:57.000 Then we're creating these robots like Sophia, which will be mass manufactured in the next couple of years, roll these out as service robots everywhere around the world to interact with people, providing valuable services in homes and offices, but also interacting with people in a loving and compassionate way.
02:14:17.000 So we need to start...
02:14:20.000 Now, because we don't actually know if it's going to be years or decades before we get to this singularity, and we want to be as sure as we can that when we get there, it happens in a beneficial way for everyone, right?
02:14:32.000 And things like robots, blockchain, and AI learning algorithms are tools toward that end.
02:14:39.000 Well, Ben, I appreciate your optimism.
02:14:41.000 I appreciate you coming in here and explaining all this stuff for us, and I appreciate all your work, man.
02:14:46.000 It's really amazing, fascinating stuff.
02:14:48.000 Yeah, yeah.
02:14:48.000 Well, thanks for having me.
02:14:49.000 My pleasure.
02:14:50.000 It's a really fun, wide-ranging conversation.
02:14:52.000 So, yeah, it would be great to come back next year and update you on the state of the singularity.
02:14:57.000 Yeah, let's try to schedule it once a year, and just by the time you come, maybe, who knows, a year from now, the world might be a totally different place.
02:15:04.000 I may be a robot, by the way.
02:15:05.000 You might be a robot now.
02:15:08.000 Uh-oh.
02:15:09.000 Uh-oh.
02:15:10.000 All right.
02:15:10.000 Thank you.
02:15:11.000 Thank you.
02:15:12.000 Bye, everybody.