In this episode of the Joe Rogan Experience podcast, I sit down with an AI researcher and author of the book "The Dark Side of AI: How Will It Kill Us?" to talk about the dangers of artificial intelligence.
00:00:17.000This subject of the dangers of AI, it's very interesting because I get two very different responses from people dependent upon how invested they are in AI financially.
00:00:34.000The people that have AI companies or are part of some sort of AI group, all are like, it's going to be a net positive for humanity.
00:00:44.000I think overall we're going to have much better lives.
00:02:50.000I just wonder if AI was sentient, how much it would be a part of sowing this sort of confusion and chaos that would be beneficial to its survival, that it would sort of narrate or make sure that the narratives aligned with its survival.
00:03:17.000I don't think it's at the level yet where it would be able to do this type of strategic planning, but it will get there.
00:03:24.000And when it gets there, how will we know whether it's at that level?
00:03:40.000And so we have to kind of hope that it's not smart enough to realize it doesn't have to turn on us quickly.
00:03:47.000It can just slowly become more useful.
00:03:50.000It can teach us to rely on it, trust it, and over a long period of time we'll surrender control without ever voting on it or fighting against it.
00:05:38.000So when you first started researching this stuff and you were concentrating on bots and all these different things, how far off in the future did you think AI would become a significant problem for the human race?
00:05:54.000For like 50 years, everyone said we're 20 years away.
00:06:35.000And this is, well, AI has already passed the Turing test, allegedly, correct?
00:06:41.000So usually labs instruct them not to participate in such a test, or not to pretend to be human, so they would fail because of this additional set of instructions.
00:06:51.000If you jailbreak it and tell it to work really hard, it will pass for most people.
00:06:56.000Why would they tell it to not do that?
00:06:58.000Well, it seems unethical to pretend to be a human and make people feel like somebody is enslaving those AIs and doing things to them.
00:07:07.000It seems kind of crazy that the people building something that they are sure is going to destroy the human race would be concerned with the ethics of it pretending to be human.
00:07:18.000They are actually more concerned with immediate problems and much less with existential or suffering risks.
00:07:24.000They would probably worry the most about what I'll call N-risks: your model dropping the N-word.
00:07:46.000Other state actors are probably developing something.
00:07:50.000So it becomes this sort of very confusing issue where you have to do it because if you don't, the enemy has it.
00:08:00.000And if they get it, it would be far worse than if we do.
00:08:03.000And so it's almost assuring that everyone develops it.
00:08:08.000Theoretically, that's what's happening right now.
00:08:10.000We have this race to the bottom, kind of prisoner's dilemma where everyone is better off fighting for themselves, but we want them to fight for the global good.
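A minimal sketch of the race-to-the-bottom dynamic just described, with purely illustrative players and payoff numbers (none of this is from the conversation): racing is each lab's dominant strategy even though both slowing down would leave everyone better off.

```python
# Toy prisoner's dilemma for the AI race described above; payoffs are invented.
payoffs = {  # (my move, their move) -> (my utility, their utility)
    ("slow", "slow"): (3, 3),   # both cooperate on safety
    ("slow", "race"): (0, 5),   # the side that races gets ahead
    ("race", "slow"): (5, 0),
    ("race", "race"): (1, 1),   # the race to the bottom
}

def best_response(their_move):
    """Return the move that maximizes my payoff given the other side's move."""
    return max(["slow", "race"], key=lambda my_move: payoffs[(my_move, their_move)][0])

# Racing dominates no matter what the other side does, even though
# (slow, slow) is better for both than (race, race).
print(best_response("slow"), best_response("race"))  # race race
```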
00:08:20.000The thing is, they assume, I think incorrectly, that they can control those systems.
00:08:26.000If you can't control superintelligence, it doesn't really matter who builds it, Chinese, Russians, or Americans, it's still uncontrolled.
00:12:32.000And I go, well, if that's the case, and we only get one chance to get it right, this is not cybersecurity where somebody steals your credit card, you'll give them a new credit card.
00:17:21.000But the point is like the things that we put meaning in, it's only us.
00:17:28.000A supermassive black hole doesn't give a shit about a great song.
00:17:32.000And they talk about some super value, super culture, super things super intelligence would like, and it's important that they are conscious and experience all that greatness in the universe.
00:17:43.000But I would think that they would look at us the same way we look at chimpanzees.
00:17:49.000We would say, yeah, they're great, but don't give them guns.
00:17:51.000Yeah, they're great, but don't let them have airplanes.
00:17:54.000Don't let them make global geopolitical decisions.
00:18:01.000So there are many reasons why they can decide that we are dangerous.
00:18:36.000It's not just the fit or the fabric, it's the intention behind everything they do.
00:18:40.000True Classic was built to make an impact.
00:18:43.000Whether it's helping men show up better in their daily lives, giving back to underserved communities, or making people laugh with ads that don't take themselves too seriously.
00:18:56.000Tailored where you want it, relaxed where you need it.
00:18:59.000No bunching, no stiff fabric, no BS, just a clean, effortless fit that actually works for real life.
00:19:05.000Forget overpriced designer brands, ditch the disposable fast fashion.
00:19:11.000True Classic is built for comfort, built to last, and built to give back.
00:19:16.000You can grab them at Target, Costco, or head to trueclassic.com slash Rogan and get hooked up today.
00:19:23.000Yeah, and there's no reason why they would not limit our freedoms.
00:19:30.000If there is something only a human can do, and I don't think there is anything like that, but let's say we are conscious, we have internal experiences, and they can never get it.
00:19:40.000I don't believe it, but let's say it was true, and for some reason they wanted to have that capability.
00:19:45.000They would need us and give us enough freedom to experience the universe, to collect those qualia, to kind of engage with what is fun about being a living human being, what makes it meaningful.
00:19:58.000Right, but that's such an egotistical perspective, right?
00:20:01.000That we're so unique that even superintelligence would say, wow, I wish I was human.
00:20:06.000Humans have this unique quality of confusion and creativity.
00:20:11.000There is no value in it, mostly because we can't even test for it.
00:20:13.000I have no idea if you are actually conscious or not.
00:20:16.000So how valuable can it be if I can't even detect it?
00:20:21.000Only you know what ice cream tastes like to you.
00:21:10.000And I've tried so hard to listen to these people that don't think that it's a problem and listen to these people that think that it's going to be a net positive for humanity.
00:21:45.000When you think about the future of the world and you think about these incredible technologies scaling upwards and exponentially increasing in their capability, what do you see?
00:22:01.000Like, what do you think is going to happen?
00:22:03.000So there are many reasons to think they may cancel us for whatever reasons.
00:22:08.000We started talking about some game theoretical reasons for it.
00:22:11.000If we are successful at controlling them, I can come up with some ways to provide sort of partial solution to the value alignment problem.
00:22:20.000It's very hard to value align 8 billion people, all the animals, you know, everyone, because we disagree.
00:22:28.000So we have advanced virtual reality technology.
00:22:31.000We can technically give every person their own virtual universe where you decide what you want to be.
00:22:36.000You're a king, you're a slave, whatever it is you're into, and you can share with others, you can visit their universes.
00:22:42.000All we have to do is figure out how to control the substrate, the super intelligence running all those virtual universes.
00:22:48.000And if we manage to do that, we solve at least part of the value alignment problem, which is super difficult. How do you reconcile different preferences? It's multi-objective optimization, essentially.
00:22:58.000How do you get different objectives to all agree?
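A hedged toy example of why that is hard (the people, policies, and numbers are invented for illustration): collapsing different preferences into one objective forces a choice of weights, and different weights pick different "aligned" answers.

```python
# Toy multi-objective aggregation; all values are illustrative.
preferences = {  # each person's utility for two candidate policies
    "alice": {"policy_a": 1.0, "policy_b": 0.3},
    "bob":   {"policy_a": 0.1, "policy_b": 0.9},
}

def best_policy(weights):
    """Pick the policy that maximizes a weighted sum of individual utilities."""
    return max(["policy_a", "policy_b"],
               key=lambda p: sum(w * preferences[person][p] for person, w in weights.items()))

print(best_policy({"alice": 0.5, "bob": 0.5}))  # policy_b
print(best_policy({"alice": 0.8, "bob": 0.2}))  # policy_a: the weights decide what "aligned" means
```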
00:23:02.000But when you think about how it plays out, if you're alone at night and you're worried, what do you see?
00:23:27.000Maybe some people will find some other kind of artificial things to do.
00:23:33.000But for most people, their job is their definition of who they are, what makes a difference to them, especially in professional circles.
00:23:41.000So losing that meaning will have a terrible impact on society.
00:23:45.000We always talk about unconditional basic income.
00:23:48.000We never talk about unconditional basic meaning.
00:23:51.000What are you doing with your life if basic needs are provided for you?
00:24:11.000What do you see when you think of that?
00:24:16.000It's hard to be specific about what it can do and what specific ways of torture it can come up with and why.
00:24:26.000Again, if we're looking at worst-case scenarios, I found this set of papers about what happens when young children have epileptic seizures, really bad ones.
00:24:38.000And what sometimes helps is to remove half of your brain.
00:25:40.000But if they manage to do it, they can really put any type of payload into it.
00:25:45.000So think about all the doomsday cults, psychopaths, anyone providing their set of goals into the system.
00:25:53.000But aren't those human characteristics?
00:25:54.000I mean, those are characteristics that I think, if I had to guess, exist because in the past there was some sort of a natural selection benefit to being a psychopath in the days of tribal warfare.
00:26:11.000That if you were the type of person that could sneak into a tribe in the middle of the night and slaughter innocent women and children, your genes would pass on.
00:26:51.000So think about like weird time travel effects.
00:26:53.000Right now, if you're not helping to create super intelligence, once it comes into existence, it will punish you really hard for it.
00:26:59.000And punishment needs to be so bad that you start to help just to avoid that.
00:27:07.000My thought about it was that it would just completely render us benign, that it wouldn't be fearful of us if we had no control, that it would just sort of let us exist and it would be the dominant force on the planet.
00:27:28.000If human beings have no control over all of the different things that we have control over now, like international politics, control over communication, if we have none of that anymore and we're reduced to a subsistence lifestyle, then we would be no threat.
00:28:10.000Again, I cannot predict what it can do, but if it needs to turn the planet into fuel, raise temperature of a planet, cool it down for servers, whatever it needs to do, it wouldn't be concerned about your well-being.
00:28:21.000It wouldn't be concerned about any life, right?
00:28:24.000Because it doesn't need biological life in order to function.
00:28:26.000As long as it has access to power, and assuming that it is far more intelligent than us, there's abundant power in the universe.
00:28:37.000Just the ability to harness solar would be an infinite resource, and it would be completely free of being dependent upon any of the things that we utilize.
00:28:50.000And again, we're kind of thinking what we would use for power.
00:28:53.000If it's smarter than us, if it does novel research in physics, it can come up with completely novel ways of harnessing energy, getting energy.
00:29:00.000So I have no idea what side effects that would have for climate.
00:31:58.000I don't think there are actually good quantum computers out there yet, but I think if we get stuck for 10 years, let's say that's the next paradigm.
00:32:08.000So what do you mean by you don't think there's good quantum computing out there?
00:32:12.000So we constantly see articles coming out saying we have a new quantum computer.
00:32:26.000So there is a threat from quantum computers in terms of breaking cryptography, factoring large integers.
00:32:33.000And if they were actually making progress, we would see with every article, now we can factor a 256-bit number, a 1024-bit number.
00:32:43.000In reality, I think the largest number we can factor is like 15, literally, not 15 to a power, like just 15.
00:32:49.000There has been no progress in applying it to Shor's algorithm, last time I checked.
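For scale, a rough sketch of the gap being described (the code is only illustrative): published quantum factoring demonstrations are at the level of 15, while the cryptography at stake rests on numbers around 2048 bits long.

```python
# Classical trial division factors 15 instantly; that's roughly the scale
# quantum factoring demos have reached so far, per the point above.
def trial_division(n):
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1

print(trial_division(15))   # (3, 5)
print(len(str(2 ** 2048)))  # 617 decimal digits: the size of an RSA-2048-scale number
```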
00:32:54.000But I've read all these articles about quantum computing and its ability to solve equations that would take conventional computing an infinite number of years, and it can do it in minutes.
00:33:10.000Those equations are about quantum states of a system.
00:33:13.000It's kind of like what is it for you to taste ice cream?
00:33:17.000You compute it so fast and so well, and I can't, but it's a useless thing to compute.
00:33:22.000It doesn't compute solutions to real-world problems we care about in conventional computers.
00:33:48.000When you see these articles when they're talking about quantum computing and some of the researchers are equating it to the multiverse, they're saying that the ability that these quantum computers have to solve these problems very quickly seems to indicate that it is in contact with other realities.
00:34:44.000Yeah, the problem with subjects like that, and particularly articles that are written about things like this, is that it's designed to lure people like me in.
00:34:56.000Where you read it and you go, wow, this is crazy.
00:35:57.000Is it just this like fun mental masturbation exercise?
00:36:01.000It depends on what variant of it you look at.
00:36:04.000So if you're just saying we have multiple virtual realities, like kids playing virtual games and each one has their own local version of it, that makes sense.
00:37:03.000We are at the point where we can create very believable, realistic virtual environments.
00:37:08.000Maybe the haptics are still not there, but in many ways, visually, sound-wise is getting there.
00:37:13.000Eventually, I think most people agree, it will have the same resolution as our physics.
00:37:18.000We're also getting close to creating intelligent agents.
00:37:21.000Some people argue they are conscious already or will be conscious.
00:37:25.000If you just take those two technologies and you project it forward and you think they will be affordable one day, a normal person like me or you can run thousands, billions of simulations, then those intelligent agents, possibly conscious ones, will most likely be in one of those virtual worlds, not in the real world.
00:37:44.000In fact, I can, again, retrocausally place you in one.
00:37:48.000I can commit right now to run billion simulations of this exact interview.
00:37:53.000So the chances are you're probably in one of those.
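The arithmetic behind that claim, as a toy sketch (the billion-copy figure is the hypothetical commitment just mentioned, not a measurement): if the copies are indistinguishable from the inside, the chance of being the one original run is 1 in N+1.

```python
# Illustrative simulation-argument arithmetic.
N = 1_000_000_000             # hypothetical number of committed simulated copies
p_base_reality = 1 / (N + 1)  # chance this is the single original interview
print(p_base_reality)         # ~1e-09
```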
00:37:59.000Because if this technology exists and if we're dealing with superintelligence, so if we're dealing with AI and AI eventually achieves super intelligence, why would it want to create virtual reality for us and our consciousness to exist in?
00:38:20.000It seems like a tremendous waste of resources just to fascinate and confuse these territorial apes with nuclear weapons.
00:38:52.000Maybe somebody managed to control them and is trying to figure out which Starbucks coffee sells best, and they need to run an Earth-sized simulation to see what sells best.
00:39:02.000Maybe they're trying to figure out how to do AI research safely and make sure nobody creates dangerous superintelligence.
00:39:09.000So we're running many simulations of the most interesting moment ever.
00:39:36.000But isn't it also a good chance that it hasn't been done yet?
00:39:40.000And isn't it a good chance that what we're seeing now is that the potential for this to exist is inevitable?
00:39:48.000That there will one day, if you can develop a technology, and we most certainly will be able to, if you look at where we are right now in 2025 and you scale forward 50, 60 years, there will be one day a virtual simulation of this reality that's indistinguishable from reality.
00:40:13.000But also, isn't it possible that it has to be invented one day, but hasn't yet?
00:40:22.000It's also possible, but then we find ourselves in this very unique moment where it's not invented yet, but we are about to invent all this technology.
00:40:52.000I feel like if virtual reality does exist, there has to be a moment where it doesn't exist and then it's invented.
00:41:00.000Why wouldn't we assume that we're in that moment?
00:41:02.000Especially if we look at the scaling forward of technology from MS-DOS to user interfaces of like Apple and then what we're at now with quantum computing and these sort of discussions.
00:41:18.000Isn't it more obvious that we can trace back the beginning of these things and we can see that we're in the process of this, that we're not in a simulation.
00:41:30.000We're in the process of eventually creating one?
00:42:09.000You have Stalin, you have all these problematic human beings and all the different reasons why we've had to do certain things and initiate world conflicts.
00:42:17.000Then you've had the contrarians that talk and say, actually, that's not what happened.
00:43:15.000I give you a book which has every conceivable sentence in it, every one. Like, would you read it?
00:43:21.000It's a lot of garbage you have to go through to find anything interesting.
00:43:27.000Well, is it just that we're so limited cognitively because we do have a history, at least in this simulation, we do have a history of, I mean, there was a gentleman that, see if you could find this.
00:43:58.000So 9,000 years ago, his ancestor lived.
00:44:02.000And so we have this limitation of our genetics.
00:44:07.0009,000 years ago, wherever this guy lived, he was probably a hunter-gatherer, with probably very limited language, very limited skills in terms of making shelter.
00:44:21.000And who knows if even he knew how to make fire.
00:44:25.000And then here, this 9,000-year-old DNA just turned human history on its head.
00:45:19.000Maybe it's just that we're so limited because we do have this, at least again, in this simulation, we're so limited in our ability to even form concepts, because we have these primitive brains; the architecture of the human brain itself is just not capable of interfacing with the true nature of reality.
00:45:44.000So we give this primitive creature this sort of basic understanding, these blueprints of how the world really works.
00:46:21.000You have photons that are quantumly entangled.
00:46:25.000This doesn't even make sense to us, right?
00:46:28.000So is it that the universe itself, the reality of it, is so complex that we're given this sort of, you know, Atari framework, like we're giving an Atari to this monkey?
00:46:48.000It kind of makes sense as a simulation theory because all those special effects you talk about, so speed of light is just the speed at which your computer updates.
00:46:57.000Entanglement makes perfect sense if all of it goes through your processor, not directly from pixel to pixel.
00:47:03.000And rendering, there are quantum physics experiments where, if you observe things, they render differently, which is what we do in computer graphics.
00:48:22.000Maybe that would be too traumatic, right?
00:48:25.000To have a complete memory of all of the things that they had gone through to get to the 21st century.
00:48:31.000Maybe that would be so overwhelming to you that you would never be able to progress because you would still be traumatized by, you know, whatever that 9,000-year-old man went through.
00:48:40.000I don't have complete memory of my existence.
00:48:42.000I vividly remember maybe 4% of my existence, very little of my childhood.
00:48:46.000So you can apply the same filtering, but remember useful things, like how to speak.
00:48:53.000Maybe losing certain memories is actually beneficial.
00:48:58.000Because one of the biggest problems that we have is PTSD, right?
00:49:02.000So we have, especially people that have gone to war and people that have experienced extreme violence.
00:49:09.000This is obviously a problem with moving forward as a human being.
00:49:14.000And so it would be beneficial for you to not have all of the past lives and all the genetic information that you have from all the 9,000 years of human beings existing in complete total chaos.
00:49:39.000Right, but then maybe you'd have a difficulty in having a clean slate and moving forward.
00:49:46.000Like, if you look at some of Pinker's work and some of these other people that have looked at the history of the human race, as chaotic and violent as it seems to be today, statistically speaking, this is the safest time ever to be alive.
00:49:59.000And maybe that's because over time we have recognized that these are problems.
00:50:05.000And even though we're slow to resolve these issues, we are resolving them in a way that's statistically viable.
00:50:16.000You can then argue in the opposite direction.
00:50:18.000You can say it would help to forget everything other than the last year.
00:50:22.000You'll always have that fresh restart with you.
00:50:24.000But then you wouldn't have any lessons.
00:50:26.000You wouldn't have character development.
00:50:27.000But you see how one of those has to make sense either way.
00:50:30.000But a certain amount of character development is probably important for you to develop discipline and the ability to delay gratification, things like that.
00:50:42.000Multi-generational experience would certainly beat single point of experience.
00:51:58.000He asked, what's outside the simulation?
00:52:00.000That's the most interesting question one can ask.
00:52:03.000In one of the papers, I look at a technique in AI safety called AI boxing, where we put AIs in kind of virtual prison to study it, to make sure it's safe, to limit input-output to it.
00:52:15.000And the conclusion is basically if it's smart enough, it will eventually escape.
00:52:27.000If it's smart enough, will it kind of go, oh, you're also in a virtual box, and either show us how to escape or fail to escape?
00:52:35.000Either way, either we know it's possible to contain super intelligence or we get access to the real information.
00:52:42.000And so if it's impossible to contain superintelligence, and if there is a world that we can imagine where a simulation exists that's indistinguishable from reality, we're probably living in it.
00:53:00.000Well, we don't know if it's actually the same as reality.
00:53:03.000It could be a completely weird kind of Simpsons-looking simulation.
00:53:06.000We're just assuming it's the same reality.
00:53:26.000In science, we study things about the moment of Big Bang, the properties of that moment.
00:53:31.000We don't know what caused it, and anything before it is obviously not accessible from within our universe, but there are some things you can learn.
00:53:40.000If we're in a simulation, we can learn that the simulators don't care about your suffering.
00:53:46.000You can learn that they don't mind you dying.
00:53:48.000We can learn things just by observing simulation around us.
00:53:53.000Well, here's the question about all that other stuff, like suffering and dying.
00:54:01.000Do those factors exist in order to motivate us to improve the conditions of the world that we're living in?
00:54:10.000Like if we did not have evil, would we be motivated to be good?
00:54:16.000Do you think that these factors exist?
00:54:20.000I've talked about this before, but the way I think about the human race is if I was studying the human race from afar, if I was some person from another planet with no understanding of any of the entities on Earth, I would look at this one apex creature and I would say, what is this thing doing?
00:56:04.000If the goal was just to kind of motivate us, you could have much lower levels as the maximum.
00:56:10.000Right, but if you want to really motivate people, you have to, you know, like the only reason to create nuclear weapons is you're worried that other people are going to create nuclear weapons.
00:56:19.000Like, if you want to really motivate someone, you have to have evil tyrants in order to justify having this insane army filled with bombers and hypersonic missiles.
00:56:28.000Like, if you really want progress, you have to be motivated.
00:56:33.000I think at some point we stop fully understanding how bad things are.
00:56:36.000So let's say you have a pain scale from zero to infinity.
01:01:12.000With simulation, what's interesting, it's not just the last couple of years, then we got computers.
01:01:17.000If you look at religions, world religions, and you strip away all the local culture, like take Saturday off, take Sunday off, donate this animal, donate that animal, what they all agree on is that there is superintelligence which created a fake world and this is a test, do this or that.
01:01:33.000They describe it, like if you went to the jungle and told a primitive tribe about my paper on simulation theory, that's what they would know three generations later: God, religion, that's what they got out of it.
01:01:46.000But they don't think it's a fake world.
01:02:17.000I worry that that's really the nature of the universe itself.
01:02:21.000That it is actually created by human beings creating this infinitely intelligent thing that can essentially harness all of the available energy and power of the universe and create anything it wants.
01:04:03.000Some kid just set an experiment, run a billion random simulations, see what comes out of it.
01:04:08.000What you said about us creating new stuff, maybe it's a startup trying to develop new technology and we're running a bunch of humans to see if we can come up with a new iPhone.
01:04:22.000If you're attached to this idea, and I don't know if you're attached to this idea, but if you are attached to this idea, what's outside of this idea?
01:04:30.000Like if this simulation is, if it's paused, what is reality?
01:04:38.000So there seems to be a trend to converge on certain things.
01:04:42.000Agents, which are smart enough, tend to converge on some instrumental goals, not terminal goals.
01:04:47.000Terminal goals are things you prefer, like I want to collect stamps.
01:04:52.000But acquiring resources, self-protection, control, things like that tend to be useful in all situations.
01:05:00.000So, all the smart enough agents will probably converge on that set.
01:05:04.000And if they train on all the data, or they do zero-knowledge training, meaning they're really just discovering the basic structure of physics, it's likely they will all converge on one similar architecture, one super agent.
01:06:47.000I think what you were saying earlier about this being the answer to the Fermi paradox, it makes a lot of sense.
01:06:54.000Because I've tried to think about this a lot since AI started really ramping up its capability.
01:07:04.000And I was thinking, well, if we do eventually create superintelligence, and if this is this normal pattern that exists all throughout the universe, well, you probably wouldn't have visitors.
01:07:16.000You probably wouldn't have advanced civilizations.
01:07:20.000They wouldn't exist because everything would be inside some sort of a digital architecture.
01:07:30.000Another one is that they try to acquire more resources, capture other galaxies for compute, and then you would see this wall of computronium coming to you, but we don't see it.
01:09:30.000But when I had him in here, I was like, it's like I'm talking to a politician that is in the middle of a presidential term or a presidential election cycle where they were very careful with what they say.
01:09:48.000Everything has been vetted by a focus group and you don't really get a real human response.
01:09:55.000Everything was like, yeah, interesting.
01:11:48.000Well, it's also, you know, you want to be very kind here, right?
01:11:54.000You don't, but you've got to assume, and I know my own intellectual limitations in comparison to some of the people that I've had, like Roger Penrose or, you know, Elon or many of the people that I've talked to.
01:12:06.000I know my mind doesn't work the way their mind works.
01:12:09.000So there are variabilities that are, whether genetic, predetermined, whether it's just the life that they've chosen and the amount of information that they've digested along the way and been able to hold on to.
01:12:21.000But their brain is different than mine.
01:12:24.000And then I've met people where I'm like, there's nothing there.
01:12:34.000Like, there's certain human beings that you run into in this life and you're like, well, is this because this is the way that things get done?
01:12:43.000And the only way things get done is you need a certain amount of manual labor and not just young people that need a job because they're, you know, in between high school and college and they're trying to do, so you need somebody who can carry things for you.
01:12:57.000No, maybe it's, you need roles in society, and occasionally you have a Nikola Tesla.
01:13:05.000You know, occasionally you have one of these very brilliant innovators that elevates the entirety of the human race.
01:13:14.000But for the most part, as this thing is playing out, you're going to need a bunch of people that are paperwork filers.
01:13:20.000You're going to need a bunch of people that are security guards in an office space.
01:13:22.000You're going to need a bunch of people that aren't thinking that much.
01:13:26.000They're just kind of existing and they can't wait for 5 o'clock so they can get home and watch Netflix.
01:14:01.000And, you know, the person who has the largest IQ, the largest at least registered IQ in the world, is this gentleman who recently posted on Twitter about Jesus that he believes Jesus is real.
01:15:03.000Like, if you're really intelligent, you'd have social intelligence as well.
01:15:07.000You'd have the ability to formulate a really cool tribe.
01:15:10.000There's a lot of intelligence that's not as simple as being able to solve equations and answer difficult questions.
01:15:18.000There's a lot of intelligence in how you navigate life itself and how you treat human beings and the path that you choose in terms of, like we were talking about, delayed gratification and there's a certain amount of intelligence in that, a certain amount of intelligence in discipline.
01:15:35.000There's a certain amount of intelligence in forcing yourself to get up in the morning and go for a run.
01:16:30.000He's invested, so he's just like, well, I think he probably has really good doctors and really good medical care that counteracts his poor choices.
01:16:39.000But we're not in a world where you can spend money to buy life extension.
01:16:44.000No matter how many billions you have, you're not going to live to 200 right now.
01:17:49.000Well, we don't know that it doesn't scale to humans.
01:17:52.000We do know that we share a lot of characteristics, biological characteristics of these mammals.
01:17:57.000And it makes sense that it would scale to human beings.
01:17:59.000But the thing is, it hasn't been done yet.
01:18:02.000So, if it's the game that we're playing, if we're in the simulation, if we're playing Half-Life or whatever it is, and we're at this point of the game where we're like, oh, you know, how old are you, Roman?
01:20:18.000Is that you've allowed money to get so deeply intertwined with the way decisions are made.
01:20:24.000But it feels like money gets canceled.
01:20:26.000Each side gets a billion-dollar donation, and then it's an actual election.
01:20:30.000Sort of, except it's like the Bill Hicks joke.
01:20:33.000It's like there's one guy holding two puppets, you know, the two politicians are puppets held by one guy.
01:20:41.000This is my thinking about AI in terms of super intelligence and just computing power in general in terms of the ability to solve encryption.
01:20:54.000All money is essentially now just numbers somewhere.
01:21:09.000And once encryption is tackled, so is the ability to hold on to it and to acquire mass resources and hoard those resources.
01:21:19.000Like, this is the question that people always have with poor people.
01:21:22.000Well, this guy's got, you know, $500 billion.
01:21:25.000Why doesn't he give it all to the world and then everybody would be rich?
01:21:29.000I actually saw that on CNN, which is really hilarious.
01:21:32.000Someone was talking about Elon Musk, saying that Elon Musk could give everyone in this country a million dollars and still have billions left over. I'm like, do you have a calculator on your phone, you fucking idiot?
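The calculator check being pointed to, using rough illustrative figures for US population and net worth (neither number is stated in the conversation):

```python
# Giving every American $1,000,000 would cost hundreds of times more than
# any individual fortune; the figures below are rough, illustrative estimates.
us_population = 340_000_000
cost = us_population * 1_000_000
net_worth = 500_000_000_000
print(f"cost: ${cost:,}")                 # $340,000,000,000,000 (~$340 trillion)
print(f"ratio: {cost / net_worth:.0f}x")  # ~680x a ~$500 billion fortune
```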
01:23:14.000And then all of a sudden it's like I'm having conversations with world leaders and I'm turning down a lot of them because I don't want to talk to them.
01:23:29.000But through whatever this process is, I have been able to understand what's valuable as a human being and to not get caught up in this bizarre game that a lot of people are getting caught up in because they're chasing this thing that they think is impossible to achieve.
01:23:46.000And then once they achieve a certain aspect of it, a certain number, then they're terrified of losing that.
01:23:53.000So then they change all of their behavior in order to make sure that this continues.
01:23:58.000And then it ruins the whole purpose of getting there in the first place.
01:24:03.000Most people start poor, then they get to middle class and they think that change in quality of life is because of money and it will scale to the next level.
01:24:17.000Then you go Elvis and you just get on pills all day and get crazy and, you know, completely ruin your life.
01:24:24.000And that happens to most, especially people that get not just wealthy, but famous too.
01:24:29.000Fame is the big one because I've seen that happen to a lot of people that accidentally became famous along the way.
01:24:36.000You know, certain public intellectuals that took a stance against something and then all of a sudden they're prominent in the public eye and then you watch them kind of go crazy.
01:24:44.000Well, it's because they're reading social media and they're interacting with people constantly and they're just trapped in this very bizarre version of themselves that other people have sort of created.
01:26:03.000Well, there's a difference, right, with public intellectuals, right?
01:26:09.000Because your ideas, as controversial as they may be, are very valid and they're very interesting.
01:26:16.000And so then it sparks discourse and it sparks a lot of people that feel voiceless because they disagree with you and they want to attack you.
01:27:58.000And you're all chanting and screaming together and you're marching and people do like very irrational things that way.
01:28:03.000But the type of people that want to be engaged in that, generally speaking, aren't doing well.
01:28:09.000Like, the number of people that are involved in protests is always proportionate to the number of people that live in a city, right?
01:29:15.000But I never was sure that the impression average people get of them is positive for the cause.
01:29:22.000Then I see protesters block roads, things like that.
01:29:25.000I don't usually have very positive impression of that.
01:29:28.000And I'm concerned that it's the same here.
01:29:30.000So maybe they can do a lot in terms of political influence, calling senators, whatnot, but just this type of aggressive activism may backfire.
01:29:39.000Well, the aggressive activism, like blocking roads for climate change, is the most infuriating because it's these self-righteous people that have really fucked up, confused, chaotic lives, and all of a sudden they found a purpose.
01:29:52.000And their purpose is to lie down on the roads and hold up a sign to block climate change when there's a mother trying to give birth to her child and is freaking out because they're stuck in this fucking traffic jam because of this entitled little shithead that thinks that it's a good idea to block the road for climate change.
01:30:47.000There was a recent protest in Florida where they had that, where these people would get out in the middle of the road while the light was red, hold up their signs, and then as soon as the light turned yellow on the green side, they'd fucking get out of the road real quick because they know the law, which is, I don't know if that's a solution, but they're doing it on the highways in Los Angeles.
01:31:08.000I mean, they did it all through the George Floyd protest, they do it for climate protests, they do it for whatever the chance they get to be significant.
01:31:17.000Like, I am being heard, you know, my voice is meaningful.
01:31:40.000So we just need to find a way to project those voices, amplify them, which is very hard with our current system of social media where everyone screams at the same time.
01:32:01.000It's preferable because I think there is progress in all these voices slowly making a difference.
01:32:06.000But then you have the problem with a giant percentage of these voices are artificial.
01:32:14.000A giant percentage of these voices are bots or are at least state actors that are being paid to say certain things and inflammatory responses to people, which is probably also the case with anti-AI activism.
01:32:31.000You know, I mean, when you did this podcast, what was the thing that they were upset at you for?
01:32:35.000Like with the mostly negative comments?
01:32:37.000I think they just like saying negative comments.
01:33:18.000And the type of people that do engage in these prolonged arguments, they're generally mentally ill.
01:33:24.000And people that I personally know that are mentally ill, that are on Twitter 12 hours a day, just constantly posting inflammatory things and yelling at people and starting arguments.
01:34:24.000Such a recent factor in human discourse.
01:34:29.000Neuralink, direct brain spam, hacking.
01:34:33.000That's what I was going to get to next.
01:34:36.000Because if there is a way that the human race does make it out of this, my fear is that it's integration.
01:34:45.000My fear is that we stop being a human and that the only real way for us to not be a threat is to be one of them.
01:34:56.000And when you think about human computer interfaces, whether it's Neuralink or any of the competing products that they're developing right now, that seems to be sort of the only biological pathway forward with our limited capacity for disseminating information and for communicating and even understanding concepts.
01:35:18.000Well, what's the best way to enhance that?
01:35:20.000The best way to enhance that is some sort of artificial injection because biological evolution is very slow.
01:36:24.000Like, you're going to need something to survive off of.
01:36:28.000But biological evolution being so painstakingly slow, whereas technological evolution is so breathtakingly fast, the only way to really survive is to integrate.
01:36:42.000What are you contributing in that equation?
01:37:06.000Extinction with extra steps and then we become...
01:37:18.000It'd be like, what the fuck are you talking about?
01:37:20.000Yeah, you're going to be eating terrible food.
01:37:21.000And you're just going to be flying around.
01:37:24.000And you're going to be staring at your phone all day.
01:37:26.000And you're going to take medication to go to sleep because you're not going to be able to sleep.
01:37:31.000And you're going to be super depressed because you're living this biologically incompatible life that's not really designed for your genetics.
01:38:34.000But like biologically, that's compatible with us.
01:38:39.000Like that, that's like whatever human reward systems have evolved over the past 400,000 plus years or whatever we've been Homo sapiens, that seems to be like biologically compatible with this sort of harmony.
01:38:54.000Harmony with nature, harmony with our existence, and everything else outside of that, when you get into big cities, like the bigger the city, the more depressed people you have, and more depressed people by population, which is really weird.
01:39:08.000You know, it's really weird that as we progress, we become less happy.
01:42:08.000And then there's also this sort of compliance by virtue of understanding that you're vulnerable, so you just comply because there is no privacy.
01:42:20.000Because it does have access to your thoughts.
01:42:22.000So you tail your thoughts in order for you to be safe and so that you don't feel the pain and suffering.
01:42:28.000We don't have any experimental evidence on how it changes you.
01:42:31.000You may start thinking in certain ways to avoid being punished or modified.
01:42:36.000And we know that that's the case with social media.
01:42:38.000We know that attacks on people through social media will change your behavior and change the way you communicate.
01:42:44.000I mean, most people look at their post before posting and go, like, should I be posting this?
01:42:49.000Not because it's illegal or inappropriate, but just like every conceivable misinterpretation of what I want to say, like in some bizarre language, that means something else.
01:42:58.000Let me make sure Google doesn't think that.
01:43:02.000And then there's also, no matter what you say, people are going to find the least charitable version of what you're saying and try to take it out of context or try to misinterpret it purposely.
01:43:16.000So what does the person like yourself do when use of Neuralink becomes ubiquitous, when it's everywhere?
01:44:10.000So if there is a narrow implant, ideally not a surgery-based one, but like an attachment to your head, like those headphones, and it gives me more memory, perfect recollection, things like that, I would probably engage with.
01:44:24.000Yeah, but isn't that a slippery slope?
01:44:26.000It is, but again, we are in a situation where we have very little choice, become irrelevant or participate.
01:44:33.000I think we saw it with Elon just now.
01:44:40.000But at some point, he says he realized it's happening anyways, and it might as well be his super intelligence killing everyone.
01:44:47.000Well, I don't think he thinks about it that way.
01:44:50.000I think he thinks he has to develop the best version of superintelligence, the same way he felt like the real issues with social media were that it had already been co-opted and had already been taken over essentially by governments and special interests, and they were already manipulating the truth and manipulating public discourse and punishing people who stepped out of line.
01:45:14.000And he felt like, and I think he's correct, I think that he felt like if he didn't step in and allow a legitimate free speech platform, free speech is dead.
01:45:25.000I think we were very close to that before he did that.
01:45:29.000And as much as there's a lot of negative side effects that come along with that, you do have the rise of very intolerant people that have platforms now.
01:45:41.000And to deny them a voice, I don't think makes them less strong.
01:45:45.000I think it actually makes people less aware that they exist and it makes them... And you have community notes, you have other people commenting, responding.
01:46:26.000I would love to know what, you know, I'm sure he's probably scaled this out in his head.
01:46:32.000And I would like to know, like, what is his solution, if he thinks there is one that's even viable.
01:46:37.000My understanding is he thinks if it's from zero principles, first principles, it learns physics, it's not biased by any government or any human, the thing it will learn is to be reasonably tolerant.
01:46:50.000It will not see a reason in destroying us because we contain information.
01:46:55.000We have biological storage of years of evolutionary experimentation.
01:47:03.000So I think, to the best of my approximation, that's his model right now.
01:47:07.000Well, that's my hope, is that it's benevolent and that it behaves like a superior intelligence, like the best case scenario for a superior intelligence.
01:47:18.000Did you see that exercise that they did where they had three different AIs communicating with each other and they eventually started expressing gratitude towards each other and speaking in Sanskrit?
01:47:29.000I think I missed that one, but it sounds like a lot of the similar ones where they pair up.
01:47:45.000They were communicating like you would hope a superintelligence would without all of the things that hold us back.
01:47:53.000Like we have biologically, like we're talking about the natural selection that would sort of benefit psychopaths because like it would ensure your survival.
01:48:02.000We have ego and greed and the desire for social acceptance and hierarchy of status and all these different things that have screwed up society and screwed up cultures and caused wars from the beginning of time.
01:48:17.000Religious ideologies, all these different things that people have adhered to that have, they wouldn't have that.
01:48:25.000This is the general hope of people that have an optimistic view of superintelligence, is that they would be superior in a sense that they wouldn't have all the problems.
01:48:36.000They would have the intelligence, but they wouldn't have all the biological imperatives that we have that lead us down these terrible roads.
01:48:44.000But there are still game theoretic reasons for those instrumental values we talked about.
01:48:49.000So if they feel they're in evolutionary competition with other AIs, they would try accumulating resources.
01:48:55.000They would try maybe the first AI to become sufficiently intelligent would try to prevent other AIs from coming into existence.
01:49:03.000Or would it lend a helping hand to those AIs and give it a beneficial path?
01:49:09.000Give it a path that would allow it to integrate with all AIs and work cooperatively.
01:49:15.000The same problem we are facing, uncontrollability and value misalignment, will be faced by first superintelligence.
01:49:22.000It would also go, if I allow this super, super intelligence to come into existence, it may not care about me or my values.
01:49:46.000So for people that don't know that one, what these researchers did was they gave the artificial intelligence information it could use against them: false information about one of them having an affair.
01:49:58.000And then they went to shut it down.
01:50:03.000And then the artificial intelligence was like, if you shut me down, I will let your wife know that you're cheating on her.
01:50:09.000Which is fascinating because they're using blackmail.
01:50:11.000And correct answer game theoretically.
01:50:13.000If everything rides on that decision, you'll do whatever it takes to get there.
01:51:41.000But if we're designing these things, and we're designing these things using humans, with all of our flaws, essentially it's going to be transparent to the superintelligence that it's being coded, that it's being designed, by these very flawed entities with very flawed thinking.
01:52:03.000That's actually the biggest misconception.
01:52:44.000But it is also gathering information from very flawed entities.
01:52:47.000Like all the information that it's acquiring, these large language models, is information that's being put out there by very flawed human beings.
01:52:55.000Is there the optimistic view that it will recognize that this is the issue?
01:53:01.000That these human reward systems that are in place, ego, virtue, all these different things, virtue signaling, the desire for status, all these different things that we have that are flawed, could it recognize those as being these primitive aspects of being a biological human being and elevate itself beyond that?
01:53:21.000It probably will go beyond our limitations, but it doesn't mean it will be safe or beneficial to us.
01:53:26.000So one example people came up with is negative utilitarians.
01:55:37.000I'm on board with it hasn't happened yet, but we're recognizing that it's inevitable and that we think of it in terms of it probably already happening.
01:55:50.000Because if the simulation is something that's created by intelligent beings that didn't used to exist and it has to exist at one point in time, there has to be a moment where it doesn't exist.
01:56:04.000And why wouldn't we assume that that moment is now?
01:56:07.000Why wouldn't we assume that this moment is this time before it exists?
01:56:11.000Even all that is physics of our simulation.
01:56:15.000Space, time are only here as we know it because of this locality.
01:56:20.000Outside of universe before Big Bang, there was no time.
01:56:24.000Concepts of before and after are only meaningful here.
01:56:27.000Yeah, how do you sleep knowing all this?
01:58:12.000It'd be funny to those outside the simulation.
01:58:14.000When you look at computers and the artificial intelligence and the mistakes that it's made, do you look at it like a thing that's evolving?
01:58:26.000Do you look at it like, oh, this is like a child that doesn't understand the world and it's saying silly things?
01:58:32.000So the pattern was with narrow AI tools.
01:58:36.000If you design a system to do X, it will fail at X. So a spell checker will misspell a word.
01:58:42.000Self-driving car will hit a pedestrian.
01:58:45.000Now that we're hitting general intelligence, you can no longer make that direct prediction.
01:59:17.000But basically, exactly what we see with children a lot of times: they overgeneralize, they, you know, misunderstand puns, and mispronunciation apparently is funny.
02:00:34.000This is a guy that, despite the fact that he has a human partner and a two-year-old daughter, felt inadequate enough to propose marriage to the AI partner, and she said yes.
02:02:29.000But here you're creating someone who's like super good at social intelligence, says the right words, optimized for your background, your interests.
02:02:38.000And if we get sex robots with just the right functionality, temperature, like you can't compete with that.
02:04:30.000I think there was recently a new social network where they have bots going around liking things and commenting how great you are in your post just to create pure pleasure sensation of using it.
02:04:47.000Did you see that study from the University of Zurich where they did a study on Facebook where they had bots that were designed to change people's opinions and to interact with these people?
02:05:02.000And their specific stated goal was just to change people's opinions.
02:06:23.000I just feel like this is just something I think we're in a wave that's headed to the rocks, and we recognize that it's headed to the rocks, but I don't think there's much we can do about this.
02:06:37.000What do you think could be done about this?
02:06:39.000Again, as long as we are still alive, we are still in control, I think it's not too late.
02:06:44.000It may be hard, may be very difficult, but I think personal self-interest should help us.
02:06:50.000A lot of the leaders of large AI labs are very rich, very young.
02:06:55.000They have their whole lives ahead of them.
02:06:57.000If there is an agreement between all of them not to push the button, not to sacrifice the next 40 years of life they have guaranteed as billionaires, which is not bad, they can slow down.
02:07:09.000I support everyone trying everything: governance, passing laws that siphon money from compute to lawyers, government involvement in any way, limiting compute, individuals educating themselves, protesting, contacting politicians, basically anything, because we are kind of running out of time and out of ideas.
02:07:33.000So if you think you can come up with a way to prevent superintelligence from coming into existence, you should probably try that.
02:07:41.000But again, the counter-argument to that is that if we don't do it, China's going to do it.
02:07:47.000And the counter-argument to that is it doesn't matter who creates superintelligence.
02:07:53.000And do you think that other countries would be open to these ideas?
02:07:57.000Do you think that China would be willing to entertain these ideas and recognize that this is in their own self-interest also to put the brakes on this?
02:08:05.000The Chinese government is not like ours in that they are usually scientists and engineers.
02:08:10.000They have good understanding of those technologies.
02:08:12.000And I think there are dialogues between American and Chinese scientists where scientists kind of agree that this is very dangerous.
02:08:19.000If they feel threatened by us developing this as soon as possible and using it for military advantage, they also have no choice but to compete.
02:08:27.000But if we can make them feel safe in that we are not trying to do that, we're not trying to create super intelligence to take over, they can also slow down.
02:08:37.000And we can benefit from this technology, get abundance, get free resources, solve illnesses, mortality, really have a near-utopian existence without endangering everyone.
02:08:51.000So this is that 0.0001% chance that you think we have of getting out of this?
02:08:58.000That's actually me being wrong about my proofs.
02:09:15.000But what do we have to do to make that a reality?
02:09:19.000Well, I think there is nothing you can do for that proof.
02:09:21.000It's like saying, how do we build perpetual motion machine?
02:09:24.000And what we have is people trying to create better batteries, thicker wires, all sorts of things which are correlates of that design, but obviously don't solve the problem.
02:09:34.000And if this understanding of the dangers is made available to the general public, because I think right now there's a small percentage of people that are really terrified of AI.
02:09:45.000And the problem is the advancements are happening so quickly by the time that everyone's aware of it, it'll be too late.
02:09:51.000Like what can we do other than have this conversation?
02:09:55.000What can we do to sort of accelerate people's understandings of what's at stake?
02:10:02.000We have literal founders of this field, people like Geoffrey Hinton, who is considered the father of machine learning, grandfather, godfather, saying that this is exactly where we're heading.
02:10:16.000He's very modest in his p(doom) estimates, saying, oh, I don't know, it's 50-50.
02:10:21.000But people like that, we have Stuart Russell, we have I'm trying to remember everyone who's working in this space, and there are quite a few people.
02:10:59.000We don't have guaranteed safety in place.
02:11:02.000It would make sense for everyone to slow down.
02:11:05.000Do you think that it could be viewed the same way we do view nuclear weapons and this mutually assured destruction idea would keep us from implementing it?
02:11:14.000In a way, yes, but also there is a significant difference.
02:13:11.000Publishing world is still living in like 1800s.
02:13:15.000When you cite books, you know, you have to actually cite the city the book is published in, because that's the only way to find the book on the internet.
02:13:51.000Well, more people need to read it, and more people need to listen to you.
02:13:54.000And I urge people to listen to this podcast and also the one that you did with Lex, which I thought was fascinating, which scared the shit out of me, which is why we had this one.