00:07:07.560It doesn't take a malevolent human to abuse this technology.
00:07:10.760The technology itself carries the malevolent payload,
00:07:14.540and it decides what to do and why to do it.
00:07:17.320So if we use the example of Facebook: Facebook's mantra at the beginning was "move fast and break things."
00:07:24.060Because they wanted to take over, and essentially they didn't care who got in their way.
00:07:28.800They wanted to get to where they wanted to get to.
00:07:31.260And when we met people from Silicon Valley, from the AI world, bear in mind we didn't meet the top people.
00:07:36.980We just met a small portion of people and we talked to them.
00:07:39.600I was concerned, because it didn't seem to me that ethics and the long-term effects of this technology were at the forefront of their minds.
00:07:49.160I'm not saying they were malevolent, I'm just saying it didn't appear that the long-term impact of this technology was their primary concern.
00:07:59.120That's true. Historically, most people working in AI never took the time to think what happens if we succeed.
00:08:05.940Because it was so hard for so many years, there was so little progress, they had winters one after
00:08:11.540another, so they basically just worked on it, tried to make as much progress as possible, without ever
00:08:17.940stopping and thinking: well, what if I am successful? What if I create a competing species, something
00:08:23.140smarter than humans? Is that good for us? How will we interact with them? In the last 10 years, the
00:08:29.220progress went exponential. It went from basically no progress, where you have to hand-code every
00:08:34.820new application, to systems that can scale, can learn, can transfer knowledge.
00:08:39.620And now it's hyper-exponential, because the AI itself is helping with research.
00:08:44.020But we haven't spent the time to decide: do we want this? Do 8 billion people agree to this
00:08:51.380experiment? Are they interested in having their jobs automated? And that's just the economic
00:08:57.460concerns, not the safety concerns. Well, we'll talk about the economic concerns separately. But I mean,
00:09:02.660one of the things worth saying for our audience, which is not an AI-specific audience
00:09:08.420(the people who watch our show are just normal people going about their lives),
00:09:11.940is that this may feel like we're talking about something in the distant future.
00:09:16.100I was looking at the Kalshi odds for OpenAI getting AGI by 2030, and it's now over 52%,
00:09:24.580and it's gone up 13 points this year so far. It seems to me like we're heading in the direction
00:09:32.020of getting to AGI. What kind of time frame do you think? 2030 is somewhat conservative. Some
00:09:37.620people are saying we already got there; we just haven't deployed it yet. However, I'm pretty sure
00:09:42.660it could be a year or two. Wow. And so, you know, the big risk that you're talking about,
00:09:50.340which is you create a super intelligence, you've basically created another species,
00:09:55.380which is more powerful than you. And when we had Dwarkesh Patel on the show, as I said
00:10:00.820to him, you've basically created this, like the Unsullied from Game of Thrones,
00:10:07.300except they are not actually obedient.
00:19:47.540No matter what changes we make to those systems, no matter who releases it (U.S., China, whatever company) or what it's trained on, you want it to make zero mistakes.
00:19:57.320Because if it makes one mistake, it could be the last one.