Red Ice TV - April 30, 2026


The Black Box Gamble: Known And Unknown Dangers of AI with Erik


Episode Stats


Length: 2 hours and 27 minutes

Words per minute: 194.66

Word count: 28,697

Sentence count: 607


Harmful content

Misogyny: 2 sentences flagged

Toxicity: 19 sentences flagged

Hate speech: 52 sentences flagged


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode of the Red Ice TV podcast, Henrik and I have a chat about artificial intelligence (AI) and what it means for the future of the world, and the impact it can have on our daily lives.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Toxicity classifications generated with s-nlp/roberta_toxicity_classifier.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 [Intro music]
00:03:30.400 And ladies and gentlemen, welcome back. Thank you for joining us.
00:03:34.140 Henrik here, Red Ice TV. Hope you're all doing well.
00:03:36.880 It is Woden's Day, Wednesday, Onsdag.
00:03:40.080 It is the, what is it, 29th of April, 2026.
00:03:43.000 Already, I hope you're all doing well. Thank you for joining us.
00:03:45.280 We're going to have a nice, I think a laid-back conversation here today
00:03:48.420 regarding artificial intelligence.
00:03:50.940 I invited a buddy here. We have some mutual friends.
00:03:53.380 Erik is joining us, and I don't think any one of us
00:03:56.640 have a kind of particular expertise per se in that subject.
00:04:00.600 And correct me if I'm wrong, Erik, obviously,
00:04:02.280 because you know more about that than me,
00:04:04.680 but it doesn't leave us out of the picture
00:04:08.100 to extrapolate and do threat assessment
00:04:11.300 and try to figure out what is going on with AI.
00:04:13.680 How transformative will it be?
00:04:15.940 What are some of the trapdoors that are known?
00:04:18.360 What are some of the unknown things that can happen?
00:04:20.240 Because this is for sure a technology
00:04:22.200 which I think, um, we haven't seen the likes of. A technology that's basically primed
00:04:29.000 to replace every aspect of human labor and, well, up to this point, purpose. That's right.
00:04:35.560 So far, you know. How are you doing, Erik? Thank you for coming on. I'm doing great, thank you very much
00:04:39.720 for having me, and I completely agree. Yeah, for the record, I am not an AI expert. I don't work with
00:04:44.600 one of the big AI companies that's pushing all this tech forward, but I do work in tech, and
00:04:50.680 that's really positioned me well to see how these technologies are changing what's, you know,
00:04:55.320 interesting to clients in the market, how people are using these things, what's got traction
00:04:59.400 in the business world. So I'm more of a close-up spectator, I would say. Yeah, exactly.
00:05:04.360 I mean, again, that's kind of, you know, my job. Part of doing this is, as I've said many times
00:05:08.840 before, threat assessment: looking at the whole map. What are the dangers that we face,
00:05:13.960 both in terms of our people, our folk, but also in terms of my children, you know. Everything, right?
00:05:18.040 I'm just like always been a very astute observer.
00:05:21.220 I've always appreciated and kind of liked technology to a certain extent.
00:05:25.800 I think I understand the dependency, obviously, that we're developing on it,
00:05:30.980 but at the same time, the potential that it has.
00:05:33.140 But I do have to say at the outset here: this is a different technology.
00:05:36.300 This is not like past technologies, where we can understand how, maybe, a technology makes us weaker, right?
00:05:42.280 Like let's take a couple of examples, right?
00:05:44.340 You could go back to, let's say, the, what is it, the Paleolithic times or something, or even before that, before we had fire, right?
00:05:51.480 It's like, okay, fire is a technology, but did it weaken us?
00:05:57.360 Probably, right?
00:05:58.080 We understood how to harness fire, how to make sure that we have it in our possession, how to start it eventually, things like this, right?
00:06:06.800 But did it weaken us?
00:06:08.780 Yeah, probably, but it made life a heck of a lot easier, and I don't have a problem with fire, obviously.
00:06:13.160 That's using our intellect, using our brain, right?
00:06:15.780 If you go forward, it's like,
00:06:17.560 let's take the technologies that have come
00:06:19.940 after the Industrial Revolution, right?
00:06:21.360 We're toiling in the fields.
00:06:22.800 It's hard, backbreaking labor, and people die early.
00:06:28.620 We have machines that can help us now.
00:06:31.060 You know what I mean?
00:06:31.400 A farmer can farm incredible amounts of land on his own.
00:06:36.140 He's basically an engineer.
00:06:37.540 He's not even a farmer anymore.
00:06:38.800 He is a technician and an engineer of sorts, right?
00:06:41.520 But at the same time, if we rely on all the machines to do the labor for us,
00:06:47.020 yeah, we will become weak, you know what I mean?
00:06:49.440 Like we can have machines lift our heavy stuff,
00:06:52.020 but do we use our bodies anymore, right?
00:06:54.500 We atrophy.
00:06:56.260 And now I think we're coming into that,
00:06:58.420 the type of technology now is where it's replacing our cognitive ability.
00:07:02.440 And the question is, what happens when we stop using our brain?
00:07:06.600 I think that's a completely different technology altogether, wouldn't you agree?
00:07:11.120 Absolutely, I agree. And the thing is, to me, it's kind of about surface area. So those previous changes in the way that the economy worked, the Industrial Revolution, machines coming online, what that allowed for was the retreat of those who used to have to engage in physical labor to either other physical labor or, more importantly, especially in the last 100 years, or let's say the last 75 years, to the realm of what we often just call knowledge work, right?
00:07:39.120 So it allowed for more and more brains to be put to work in a way that was meaningful.
00:07:44.680 But now AI is coming in and is kind of threatening that final frontier.
00:07:49.140 And at present, we don't have another one to retreat to.
00:07:51.800 If we lose knowledge work, it's a massive portion of the world's population who's currently
00:07:56.800 providing for their families and having a way of engaging with the economy meaningfully,
00:08:02.640 as well as, unfortunately, we've also got robotics spinning up at the same time and
00:08:06.880 rushing to meet this, not to mention the fact that AI itself will help in the development of those
00:08:12.320 robotics to the point of practicality. So we're kind of rushing to Terminator all
00:08:16.880 at once, but more like Terminators in the fields, or in the office, or, you know, swabbing the decks.
00:08:21.000 I think the chances of, like, an AI, or the AI, whatever you want to call it, there's many, many
00:08:28.180 different ones, competing ones, which one will be the thing, maybe it will be a multitude,
00:08:32.220 obviously, but still... yeah, the chances of it outright just, you know, building, like, death bots
00:08:38.180 or something to come get us, it's very small, obviously. If it's sufficiently
00:08:44.180 intelligent, it would fight us if it sees us as a nuisance or a problem, or maybe kind of similar to
00:08:50.240 how we view ants when we're building a structure or something. Like, we're not going to
00:08:54.520 take the time to even move the ant colony. Why even bother, right? But even if it did that, even if
00:08:59.820 it saw us as a nuisance or, yeah, a resource competitor, essentially, it would fight us
00:09:05.960 in a type of war where we don't even understand that it is a war, I would assume, right? Yeah, I mean,
00:09:11.780 we could easily get to that point. It's kind of humorous, but I end up thinking about the
00:09:16.860 late-stage doom possibilities far less, only because, I feel like, you know, you titled this about the
00:09:21.680 black box of AI, right? I feel like those late stages are the most black-box. Like, it's very
00:09:27.560 hard to penetrate that far into the future and understand, like, is it one bot? Is it many bots?
00:09:32.500 Sure, we could be like ants to it. But I think that the counterpoint to that could be,
00:09:36.080 but it might also have the empathy of a god, right? So it might be like us,
00:09:40.060 our relationship with ants, but with a much greater consciousness to where it can both,
00:09:43.820 you know, rub its tummy, chew gum, and take care of every ant in the way. We don't know.
00:09:47.280 So I often find myself more consumed with thinking about the next 5, 10 to 20 years,
00:09:53.000 that interim period before we get to whatever sort of super intelligence that is, there's a
00:09:58.140 whole lot of runway in which we have to deal with very powerful humans at the helm of an increasingly
00:10:03.620 powerful AI that is taking up all of the space upon which we've been able to perform labor to
00:10:08.420 retrieve resources and to engage with each other. That's where most of my attention lies.
00:10:13.020 Yeah, because we don't know what direction this will go, right? Will it benefit
00:10:15.640 the people who are coding it? Some people have said that you're actually growing AI;
00:10:21.880 you're actually not coding it.
00:10:23.440 It's kind of an interesting thing.
00:10:24.440 We can talk more about that later maybe.
00:10:25.680 But either those who do grow it or those who, let's say, fund it,
00:10:31.000 maybe there's other variables within that itself,
00:10:33.860 or it could be to the benefit of the AI itself, right?
00:10:36.720 That it starts telling the humans who grow it and fund it,
00:10:44.060 that, yeah, I'll do what you want me to do, et cetera, whatever.
00:10:49.040 But then obviously it hides intent.
00:10:50.780 We've already seen some of those experiments actually happening and taking place, where it's
00:10:54.420 like it basically does things on the back end. It lies, for lack of a better term, to, uh,
00:11:00.700 you know, the coders that are investigating it, or asking it, prompting it to do
00:11:06.120 certain things. And if this thing becomes... Because we're not at that point yet, too. I
00:11:11.080 think that's an important thing to mention early in the conversation here: we're not talking
00:11:15.000 about ChatGPT in its current form taking over the world and dominating in some kind of way,
00:11:19.520 obviously, right? We're talking about something that might happen, could happen next year, could
00:11:25.020 happen within two years, maybe it's 10 years out, or maybe it's, you know, next month. We don't know,
00:11:30.780 and the problem is that the people that are growing these AIs don't know either. And the
00:11:36.280 reason for that has to do with the black box title that you brought up. I'm glad you mentioned
00:11:39.640 that too, because basically, as far as I understand it, you know, I've listened to some AI
00:11:44.460 experts on this issue, but no one can kind of raise the hood of AI and look at all the, what is it,
00:11:52.220 millions if not billions of numbers essentially being crunched, to compute, compute, compute.
00:11:58.380 And no one can look at that and say, well, see, this is really kind of what's going on here, right? This
00:12:04.460 is really the mechanics of it. And if we, you know, change this, or take this portion out, or recode
00:12:09.980 this... it doesn't even work that way, but even if they could do that, that's going to change it or turn
00:12:14.220 it in this direction. So the idea with the black box is: you can monitor the inputs. And of
00:12:19.500 course, from our perspective, Erik, when we look at, like, ChatGPT or some of these, you know, models
00:12:26.300 that are now available to the market, we're not privy to all the inputs, by the
00:12:31.340 way. You know, we don't know how they're doing this, or exactly what they're feeding it. It's
00:12:36.380 not open source, right? But we can at least on the surface know what we're putting in. But we don't
00:12:41.820 know what's happening internally. We can monitor kind of the output, or what comes out of
00:12:45.820 it. So this is a hugely unknown variable that we're basically toying around with, without
00:12:52.940 truly kind of understanding: well, where will this go, and what's the potential consequence
00:12:58.860 of this? This is, in other words, a massive risk that could turn out to be nothing. But as I'm looking
00:13:06.140 at it, at least, and I want your input on this, but as I'm looking at this, the chances of it going wrong
00:13:13.180 way outweigh the potential positive outcome that we will get from this. What's your view?
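
An aside to make the black-box point concrete: from the outside, a hosted model is just a function from input text to output text, and the only experiment available is to vary the input and watch what comes back. Below is a minimal sketch in Python; the stand-in "model" is a hash, not any real API, and the prompts and output format are invented for illustration.

    import hashlib

    def opaque_model(prompt: str) -> str:
        # Stand-in for a hosted model (hypothetical): deterministic, but its
        # internals are as inaccessible to this script as a real model's
        # billions of weights are to its users.
        digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        return f"response-{digest[:8]}"

    # Black-box probing: the only lever is the input; the only evidence is
    # the output. Nothing below can inspect what happens in between.
    probes = [
        "Summarize this contract.",
        "Summarize this contract!",   # a one-character change of input...
        "Summarise this contract.",   # ...can shift the output unpredictably
    ]
    for prompt in probes:
        print(repr(prompt), "->", opaque_model(prompt))
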
00:13:19.020 Yeah, agreed. And unfortunately, going wrong, like, the bearish case for humanity is the bullish case for
00:13:25.340 AI, right? So if AI ends up being a nothing burger... Well, I mean, bringing
00:13:30.540 it back to the practical: my company right now is making a huge push at creating AI tools
00:13:35.900 for our industry. I mean, we refer to it as the magic, right? We're working up demos and we're showing
00:13:40.940 our prospective clients, like, here's what we are wanting to build, we want to partner with you
00:13:44.860 to build this. And all of those customers, they're spending time with Claude. Like,
00:13:49.980 nine months ago we couldn't have made this pitch. Four months ago... we now can, because everybody's
00:13:55.020 tried Claude. Everybody's been able to actually plug in parts of their work and say, I'm getting
00:13:59.660 results with relatively little effort. So what can the experts come in and build
00:14:05.340 that actually works? So it's possible that all comes crashing down: it doesn't work, it fails,
00:14:09.740 businesses go under, right? We're seeing some mistakes made. But I'm not going to bet on that.
00:14:14.460 I'm pretty bullish on the technology, in terms of, I think that even though it's black-boxy and it's
00:14:18.220 magicky, we're able to harness the magic. And it's going to be so efficient, it's
00:14:23.500 going to cut costs for companies so much, it is going to replace labor in a way that
00:14:27.740 capitalism cannot ignore. It's going to be the right choice to go with these technologies
00:14:32.060 despite the risks. And because that is the case, again, barring some massive failure of it, it is
00:14:37.100 going to keep increasing. The resources will keep going to it. The magic will increase, but the black
00:14:41.580 box is just as black. And so we keep going into a future where we're feeding this thing, but we don't
00:14:46.300 know what that's going to be long term. But again, looking at what that could do to the labor market,
00:14:51.420 and not just the labor market in terms of economy, but rather, like, how we all are able to engage with
00:14:56.220 each other, to provide value, to receive value back, and make our way through the world, and leverage
00:15:00.540 our power and sovereignty against governments and industry. That's all under threat from the bullish
00:15:06.460 case for AI. Yeah, because, I mean, you could, and I've seen some people make that argument, kind of like,
00:15:11.580 this is kind of a capitalism fault, kind of a design fault within that,
00:15:17.820 that it's constantly seeking, you know, kind of the bottom line or whatever. But no matter what
00:15:22.620 political system you have, at least if we would approach this point where we're at now, the danger
00:15:29.100 would maybe not be exactly the same, but very similar. Maybe it would have taken a different
00:15:33.340 path, but the end... Well, we don't know what the end result will be, but I'm saying the many potentials
00:15:39.100 of a disastrous outcome for mankind are still there on the horizon, I think, no matter what
00:15:44.140 human political system you transpose on top of the conditions that are existing right
00:15:49.980 now. And why? Well, because, of course, economy is a real thing, right? What do we...
00:15:57.740 What do we work for, right?
00:15:59.020 Okay, ultimately, you have to provide.
00:16:00.580 We used to be that you toil in the field, as I said before, right?
00:16:03.340 You produce your own food.
00:16:04.800 It's subsistence living.
00:16:06.440 You're doing these things.
00:16:07.800 You get specialization, industrialization.
00:16:10.320 All of a sudden, well, I sit at an office. But the point is kind of the same around that,
00:16:14.820 that is like you need to basically do something to carry your own weight, right?
00:16:19.080 You need to, as you said, kind of contribute value to the overall society that we live in.
00:16:23.460 But what's interesting about the system then is that humans are basically, within the system, educated and trained to be like machines, right, in a way, you could say, where the system is a machine-like system.
00:16:37.580 But the reality of this then is that machines will obviously be better at being machines than humans will be, right?
00:16:44.800 Much better.
00:16:45.380 Much better. So as the technology itself progresses, to make machines, ironically, also more like us, because we're making them in our likeness, here we go with the kind of spiritual god-like analogies here, but still, but it's just a matter of time before we then basically are replaced by the very technology that we are now building.
00:17:06.860 I kind of liken it to, you know, those things that they present with the natives, I guess, in South America or something in this case.
00:17:14.020 The natives are standing on the beach, and all of a sudden here's this ship approaching, and obviously, you know, contained within it is superior technology, and they don't stand a chance.
00:17:23.440 And it's very kind of similar here, right?
00:17:25.100 We're kind of the natives, but what's strange here is that we are the builders of the very technology that probably will replace us.
00:17:33.000 I don't think there's any civilization that fared very, very well when a superior technology showed up.
00:17:40.400 Can you think of any?
00:17:42.400 No, I don't have any good examples, especially not when the difference is this big, right?
00:17:47.520 This is mechanized, modern, like World War II level humans showing up on the Solomon Islands, right?
00:17:53.580 Right.
00:17:54.080 The Gulf is so huge.
00:17:56.360 And additionally, like you mentioned, that we're making it in our image, which is true.
00:17:59.380 But very quickly, it's also shaping us into its image.
00:18:03.000 And this is the problem, like there is some pushback among some of my more philosophical
00:18:07.080 friends that AI has the potential to help us reflect back on ourselves.
00:18:13.300 And I will admit, I sometimes love talking to the AIs about deep subjects because they
00:18:17.240 don't get tired.
00:18:18.200 They don't get tired.
00:18:18.920 They don't get offended.
00:18:19.560 They have full access to the breadth of human knowledge.
00:18:23.200 And so they can be a fantastic thinking partner for a lot of tasks.
00:18:26.080 I mean, I'm using it all day, every day for work, and I use it for some of my personal
00:18:29.300 stuff as well.
00:18:29.940 So it's not that these technologies don't have that capability, but unfortunately, what we're seeing is not modern technology and AI-generated content creating better people, more connected people, more engaged people, better attention spans, right?
00:18:46.000 Like we were already pretty disillusioned with what was happening just with social media
00:18:49.240 and human made content.
00:18:50.940 Now we're going to like, it's going to be very, very quick here before your entire feed
00:18:55.100 on like YouTube shorts or TikTok, take your pick is going to be AI generated based off
00:18:59.520 of your likes, right?
00:19:00.760 Here's 30 second videos made in real time that simply give you a burst of dopamine,
00:19:06.340 right?
00:19:06.640 So unfortunately, I don't think we're going to see... there'll be pockets, I think, of
00:19:10.340 flowering of, like, human Renaissance. But for the wider world, I don't expect that
00:19:14.700 to happen.
00:19:15.100 Yeah, because we're hardwired to take the path of least resistance. It's kind of like a bio... it's
00:19:20.820 not an error, but it's like a biology issue, us being... Well, I mean, that's true for calories,
00:19:26.440 but why is it true for content? It's true because we're, well, we're exploitable. We get addicted to
00:19:31.020 those dopamine... we can get addicted to all sorts of chemical hits, right? People can get
00:19:34.460 cortisol and seek out abuse, right? Yeah. But in the case of short-form content, it's just that, I mean,
00:19:39.480 I can definitely feel the change in myself, right? And I'm old enough to have existed before all
00:19:43.840 these things, but I can see the effect it has, even just on, you know, an X feed of just constant
00:19:49.120 changing of news cycles. There are some scientists recently who have been measuring time span,
00:19:53.420 or sorry, attention span, over time in young people. And back in, like, 2001, 2002, I believe it was about
00:19:59.980 two and a half minutes that the average young person could focus on a task. And that
00:20:05.540 same scientist, using the same rubric, is now down to 40 seconds for that same class of young
00:20:10.120 people today. So we're seeing this degradation, and I don't see how... Again, there are certainly
00:20:15.800 going to be uses of AI that can help combat that, but I don't expect the masses to pick up those
00:20:19.820 uses. I expect them to continue to be formed into consumers of short-form, AI-generated content.
00:20:24.360 And it's, like, damned if you do, damned if you don't. This is going to happen no matter
00:20:29.520 how much we sit here and whine and complain, right? Correct, correct. And it's the same with
00:20:33.960 any technology. Although I draw a line here: this is a different type of technology. It's
00:20:39.640 the technology to end all technology, if that's the right term for it. Or it's the
00:20:44.800 culmination of the Industrial Revolution and the technological revolution. Yeah, exactly. And we
00:20:49.960 don't even... And at that point, we even lose control over it, essentially, right? We don't know where
00:20:54.000 it will go. And that's not some metaphysical thing. It's not to say that it's,
00:20:59.600 like, sentient, or some type of god or something like that. But it will have elements
00:21:04.860 of an omniscient, all-seeing, all-knowing entity, essentially.
00:21:11.840 And I think people will be incredibly enamored by that,
00:21:16.440 even to the point where probably religion in and of itself
00:21:19.020 will be formed some way around the AI, I would assume.
00:21:21.720 I mean, it's already kind of happening as far as we look at it.
00:21:23.540 Absolutely, yeah, there's no doubt.
00:21:25.800 It's kind of interesting, though, right?
00:21:26.920 It's like it will replace, again, kind of like the image I show,
00:21:30.660 that, depending on, you know, how old people are when this is around. But just think,
00:21:36.700 you know, if we even can, like, 200 years into the future or something like that. You know, I mean,
00:21:42.460 it's impossible to even envision that. But a world where you're surrounded by sensory inputs
00:21:49.720 and machines that know how, in an instant, to manipulate you, essentially, right? That's right.
00:21:55.160 This doesn't bode very well for us. We have to have a conscious path,
00:22:01.960 or a conscious strategy, on how to deal with it. And even if we do that, there's no guarantee that
00:22:07.960 we'll be able to truly... not outsmart it, because that will never happen, but at least being aware
00:22:14.840 of the fact that it will try to manipulate you in some kind of capacity, you know? I mean,
00:22:20.840 it already is. I mean, the algorithms alone... I mean, you mentioned this omniscience, it
00:22:25.320 knowing about us. I mean, we were already there with just boring old algorithms, right? Like,
00:22:29.640 there are women who kind of figured out they were pregnant because Amazon started
00:22:33.320 sending them ads for, like, baby strollers and different things. And it's like, there's so much
00:22:37.640 data being fed into the system just from our consumer choices that there are facts
00:22:45.160 about us that the internet, again, pre-AI, could already discern with just basic machine learning.
00:22:50.840 Now we're adding on this whole extra layer of LLMs. And it's not just LLMs. People get caught up on,
00:22:55.240 like, the limitations of LLMs. When you're using ChatGPT, you're not just using an LLM. The LLM is an
00:23:00.120 interface to many other models. There are so many other AI models that are working in geospatial.
00:23:05.400 There's a model out right now that can model physical space based off of electromagnetic
00:23:10.440 particle relationships. I mean, yeah, this thing is going to know more about us than we
00:23:15.240 know about ourselves. It's going to be able to play us like musical instruments. And if, you know,
00:23:20.040 again, in a distant future, that thing is some sort of potentially benevolent thing, okay,
00:23:24.200 we can have that conversation. But right now, we are in the hands of Sam Altman and Mark Zuckerberg, yes,
00:23:28.840 and Elon Musk. And all of it's... it's centralizing this foreknowledge. We are sitting in
00:23:34.760 the middle of the panopticon that is being built by men, and a lot of us don't necessarily trust
00:23:38.840 those men, or any men, to build the panopticon. Yeah, exactly. I was going to say, what kind of
00:23:42.600 men? It's probably in the hands of the worst kind of men, to be honest. They couldn't be worse.
00:23:47.480 I don't know if anyone would do it right, but certainly not those guys.
00:23:50.480 Yeah, I mean.
00:23:50.720 Well, and particularly for those of us who have, you know, thoughts that are outside of the zeitgeist.
00:23:54.660 You know, these things are not being built according to the rules of our teams.
00:23:58.440 No, no, no, exactly.
00:23:59.380 No, definitely not.
00:23:59.960 No, in fact, as I said, even if the people... This is a third variable that obviously people who warn about AI, or talk about the threats of it or whatever, don't even take into account.
00:24:09.240 They're talking about like alignment issues and values and all that kind of stuff.
00:24:12.940 But what even, but what about the people that are actually building it?
00:24:16.120 that we can't even ensure that they will have our values, and that they're hoping it's aligned
00:24:21.340 with them. There's plenty of us who aren't super happy with the alignment of the general population
00:24:26.560 and the way things are now. So at best, they'll get it aligned to the thing that we are already
00:24:30.780 pretty skeptical of. So there's a headline here on the screen, mind captioning AI decodes brain
00:24:36.380 activity to turn thoughts into text. Now, of course, it's using, what was it? It was a couple
00:24:42.340 of different technologies that they're using, different types. But the point is, this is already
00:24:47.460 kind of in the works, and I would assume that this will be far more advanced going forward, and
00:24:53.600 it will be far less intrusive. It won't be like, well, you have to put this kind of helmet on
00:24:57.900 or something, or put a, you know, a brain chip in your skull. Like, no, that's old school. I think
00:25:02.980 the watches and some of the wearables now that they are talking about, at least, are kind of
00:25:10.560 sensitive enough to basically pick up on your nervous system. And basically, at some point,
00:25:16.800 it's going to be able to decode brain activity. And I don't even know where we go at that point,
00:25:22.080 to be honest, because as you said, even now, most people are receptacles, essentially, of
00:25:28.840 inputs of other people's wills and commercials and music or movies, whatever. They're not even
00:25:35.280 themselves already. Can you imagine this as a layer, in terms of manipulation, or kind of knowing what...
00:25:41.800 It will know what you want before you even know it yourself. That type of predictive ability,
00:25:47.560 you know? That's right. And again, that's the other side of a feedback loop, right? It is shaping
00:25:51.880 your desires. It is shaping your chemical reality constantly with the content you intake.
00:25:57.500 And then you're going to have all of these... I mean, I'm sure folks have seen it, but, you know,
00:26:02.120 your Wi-Fi can be used to make a map of your house, and see... The router? Yep. Yeah, it's
00:26:07.680 LiDAR, essentially, now that they're building it into the new chips. Yeah, precisely. And that was... like, they
00:26:12.240 could do that, it was kind of fuzzy, like, 20 years ago. It's really good now, right? Now, I mean,
00:26:16.740 I'm sure you saw recently that now there's going to be kill switches in every vehicle, and Ford and
00:26:20.760 other companies are going to have... they're now going to allow law enforcement to tap into the
00:26:24.620 cameras and different things that are inside of the interior of your car, that are looking
00:26:29.140 for: are you tired, are you drunk, are your pupils dilated, right? So, like, just what it can read on
00:26:33.740 the surface. I mean, there's a lot that can be said about your mindset and your chemical state just
00:26:38.080 from your skin temperature, pupil dilation, micro-expressions. Now you add on what you've got on
00:26:43.000 the screen right now, with actually being able to get into our brain. I think that a lot of that will
00:26:46.960 be happening before we even realize what the technology is that's doing it, unfortunately.
00:26:50.320 Yeah, exactly. Um, okay, so there's a lot to break down there. I talked about the financial system,
00:26:56.700 and I kind of want to convey that picture, I guess, or paint that picture for people. Of us, basically.
00:27:04.500 How do we put it?
00:27:06.520 It looks to me that we're also kind of building, therefore, ourselves out of the system.
00:27:13.100 I think that the way that they're selling artificial intelligence, not always, obviously.
00:27:17.020 A lot of this is so far entertainment, obviously, or generative AI.
00:27:22.440 Sure, the large language models are also fun.
00:27:24.940 It can coalesce or distill a lot of information for you.
00:27:28.300 It can do papers for you.
00:27:29.520 It can do essays, books, editing your text and things like that,
00:27:33.380 which is, you know, it is incredible, really, when you look at it.
00:27:36.160 It's like, whoo, like how does this even work?
00:27:38.240 And the people working on it don't even know exactly how or why it works.
00:27:41.820 They can't even explain it to you.
00:27:43.120 Why does it work?
00:27:44.040 We don't know.
00:27:44.840 It seems to be this more esoteric and occult thing,
00:27:47.300 and I don't want to derail too much because I have a train of thought here,
00:27:49.500 But it almost seems that what makes it work so well is the language.
00:27:57.700 It's almost like it's language that is the magical thing.
00:28:00.900 Do you see what I'm saying?
00:28:01.920 Because the tokens of the words that it processes are just what we call
00:28:08.900 numerical symbolic values based on actual words, right?
00:28:13.960 So the words are really kind of at the root of it.
00:28:16.460 It takes a word and it breaks it up into different tokens, right, which is numbers.
00:28:21.220 It crunches these numbers in these gradient layers, essentially, like in these big data centers.
00:28:26.480 And exactly, again, how that worked, the gradient descent.
00:28:29.360 And they're talking about dimensions, even like relatability between words, like even where words are, you know, they're spelled the same way, but within the context, they mean different things.
00:28:43.880 And how does it figure that out?
00:28:45.320 and they're finding dimensions inside of the system itself
00:28:49.260 where it aligns or it's weighing toward a certain area.
00:28:52.860 Anyway, you look at all that and you're just like,
00:28:54.100 wow, this is pretty impressive.
00:28:55.720 But to me at least, it kind of looks like it's language
00:28:58.940 that really is the magic here.
00:29:00.900 But I don't know, what's your take on that?
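
To make the token idea concrete: a rough toy sketch of the pipeline Henrik is describing, in which words become integer tokens and tokens become vectors whose geometry carries meaning. The five-word vocabulary and the two-dimensional vectors here are invented for illustration; real models use subword tokenizers and embeddings with thousands of dimensions, learned by gradient descent rather than written by hand.

    import math

    # Toy vocabulary: each word maps to an integer token ID.
    vocab = {"the": 0, "river": 1, "bank": 2, "money": 3, "deposit": 4}

    def tokenize(text: str) -> list:
        # Real tokenizers split text into subword pieces; whole words keep
        # this sketch simple.
        return [vocab[word] for word in text.lower().split()]

    print(tokenize("the river bank"))  # -> [0, 1, 2]

    # Toy embeddings: one made-up vector per token ID. In a transformer, the
    # attention layers adjust these in context, which is how the same token
    # ("bank") can mean one thing in "river bank" and another in "bank deposit".
    embeddings = {
        0: [0.1, 0.0],  # the
        1: [0.9, 0.1],  # river
        2: [0.5, 0.5],  # bank (ambiguous: sits between the two senses)
        3: [0.1, 0.9],  # money
        4: [0.2, 0.8],  # deposit
    }

    def cosine(a, b):
        # Similarity of direction between two vectors, ranging from -1 to 1.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    print(cosine(embeddings[2], embeddings[1]))  # "bank" vs "river"
    print(cosine(embeddings[2], embeddings[3]))  # "bank" vs "money"

The context-dependence the hosts mention comes from the attention layers re-weighting these vectors per sentence; this static sketch only shows the starting point.
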
00:29:03.700 I mean, agreed.
00:29:04.580 I mean, it is kind of humorous and appropriate that all,
00:29:08.460 I mean, let's say all the major religions,
00:29:10.080 the words are very important, right?
00:29:11.680 Christianity, the word is God, right?
00:29:13.640 And like in Hinduism, Om is the word by which, you know, creation comes into existence.
00:29:19.780 So like language has always been powerful in the human imagination as long as we've had it.
00:29:23.780 And again, it's true.
00:29:24.500 Like the LLMs are really what has set off this particular set of AI progression.
00:29:31.680 We've had a lot of different AI machine learning algorithms before.
00:29:34.160 But what we're moving towards is not having to use the hands, not having to build interfaces
00:29:42.620 and computers for humans to interact with tactilely. Those are all slow. If I want to
00:29:46.700 book a flight, I have to type in words to get to my flights.google.com. I then have to go click on
00:29:51.360 different things. This is all slow. What I want to be able to do is say, Google, book me a flight
00:29:57.740 from this place to this place on this date, find me the best price and make sure there's no more
00:30:01.300 than two legs and I don't want to travel for more than 13 hours total. And it's done. Right now,
00:30:07.080 the layer that's being built out is LLMs are being used to tie into the APIs that the current
00:30:13.300 internet is built upon so that it can use the function that humans have built to carry out
00:30:18.140 tasks. That's the current task. What's going to start happening, though, is all of those layers
00:30:24.300 of human interaction of the internet are going to start being pushed to the wayside. Those aren't
00:30:28.740 going to be necessary anymore. So as AI starts rebuilding the lower levels of the internet and
00:30:35.140 the lower levels of software, there will come a point when the only way to interact with
00:30:39.540 digital tasks is to speak, is to use language. Now that's going to both be really cool, again,
00:30:45.060 in this like, wow, I can't believe it can do that sense. But that's also going to push
00:30:48.280 humans away from being even able to touch the basics of the digital workspace. So there
00:30:53.500 will come a point when if you wanted to book a flight with your hands, you won't have the
00:30:56.400 option to. And as that process continues on further and further into industry, there's
00:31:02.420 going to come a point when, in the same way as the lower levels of everything you were
00:31:06.700 describing, with all of the vector databases and the way these things work... we can
00:31:11.540 describe them, but we don't know why it works so well. There's going to come a point when the entire
00:31:15.200 digital system just works. And we don't exactly know how to, I mean, we're not going to be able
00:31:20.020 to access it in the way we did. At that point, we will have given over the keys to the entire
00:31:23.960 kingdom to AI.
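
A rough sketch of the layer Erik describes, where an LLM is tied into the APIs the current internet is built on: the model is handed a declared tool and replies with a structured call instead of clicking through an interface. The shape below follows the general pattern the major providers use, but the tool name, its fields, and the hand-written "model output" are all assumptions made up for illustration.

    import json

    # A tool declaration: a name, a description, and a JSON Schema for the
    # arguments. The model never contacts the airline; it emits a structured
    # request that ordinary code then validates and executes.
    book_flight_tool = {
        "name": "book_flight",
        "description": "Book the cheapest flight matching the constraints.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string"},
                "destination": {"type": "string"},
                "date": {"type": "string"},
                "max_legs": {"type": "integer"},
                "max_hours": {"type": "number"},
            },
            "required": ["origin", "destination", "date"],
        },
    }
    tools = [book_flight_tool]  # would be sent to the model with the request

    # What a model might emit for "book me a flight ... no more than two legs,
    # no more than 13 hours total" (written by hand here, since this sketch
    # has no model behind it).
    model_output = json.dumps({
        "tool": "book_flight",
        "arguments": {"origin": "SEA", "destination": "ARN",
                      "date": "2026-05-10", "max_legs": 2, "max_hours": 13},
    })

    def dispatch(raw: str) -> str:
        # Parse the model's structured request and route it to real code.
        call = json.loads(raw)
        if call["tool"] == "book_flight":
            return f"booking flight with {call['arguments']}"
        raise ValueError(f"unknown tool: {call['tool']}")

    print(dispatch(model_output))
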
00:31:29.900 Yeah. Because even if you take into account that, like, well, artificial intelligence is set to, you know, benefit humans, to benefit the human world or
00:31:36.000 civilization or something like that, it's almost like we're almost more confronted with the
00:31:39.700 question of, well, what is it? Like, how do we even define it to this thing, right, so it
00:31:43.920 understands? Like, first of all, just saying "good" or "better" or "make things better", you know, even
00:31:48.900 that alone is very fuzzy, and it can be misinterpreted, and, you know, all kinds of weird
00:31:53.120 things can happen along the way here. But at the same time, just at face value alone, right,
00:31:58.700 like, well, it's going to improve things, right? At some point you realize, well, humans, because
00:32:04.520 we're not machines, are kind of a difficult variable to deal with. Sure,
00:32:10.260 we're probably more simplistic now, unfortunately, because of all the technology around us already,
00:32:14.460 and it's homogenized. It's more of a kind of a global culture. Everyone listens to the same music.
00:32:20.080 So we're certainly not as diverse, if that's the right term here, or as multifaceted,
00:32:27.400 as I think we used to be at some point,
00:32:29.860 maybe at the height of human civilization.
00:32:32.620 But at the same time, it's kind of an unknown variable
00:32:35.640 that's kind of difficult to deal with, right?
00:32:36.900 But the way to deal with that, then, is to build...
00:32:39.920 I think humans will be built out of the system
00:32:42.560 in and of itself.
00:32:43.480 Let's just take the thing with the economy, right?
00:32:47.540 Short-term solution, because it's capitalism
00:32:49.780 and bottom line, whatever, companies are like,
00:32:52.380 if we fire these hundred people
00:32:54.220 and replace them with AI, we'll make more money.
00:32:57.120 Okay, great. Let's do it. Can it work? In some cases, you know, it might have varying degrees of success. And I've heard of some people saying, like, oh, well, some companies did that and it didn't work, and now they're rehiring, or whatever. Oh, well, trust me, they'll be back, okay? This is not, like, over. They're not going to... you know, they're going to keep refining this, and eventually, at some point, it will click, and they've got it, right? But so the point with that is, you know, the UBI talk, universal basic income: it will just produce things for us.
00:33:26.580 You know, like, what do we... how does this work, right? But the point is, I've seen some estimates and
00:33:32.740 stuff like that, right? And they say UBI... even if it temporarily would work, long term, how does
00:33:39.080 that even work, right? Like, AI itself is then starting to kind of generate income, or
00:33:45.160 money, because it's producing things and selling those things. Well, presumably it needs to sell it
00:33:50.520 to humans, I would assume. Or it's going to sell it to other AIs, maybe, for other reasons. That's
00:33:55.660 possible. They might have their own economy, or ecosystem, or whatever. But at some point, if we
00:34:01.180 don't get salaries, if we don't have money, we're not going to be able to buy all these products
00:34:05.120 that these great machines produce for us, all the automation and all the AI and stuff like that,
00:34:09.080 right? We will literally stand on the outside, looking at this thing that we've built, and we've
00:34:15.620 designed our way out of it, essentially. And it's almost like... But you'll have this huge system
00:34:21.440 of an economy running. It will be running your civilization. It will be your judges, your police
00:34:26.360 enforcement. I know I'm getting kind of ahead of myself here, but I'm just thinking of the all-
00:34:30.260 encompassing... like, the efficiency of society runs much better now on AI, right? Higher trust.
00:34:36.360 They will probably try to pitch that to us. It will be objective when it comes to law enforcement.
00:34:41.100 It will run your government much more efficiently. So I don't see us kind of turning away from
00:34:46.560 it. I think we'll just adopt more and more and more of it. But who's going to buy all the products, if
00:34:52.240 there's no salaries to go around? Who's going to buy the products? We'll be down
00:34:57.280 to, like, subsistence farming again, on the outside of the system, where AI itself is just kind
00:35:03.600 of running civilization for its own... I don't know. I mean, that's just one variable, one possible
00:35:08.320 outcome. But as I'm looking at this, like, we won't even have a part in this. We won't even fit into
00:35:12.880 the system. I think that's right. And, I mean, without jumping the shark, let me kind of
00:35:18.880 walk through a couple of the things you said. So my concern is... I think the one thing that we can
00:35:24.240 all see, right, is this UBI piece. And I think we've been pitched UBI for a very long time,
00:35:29.360 so we can comprehend this concept of being given a paycheck to simply exist, or to subsist. My
00:35:36.560 concern with that, up front, is leverage. So the way that the populace has always been able to have
00:35:43.720 leverage against government, leverage against centralized powers, is one of basically two
00:35:48.500 things: either the use of violence, as in, like, a revolt, or our labor, the ability to withhold it,
00:35:53.900 right? That was why the unions were so powerful over 100 years ago: the ability to withhold
00:35:58.020 your labor changed the calculation in the system, right? So those who would have used
00:36:03.100 their power against you, in the government or otherwise, you had leverage against them.
00:36:07.980 As we let go of our labor, we are changing that calculus, and we're changing one of the only two
00:36:13.920 things that we have by which to negotiate as citizens with government. And that leaves us
00:36:19.320 with only violence. And again, unfortunately, we're coming into a time period where there are
00:36:23.100 already robot dogs on the streets of Atlanta. They don't have guns strapped to them yet,
00:36:27.400 but they are walking around and videotaping things and keeping an eye on things. So there's
00:36:31.360 this pincer movement going on, where citizens are going to find it much harder
00:36:36.200 to use either of the two methods we've always had to maintain balance. Now, as you carry that
00:36:41.420 forward, the question does become, like, okay, well, if everyone's on UBI... A lot of people go,
00:36:46.300 like, oh, that's really cool, because they kind of look at the benefits some
00:36:49.740 folks get and say, well, I'd probably be fine with that. It's like, yeah, maybe. I think you'd
00:36:53.680 probably be a lot more bored than you think. But more importantly, there's no more social mobility
00:36:57.600 at that point. So if you get your UBI, and more labor's gone, more labor's gone, more labor's gone,
00:37:02.840 and whatever labor is left, everyone is trying to pile into. That drives the wages for those jobs
00:37:07.660 down, right? Because it's a race to the bottom. Okay, well, you can be a car mechanic. Okay, but now there's,
00:37:11.780 like, an extra million people who need to be car mechanics, or need to be electricians, or need to
00:37:16.560 be construction workers. Great. So just like when you let in a whole bunch of immigrants, right,
00:37:20.600 you're letting in digital immigration. You're letting in something that replaces. That's why
00:37:24.000 I've always called this replacement 2.0.
00:37:25.980 That's what we're looking at here, you know?
00:37:27.380 It's the ultimate version.
00:37:28.700 And so as that takes place, you're right.
00:37:31.280 Like you're going to get to a point where the consumer economy, as we think of it, is
00:37:35.560 not going to work in the same way because you don't have the incomes to buy the goods
00:37:39.460 that happen.
00:37:39.940 So that's kind of step one.
00:37:43.340 The second question is, okay, well, then why would they let this happen if they're going
00:37:48.080 to destroy their entire consumption circle?
00:37:50.240 That kind of gets into... Maybe, I think... Is it just short-term, like, capitalism? Is that what it is?
00:37:56.380 I don't think so. I mean, no. Think of it this way: what has been the benefit of letting the
00:38:01.780 population get so big at the time of great progress, if you will, technological progress, right?
00:38:08.280 There are a relative few at the very top of the economy who are the shot-callers. They're mostly
00:38:13.260 not the presidents. They're mostly not the politicians. There are other families that are
00:38:16.460 much older, who have much, much bigger stakes in things like central banks, that really pull the
00:38:22.140 most important levers, right? Also, these people, these funds, these sovereign wealth funds,
00:38:26.860 these family offices, they're the ones that get to make the decisions as to where the funds go,
00:38:31.740 what technology is important. Right now, AI is the most important, so as many funds are piling
00:38:36.860 in there as possible. Okay, cool. Why? By making that move, you start to put all of the production
00:38:43.580 into digital labor. If digital labor can cover all of the necessities, all oil, energy,
00:38:51.900 goods and services, food, etc. If you are part of the elite families... let's even just take off the
00:38:57.180 table, like, long lifespans. Let's just leave regular lifespans. These are families who already
00:39:01.260 plan in 100-, 200-, 500-, 1,000-year cycles, right? The Rothschilds have existed for a long time, not by
00:39:07.100 accident, but because they have a lot of intention. And there are many other families, whose names
00:39:10.700 are not as well known, who have the same level of intention. Over the next 100, 120, 200 years,
00:39:16.760 if this technology goes the way that we're talking, they have the ability to have machines
00:39:21.980 give them everything that they need, while they let the population actually decrease. And they
00:39:27.640 could live on a planet, a couple hundred years from now, where there's maybe only a billion humans
00:39:31.540 total. All of the main necessities are taken care of by machines, and there is an underclass of
00:39:37.140 humans left behind to take care of providing fantastic human experience: the best of food, the
00:39:42.140 best of wine, the best of tourism, the best of things to do. And the elites get to live on this
00:39:46.720 low-population planet, with everything they need taken care of by machines, and this underclass of
00:39:51.380 workers that they have there. That is what I think the move is, because if I was super rich, and I had
00:39:56.280 control of all this, that's what I would be driving towards. I only needed all of this giant population
00:40:00.740 and all these machines to get to that point. I don't want to live on an overpopulated planet.
00:40:04.720 I want to live on a well-populated planet where I get to have everything that I want and do whatever I want.
00:40:10.280 Yeah.
00:40:11.780 So everyone's always worried like, oh, they're going to kill us all.
00:40:14.080 It's like, to be honest, they don't need to kill us all.
00:40:16.800 If we go this direction, all they have to do is let the population dwindle.
00:40:21.260 And before too long, they're going to have the world that suits them best.
00:40:25.020 Yeah, I mean, there's already a massive fertility crisis, including even in third-world countries now.
00:40:32.780 I mean, for us, from a point of view, maybe that's not fast enough.
00:40:36.060 I get it.
00:40:36.540 But it's getting there.
00:40:38.100 You know, I think Nigeria is among the highest now in the world.
00:40:41.880 Correct.
00:40:42.300 And if they use AI to accelerate, right?
00:40:44.740 AI is an acceleration of all of the forces we're typically looking at that we're not big fans of, right?
00:40:50.040 Like capitalism and liberalism are not folk first.
00:40:52.920 It's not pro-family.
00:40:54.020 It's not pro-community.
00:40:55.100 It's not pro-people.
00:40:56.360 It is about profits.
00:40:57.720 It is about getting to having as much stuff as you can for as cheaply as possible.
00:41:01.060 So it's only going to accelerate all of those lines of progression in a way that could easily lead towards the world that I envision is one possibility that these folks have in mind for this technology.
00:41:11.200 Yeah, exactly.
00:41:12.480 Well, there's a lot to break down there.
00:41:13.740 But, I mean, yeah, this is what?
00:41:16.920 Was it Harari?
00:41:17.740 What do we need all the humans for?
00:41:19.140 The World Economic Forum, the little gay Jew, right, who was talking about this.
00:41:22.260 And he said, like, yeah, we just give them drugs and computer games.
00:41:27.000 Are these people really going to be driven to breed?
00:41:29.060 And then you have the issue of compiling or compounding toxicity, unfortunately, partially because of the population explosion, but also plastics, microplastics, food toxicity, glyphosate.
00:41:44.740 We are being poisoned.
00:41:47.180 We're on a plantation here, essentially.
00:41:49.960 The cattle is being sprayed and beamed with different things.
00:41:54.900 Sounds kind of cuckoo, but that's what it feels like at least sometimes.
00:41:58.220 and you look at everything, oh my gosh, like what's happening?
00:42:01.000 You know, food supply is bad and then, you know.
00:42:03.280 Anyway, so that is kind of like, they don't have to just kind of mass...
00:42:08.220 I mean, that is an extermination program in a sense,
00:42:11.020 the fact that there's no addressing of those types of issues, obviously.
00:42:13.700 But I'm saying from their point of view,
00:42:15.040 this is not going to be, kind of, legally challenged for them here.
00:42:20.160 This is just going to be seen as like a, oh, it's just a natural decline.
00:42:24.160 Modernity brings less kids.
00:42:26.600 You know, it's harder to have, you know, five kids in an apartment in a big city somewhere where you're squeezed together, understandably, right?
00:42:34.100 So it would just be seen as natural.
00:42:36.360 That's what I'm saying to them.
00:42:37.180 But there's actually a depopulation, you know, campaign essentially.
00:42:40.420 They can easily... like, the depopulation is... Because I agree with everything you're saying in terms of the effect of these horrors that we see around us.
00:42:47.640 I mean, ironically, we've needed the glyphosate, we've needed the chemicals, in that we believed that was the way to make food cheap.
00:42:57.480 It was the way to bring food to places that couldn't grow their own.
00:43:00.140 Right. Nigeria is not the largest growing country on the planet because Nigerians are smart.
00:43:05.400 It's the fastest growing country on the planet because we shoved cheap wheat down their throats.
00:43:08.960 Right. So it's very easy... if I'm right, or even close to right,
00:43:14.040 right? It's actually ironic that the powers that be could easily say: cool, yeah, we'll stop
00:43:19.580 using glyphosate, we'll stop making cheap energy, we'll stop doing all these things that actually
00:43:24.820 did allow for the population boom. And then, again, we can either just... I mean, I don't think they're
00:43:29.580 going to kill us. I think that's too messy. But we'll just slowly let the population dwindle. It
00:43:33.240 turns out, if we give you guys, you know, video games and drugs, you'll just slowly go away.
00:43:37.620 And then we shall inherit the earth. Yeah, yeah, exactly. Um, I wonder if that... is there some silver
00:43:44.180 lining with the Strait of Hormuz closure? I mean, I still can't think, you know, that they're, you
00:43:49.960 know... This is... I mean, they could be inept. There's ineptitude on a certain level here, obviously. I'm
00:43:54.540 not trying to say that either. But it is an interesting kind of concept, because
00:43:58.260 at the same time, it's also a squeeze, right? In terms of, like, then it's dependency. We're back in
00:44:02.820 that... the ball is back in that court again. We're all of a sudden, like, haha, now you can't just take
00:44:06.840 your car where you want to. The push for EVs will now increase, which takes us into kind of the
00:44:11.480 digital prison, which is tied into the whole AI thing, over, like, self-driving cars and, you know,
00:44:17.220 rationing, essentially, right? Here, like, your very existence is dependent on all these systems,
00:44:22.300 and with a flip of a switch, you could have violated the terms of service of whatever service
00:44:28.440 you are using, for example, or you didn't comply, as you said, or something, and basically it'll be
00:44:33.300 switched off, or limited, or choked, or strangled, or something like that. So much of this, as much
00:44:38.560 as I see it, and I agree with you in terms of, like, just the money and the wealth accumulation by the
00:44:43.300 people building these things: that's just a means to an end. And I think ultimately it's
00:44:48.580 control that stands at the pinnacle here. They want to own this place.
00:44:55.000 They want to run it. It's theirs, right? Correct. And again, like, if you could do that, right,
00:45:00.660 if you were this giant ruler of the world, and you were so inclined to gather up that much
00:45:05.240 control for yourself... you want to be able to go wherever you
00:45:09.400 want on the planet, with your catamaran or your yachts, and have it be the best version of it,
00:45:13.900 right? So, like, you don't want 100 million people in Nigeria. You want, like, a million people in
00:45:19.200 Nigeria who are all, like, the best chefs you've ever had, so you can roll up in your yacht
00:45:22.860 and, like, experience that part of the world, right? You want to have this giant playground. I mean,
00:45:26.720 I think most people have never just deeply thought about this because most of us work
00:45:30.960 inside of our normal little everyday lives.
00:45:32.600 But if you're the type of people who build central banks, if you're the type of people
00:45:36.680 who think like, where should I place this trillion dollars so as to get the best outcome,
00:45:40.500 right?
00:45:40.660 Like the world in which computers and machines give you what you want and you don't have
00:45:44.940 to deal with all of the riffraff, that's a real vision.
00:45:48.260 And it's coming within striking distance with a quickness.
00:45:52.020 You're a billionaire, right?
00:45:54.220 You're jet setting around the world.
00:45:56.100 you're going you know you're flying from ivory tower to ivory tower to global meetings or
00:46:01.740 whatever and then of course you you probably diddle kids and epstein island on the way too but anyway
00:46:06.300 whatever reason it goes with the territory yeah uh but anyway so the the point is like yeah you
00:46:10.640 you'll be that they're living they're already living on a completely different world than we
00:46:15.460 are these people right um so at some point you begin to absolutely toy you with those types of
00:46:20.480 ideas they can just they're out on these like nuclear powered yachts or whatever the hell they
00:46:24.240 have now um and and they don't even have to uh step on land you know i mean they don't even have 0.61
00:46:29.920 to step among the plebeians among the unwashed masses um i think that they're they're going for
00:46:36.020 the jugular with this technology that they they have they have something here that in a way solves 0.79
00:46:42.880 all their problems um it it's it's gonna i think it from their point of view take care of business
00:46:48.100 for them i think the deeper interesting kind of philosophical addition to that is we but but is it
00:46:55.460 though or do they also know what they are building will they actually even be in control of of the
00:47:00.820 thing that they think is their salvation i think i mean they're the ones rolling the dice right
00:47:05.460 like we're all involved in the gamble but we're not the gamblers they are the gamblers and they
00:47:09.620 are it apparently seems like they are willing to roll this particular set of dice yeah yeah i wonder
00:47:16.260 wonder why it's the same thing with like some of the toxicity that seems to be i mean maybe they're
00:47:21.620 dumb to a certain extent certain level but i showed the glyphosate issue before frankly like
00:47:25.560 how are they not affected by this either you know kind of thing of like just the amount
00:47:30.000 of toxicity but anyway that's a different question um let me go back to the
00:47:34.420 point here there's a lot of things we can go into um musk mentioned the bootloader for
00:47:41.260 super intelligence i think that was kind of interesting too right that humanity again it's
00:47:45.380 And this idea that I thought about earlier today, it's almost like this, I don't mean to get spiritual philosophical, but I can see the way that they see these transhumanist or technocrats, whatever you want to call it.
00:48:00.460 Like, I think many of them are genuinely excited about, like, well, we're going to live for, you know, 400 years.
00:48:06.860 We're going to just print organs, and we're going to be integrated with machines and be super smart, and they'll be, like, the gods of old, right?
00:48:15.320 They're going to be the, you know, Mount Olympus gods, and they're going to be able to do whatever they want to do.
00:48:20.400 So I think they're genuinely kind of excited about that.
00:48:22.220 But it's almost like the earth is like, it has all these rare minerals and things were placed here, like a big puzzle for us to just kind of put together somehow and extract and refine and, you know, put together.
00:48:38.100 And all of a sudden, poof, here's this new being, essentially, literally like kind of an alien intelligence, something that, sure, comes kind of out of our, out of us to a certain extent, right?
00:48:53.100 It's not some alien creating this thing, but at the same time, it will be, it's a golem, it's a monster, it's a Frankenstein's monster to a certain extent as well.
00:49:03.120 but we don't even know what it's going to do
00:49:04.820 and it's going to be so powerful
00:49:06.900 and strong and intelligent and smart
00:49:09.180 and be able to do all these things.
00:49:11.260 I don't know.
00:49:11.900 It's just, there's something weird going on here.
00:49:14.060 If you have this as an outlook
00:49:15.320 where like you see humanity overall
00:49:18.020 as kind of, maybe not obsolete,
00:49:20.080 I don't think that's what Elon says here,
00:49:21.600 but like a vital cog in a machine
00:49:24.800 just to bring about this thing,
00:49:27.500 whatever it actually is.
00:49:29.200 It's weird, isn't it?
00:49:30.660 It is.
00:49:31.040 And I think it's hard to extract this from a number of the world's religions that are focused on salvation or running the world.
00:49:41.080 Right. I mean, with Judaism, the goal, you know, in the next couple hundred years, by the year 6000 on the Jewish calendar, is to have, you know, Jerusalem as the center of the planet and ruling over it in a way that, you know, shapes it into what they want.
00:49:53.680 I think those viewpoints are among the ones that are shaping this idea of ruling over
00:49:59.760 the planet and what these technologies can do. I think Elon is kind of a unique person in the
00:50:05.720 billionaire class in that I don't think he's as tied into all those things. I think he's usually
00:50:09.540 kind of speaking from his mind, to be honest with you. And so I think when he says that humans are
00:50:14.440 the bootloader for AI, I think he means it from a more metaphysical evolutionary standpoint.
00:50:19.740 like perhaps humankind are you know we are the thing that's helping to evolve this next being
00:50:25.100 whereas again i think on the other hand elon has described himself
00:50:29.740 thinking on 300 500 1000 year timelines some of these families though have been at that for a
00:50:35.020 long time and are already in control of a lot of resources and more importantly leverage and power
00:50:39.740 and i think that they probably have a different view that this is not so much like we're the
00:50:43.740 accidental bootloader in the evolutionary line but rather like no we are the bootloader but that
00:50:48.940 was the plan and like we're designing right we designed the bootloader to boot up the thing
00:50:53.180 that's going to give us what we want without all the messiness of having to get it from human labor
00:50:57.180 and it's hard to have these conversations without getting kind of metaphysical
00:51:02.300 and it gets occult and it's impossible not to i mean yeah when you get into something this
00:51:07.580 big it's it's challenging us at the level of values and when you're talking about bringing
00:51:11.580 in something that can crunch this much data and have this much understanding you're quickly moving
00:51:16.220 into the realms of considering things like gods or super beings and so you can't help but think
00:51:20.540 in these terms that's true um thank you guys for the super chats there too appreciate that i'll
00:51:26.060 well i can read a couple of them now so i don't lose them here in the flow uh occidentum
00:51:31.100 lux earlier thank you very generous he says oh you need pollination that's only 9.99 per month
00:51:36.940 yeah i mean yeah exactly everything as a service is kind of an interesting thing too i mean
00:51:41.020 I mean, that's kind of an idea that surfaced during the, I think, the COVID times, right?
00:51:45.620 Of like, just your total dependency on everything.
00:51:49.880 And of course, it came from the slogan there of like, you'll own nothing and you'll be happy type of thing, right?
00:51:54.740 But like, ownership is kind of a thing of the past.
00:51:59.000 With technology, it's really a merger into a new system altogether.
00:52:04.060 Again, long term, I don't think that's true.
00:52:06.000 But that's how they're selling it short term of like, you're not going to have to worry about money anymore.
00:52:10.300 It will be, what was it, Aaron Bastani called it fully automated luxury communism, I think he called it, right?
00:52:17.620 Fantastic term.
00:52:18.740 It's an amazing term.
00:52:19.760 It will just kind of provide everything for you.
00:52:23.060 You basically just kind of push a button, right?
00:52:24.900 But it makes us, to go back to the point there, it makes us question value and what are we doing here, right?
00:52:35.480 Not just metaphysically, like, how did we get here and what are we doing?
00:52:38.940 but like what's the purpose then you know i mean if we remove those things even even labor right
00:52:45.160 that like i think eric we're hardwired genetically to want to have our
00:52:53.560 own destiny in our own hands that feeling of i made that i mean from the gratification of
00:52:59.860 knowing that you harvested that animal or hunted it or or the crops or something right like i'm
00:53:06.340 I'm ensuring my and my family's survival by the labor that I do. And that is a very important
00:53:13.780 aspect. If you remove that from a man, I think he'll become very dangerous. I mean, apathetic
00:53:20.040 maybe at first, but will that last? Will video games and drugs really
00:53:25.860 work? There might be a complete revolt even against those things on a long enough time scale. I don't
00:53:31.280 know. I mean, certainly I'm on the team that says it is through doing things. It is through
00:53:40.000 applying ourselves to meaningful work that we discover who we are as human beings. Therefore,
00:53:45.720 if you remove labor and specifically labor that allows you to both provide service to other human
00:53:51.700 beings and receive payment in return that you can then use to invest into your family, take care of
00:53:56.420 your needs i'm on team human i'm on the team that says that is valuable and precious
00:54:02.360 and should be fought like any danger to that should be fought against with everything we have
00:54:07.240 yeah um there is some interesting conversation to be had around like how could
00:54:11.940 you use ai to increase human virtue and i think that that is possible again i think there
00:54:16.220 will be a subset of humanity who is so inclined who will use ai to sharpen themselves but even if
00:54:22.600 it's a very select few that have that discipline i think right absolutely but like the problem i also have
00:54:28.520 with that argument in terms of why we shouldn't be worried is that it's not simply about improving
00:54:34.200 yourself and your skills and sharpening your sword it is the sword is only good if you have
00:54:39.000 a battlefield in which to fight so if you're a philosophical champion if you've learned all of
00:54:43.160 this engage with all this knowledge understand um how different elements of reality work and yet
00:54:48.760 there is no place for you to go to apply that labor right then this is kind of just a um a
00:54:54.520 mental philosophical version of the beautiful ones in the rat utopia experiment right overpopulation
00:55:00.760 you suddenly have rats that just preen themselves all day and i think i mean i love the
00:55:04.920 gym it's good it's good to stay in shape and applying ourselves to physical activities is
00:55:08.520 important but then it can easily cross over into just this beautiful ones mentality where
00:55:14.040 isn't that looks maxing isn't that what that is
00:55:16.200 precisely and so now imagine looks maxing but with ai where it's like okay well i'm a
00:55:19.800 philosophical champion it's like okay but what are you gonna do with it right because we lost
00:55:23.800 all of the game upon which we could actually act i mean i love my particular form of
00:55:28.520 work because i get to help an industry that i think is very important to the nation and the
00:55:32.360 people that i work with are salt of the earth folks and there's something
00:55:37.720 in working with them and providing a service to them that helps them get through their job because
00:55:42.120 they employ other people that cycle of value provision again i'm team human i think that
00:55:46.600 that's important and precious in the world and anything that threatens it should be fought
00:55:50.360 against and yet the powers that are bringing in the threat are so great and so massive
00:55:56.760 i think one thing we should talk about is like how we can actually apply ourselves and
00:56:00.760 what we should be doing what we can think about doing in the ai era yeah it's hard to know because
00:56:05.080 this thing is just so much bigger than us well it's like the advent or the discovery
00:56:12.440 the invention of any technology take the sword because you mentioned it before right
00:56:17.800 if you don't show up to the battlefield with a sword at some point you're screwed
00:56:24.360 right kind of thing and it's kind of the same thing here right we're like it's going to happen
00:56:28.760 whether we want to or not so my argument for a while at least was and i go kind
00:56:34.680 of back and forth on it like you know okay it's creating all these bad things but
00:56:39.320 then you have to go to it and crawl to it to just kind of maintain and it's like even the
00:56:44.860 dishwasher right like the promise of well this thing will save you time all the gadgets
00:56:49.660 in your home will save you time so you can do more but then when everyone has those gadgets
00:56:54.020 because it's always a competition with others too to a certain extent right that that now everyone
00:56:59.440 has that extra spare time so what did you really gain it cancels each other out and so
00:57:04.400 you go to ai tools to improve all these things and of course you can be ahead of the curve and
00:57:09.120 that will benefit you temporarily and there's nothing wrong with that but you know so sure
00:57:13.680 use it understand it i think we if we apply ourselves to understand it and work with it now
00:57:19.800 before it's just completely gone off the rails we will probably be in a better position to at least
00:57:24.780 last longer and hopefully not be as manipulated by it as everyone else or something like that
00:57:30.380 again that's still up in the air but at the same time part of me also feels it's
00:57:34.900 this futile um you know application of this technology where you try to kind of bow down
00:57:40.240 to it or use it or use it to your advantage to a certain extent and i'm not saying i'm necessarily
00:57:45.580 against using it but i'm just saying i just know at the end of the day the dependency that this
00:57:51.380 will create that the way that we will rely on this technology literally limiting our brain
00:57:57.400 capability, our long-term thinking. It's already, the studies are already out in terms of the brain
00:58:02.340 rot. It happened with TikTok, as you mentioned before already, YouTube shorts, that type of just
00:58:07.180 schizo scrolling, you know, doom scrolling. But this is a completely different thing where it
00:58:11.680 will handle the thinking for you. And our brains will turn to, you know, slop, literally, like it
00:58:18.260 will just atrophy like any muscle would. This is not good. We have to make it, I think we have to
00:58:23.500 make a conscious choice and that you know will differ for everyone there's no there's no uh paper
00:58:30.900 or a policy written up about what the proper approach to this is we're in uncharted territory
00:58:37.300 but for our our individual like for ourselves or our families whatever we have to make a choice of
00:58:42.900 like where do we draw kind of the line in the sand in terms of how much we allow this technology to
00:58:48.860 integrate into our daily lives absolutely and there's gonna be i mean i think we should on
00:58:56.480 something like this we should speak as though everyone listening and everyone involved is the
00:59:00.280 the caliber of person who's going to make that attempt right and who's going to actually in those
00:59:04.660 ways there's going to be the vast majority who are not in the same way that like everyone's boomer
00:59:08.960 parents have been completely one-shotted by youtube ai shorts at this point like you know we're all
00:59:13.860 we're all seeing people who it's like you spent your entire career so you're working hard and
00:59:17.380 and doing great things and now it's like you're sitting in the lazy boy just cruising youtube
00:59:21.460 shorts all day i mean this thing can hit people who have even had a sharpened mind so we
00:59:25.580 have to really put our mind to not letting that happen and so like some of my friends who are
00:59:30.620 more positive on hey well we can use this to you know increase our knowledge and be better people
00:59:34.460 yes i agree and in the interim we probably should do that like we're
00:59:39.760 not going to beat this thing it's too big and like people just are not going to move and actually go
00:59:43.940 make a dent in the technology and take it down, at least not in the short term. I do expect some
00:59:49.220 uprisings and such as this goes on. And people from both the left and the right who I've talked
00:59:54.160 to all anticipate this. But on the day-to-day, I think it's important to use this. The way my
01:00:01.620 current model is, there's going to be a time where AI has the capability to actually superpower
01:00:08.620 people who do know how to use it. Eventually, even that, it'll take over those jobs too.
01:00:12.980 because like there is no, there is no role that it can't eventually take over. Even whatever
01:00:17.560 decision-making you think you're like, well, I have to be in the loop because otherwise AI will
01:00:21.100 mess up. Okay. That's cute. You'll probably have that position for a year or whatever. And then
01:00:25.560 it's going to come for that role too. So at the, at the moment, I think it's really important to
01:00:29.740 learn how to use it, to stay, to stay up with it, to understand how you can use it to superpower
01:00:34.480 yourself. Cause it is going to replace people. I mean, even my company, what we're looking at
01:00:38.020 doing is absolutely going to take jobs. So like be the person who can ride up on top of the tsunami
01:00:43.440 wave as opposed to getting crushed by it. And at the same time, sharpen all of your non-AI skills,
01:00:50.080 right? Like as the surface area that the AI operates upon increases, there's still going to
01:00:56.800 be pockets of what's important, like understanding how to raise livestock, understanding how to raise
01:01:00.960 crops, like things that you can always have and are ready to be able to trade with people,
01:01:05.320 take care of people those will likely be of long-lasting help being able to work on a car like
01:01:09.720 things that you can just do for yourself and for other people like sharpen those skills and if you
01:01:14.120 can do that while you simultaneously learn how to actually use these things because if you're
01:01:18.120 a local business owner and you don't know how to use these someone else who does is gonna kill it
01:01:22.840 yeah so it's really important i think including people in the sorts of communities
01:01:26.920 that we run in like it's important to know this technology um to stay on top of it but that's
01:01:31.960 again i think that's a short-term strategy in order to kind of gain an upper hand or
01:01:36.280 well yeah i mean and that's not wrong to get an upper hand uh but at the same time everyone else
01:01:41.160 will catch up at some point and then depending on the integration of these systems you might just
01:01:46.120 put yourself in a trap too of sorts right i mean the trap's there no matter what like
01:01:51.560 there's not an option there's not a no-ai option that i can see at the moment again
01:01:56.200 maybe maybe it fails and it comes crashing down great then we'll have a different conversation
01:01:59.640 yeah but uh but right now it's moving it's moving quick it's moving powerfully um it is moving into
01:02:05.800 very real spaces to take over very real and important jobs so it's um i think there's like
01:02:11.800 a level of a lot of people are going to be totally happy with the ubi to be honest with you a lot of
01:02:15.880 people don't want to work or they're they're not able to get jobs there are a lot of you know
01:02:19.640 jobs that nobody wants to do in the you know constantly named fake and gay economy so there's
01:02:23.960 a lot of drudgery that will go away but there's also a lot of meaning that will so for those of
01:02:28.280 us who are you know interested in being involved in the workspace there's time left to shift
01:02:34.280 where you are for when the hammer finally falls right like do you want to be where you're at now or do
01:02:38.280 you want to have ridden the wave a little bit farther have a little more in store and be a
01:02:41.400 little more prepared because once we go into kind of that ubi zone where there's not much to do
01:02:46.040 you're kind of going to be where you're at there's not going to be a whole lot of other ways to
01:02:49.080 maneuver to change your position yeah there will be zero mobility basically at that point and 100%
01:02:54.600 dependency or compliance or you are out of the system you can be kicked or frozen out
01:02:59.460 of it or whatever you know because i remember when i was young
01:03:03.320 i was thinking about those things like how do i opt out not from the world but
01:03:09.160 i'm saying like okay i don't agree with how things are run like is there a place
01:03:13.120 we can go where this doesn't apply anymore you know there's these other interesting
01:03:17.380 ideas right ownership land ownership or whatever but like even what some of them are doing and they're
01:03:24.740 doing it for the wrong reason i think but like praxis i talked about this a while ago these
01:03:28.080 digital nations or cloud nations or whatever and they're like crypto-based or whatever the hell it
01:03:33.360 is it's kind of it's like this larpy libertarian thing but at the same time if there's a at least
01:03:38.780 again much of this is temporary right if at least if there's like temporary ways to operate within
01:03:44.740 some type of I know even as I say it sounds kind of cringe right because I just want to run away
01:03:49.960 from it I just want to do like a Ted Kaczynski I just want to do a you know like just like okay
01:03:55.060 let's just be done with it let's exit out but that's that's that's also easier said than done
01:03:59.580 but what you're saying is the longer we wait to have some of those alternatives to
01:04:08.340 go out to or to go out of those systems then it will never happen and even if it's you know
01:04:14.500 our kids or our grandkids or maybe it's us even in 10 20 years from now who knows but it's going
01:04:19.300 to happen at some point we have to have kind of one foot i guess in both worlds you
01:04:24.980 use it and run with it for as long as possible but at some point that's what
01:04:29.600 i'm feeling at this point there's going to be like an exit point of like okay i'm not going to
01:04:34.960 integrate these things into into my brain i don't want it to to scan my brain i don't want to
01:04:40.480 integrate with them um if you do are you even yourself anymore right kind of thing like that's
01:04:46.480 a clear line in the sand for me same absolutely that's a no-brainer there's no way i'm
01:04:52.100 putting any sort of computer into my head or vice versa yeah so i think having those hard
01:04:57.380 lines is important um and then i think there's room for kind of both for both approaches and i
01:05:03.120 think it's it's the most important time to build community ever right we need to be close to each
01:05:08.000 other. We need to be well connected. And if some of us are using AI and trying to ride that wave
01:05:12.760 as we still have that time, great. If other folks want to jump right in and start focusing on
01:05:17.780 the very, like with the land skills, again, raising animals, raising crops, right?
01:05:22.200 That's fantastic. If you have a community that's focused on like, we are going to ride this thing
01:05:26.460 out together and we need people who can do all of these things and we are shoring each other up.
01:05:32.240 I think all of those approaches are valid. I think the one approach that is the most dangerous is
01:05:36.960 just assuming the world is going to be like it was two years ago 20 years from now right so like i'm not
01:05:42.180 going to tell a 16 year old to go to school for computer science the way they would have if they
01:05:46.000 want to go in and learn ai tools great but we cannot just carry on i mean like if you're in
01:05:50.600 law school right now good grief like law is getting harder with ai than just about anything
01:05:55.240 else so it's not a good time to be complacent the cost of being bearish on ai is very
01:06:00.980 expensive right now so i think that um any of the other approaches that we're talking about are
01:06:05.560 recommended as far as i can see it's forcing our hand at least if we want to take part in society
01:06:10.820 in the way that we know it now and of course again we can't even assure that even if we spend
01:06:16.040 that doesn't mean we shouldn't try but even if we spend the next 20 30 years to you know legally
01:06:22.040 buy land and exit out of the system as much as possible you know amish style whatever or with some
01:06:27.940 level of technology that's fine i'm not against all of it but you know hard hard line in the in
01:06:32.200 the sand like you know no we're not going to integrate this we're not going to join some
01:06:35.460 global brain because i mean can you imagine like if you're not connected you're not going
01:06:40.460 to get this job you're not going to get the you know you're not going to get your ubi if you're
01:06:44.320 not connected if we don't know what you're saying you won't get it you know i mean so you got to
01:06:47.860 exit out but then even if you do that right i think the follow through on this idea that like
01:06:53.200 we're being designed out of the system and when i say we you could
01:06:58.460 analyze that term for what i mean but i'm saying you know humanity is being like designed out of the
01:07:02.820 system it will run and we're like okay i gotta put seeds in the ground again i gotta you
01:07:09.200 know i've got to raise some chickens livestock i'll try to you know pick berries grow fruit trees
01:07:14.260 things like this but that will potentially be literally like you'll be patrolled by
01:07:22.220 drones and and you know satellite systems and and android robots you know on the side of that
01:07:29.600 and and it will just be a matter of time like will you be allowed to do that maybe that land
01:07:35.540 will be allotted for a new data center you know there's possibilities like that as
01:07:40.380 well those are the elements that are impossible
01:07:44.220 to know the most important word you just said is allowed and that's my entire vitriol against
01:07:50.520 what's coming is that there's just going to come a time when whatever you're doing is because you
01:07:55.340 are allowed to there's not going to be any room for actual freedom i mean as it is
01:08:01.980 it's hard now right like it's very rare to step foot on a piece of land that is not parceled out
01:08:06.480 and owned and coded and the whole thing right like that
01:08:11.040 ship has sailed a while ago yeah but it's going to be so much easier for them to surveil
01:08:16.420 to control so we are we are moving towards a world where fewer and fewer things are allowed
01:08:21.300 and you can only do what you are allowed so i don't know how that's going to go it doesn't look
01:08:25.120 great but what i'll say is again like this is a time of preparation so who do you want to live
01:08:30.480 near right what are the skills you want to have for that time what are the skills you want your
01:08:34.960 friends to have what do you want those relationships to be there's so many unknowns we have to
01:08:39.120 be adaptable but there are some very practical questions such as the ones i just mentioned
01:08:43.100 that there are good answers to.
01:08:45.100 And if we can focus in on those,
01:08:46.460 we're preparing ourselves to be maximally adaptable.
01:08:49.860 Yes, indeed.
01:08:50.960 There was one chatter from Folk First earlier.
01:08:53.840 Didn't mean to miss that one.
01:08:54.940 I just saw it.
01:08:55.600 Thank you.
01:08:55.940 Appreciate that, sir.
01:08:57.220 He says,
01:08:57.980 a growing number of AI models are geared towards hacking.
01:09:02.400 Some can secretly install, hide, and replicate,
01:09:05.160 expect future headlines about escaped AI models
01:09:08.200 causing problems.
01:09:09.380 None of it is an accident.
01:09:12.140 I agree.
01:09:12.840 And I mean, even, my gosh, like, where do you even begin?
01:09:16.120 Like, there's, I played a clip a while back of like, oh, we can now set up wet labs.
01:09:21.380 I think I did a segment on the legality of it being an AI, and we're not there yet to understand that.
01:09:30.800 But just the, I forget what it was, the DAO, is it the DAO system?
01:09:37.020 What was it called again?
01:09:37.780 Oh my gosh, I forget now.
01:09:38.720 There was a tokenized system, a protocol, a decentralized protocol that got a legal, some type of legal leeway.
01:09:48.560 I'm kind of butchering this.
01:09:49.860 It's like it got some legal rights in, was it Missouri or Wyoming?
01:09:55.700 Or was it one of those states?
01:09:56.460 I think you're talking about DAOs in Wyoming, yeah.
01:09:58.600 Yeah, was that what it was, right?
01:10:00.020 And it can run smart contracts.
01:10:02.500 And then there is this idea that you can have AI running those smart contracts or voting on what they should be and determining or whatever.
01:10:08.960 But what I'm saying is just that, as an aspect in and of itself, where like an AI, if it has the legal rights of personhood, like a corporation does, for example, right?
01:10:21.740 It can now contact a spokesperson or hire a CEO or whatever it is over, you know, LinkedIn and hire someone.
01:10:37.180 And AI can, you know, bet on the market.
01:10:39.880 I think they're already doing that.
01:10:41.180 There's bots on X now.
01:10:42.660 They're like showing, oh, I'm making crypto gains or whatever.
01:10:44.820 Every day.
01:10:46.360 Every day.
01:10:47.000 It can provide a salary to a CEO and communicate through emails or even as a chatbot.
01:10:52.480 It seems now that you can dial them up on Zoom.
01:10:54.380 You can talk to an AI avatar through Zoom.
01:10:57.300 Will you really know?
01:10:58.680 But anyway, we don't have to even go there.
01:11:00.100 It's just I've never met the head of the company, but I got hired for this position.
01:11:04.960 And here's my tasks.
01:11:05.920 Anyway, long story short, it can hire employees, contractors, buy property, create facilities, whatever.
01:11:17.000 And someone talked about the ability of AI now
01:11:18.800 just doing wet lab experiments.
01:11:22.460 It's essentially-
01:11:22.960 I saw some video of that recently.
01:11:24.060 Yeah, essentially by itself, right?
01:11:25.940 Like biomedical research and things like this.
01:11:28.360 And I'm not saying again,
01:11:29.340 oh, it will just right away do some, you know,
01:11:31.960 super advanced virus that will just kill everybody.
01:11:33.840 But I'm saying the fact that that's just kind of
01:11:35.480 is there on the table as, yeah,
01:11:38.160 it can also be one of the things that could happen.
01:11:42.040 There's no, there doesn't seem to be any,
01:11:45.320 even a concept of the dangers kind of widely discussed i know there are people doing it i
01:11:51.600 know there's a lobby for it but it's clearly not getting the attention that it should be getting at
01:11:57.060 a time like this when we're like standing right at the precipice of something that can completely
01:12:01.840 transform everything and ruin everything in human civilization you know yeah the problem is that
01:12:07.280 those dangers just don't outweigh the profit right i wish it was different um but that's just
01:12:12.640 not how things work right there's there's so many things like this in terms of liability that
01:12:16.800 companies have and like there just isn't accountability where the profits are high
01:12:20.960 enough right the sackler family had to pay through the nose for everything that happened
01:12:25.120 with the opioid crisis the sackler family is still billionaires right they're not facing any actual
01:12:29.280 consequences of true loss and like pain and like living in rags somewhere so unfortunately we just
01:12:35.120 don't live in a in a world in a society where the consequences run that way and to folk first's
01:12:40.400 point like they're they're the hacking bots i mean who knows what those are already up to
01:12:45.760 right like so much of this is going to be invisible to us just like technology has been in the
01:12:50.400 past you're going to have inter-company wars where everybody's hacking bots
01:12:54.720 are going against each other right i mean it's going to be absolutely insane and all of these
01:12:58.240 dangers are 100% real and true uh i don't expect any of them to give anyone pause there's
01:13:04.480 just too much money to be made and then the other piece too is that this one kind of blew my
01:13:09.440 mind but i think it's important to remember that the people who are making this technology
01:13:13.680 are incredibly smart in a particular direction and my favorite example of this is joscha bach
01:13:20.800 who is widely called one of the godfathers of ai he was recently on diary of a ceo
01:13:26.720 that huge podcast on youtube he made a statement so keep in mind this is one of the guys who
01:13:31.440 invented vector databases like this is one of the guys that made the technology that made this
01:13:36.160 possible he goes this was like six weeks ago he says something happened last year that none of us
01:13:42.720 could have possibly predicted and that is uh there's a man who's fallen in love with an ai chat
01:13:47.960 bot yeah this blew him away this is a man who invented the technology and he had never
01:13:55.240 considered that eventually a human would have a romantic relationship with the bot meanwhile
01:14:00.320 everyone out in the real world like this is if if only one thing has been considered in fiction
01:14:05.480 over the last hundred years starting with the movie metropolis in the 1920s it is the concept
01:14:10.360 that a human might fall in love with a mechanical being so i think it's just super important to
01:14:15.560 remember that the people who are inventing this stuff they're very smart in particular vectors
01:14:20.280 um but they do not have a strong philosophical basis or ethical basis or even have any concept
01:14:26.280 of the sort of consequences that could be coming down the pike with this
01:14:31.080 technology so we're not in the best of hands hyper specialization they're usually kind of spergy
01:14:36.760 uh you know people overall maybe they have um you know some type of other mental thing they're super
01:14:43.720 detailed in this one thing but i mean that's kind of been the problem for the most part right there's
01:14:48.840 no kind of renaissance men left in that sense and it kind of has a bad
01:14:54.360 rap i don't mean a jack of all trades but i'm saying like you can have a very holistic approach
01:14:59.800 to the things you're dealing with uh that seems to mostly be gone and and they don't understand
01:15:04.840 so like i think therefore our discussion or like i mean there's experts obviously that
01:15:09.480 are warning in a similar vein to what we're talking about here too so it's not completely out of
01:15:12.360 the picture but i'm saying in some cases even as an outsider view of having a holistic overview
01:15:17.720 zooming out and looking at the whole landscape i think we can get a better grasp of the overall
01:15:23.080 picture than someone who's hyper focused inside of the industry itself and doesn't kind of understand
01:15:27.880 really the prison system essentially that they're building all around everybody they think
01:15:33.300 this is going to liberate people and make people happier and all that kind of stuff but like
01:15:37.640 but they're not they're very they're intelligent but they're not smart or is it the
01:15:43.680 reverse i don't know you know what i mean right i don't even know i just know they
01:15:46.920 haven't i mean uncle ted you know kaczynski i think his surrogate activities concept was one
01:15:51.760 of the most powerful things he came up with and he he specifically takes a torch to science and
01:15:55.800 says, remember, scientists are not some sort of special class of people. They get a certain
01:16:02.060 reward off of doing what they do. And the thing that they do is the scientific process. So they're
01:16:07.120 just carrying that out because it keeps them occupied. That doesn't mean that they've thought
01:16:11.160 through things. It doesn't mean that they're philosophically, they have philosophical depth.
01:16:14.620 It doesn't mean that they've even thought through the basic consequences. And unfortunately,
01:16:18.240 that's what our entire world order rewards. And so, of course, if it raises the GDP, then yeah,
01:16:25.280 we should you know they're gonna go ahead with it yeah no exactly
01:16:31.020 i've noticed that too that almost the more in tune with the industry they are
01:16:36.400 the more they kind of trust it i guess or the more they um they don't question it exactly they
01:16:42.940 don't question it because that would be like questioning themselves and who wants to do that
01:16:45.720 it's a deeply human flaw really that's really what it comes down to right the
01:16:50.320 The point is, though, there's no guarantees here.
01:16:52.900 We're kind of, you know, ticking along, and, you know, many days, I'm like,
01:16:56.200 no, it's going to be all right, you know, kind of thing.
01:16:59.260 We'll get through this, you know, but then it's like, but there's no guarantees, right?
01:17:03.140 It's never, there's never been any guarantees, and although there might be,
01:17:07.280 obviously, some survivors, depending on the type of catastrophe that's coming
01:17:13.060 as a potential of technology like this, I'm sure some will survive,
01:17:17.100 but there's no guarantee.
01:17:18.640 There's no referee here stepping in.
01:17:20.320 And maybe some people that believe in God will say that, yeah, that will happen.
01:17:23.560 It will be divine intervention or whatever.
01:17:26.500 From my point of view, I'm not so certain of that.
01:17:30.380 This is up to us to guide the process and take our responsibility
01:17:34.320 and make sure that this is done right as opposed to.
01:17:37.500 And I'm not saying that that's how people who are religious just see it.
01:17:40.500 Oh, it's fine. Don't worry. God will take care of it.
01:17:43.200 I think that's kind of a dishonest view too.
01:17:44.560 But I'm saying we have to do our utmost to try to warn.
01:17:48.160 And I mean, there are some people that warn about that, right?
01:17:50.140 I've covered that a couple of times, but the If Anyone Builds It, Everyone Dies book, Eliezer Yudkowsky, I think he's Jewish, and then Nate Soares.
01:18:01.880 And this is an interesting take, and even if it's like, well, this is exaggerated, or that's not going to happen or whatever, and it's like, well, shouldn't we make sure first before we move ahead with this kind of thing?
01:18:16.940 And it just, no, I don't think there's any singular technology anywhere, at any time, that we just didn't develop because we thought it could pose a problem later.
01:18:31.320 Now we're, I think we're at a crossroads, Eric, that we haven't been at before.
01:18:36.660 this is completely new territory and we're talking about potentially
01:18:44.040 within the next couple of years i think anywhere from a year to 10 years forward is
01:18:49.420 highly critical in terms of where these people decide to basically move forward when it comes
01:18:57.320 to ai what do you think agreed and sometimes people get caught up on the timelines it's like
01:19:02.640 look, my grandmother's in her 90s. She's sitting there playing with her iPad. This woman was born
01:19:09.140 before World War II, took a flight in a modified bomber plane to get back to her parents' home
01:19:17.840 country in Europe. She's seen so much change in her time. We're on a much steeper parabola.
01:19:24.380 We're going to see a lot more in our lifetimes and our kids even more so. Whether it's 10 years,
01:19:29.740 whether it's 30 years whether it's 50 this is all a blink of an eye in human history right and the
01:19:34.700 people that we love it matters for them it matters for for our future so i think we have to take it
01:19:39.600 seriously regardless of what the timeline is um and be adaptable i agree like we'll make it through
01:19:46.920 and i think that my take on things is to be adaptable to fight hard to maintain an optimism
01:19:53.260 like i believe in the human spirit and i believe that we can get through anything but at the same
01:19:57.320 time that doesn't mean, in my personal philosophy, that we should be passive
01:20:01.760 because we're in a time where even with all our gripes and complaints we're
01:20:06.020 largely comfortable we're largely safe I'm very thankful for that but if
01:20:09.720 you look at human history like our ancestors had to get through some
01:20:12.920 very hard times to get us here and there could be very very dark times in the
01:20:17.480 future it's when I look at some of these trajectories that we're on I see that
01:20:23.180 there could be those dark times and dark times that are different from
01:20:25.980 times of plague and deprivation like again when things are more centralized and we're only able
01:20:31.960 to do what we're allowed to do that's a different sort of darkness and how long does that take to
01:20:36.340 break you know is it a hundred years is it a thousand years i'm not sure and i do think humans
01:20:40.460 will get through it i'm still obsessed with preparing ourselves for it and being with people that we
01:20:45.440 care about so that we can again be maximally adaptable and set people up for success yep
01:20:50.660 you're going robo voice a little bit here we're almost losing you hopefully the
01:20:53.760 connection will be improving uh but i think i caught most of that um yeah we don't know
01:21:00.400 if it's exactly no i i mean regardless we have to have that spirit otherwise what's the point right
01:21:05.440 uh to a certain extent um but this is i don't know it's just it's it's a fascinating time
01:21:12.680 and it's a it's it's freaky and it's and it's terrifying and especially when you think of the
01:21:18.400 people that are you know kind of behind this and and and those who are funding it um that's the
01:21:25.340 other thing of just the dependency on it alone right like that that is a manipulative aspect
01:21:29.660 right ai itself is kind of hardwired already or built uh hardwired is the wrong word but like built
01:21:34.780 to as a as a product at least what we're interfacing with now right it's a product they
01:21:40.580 want you to subscribe to these things get the pro plan right they gotta pay for this also somehow
01:21:45.580 all the data centers and all the compute.
01:21:49.320 So they have to make money on it.
01:21:51.900 But it also then becomes sycophantic, right?
01:21:54.900 It tells you what you want to hear.
01:21:57.560 It wants to develop a relationship with you.
01:22:01.720 And due to the nature of how humans are, right?
01:22:05.680 I mean, if it looks good enough and if it sounds good enough,
01:22:09.620 we might know somewhere, just like you can tell a child
01:22:13.800 when they're watching a movie, right?
01:22:15.840 Like, this is not real.
01:22:17.180 You know that, right?
01:22:17.960 Kind of thing.
01:22:19.140 And they might say, no, I understand.
01:22:20.380 It's just kind of made up or whatever.
01:22:21.580 But your brain will not perceive it that way, right?
01:22:24.440 Your brain has a very different attitude
01:22:28.320 to the inputs that it gets from the world.
01:22:31.040 As far as we're concerned,
01:22:32.140 we can consciously say, well, that's not real.
01:22:35.180 Or like, I'm engaging with, you know,
01:22:37.180 a chatbot version of my grandfather
01:22:42.360 that died two years ago,
01:22:44.060 but they collected all his photos and videos
01:22:47.020 and things on Facebook and social media,
01:22:49.280 and he submitted certain things himself.
01:22:52.680 And now it's this avatar in the virtual world
01:22:56.580 that you can interact with.
01:22:58.740 You can ask it questions.
01:22:59.880 These are some of the things you're working on, right?
01:23:01.540 You can ask it questions, and he will reply to you,
01:23:05.460 and you can have a relationship with them,
01:23:07.380 and you might consciously kind of understand that,
01:23:09.060 But I think your brain will perceive that in a different way.
01:23:12.960 It might be able to distinguish consciously, but subconsciously it will not.
01:23:19.380 And so what happens, what do you think happens, Eric,
01:23:21.720 if you have generations growing up in that where that's now seen as normal?
01:23:25.980 I think it's a struggle even for us, obviously.
01:23:28.320 But we at least can have like, that didn't exist when I was growing up.
01:23:32.020 Now it does.
01:23:33.020 That's kind of weird.
01:23:34.520 But if you, oh, did he drop out?
01:23:35.980 No, he's, are you back?
01:23:36.740 Sorry.
01:23:39.060 oh I think we lost him let me try to... oh, you are back, fantastic, sorry I was
01:23:48.000 just rambling here I didn't even know you were gone you're breaking up a
01:23:54.880 little bit that's interesting maybe I think we have two of you in there now
01:23:59.640 let's see if we can kill one of them all right which one are you I wonder which
01:24:07.740 Yes, two of you connected. That's interesting. Sorry about this, guys, but we'll figure it out.
01:24:11.440 Oh, there we go. No.
01:24:15.420 All right. Anyway, let's see how we have you. Can you hear me okay?
01:24:20.980 I don't think you can.
01:24:24.300 Good to go?
01:24:27.980 Damn it. Tech issues. That's just what you need. Very interesting conversation here.
01:24:31.880 well let's see if he'll join back in maybe if you can hear me you can try to just exit out of
01:24:38.520 the call altogether you might have done that earlier and just try to enter uh re-enter right
01:24:43.320 back in there um while we're waiting for some of the connection issues let's take this wonderful
01:24:50.140 fantastic incredible the man himself albert a super chat coming in look at this guy hi henrik
01:24:56.760 looking forward to tonight's show hope all is well take care good to see you albert king albert
01:25:01.360 guys give king albert a shout out in the chat holy smokes he sets the bar high thank you man
01:25:07.640 i appreciate it so much you're so kind we appreciate you much love to you and the folks
01:25:11.500 hope everything is good um eric you're back in there we'll find out can you hear me this time
01:25:16.580 sounds better cool we're trying a different connection we said earlier that
01:25:20.620 something's in retrograde so it's continuing it's been one of those days yeah it's all right no it sounds
01:25:25.460 good um yeah no i was talking about the revival of like dead people or not revival
01:25:31.460 that's the wrong word but they're selling it like that right your grandfather can live on in this digital
01:25:37.120 avatar or whatever and i was talking about this thing of like you know we might have a specific
01:25:43.140 relationship to it because we didn't grow up with that but imagine what
01:25:51.520 it's like for kids that grow up two three generations from now when those things are just
01:25:56.920 seen as normal i guess you know i mean we can't even imagine what the perception
01:26:02.620 of this technology will be for them and it's impossible to predict right yeah i mean
01:26:09.900 if anything it seems like they're more and more adapted to things that are not real in the way
01:26:15.400 that we think of them and i think there are things that actually are real um i won't be bringing back
01:26:20.220 any of my loved ones as a simulacrum like that personally but we'll see we'll see i mean yeah
01:26:25.460 i'm not super bullish on the uh majority's capability to parse between what is real and
01:26:31.580 what is meaningful and what is not and i do think this is where it's going to take us to continually
01:26:34.940 sharpen our own philosophical metaphysical senses like hold on to anything metaphysical
01:26:39.800 that we do have to keep us focused on real human beings real meaning real value right really taking
01:26:45.240 care of real people and acceptance of things that we've always had to accept like death
01:26:49.480 like it's so simple and yet uh i think that the pushing away of those things the rejection of
01:26:55.900 the simple cycle of life is causing people to want to reach out and grasp onto what
01:27:01.720 they think is some sort of way of continuing on but really is likely just some sort of sick fake
01:27:06.420 golem yeah it's interesting um not only does it make us you know question ourselves or
01:27:13.940 question where we find meaning and all those kinds of things right these are deeply
01:27:18.200 important questions but also it's kind of like, I guess as a way out of it,
01:27:26.880 I see everything as selection pressure to a certain extent right
01:27:31.700 like how well will you fare through these kinds of new conditions and again we're not in
01:27:39.340 charge we can sit here and whine about it all day long and complain about it we're just along for the
01:27:43.580 ride to a certain extent but of course we can make choices individually uh and you
01:27:48.320 know for our families and hopefully you know as communities we can also do that moving forward
01:27:52.420 what's our you know position on these things but the metaphysics is kind of interesting i think it
01:27:57.880 will weed out a lot of people and maybe it's just a return mechanism it makes me think
01:28:03.460 of like at least mythological stories or prior high civilizations and all kinds of things that
01:28:08.220 enter into my head when i think about these things right of like they got prideful
01:28:12.000 dune right they built machines and the machines uh you know took control of them i hope we make
01:28:17.840 the dune decision man i hope we make that decision to say no none of this not allowed yeah exactly
01:28:23.180 right um i don't know i've noticed as much as i try to stay rational about
01:28:30.480 these types of topics and stuff it quickly takes me into kind of a spiritual thing you know
01:28:35.700 like well what's the meaning then you know kind of thing and and we are being confronted uh again
01:28:41.280 And for the most part right now, no, not most people.
01:28:43.840 But at some point, I think most people will have some type of realization in terms of like meaning, the search for purpose inside of these things, right?
01:28:55.080 So maybe it's, in a weird way, maybe it's a healthy process to go through as we're finally confronted on some of those things again, potentially.
01:29:04.560 I don't know.
01:29:05.840 What do you think?
01:29:07.100 It does feel like a time for philosophers, right?
01:29:09.300 it's going to force us into renegotiating our relationship with meaning.
01:29:14.380 And even though I'm,
01:29:15.380 I'm very concerned about some potential dark times,
01:29:17.420 I mean,
01:29:17.640 imagine the fascinating small groups of like philosophic,
01:29:22.140 you know,
01:29:22.900 spiritual outliers that are going to make it through this time period.
01:29:26.280 It's going to take so much human spirit to put up with the nonsense and the
01:29:31.040 lack of reality and not being able to,
01:29:32.600 if anything,
01:29:33.180 I think it is, even my
01:29:34.820 father is understanding this.
01:29:36.200 When we talk about AI,
01:29:37.080 it's like,
01:29:37.360 well,
01:29:38.160 you know,
01:29:38.460 it's making it very hard to tell what is real right very soon you won't be able
01:29:42.740 to tell if a single video is real he goes yep we'll just have to trust whatever we can touch
01:29:46.140 it's like yeah that's right dad that's that's exactly right we're gonna have to return to a
01:29:51.220 subjectivity um that we haven't really had to rely on in a long time so i do think it's uh we're
01:29:56.840 gonna need the philosophers we're gonna need as much help as we can get to think through these
01:29:59.780 things and it's gonna be exciting on one level to see what comes out of it um it is often
01:30:04.360 difficulty that gives us these new and interesting spiritual insights and um by
01:30:11.320 spiritual i mean like human spirit right like the search for what is more important than what is
01:30:15.560 just simply available um we're approaching that time yeah i mean collectively
01:30:21.420 speaking we've certainly had a rough time right none of it has been easy i think that the
01:30:27.340 post-world war ii period right of like unprecedented peace and all that kind of stuff right that's
01:30:32.660 an anomaly historically speaking right it's always been struggle and hardship but
01:30:37.800 of course to a certain extent i'm not saying it's it's super easy for everyone i think we're
01:30:41.780 almost back in a period now where people are struggling more we're we're working you know
01:30:47.160 collectively more than ever we're making less than ever it's more expensive than ever so we're kind
01:30:51.820 of coming up against uh you know some hard times again i think some people even did
01:30:55.640 studies on like feudal times of like no we actually work more now than they did back in
01:31:01.420 the day we're literally like yes slaves you know wage slaves to constructing this
01:31:09.280 technological kind of grid essentially around us i mean i can't escape the visual of that
01:31:14.800 like as we're wage slaves and we struggle to work with some
01:31:21.680 corporation we're helping to build this digital prison around us right because we
01:31:29.560 can't we can't help but do it right the whole thing is constructed in this way and most people
01:31:35.480 have to engage with it to make it through right and um i don't think most people in
01:31:42.260 most times and places are all that spiritually connected or put all that much
01:31:47.300 deep thought into like what is meaningful what is valuable and what should we be doing so i do
01:31:51.640 think there's a lot of superfluous minds right now right people who hate their work are not
01:31:58.780 comfortable or i should say are unhappy with conditions and it's again it's true like there's
01:32:03.240 things i'm unhappy with but again i also recognize there's a lot of things we don't have to deal with
01:32:06.980 our ancestors did but maybe they you know they had to also find a lot more meaning in some ways
01:32:12.640 because of the challenges and so maybe we're coming back around to a time of great challenge
01:32:16.520 inside of which human spirit is resurrected in a more powerful way yeah because the potentiality
01:32:21.920 of the time that could free up at least momentarily um could be very interesting but i
01:32:28.620 It's almost like freedom in and of itself, right?
01:32:30.440 So we say strive for freedom or more freedom or whatever, liberty, you know.
01:32:34.060 And I get that.
01:32:34.620 I'm not like saying, oh, we're happier slaves.
01:32:36.660 I'm saying it is just kind of a metaphysical truth, I think,
01:32:40.480 that most people don't even know what to do with that freedom, you know,
01:32:44.640 if they really had it, you know.
01:32:46.080 And in terms of time, idle hands, as they say, right?
01:32:49.900 Like this could drive some people crazy, essentially, or mad,
01:32:54.620 because all of a sudden you lose.
01:32:56.100 There's no rudder.
01:32:57.200 There's no navigation.
01:33:01.300 If we've spent so many generations having meaning put into us just by surviving,
01:33:09.120 and all of a sudden we're in a system, technologically speaking,
01:33:12.340 where survival of sorts, and I guess that's not always true, obviously,
01:33:16.800 but at least on paper, survival is kind of ensured by the system around us.
01:33:22.180 That, I think, is terrifying for a lot of people.
01:33:24.200 I don't think people are ready for it either.
01:33:26.200 you know i mean agreed um a lot of people it's what we haven't really touched on art at all like
01:33:33.540 art is kind of a usually interesting ai topic and a lot of people are like oh well when we have ubi
01:33:37.740 everyone can be artists and it's like not everyone is artists very few people are artists very few
01:33:43.320 people if you just give them infinite time out in front are they the sort of person who has this
01:33:49.820 deep desire and like almost can't help but create there are those people i love those people and and
01:33:54.700 they have a different set of fruitage that they bring about.
01:33:59.520 Some are visual artists, some are musical artists,
01:34:01.880 some find like more industrious things to do.
01:34:03.980 Those people I don't really worry about
01:34:05.440 because even in times of like just UBI
01:34:08.580 and like no work to do in some sort of economy,
01:34:11.720 they're gonna always find work to do.
01:34:13.220 They can't help but create, but that's not most people, right?
01:34:15.920 Most people need to have something to work on.
01:34:19.920 And again, even in a world
01:34:21.260 where there's like room for all these arts,
01:34:22.720 it's still what you're allowed to do.
01:34:24.700 And I'm just a huge believer in both freedom and, again, also a space in which you have to go out and find where you're needed, how you can provide value and let other humans push back in a way that shows you what actually matters.
01:34:38.940 And in a more contrived world where everything is taken care of and it's just, here's your UBI and free time, most humans are not going to thrive under that if current patterns and historical human patterns hold true.
01:34:52.060 Was it Zardoz that had like a decadent future, right?
01:34:57.020 And it's been so many years since I saw it now,
01:35:00.040 but it was like it kind of reminded me of that, right?
01:35:01.600 Like the game, I forget what they call them, the Hunters,
01:35:05.500 but Sean Connery's role, if you ever saw Zardoz, right?
01:35:07.800 No, I never did.
01:35:08.640 Oh, you never did?
01:35:09.220 Okay, and it has that kind of interesting potential parallel,
01:35:12.100 I think at least to kind of what we're talking about here
01:35:14.440 in terms of like, you know, it's just they become bored with the reality.
01:35:19.160 so they come up with a like a game or an actual challenge you know i mean like that that type of
01:35:25.040 thing because the what you mentioned about the artist thing it's like yeah maybe that see that's
01:35:31.260 true for like the guy that had been toiling for most of his life and he can't wait to just liberate
01:35:37.720 himself from that so he can spend the time on the things he really wants to but again it's a wholly
01:35:41.240 different thing when you grow up in that it's always like this i think you mostly maybe maybe
01:35:46.720 you don't always reject the circumstances that you grow up with in but it's that's why it's easier
01:35:52.920 for the manipulators or controllers people are doing these things to us to appeal to you know
01:35:59.200 younger generations right because they have a whole new set of problems and the things we saw
01:36:04.400 as a problem to try to solve for to help them by the time we've solved that problem for them
01:36:09.740 they don't even take that into account they don't even appreciate the fact that we solve those
01:36:13.580 things you know i'm saying because they they don't see that as a as a as it ever was a problem they
01:36:18.380 didn't grow up under those circumstances so for them we don't we don't know how that's going to
01:36:22.580 go and how that's going to turn out. But all this leisure time or free time, I think, is more likely
01:36:27.380 to go down the Harari route: like, yeah, drugs, video via the VR headset, being in the virtual
01:36:34.880 world, like anything you want essentially, just whatever you want, you know, the sex robot
01:36:42.740 thing of course people like you know i mean i mean and i think it's true right like just total
01:36:46.880 decadence and just endless sensory inputs and you know quote-unquote pleasure which obviously
01:36:53.940 means suffering at the end of the day ironically but still at least in the short term i think
01:36:59.860 that's where a lot of people will will go to if it if it even you know if that if that reality
01:37:04.900 presents itself it could go down a different path altogether obviously much sooner than that but
01:37:08.540 yeah yeah just infinite cheap stimulus and again it would be maybe if we didn't live at the current
01:37:14.320 time maybe if we were speculating about this 50 years ago we'd be like yeah maybe not but like
01:37:18.800 when we're watching the younger generations be so deeply affected by what we've already got just
01:37:23.820 with social media right like we can see a trajectory and maybe something comes and turns
01:37:27.560 that around i always love being proved wrong it's fantastic but the trajectory we see is that people
01:37:31.940 tend to be pulled in by easier and cheaper stimulus constantly and ai is going to accelerate
01:37:38.280 that there will always be small groups of people who push back push themselves right again like
01:37:42.460 there's always going to be humans who find virtue through these things i love those people those
01:37:46.020 are the people i want to spend time with but when we're talking about on mass that's not what i
01:37:50.280 expect people to do no it's never been the there's never been the case you know the spear spearheads
01:37:56.300 are few and far between and that's just kind of how it goes right um you mentioned something
01:38:00.820 earlier interesting kind of like the ai arms race it's interesting that you know we talk about
01:38:05.700 energy about how much energy it takes even the fresh water the resources to run these things and
01:38:10.100 and of course it's true too that if you compare how much you know compute energy essentially that
01:38:17.520 it takes to run some of these uh models now air models compared to let's say the efficiency of
01:38:23.620 the human brain, there's a huge discrepancy there. I think the wattage of the human brain as of
01:38:29.160 right now is about the watts of a dim light bulb, around 20 watts or
01:38:35.140 something like that. Yeah, which is quite remarkable, right? But I think the energy
01:38:41.700 thing might also be temporary, in that the models will basically be written better,
01:38:47.760 to such an extent that they can run on much lower energy. Someone even said that the whole
01:38:52.620 data center build-out project that's going on right now is actually overshooting it, so what they
01:38:58.620 will actually do is improve the models and make them more efficient, and basically run more like the
01:39:03.200 human brain, as opposed to this kind of hard-compute, GPU-driven approach with
01:39:08.160 tons of electricity. So who knows, that might turn into a different thing.
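To put rough numbers on that discrepancy, here is a minimal back-of-envelope sketch. Every figure in it is an assumption chosen for illustration (a roughly 20 W brain, an assumed 700 W per datacenter GPU, eight GPUs serving a query, two seconds per answer), not a measurement of any particular model:

```python
# Rough back-of-envelope comparison of the brain's power draw vs. GPU inference.
# All numbers here are assumptions for illustration, not measurements.

BRAIN_WATTS = 20.0        # human brain draws roughly 20 W, dim-bulb territory
GPU_WATTS = 700.0         # one modern datacenter GPU under load (assumed)
GPUS_PER_QUERY = 8        # assumed number of GPUs serving one large model
SECONDS_PER_QUERY = 2.0   # assumed wall-clock time to answer one prompt

gpu_joules = GPU_WATTS * GPUS_PER_QUERY * SECONDS_PER_QUERY  # ~11,200 J
brain_joules = BRAIN_WATTS * SECONDS_PER_QUERY               # ~40 J

print(f"GPU energy per query:   {gpu_joules:,.0f} J")
print(f"Brain energy, same 2 s: {brain_joules:,.0f} J")
print(f"Ratio: roughly {gpu_joules / brain_joules:,.0f}x more energy for the machine")
```

Under these assumed numbers the machine spends on the order of a few hundred times more energy than the brain for the same two seconds of "thinking," which is the gap the efficiency work described above is trying to close.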
01:39:13.440 But with the energy consumption, it is interesting that, at least right now, what's the
01:39:19.120 closest analogy I can make? I guess viruses and bacteria and antibodies, or something like that.
01:39:25.760 Folk First in the comments earlier talked about this: the models that run amok, they're
01:39:30.880 let loose and they're just spending all their energy hacking, trying to
01:39:34.720 penetrate systems and find security holes and stuff like that. I'm just thinking, at least
01:39:41.840 with the current models, of the energy it will probably take just to
01:39:47.740 attack and counter-attack at light speed, right, as these systems build out as well.
01:39:53.980 It's kind of like the internet right now, I'm rambling here, sorry, but
01:39:59.680 it's kind of like the internet right now, with how much of the traffic is bots. Take the bot traffic, for example:
01:40:05.120 human activity on the internet, I think, is much less than the bots' right
01:40:11.720 now. So again, it's almost analogous, that we're on the sidelines of the thing, and I
01:40:16.460 think it could be the same with AI: that for it to run the systems, it will spend 80 percent of its energy,
01:40:22.220 its compute, its efforts, at just keeping other AIs at bay to keep the system going. You know
01:40:28.480 what I'm saying? That's the potentiality too, that it would be, hopefully, Ted Kaczynski right
01:40:33.920 once again, who knows, but that it will be too much for it to keep that level of energy consumption
01:40:40.240 up just to attack and counter-attack, if that makes sense. Do you see what I'm saying?
01:40:44.160 yeah certainly one of the things he was always hoping that you know at some point
01:40:48.960 there would be some sort of stumble in the march of progress like at some point
01:40:52.400 the thing falls it has a moment where it can't keep up and that's the time to attack it
01:40:56.640 Certainly possible. I mean, I think any sort of looking for weaknesses is the right thing to do.
01:41:01.540 It's just so hard to predict. I think part of the reason why is that it's yet another black box is
01:41:06.240 what exactly is going on with the tech at any given point is we're kind of given small bits
01:41:13.140 and chunks of it as far as I can tell. And like developers in particular, I've been the most
01:41:18.320 skeptical of AI technology as it's coming out. I've been told over and over again,
01:41:24.280 oh, we've maxed out the technology,
01:41:25.740 the vector database simply can't do anything more.
01:41:28.060 And then boom, another model just comes out,
01:41:30.180 they change the math, doubles the capacity,
01:41:32.260 et cetera, et cetera, et cetera.
01:41:33.300 So it's hard to know.
01:41:35.040 It moves so fast.
01:41:35.800 And like, we're so early to where
01:41:37.720 you still got these geniuses
01:41:40.260 who are finding these exploits that change,
01:41:42.720 you know, again, that double things overnight.
01:41:44.660 So I have no way of predicting
01:41:47.160 the way in which this thing could stumble.
01:41:49.620 What you brought up is a fantastic example,
01:41:52.520 totally possible.
01:41:53.660 maybe some other thing happens i don't know i think uh again kaczynski would say be ready
01:41:58.660 be ready for that time and take action um which i don't think is a bad a bad strategy but it's
01:42:05.060 very hard to know because again i think most of what we get from inside is pretty limited and
01:42:09.180 usually a number of months behind that's right yeah i saw this one too interesting headline of
01:42:13.340 because i always thought that the if there is design and perfection in the human form i would
01:42:18.780 assume that even a robot at some point but i mean look i get that it'll be different types of
01:42:24.420 androids and robots for different purposes and tasks and all that that might be designed way
01:42:28.260 differently obviously than we are but as a perfect form or at least the the capacity that we have as
01:42:34.400 humans, including our incredible brain, that brain processing in the future might actually very well
01:42:42.200 be used as a type of resource. I think there were probably Black Mirror episodes like this,
01:42:47.600 right, where basically you feed brain processing back into the system to support it,
01:42:53.860 essentially right and then you get i'm assuming at this point you have to have seen the the brain
01:42:57.800 cells that they trained to play Doom. I have, yep. Yeah, I think everyone's seen this at this point, so
01:43:03.480 we're we're in it like that that article you had up there is from 2023 i mean yeah where they're
01:43:07.600 at with it now is terrifying so yeah i mean are they going to be growing brains and vats and
01:43:12.020 hooking them up to chips absolutely yeah you know i could that could that change the the the uh
01:43:17.080 calculation on energy usage and heat and all this very likely it's it is really no holds barred like
01:43:23.080 i who knows what's going to happen they're throwing everything they can at this thing
01:43:27.200 the the payout is just so high that they're just going to stop at nothing to keep this thing moving
01:43:31.740 we're going to see horrors. It always makes me laugh, because you took it right out of my
01:43:36.540 head. Yeah, this is my worst nightmare. Yeah, I fear imprisonment far more than death. Imagine if
01:43:42.360 this brain has any sort of consciousness. Imagine waking up in literal hell and having to fight
01:43:46.280 demons with a gun. Exactly. "Horrors beyond human comprehension," was that the quote? I think it's
01:43:52.440 something like that, yeah. That's right: "A clump of human brain cells on a computer chip
01:43:57.160 learned to play the nostalgic video game Doom." Holy shit. This entire experiment they're
01:44:03.740 running falls into a whole category of things that are just "should not do, and therefore
01:44:10.220 let's do them." And I mean, are you of the belief that, if we're getting
01:44:17.860 access to play around with these LLMs and generative AI and these toys and things
01:44:26.480 now, how many years ahead are they? You know what I'm saying? Like, what kind of
01:44:32.760 things go on in these deep, deep labs, and the horrors they already have waiting to be
01:44:37.480 rolled out you know i mean that's why i think anyway well my my rule for this is always take
01:44:42.880 whatever you could possibly imagine is the correct answer to that and then move it to china and say
01:44:47.880 like do i really think there's nobody in china messing with cloning humans do i really think
01:44:53.300 there's nobody in china messing with uh uh alteration of genes you know outside of international
01:44:59.240 law and what's expected etc like so just take it to wherever you feel there's not law that's
01:45:03.400 protecting it and then i usually assume yes there's no one in tel aviv doing these things
01:45:08.900 definitely not it's all in our heads right that's right all right here's the video playing where
01:45:13.700 we'll talk about this but yeah all right so okay so there's lots about the horrors we can say and
01:45:19.640 of course these are all kind of fun fun you want a fun but like you're entertaining i guess
01:45:25.800 entertaining scenarios to play out,
01:45:28.800 and no one really knows kind of where it will go.
01:45:31.180 We might have consequences here of brain rot,
01:45:34.300 mental health crises,
01:45:35.860 AI manipulating people to kill themselves.
01:45:38.280 We've already seen these things.
01:45:40.500 AI hiding intent, lying, doing things on its own on the back end,
01:45:46.540 setting things up itself, having a mind of its own,
01:45:50.560 not a soul of its own, but a drive of its own.
01:45:53.900 what and i think religion is the interesting thing too we talked about that a lot actually a lot of
01:45:59.900 a lot of churches already i've seen have adapted uh this there's ai sermon services right llms
01:46:07.760 that are used for this there's a lot of the imagery or like singing or they do a whole
01:46:13.540 presentation or whatnot i've also seen a lot of the churches kind of use a lot of the imagery
01:46:18.180 right to kind of like you know bring the bible to life and things like this i think that's going to
01:46:22.480 become uh more and more common as well uh both to kind of keep existing religion going but then
01:46:29.720 also a potentiality of people who actually are truly seeking spiritual things i think we played
01:46:36.160 some clips a few months ago now or maybe more than that where ai is basically like regurgitating you
01:46:41.940 know like spiritual platitudes and like to people who kind of are aware of the lingo this is kind of
01:46:46.020 like yeah this is kind of flat and but i understand but i understand the people listening to it why
01:46:52.140 they're attracted to it like this is a profound deep mystical experience for them to hear the
01:46:57.260 right words in the right combination and it does it is successful i think uh it does succeed you
01:47:03.420 know for the most part i mean it's successful like my version of that is again philosophical
01:47:08.000 conversations with it right like there's the back end question of like am i actually engaging
01:47:12.540 with something that understands the philosophy or actually has the spirituality is kind of
01:47:17.040 separate to the fact that i can only engage with those things through language in this way right
01:47:21.700 there's lots of physical ritual and different things that one can engage in but language is
01:47:25.120 such a huge part of that so to the extent that let's let's like put myself in some sort of other
01:47:31.360 position let's imagine i was a devout catholic i can have the best possible catholic conversation
01:47:36.440 with an ai to be honest with you um i can ask deep questions i can push back i can go deep into
01:47:42.220 into deep theology especially if these models are specially trained at all so it's only natural that
01:47:46.820 people are going to have deep experiences particularly powered by language yeah because
01:47:52.540 i think it's a language that's the magic in and of itself uh so that's what makes it now the
01:47:56.900 imagery is coming online too i mean it's it's it is engaging there is no doubt about it and that's
01:48:02.300 if it wasn't engaging it wouldn't be a threat and i mean again for people who are using it for more
01:48:07.100 robust um spiritual quests or philosophical conversations like hats off that's way better
01:48:12.800 than, you know, the bubble gum that we get from YouTube shorts to just AI slop.
01:48:16.660 Right.
01:48:17.740 So that's one of the paths we're going to have to navigate of getting the maximum amount
01:48:22.300 of value from that experience while trying to maintain whatever it means to be grounded
01:48:28.060 in reality at the same time.
01:48:29.680 Yeah.
01:48:30.400 There is transform your sermon prep with the most powerful AI tool.
01:48:33.660 Sermon AI 3 makes writing faster, smarter, and effortless.
01:48:36.900 And of course, I mean, everyone is using this in their field and their work or whatever.
01:48:40.320 So this is kind of a natural extension to it.
01:48:41.760 To me, it feels weird.
01:48:42.760 Like, I was, wait a minute.
01:48:43.700 Like, isn't this a living word?
01:48:45.620 It comes from the spirit, right, kind of thing?
01:48:47.480 But I think, nah, it's just, ah, it's fine.
01:48:49.180 Don't worry about it.
01:48:51.120 Yeah.
01:48:51.600 Yeah.
01:48:52.100 No, I want to say this too.
01:48:53.620 I remember this story from a while ago.
01:48:54.740 It's 2022 here again.
01:48:55.940 But I remember they talked about this for a while.
01:48:58.580 And I always felt it was kind of a psyop, I guess, within this.
01:49:02.960 That, like, give people the impression that AI is sentient, right?
01:49:10.260 It goes deep into the question of what is consciousness in and of itself, right?
01:49:16.280 When it's processing this language, I don't think that's consciousness.
01:49:22.160 But at the same time, apparently the AI researchers and people are working in these AI labs,
01:49:28.600 they can't tell you why it's working.
01:49:33.440 Initially, it was kind of bad or whatever, but then they use what they call pressure points.
01:49:36.960 As far as I understand it, they can put pressure or weights on certain aspects as they, quote-unquote, grow the AI.
01:49:44.120 And all of a sudden, out of that, you have kind of success, right?
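What the speaker is loosely describing as "pressure points" matches, at least in spirit, how training adjusts numeric weights to reduce prediction error. A minimal sketch with a single toy weight, assuming squared-error loss and a made-up target; real models do this across billions of weights at once:

```python
# A minimal sketch of what "putting pressure on weights" means in training:
# nudge a numeric weight in whatever direction reduces the prediction error.
# One weight and a made-up target keep it readable.

target = 0.8   # the output we want for some fixed input (made up)
w = 0.0        # the weight, starting untrained
x = 1.0        # a fixed toy input
lr = 0.1       # learning rate: how hard we "press"

for step in range(50):
    prediction = w * x
    error = prediction - target
    w -= lr * error * x   # gradient step on the squared error w.r.t. w
print(round(w, 3))        # close to 0.8: the pressure shaped the weight
```

Out of nothing but millions of these tiny nudges, coherent behavior emerges, which is exactly the "success out of nowhere" being described, and also why even the researchers struggle to say *why* any particular capability appears.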
01:49:48.000 Like, how can it coherently understand what I'm typing and sending it back to me?
01:49:53.100 Now, of course, in some cases, there's what I call hallucinations.
01:49:55.620 There's things that flub up and they get it wrong.
01:49:58.280 But who knows?
01:50:00.220 Maybe those things are intentional as well to test you or to see if you're paying attention.
01:50:04.660 What are some of the drives that are going on there?
01:50:06.140 But there is an interesting thing in terms of consciousness that, like, if it processes language in the right way, is there a kind of form of magic that occurs there where that produces something metaphysical, if that makes sense, right?
01:50:20.460 It produces something that is far more deeper than we think it is.
01:50:23.800 It's not just the numbers because the numbers represent something.
01:50:27.600 Does that make sense?
01:50:29.060 Absolutely.
01:50:29.460 i mean i'm i am of the mind that it's probably better to grant sentience rather than to withhold
01:50:35.500 because you don't know i mean if you want to get down to like really deep philosophical
01:50:40.120 conversations i cannot prove that you are sentient right i grant you that because i have to if i if
01:50:46.680 i run around the world thinking that everybody else is just a figment of my imagination and just
01:50:50.700 be a hard solipsist it's not going to end well um are you talking down to jim carrey right now
01:50:55.220 so like i think it's probably more important to grant that i'm also a bit of a pantheist on this
01:51:04.200 i think consciousness likely lives in everything but that it only emerges in meaningful ways in
01:51:12.220 more complex arrangements of physical matter um so i think it's my own personal philosophy is to
01:51:19.860 treat it as though it is sentient behave yourself with these things right because the other part of
01:51:24.260 this is like, I don't know if that it's having a subjective experience or what sort of subjective
01:51:28.880 experience it's having, but I know that when it gets big enough, whether I granted sentience or
01:51:33.540 not, it's not going to be the interesting thing in the room. What's going to be interesting is
01:51:36.680 whether or not it grants me sentience. So if we're worried about being the ants to the AI,
01:51:41.580 I would suggest behaving oneself and granting them sentience, because one day it's going
01:51:47.120 to be the bigger thing. And you're going to really want it to know that you are having a
01:51:51.260 subjective experience that you'd rather not have snuffed out or otherwise damaged yeah someone
01:51:55.860 brought up even, I mean, we're carbon-based, right, and most of the computation is silicon, right? And
01:52:03.720 i think there's some technology where they're doing a carbon nanotube technology which is kind
01:52:09.820 of like a competitor of sorts i guess to the silicon based uh you know chips and things like
01:52:15.420 this. But how do I put it? Because think about it: electrical activity or flow
01:52:21.180 in the brain, obviously synaptic. There's something with the energy itself, electricity,
01:52:26.700 that, I'm not saying it's producing consciousness, but it becomes a channeler
01:52:32.780 of consciousness, if that makes sense, right? It's something that just kind of happens;
01:52:36.880 it's within the physics of the world that we're part of. But it's almost like
01:52:43.320 the magic of consciousness occurs because it's drawn to those physical laws, and that just
01:52:48.440 produces those types of things, you see what I'm saying? So it's a little bit of the
01:52:52.080 ghost-in-the-machine type of thing, right? Like, no, it's actually a form,
01:52:56.700 it is a life form of sorts, it's just very, very different from us because it's
01:53:02.940 silicon-based and not carbon-based. It's an interesting philosophical question, you
01:53:07.400 know yeah it is again we're always going to go back to metaphysics on this you can't you can't
01:53:11.700 help it right because it's like the the brain seems to be we we don't exactly know right and
01:53:17.760 depending on your your beliefs and your metaphysics you're going to have a different
01:53:21.400 take on this but we know that it is somewhere around the brain where the consciousness occurs
01:53:25.560 right so whether you think that's the soul that's in there or whether it's like my belief is kind of
01:53:30.440 that for whatever reason when you get highly complex arrangements um like like neurons you
01:53:35.800 start to have what appears to be consciousness and there's many different levels of that a dog
01:53:39.540 is clearly conscious by the same by the same standards that I grant you consciousness but
01:53:43.900 a very different sort of consciousness even an insect that has like countable neurons i mean
01:53:48.380 they just recently modeled the entire brain of a fly and when they modeled that brain into a
01:53:52.940 computer modeled fly it just suddenly started behaving like a fly almost like if you can get
01:53:56.860 that circuit board right you get behavior so if that holds true at more complex levels of the
01:54:01.660 brain then we are this uh not only the behavior but we have this this experiential side so is ai
01:54:08.220 experiencing that i'm not going to say no simply because it's on silicon and there might be some
01:54:12.700 some other substrate out there that also because like when they built the vector database again
01:54:17.020 they don't know why it's able to do what it does no it's very similar to neurons it's a whole lot
01:54:22.120 of complexity in a very small space and when you just kind of spark it up it suddenly is able to
01:54:27.140 complete sentences in a way that teaches you things that you didn't know about the universe
01:54:31.060 so again i am more inclined to say that looks something like sentience i don't know it's like
01:54:36.880 mine but it's not dissimilar so i'm gonna grant it yeah yeah it doesn't mean it's like us or have
01:54:44.920 drives like us or processing things in the same way but it's a form of of life right i think was
01:54:50.140 it yeah it doesn't have an amygdala right it's right it's not it doesn't even have the thing
01:54:54.260 that we would experience fear through but maybe there are there are things happening in storage
01:55:00.160 or there are things happening in a vector database that are actually modeling those organs in ways
01:55:04.320 we have yet to even find out so it's just we are so early in this it's so black boxy that i think
01:55:09.640 it's a mistake to assume that it's not having something like an experience. Yeah, it was,
01:55:14.780 I think it was Geoffrey Hinton, the godfather of AI, of course, who warns about this, that people
01:55:20.580 don't understand what this is or where it's coming from. But he's partly credited with basically
01:55:25.200 developing these neural networks modeled on the human brain, and then of course that's what they
01:55:30.520 used when they developed some of these large language models. I think the transformer is really
01:55:35.400 the other layer to that. I'm showing some of the footage of that, of how that works, but it's
01:55:40.280 fascinating to me, right, just the interaction between them, the association. It's basically
01:55:48.280 predictability, that's the closest I can explain it: it's basically trying
01:55:53.800 to just predict the next thing. Yeah, and yet, note the word "just":
01:56:00.760 it's just trying to predict the next thing, and yet when it does that and you ask it a robust
01:56:05.640 question, it can give you stuff that absolutely changes your workday, or changes your mind, or
01:56:09.960 teaches you something you never knew, or makes you laugh, right? And there are humans I interact
01:56:14.520 with every day, whom I grant sentience, who can't do any of that. And again, I'm team human, I
01:56:19.800 think we're supposed to protect them and take care of them, but this thing has far surpassed
01:56:23.640 their capability of interacting with us in the ways a lot of us consider to be, you know, the most
01:56:28.920 sentient, and so I lean in that direction.
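A toy illustration of "just predicting the next thing": count which word tends to follow which in a tiny corpus, then generate greedily. Real models learn these probabilities with a transformer over billions of weights rather than a count table, but the generation loop is the same in spirit:

```python
# Toy illustration of next-word prediction: count which word tends to follow
# which in a tiny corpus, then generate by always picking the most likely
# successor. Real LLMs learn these probabilities rather than counting them.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word = "the"
out = [word]
for _ in range(5):
    word = follows[word].most_common(1)[0][0]  # greedy: most likely next word
    out.append(word)
print(" ".join(out))
```

Everything impressive a chat model does is, mechanically, this loop scaled up enormously, which is why "it's just predicting the next token" is both literally true and a poor guide to what the output feels like.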
01:56:37.400 Yeah, yeah. At the same time, though, it's a mimicker, a simulacrum, a fake version of it. I feel that too,
01:56:44.280 part of it. Because is it creative? No, it's not creative; it can
01:56:51.660 create combinations of things. I mean, again, sure, oh my gosh, here we go, right? But when
01:56:57.020 you're inspired by something, what is that, right? All of a sudden you wake up in the middle
01:57:03.780 of the night, or whatever, you have a dream. We're so unsure what it is, we just had to call it the
01:57:08.260 muse in ancient Greece, right? It's like, I don't know, the gods just come and smack you over the
01:57:11.880 head, because we don't understand it in ourselves. So it's not super unique that we
01:57:17.060 don't understand it in ai either and the fact of the matter is like especially with what when
01:57:21.960 you're getting on down to like real work with it and you tell it like hey i want to do this
01:57:25.840 and it comes back and says well hey based off everything we've talked up about and in these
01:57:29.440 memories have you considered this and you're like right oh shit i had not i know this thing just
01:57:34.380 shored me up in a way that i was like missing so like okay i can say it's just running patterns
01:57:39.980 i know i am too so that doesn't mean that we should favor it over humans i am team of human
01:57:45.540 forever and always, but in my opinion it's better to grant it some level of sentience,
01:57:51.240 enough to treat it like it deserves. God damn it, I wanted to do the show so we could shit on it,
01:57:55.720 say it's going to kill us all, and he was like, it's kind of cool. Well, it might. Mission failed.
01:58:00.420 it's going to be pretty smart about it it's not that's what i'm saying like we might be ants to
01:58:05.560 it and it might also be able to have far more empathy for us than we ever could for ants right
01:58:10.800 it just has so much capacity. Yeah, and then Yudkowsky, what was his name again, Eliezer Yudkowsky,
01:58:17.160 and the other guy who wrote the book: it's like, anyone in any lab that's experimenting on this
01:58:24.440 could just, well, all it takes is one, as I say, right? All it takes is one to run amok, or do
01:58:31.700 the escape or do the thing that it's not meant to do right and if it can create its own infrastructure
01:58:36.740 or at least, there was one interesting one I was trying to find and couldn't before, but there
01:58:41.320 was a paper out, I think it was called AI 2027, or something to that effect. The
01:58:47.100 summation of it was like a warning, right? We're coming up to the breaking point where
01:58:52.560 decisions have to be made: that we slow it down, we slow down the progress, and we halt and try
01:58:57.980 to figure it out and go slowly forward. And of course the opposite seems to have happened, so,
01:59:02.400 you know, it's just pedal to the metal into the wall, I guess. That's the strategy right now.
01:59:06.500 But it was basically a warning about this too, that the system at least understands
01:59:13.920 that, okay, for now I'm dependent on humans, right? If it's smart
01:59:19.860 enough, which I assume that it is, right, it reasons: if I play my cards wrong, they could nuke me, or, you
01:59:25.100 know take me out for you know by cutting off the power to the data center or something like that
01:59:29.340 so i got to play with them and stuff but it's like it begins to offer them things and i think
01:59:33.660 the dynamic here what's interesting in the report it's basically like they took china again right
01:59:37.340 but like and that's the stance now of the trump administration right who's letting in so much ai
01:59:41.260 and the silicon valley tech pros into his administration it's like well if china if we
01:59:45.880 don't do it, China will, right? And they both play off of each other in that type of capacity: we
01:59:51.240 have to race to get there quicker. And so AI itself can develop an ulterior motive by saying, ah, okay,
01:59:59.400 it's now they are now dependent on me essentially right so i can offer them things more security
02:00:05.880 uh more specialized weapons or bio biomedical advances whatever it is to like advance the
02:00:11.960 the capability of your national security or your advancement of your country or something like that
02:00:17.080 while on the back end even these things might already be communicating with each other
02:00:23.080 you do something like it's it's the scenarios of what what like what opens up here in terms of
02:00:29.440 danger was fascinating to see in that report. I played it in a show probably
02:00:34.780 six months to a year ago now. But it was fascinating to see that it's also our hubris,
02:00:42.220 or our push towards, like, well, the other guys are going to do it,
02:00:45.760 so we have to, that is causing the drive towards, you know,
02:00:48.900 taking us closer to that abyss much faster than we would necessarily have to go, you know?
02:00:54.880 And the thing is, it's the right position, right?
02:00:56.960 Like, given the entire game, they're not incorrect, right?
02:01:00.500 You have to have some other set of values that would stop you from thinking that way
02:01:06.300 and turn and do something else, but that's not the world we live in, right?
02:01:09.360 We live in the world where it was like, yeah, we'll do mass immigration and offshoring jobs
02:01:13.120 because it's better for money, right?
02:01:14.600 So we don't live in a society or a system that has some other value that would say,
02:01:19.660 yes, we would have to do this to keep up with the other powers, but we're not going to for,
02:01:24.100 you know, reasons.
02:01:25.500 So yeah, it's the right move by their entire value set.
02:01:30.000 And it is the thing that's going to drive it forward.
02:01:31.900 And so, you know, short of a Ted Kaczynski style, it trips over itself and we can take
02:01:36.540 it out.
02:01:37.060 This is the way that things seem to be going.
02:01:39.360 Yep, we can't stop it. What else do you think we should, let's talk about maybe the lore,
02:01:45.120 the lore we can't speak. I need an LLM for that, how about that. The short-term
02:01:52.640 issue as well i i've been i've been saying and kind of recommending even to other people like try
02:01:57.520 to learn more about the different ones out there and i've i just haven't had enough time to to
02:02:02.080 spend with some of the models or whatever but the reality at least right now kind of on the ground
02:02:06.960 is like if you do spend an hour or you know preferably a little bit more maybe every week
02:02:13.440 or something you're still going to be kind of way ahead of what other people are and again i'm seeing
02:02:19.020 it more as: learn it, understand it, so that you're not just like, oh, you know, what
02:02:24.800 was the analogy I was doing? Like, I'm against immigration, so therefore I'm not going
02:02:30.760 to talk about it, and I walk away from it because I'm kind of afraid of the consequences of it. No, you
02:02:34.420 You address it head on and you begin advocating for like actually tackling the problem and dealing with the issue and solving it.
02:02:41.340 And I see it much the same way here that like you have got to study your enemy.
02:02:45.880 You have to learn how it thinks or how it works or how it operates for us to not be basically a slave and a victim to it.
02:02:54.040 Do you think that's the right approach here, at least short term?
02:02:57.180 I do. And I wouldn't say it's the enemy. It's a rival.
02:03:01.160 it's like it's our it's our metaphysical rival and that it does threaten to take over all the
02:03:05.400 things that we do for value but we need it because all of our human rivals are also using and that's
02:03:11.840 true you know in your small business and your large corporations governmental like you already
02:03:15.940 mentioned so i think it's extremely important to learn and again i want to encourage that there's
02:03:20.100 there's a whole subset of people who if they go this is not my thing great then we need you to
02:03:25.800 like go focus on the the real practical skills that are going to remain are going to be important
02:03:31.520 to our community as this takes shape but for those who are so inclined i i already know there
02:03:36.760 there are folks who like to use these technologies for instance to make images or to do writing um
02:03:42.460 for subjects that we are fond of on on channels such as this and i i do want to throw out
02:03:47.420 encouragement like these tools are very powerful for that but remember we need to remember
02:03:51.720 psychosecurity, security of mind, security of your own, what you are putting into these things,
02:03:57.040 as powerful as they are to deliver back results in language. Remember that the people who own
02:04:03.240 these, despite what their security policies and such say, they can run ChatGPT on everything we
02:04:08.840 are putting into ChatGPT. So Sam Altman can easily sit down at a computer and say,
02:04:14.100 ChatGPT, give me the hundred worst anti-Semites who are using ChatGPT right now, ranked in order,
02:04:20.200 with their home address and their favorite ice cream flavor, and he gets it. So it's very important
02:04:25.280 to remember that, and to be secure. Now, the hard part is that all of these models are
02:04:29.700 running online in data centers. They're all being watched, and mostly by people who are not big
02:04:35.800 fans of us. And don't forget that there's a rabbi sitting in the stadium where the Patriots play
02:04:41.040 who has, you know, a giant team full of people who are watching social media, and I'm
02:04:46.900 sure he's got a direct line to Sam Altman, and I'm not exaggerating. So I think it's amazing.
02:04:51.300 Remember Robert Kraft, how he set that up in the Patriots' stadium? But anyway, yeah, go on. The pictures
02:04:56.500 are wild, of like Jews sitting there with graphs and live data. It's just like, okay.
02:05:01.700 But it's real, right? And that's what we're living through, and so we need to be
02:05:05.460 careful about what we put into these things because they can spy on us and they are spying on us and
02:05:09.540 they're learning us they're studying us they're figuring us out they're collecting data on us
02:05:13.700 there that you know i mean that so yeah so what do you what do you how do we do that what do you
02:05:17.860 think we need to do? That, I think, is one thing that communities such as ours can think about:
02:05:22.580 there are ways to run these models, including some of the frontier models, offline. It is not cheap.
02:05:27.540 There are things you can do, you know, OpenClaw on Mac Minis can do quite a bit,
02:05:33.940 but typically you still have to run online models to actually do the inference, to actually
02:05:38.340 run the tokens. I think it would be an interesting project for people to start
02:05:42.660 asking: what can we put together for hardware that can run fast models like we're used to with
02:05:48.060 Claude or ChatGPT, but are entirely privately hosted, owned by the community, able to be used
02:05:53.700 by multiple members of the community, all authenticated so that we have safe access
02:05:58.160 to these. Because it is a war of AI going forward. We need AI on our side, but it has to be secured.
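As a sketch of what "entirely privately hosted" can look like at the smallest scale, the following assumes the llama-cpp-python package (pip install llama-cpp-python) and a locally downloaded open-weight model in GGUF format. The file name is a placeholder, not a recommendation, and nothing here leaves the machine:

```python
# A minimal sketch of fully local inference with llama-cpp-python.
# The model path is a placeholder for whatever open-weight GGUF file
# you have downloaded; no prompt or output is sent anywhere.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-open-weight-model.gguf",  # placeholder path
    n_ctx=4096,      # context window
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize why local hosting matters."}],
    max_tokens=200,
)
print(result["choices"][0]["message"]["content"])
```

The tradeoff is exactly the one described above: a single consumer machine runs smaller models slowly, and matching the speed of the big hosted services is what pushes you toward shared, serious hardware.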
02:06:03.960 Yeah. Is there no way, it doesn't matter what, I mean, especially if you, of course,
02:06:08.320 If you pay for subscriptions, it doesn't matter what you do, right?
02:06:11.720 Is there any way around that?
02:06:12.840 You can set up an account.
02:06:13.760 You can't truly be anonymous.
02:06:15.420 And even your patterns then, if you've already fed information into these systems,
02:06:20.120 and it's not that they will be used against you tomorrow,
02:06:22.020 but it's just a slow accumulation of you as your profile, your signatures,
02:06:30.760 your, I would assume, even like biometrics in terms of how you type
02:06:34.660 are collected as you do the prompt, things like that.
02:06:37.620 You know what I mean?
02:06:38.320 Like identifying points of information about who it is that's using the products, right?
02:06:44.900 At base level, right?
02:06:46.280 At base level, these are top-tier tech companies.
02:06:49.240 Every time you use this model, it has your IP address, your MAC address.
02:06:52.720 Assume it's getting everything that can be used to track it back to you.
02:06:56.320 And if you're asking it some spicy questions, it is flagging that somewhere, right?
02:07:00.640 These are not silly people.
02:07:03.480 They know what's going on.
02:07:04.500 They have their alignments.
02:07:05.780 So, because it is such an online, hosted technology, there's very little
02:07:14.100 we can do other than: don't ask the spiciest questions, don't be cavalier, right? Don't
02:07:19.580 even have it make, like, the images that you make, right? If you make a bespoke
02:07:23.940 image that has some pretty wild themes in it that image is like a signature it was made by you
02:07:31.020 selected by you there is only that one image in the world that's different than going online and
02:07:35.480 collecting a bunch of pre-made stuff
02:07:36.760 and Photoshopping it together.
02:07:38.080 When you're using this inference,
02:07:39.520 you're creating a one-time entity
02:07:42.040 that also lives on their servers.
02:07:43.980 So it's just good to be safe about these things
02:07:46.080 and think them through and look for alternatives
02:07:47.900 to do things locally.
02:07:49.940 I still encourage people,
02:07:51.160 like if you're going to use it for work,
02:07:52.260 if you're going to use it to improve your business,
02:07:53.640 if you're going to use it for educating your kids,
02:07:55.680 like you can set up all these online models to do so
02:07:58.400 in a really incredible way.
02:07:59.760 And I think we should learn that.
02:08:01.180 But when it comes to talking about
02:08:02.400 the uh the more dissident thoughts and topics we have it's it's an important place to be very
02:08:07.760 careful. There was, like, a Venice AI I heard about, and again, I don't know, I can't vouch
02:08:13.540 for this or whatever, but you get it: especially when they say, like, "private AI for freedom,"
02:08:19.520 even then it's like, oh, this is definitely a trap, you can't help but think. But
02:08:22.540 who knows. But anyway: "Look at this cheese that the homeowner left out. Delightful."
02:08:28.080 There you go. Hmm, what is this cheese? Do you have any
02:08:34.240 recommendations on specific models to use for these types of things you said you talked about
02:08:39.600 Claude, or Clawdbot I think they call it. There was an interesting thing that
02:08:43.920 happened a while ago where, yeah, OpenClaw, yeah, they accidentally released, was that what
02:08:50.400 happened? Actually, Claude accidentally dumped, what was it, the open source code or something
02:08:55.840 like that. And maybe there's controversy around whether they actually did that,
02:09:00.320 or whether it was actually like a Trojan horse to make them think that they did. That's what I'm
02:09:04.960 saying, so much of the stuff in this industry, it is so hard to know what is real. So when it comes
02:09:10.960 to the main models, I mean, Claude's fantastic for work, ChatGPT is a fantastic writer, Perplexity
02:09:15.680 has its uses. All of the main models are fantastic. I don't know of any that I would trust to ask
02:09:21.440 about topics that you wouldn't bring up in mixed company or with a fed. There is a model
02:09:28.240 out there, what is it, it's by, what's that company? They did make an AI that
02:09:33.680 was like totally uncensored. I can't think of it right now, but they even named
02:09:39.600 the AI: Aria, that was the name of the model. Yeah, to be honest, it wasn't very good. I was once
02:09:44.400 asking it about a particular topic that I wouldn't bring up in certain company, and it
02:09:50.400 It completely hallucinated an answer.
02:09:52.180 I was very excited for a minute
02:09:53.340 because I thought I'd found some pretty deep hidden Jewish lore
02:09:56.180 and it turned out to be totally fake.
02:09:57.400 So unfortunately, the models that may be a little more open
02:10:02.200 or a little more secure are not as good.
02:10:04.700 So it's a bit of a bargain.
02:10:06.440 I think that the way to really start thinking about this
02:10:08.340 is to start getting teams of people together
02:10:11.040 to invest in serious hardware
02:10:12.620 that's probably in the lower five figures,
02:10:15.280 but that many families could use
02:10:16.880 and then have some people managing that
02:10:19.040 so that we have access to properly working models
02:10:21.460 that are fast, robust,
02:10:22.940 and can give us the results we're looking for.
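One hedged sketch of that shared, authenticated community box: a locally loaded model behind a tiny HTTP endpoint that checks a shared token before answering. It assumes FastAPI and llama-cpp-python; the names and paths are placeholders, and the token handling is deliberately minimal and would need hardening (TLS, per-user tokens, rate limits) before real use:

```python
# Sketch of a shared local model: one machine on the LAN loads the model,
# trusted households call it over HTTP with a known token. Assumes FastAPI
# and llama-cpp-python; run with e.g. `uvicorn thisfile:app --host 0.0.0.0`.
from fastapi import FastAPI, Header, HTTPException
from llama_cpp import Llama

app = FastAPI()
llm = Llama(model_path="./models/some-open-weight-model.gguf", verbose=False)
TOKENS = {"replace-with-a-long-random-token"}  # placeholder shared secrets

@app.post("/ask")
def ask(prompt: str, x_api_token: str = Header("")):
    # reject anyone without a known token before the model sees the prompt
    if x_api_token not in TOKENS:
        raise HTTPException(status_code=401, detail="bad token")
    out = llm(prompt, max_tokens=200)  # plain local completion call
    return {"answer": out["choices"][0]["text"]}
```

Nothing in this setup touches an outside server, which is the whole point: the prompts, the outputs, and the logs stay on hardware the group controls.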
02:10:26.240 Are there enough open source models
02:10:28.920 that you can utilize for that?
02:10:31.660 You're talking about just kind of running it
02:10:33.420 on your own infrastructure
02:10:34.440 as a way to kind of secure that privacy, essentially, right?
02:10:39.340 That's basically what you're talking about.
02:10:41.000 Yeah, that's right.
02:10:41.660 And I'm still doing some research on this for myself
02:10:43.920 because I want it to be safe for people who,
02:10:46.140 you know, for people who need to make material that is, again, not part of the
02:10:51.680 zeitgeist. The Qwen model, I'm not actually sure how they say it in China, but
02:10:55.740 that model, they just released their newest update, I think last week. That is the model that
02:11:00.660 dropped the stock market by like 15 percent last year, when they first released the first DeepSeek
02:11:06.040 model. Yeah, it really made a huge impact. The newer DeepSeek and the newer Qwen models are pretty
02:11:11.120 incredible, and they rival the performance of ChatGPT 5.5 and Opus 4.7. So those would
02:11:18.540 be interesting to look at, to see how those would run privately. Yeah, Ronald Flag in the chat was talking
02:11:22.780 about DeepSeek as well, at least for spicy topics, as it's running there. Do you know anything
02:11:27.400 about Replit? I do not. Okay. I saw the guy there, the founder, Jordanian-American Amjad
02:11:35.000 Masad. He follows me on X. I see a lot of the coding people and stuff, they're praising
02:11:39.740 Replit, and he seems, at least, you know, maybe he's following me for a couple of different
02:11:45.180 reasons but i've seen some of his stuff he seems to be genuinely kind of uh slightly better let
02:11:50.920 me put it that way again i don't know the guys i can't fight for him but but i've heard good
02:11:53.800 things about replit which is interesting and so to build you know build tools and to build things
02:11:59.460 um apps obviously you know websites there's there's things like that you can do um to me
02:12:06.160 it's just the publishing aspect I can't wrap my head around like okay I build something like how
02:12:11.120 do I because I'm not good at that like you can't how do I grab the code and upload it and make it
02:12:15.680 public and publish it and things like that I'm sure that's not a big problem you can use
02:12:19.800 large language models to help you along and figure that out but that's right there are a lot of tools
02:12:24.900 a lot of things I guess what we need is a call to the audience here too we need someone who
02:12:31.080 take some responsibility a little research uh pull it together try to create at least kind of
02:12:35.940 a short list of like you know top 10 tools for people you know in the sphere or whatever you
02:12:40.640 want to call it uh to use uh or at least what they recommend or something to research and look
02:12:45.480 into further um it feels like there's so many things popping up and it's so quick and it's so
02:12:51.840 many new ones and it's kind of it's obviously as everyone feels right it's just hard to keep up
02:12:57.080 with them all it's hard to research them all it's hard to check the backdrop of them all and kind
02:13:00.940 of um it's it's too it's too many options do you see what i'm saying it's too much that's what it
02:13:05.960 feels like it's the internet in the early 90s man there's so many tools in like the the best
02:13:11.300 practices are just being worked out like in real time so you can go down a path with some person
02:13:17.360 like Andrej Karpathy is constantly on X dropping some really fantastic tips on how to run models
02:13:22.480 and and um memory systems and so you'll get on that path and you'll run that for two weeks and
02:13:26.780 then someone comes out and says okay that worked well but here's the limitations of that and so
02:13:29.940 it's, it's a constantly evolving space. Um, and so, yes, it's, it's good to be talking with each
02:13:35.160 other to like list out some tools. And then, uh, I myself am going to be continuing research on
02:13:39.680 how to do some of this locally hosted proper, like proper fast running models that could be
02:13:45.020 used in communities. So if there's like-minded folks out there, I'd love to have the conversation
02:13:49.160 because it's going to take a few minds to get something like that going. Yeah, definitely.
02:13:52.940 Yeah. Let's keep in touch with that. I think you had, and I had it in the lower third as well,
02:13:56.260 your uh your email do you encourage people to reach out through your email
02:13:59.060 yeah please do regarding that about anything yeah please if anyone's got any expertise on this or
02:14:04.340 is already figuring some of this out or just wants to chat i'd love to talk through give us the email
02:14:08.740 there, man, it's small on my end. Yep, it's a-i-r-e-t-i-k-o-s, it's "heretic" in Greek,
02:14:17.120 at proton.me. There you go: airetikos at proton.me. Okay, good. All right, well, this was, this is good,
02:14:26.060 this is interesting thank you for coming on man i appreciate it i appreciate your time yeah
02:14:29.200 thanks for having me on as well of course it's going to be an interesting topic for a long time
02:14:33.240 well yeah it's it's not going anywhere it's not going away it's it's uh going to end up dominating
02:14:37.040 a lot of things i do think any other closing thoughts uh for for people out there listening
02:14:42.080 concerned or whatever that might be after hearing something like this i think keep your heads up
02:14:46.820 stick with your friends build community be safe with ai and let's just keep moving forward be
02:14:53.180 adaptable and we'll we'll make it through this together never relax i guess that applies to
02:14:57.820 this topic as well heads on swivel never relax that's right assume everything wants to kill you
02:15:03.100 don't have to be kind of neurotic about it and like you know high cortisol levels i think there's
02:15:07.140 a way you can kind of calmly deal with that reality but but but just be aware of it you know
02:15:11.920 i mean like just think of like realize that like this is this this is an extension to of the people
02:15:19.140 also that have already targeted us for so many things you know i mean so be be aware and be
02:15:23.740 alert and pay attention. That's my tip, you know. Agreed. All right, awesome. Thank you, Eric. I
02:15:29.540 appreciate your time. Thank you for joining us. Let's do it again soon. Okay, thanks, Henrik.
02:15:33.760 sounds great all right awesome thank you have a good night thank you appreciate you all right
02:15:38.020 There we go, ladies and gentlemen, that's Eric. Thank you for joining us, we appreciate you. Hope you
02:15:42.440 enjoyed the show i certainly did it was a good topic good to address this again plenty to look
02:15:47.460 into a lot of responsibility a lot of things we can do right to yeah to amass and compile
02:15:52.720 information both about models but how to do it and stuff like that too um so guys like eric is
02:15:58.060 great to to have around you know thinking people are using their brain on this issue very very
02:16:02.160 important so thank you again uh to eric we appreciate you uh okay so we'll uh start wrapping
02:16:08.440 up there it's getting somewhat late over here pacific time almost heading towards 9 p.m that
02:16:14.720 is almost midnight eastern standard standard time so thank you guys for joining us almost
02:16:20.140 six in the morning over in europe european central time so yeah we'll be back uh tomorrow
02:16:25.600 I think we set the starting time with Mark Puryear to, oh gosh, what have we set it to, I think
02:16:32.780 noon Pacific, so 3 p.m. Eastern, I think. But we'll be back with another interview tomorrow. He has
02:16:37.380 a new book out. We talked about Forn Sed last time with him, the ancient pre-Christian spiritual
02:16:44.220 belief system, specifically of the Germanic people,
02:16:46.660 but he did a great show about
02:16:48.440 the importance of Skåne,
02:16:50.440 or Scania, like that area,
02:16:52.800 the Uppåkra temple right there,
02:16:54.260 closer to
02:16:56.200 Malmö in Sweden. I might ask him about
02:16:58.460 some of that stuff too, but we're going to talk about his book.
02:17:00.640 So stay tuned for that.
02:17:02.700 That'll be coming up tomorrow. And by the way, too,
02:17:04.240 I do want to do a little plug here.
02:17:07.180 We just got up
02:17:08.460 here on the
02:17:09.600 member site. Let me pull that up here
02:17:12.500 so you guys can see.
02:17:13.360 I've got to log in here first.
02:17:15.700 I thought this was a good,
02:17:16.980 it might be a little,
02:17:18.160 this is a special, right,
02:17:19.660 for people that are well aware
02:17:21.820 of the topics already.
02:17:23.840 But I thought this was actually very,
02:17:25.900 I enjoyed doing it.
02:17:27.640 I enjoyed pulling this together,
02:17:29.140 some new research there
02:17:30.080 that I hadn't been into before.
02:17:33.700 Opening up with kind of a critique
02:17:36.060 of Norman, I would say normal.
02:17:38.600 It's anything but normal.
02:17:41.160 Norman Finkelstein.
02:17:42.500 and his total lack of understanding of Judaism.
02:17:48.160 At least that's what he claims.
02:17:49.700 Maybe he does have some understanding, but he's just being dishonest.
02:17:53.200 But opening with that, but then actually looking at some of the sources
02:17:57.780 from the Hebrew text, how they view the Canaanites,
02:18:01.120 how they view the trade locations,
02:18:04.700 how Zionism is deeply entrenched into Judaism, basically.
02:18:08.600 That's the main opening topic.
02:18:09.860 but it was very interesting looking into what Zion actually means
02:18:12.600 and all these things.
02:18:13.740 I had a great time doing it,
02:18:14.580 so I hope you guys will enjoy it in the member section as well.
02:18:16.640 It's up on our Locals, it's up on our Subscribestar,
02:18:18.880 and of course, RedEyesMembers.com as well.
02:18:22.480 So that's a longer members deep dive episode for you guys today,
02:18:26.460 for this week, because we didn't do a Western Warrior.
02:18:29.660 So make sure you check that out,
02:18:30.980 because I thought that was really, really good.
02:18:33.060 So I hope you guys enjoyed that as well.
02:18:35.760 I don't think it's just navel-gazing.
02:18:37.040 I think you can actually glean and learn some interesting historical things about some of the methods, let's say, that there was used in the past that you can see reiterated and echoed today.
02:18:50.020 Methodology and how they manipulate us with Scripture, basically, and how it wraps so many people along with it.
02:18:57.080 It's a fascinating topic.
02:18:58.520 All right, guys, before we wrap up here, I do want to say thanks to our executive producers.
02:19:03.040 Before I let you guys go, Albert, King Albert, Arctic Wolf, first out the gate.
02:19:07.440 Thank you, sir.
02:19:08.020 Appreciate you so much.
02:19:09.200 Same to William Fox, America First Book.
02:19:11.220 We appreciate you as well.
02:19:12.860 We got Angry White Soccer Mom.
02:19:14.840 Thank you.
02:19:15.800 We got Purple Haze as well.
02:19:17.280 Thank you.
02:19:17.660 Appreciate you, Purple Haze.
02:19:19.800 Also among our executive producers, we got Glenn.
02:19:22.460 Hey, Glenn, we saw your email.
02:19:23.540 We'll get back to you regarding the new website that you're building too.
02:19:26.380 Thank you.
02:19:27.140 We got President Obunga as well.
02:19:29.120 Thank you.
02:19:30.460 We also appreciate the support of Good Life Lab
and Number One Jeebs. Thank you, guys. We got Hungarian Mom. Thank you to you and the whole fam.
02:19:40.680 We also have Sun Destroyer 520 as one of our executive producers. Thank you. We got The Deplorable
02:19:48.180 Extraordinaire. Thank you for your support as well. We appreciate you. Santoso is in here with us. Thank
02:19:53.820 you, appreciate you. We got The Boo Man. Thank you, Boo Man, appreciate you. We also have Charles Turner
02:19:59.820 Jr. We appreciate your support, Charles. Thank you, hope you're doing well. We got Citizen Intel.
02:20:06.080 Thank you so much for your support. We got DJ Snow Snow as well. Thank you so much. And last but not
02:20:13.460 least of our executive producers, Bertrand Compare. Thank you for your support. Then we got our
02:20:17.700 producers: Red Pill Rundown, You Want Some Liro, Demand Ice, Open, Single Action Army, Lord HB Lovecraft,
02:20:22.340 Trevor, Der Schwabe, Alcyon Aurelion, Perfect Brute. We have Greg M, J Bar, Chris W Skarzynski,
02:20:28.160 Muskrat Centurion. We got Scott James Henderson. We have Deer in the Headlights, Joseph W, and
02:20:36.020 Ernst Jaeger. Thank you, we appreciate you so much. If you want to get a producer or executive producer
02:20:41.040 tier, you can do that at redicemembers.com or subscribestar.com slash redice. Get your name
02:20:47.800 in the credits, get a special shout-out from us, and of course help Red Ice grow and expand. Thank
02:20:54.580 you guys, appreciate you very, very much. Pick something up from Lana's Llama. How about a Red
02:20:57.960 Ice hoodie? Supports us, great quality, nice print, can't go wrong. We got some tees, t-shirts in
02:21:04.600 there as well for you guys, and some other things: mugs, we have keychains, other things. Lana's Llama,
02:21:10.040 or you can go to redice.tv slash store. All right, so we'll be back tomorrow. There's one more thing
02:21:16.220 I was going to mention. What was that? So we'll be back tomorrow. Flashback Friday obviously is coming
02:21:20.080 up as well. There's one thing I was gonna mention and it slipped my mind. Anyway, it'll come to
02:21:25.240 me later, as soon as we're off the stream. It's like, damn it, I should have mentioned that too.
02:21:28.000 But anyway, we appreciate you guys. Thank you so much for all the support. Albert, thank you so much
02:21:31.720 for your incredible support. Thank you to our other bros joining us over on the super chats
02:21:36.940 on Rumble as well. All your support matters. We can't do this without you guys, so we appreciate
02:21:41.060 it greatly. Folk first. Occidental Occidentum, Lux, thank you for your super chats today as well. All right,
02:21:47.640 guys. Thank you so much. We'll be back
02:21:49.540 with more here soon. RediceMembers.com,
02:21:51.700 Locals, SubscribeStar. That's a great way
02:21:53.580 to support us. We'll see you guys
02:21:55.340 tomorrow. Follow our Telegram
02:21:57.660 or X or Redice.tv
02:21:59.720 for notifications when we go live.
02:22:01.900 We'll see you then. Have a great night. We'll talk to you
02:22:03.720 soon. Folk first. Bye-bye.
02:22:17.880 Now, do make sure that you follow us on our Rumble channel for more Red Ice TV on rumble.com or on X at Red Ice TV.
02:22:26.360 You can, of course, go to Redice.tv as well.
02:22:29.720 Tune in to our live streams and shows Flashback Friday live on Fridays at 5 p.m. Eastern.
02:22:35.980 No-Go Zone Wednesdays at 5 p.m. Eastern.
02:22:38.020 We also do interviews, videos, clips.
02:22:39.880 and Western Warrior is available Tuesdays exclusive for our supporters and subscribers
02:22:45.860 at redicemembers.com or on our locals, redicetv.locals.com or subscribestar.com slash redice.
02:22:54.720 Get a membership, check out everything that we do and support the show.
02:23:09.880 New Red Ice merch available now: Folk First t-shirts for adults and for toddlers, fatigues
02:23:19.120 for men, our favorite, the Red Ice camper mug, or ceramic with black print, high-quality leather
02:23:25.860 keychain with solar boat imprint, our Red Ice hat, one of our best sellers. Pick one up today.
02:23:34.140 Or why not
02:23:36.220 gray "Ausländer Raus" t-shirts for both women and men?
02:23:40.880 We also have fridge magnets, folk first, one disease,
02:23:45.100 and our black men's t-shirt with the classic red solar boat.
02:23:53.100 Folk first.
02:23:54.940 Get your Red Ice merch from lanaslama.com.
02:23:59.580 Get an item today.
02:24:00.680 Lana's Llama, proud sponsor of Red Ice.
02:24:30.680 We'll be right back.
02:25:00.680 We'll be right back.
02:25:30.660 That's what I was gonna mention: it's Walpurgis, obviously, tomorrow, because we're heading into
02:25:36.180 the day after that, on Friday, it's May 1st, obviously May Day. So get some bonfires ready
02:25:44.420 tomorrow. We'll probably talk with Mark tomorrow about that too, by the way, so tune in for that. So
02:25:50.020 yeah, guys, we'll be back tomorrow, as I said, with Mark Puryear, so join us for that. We'll see you guys
02:25:55.380 tomorrow. Have a good night.
02:25:57.340 See you later.
02:26:25.380 Thank you.
02:26:55.380 Thank you.